How to run an interview loop with AI

People & HR · 3 AI tools · 7 steps · 6 friction points

An interview loop is the sequence of structured conversations a company runs to evaluate a candidate from first screen to final decision — typically covering a recruiter call, technical or skills assessment, panel interviews with team members, and a debrief where everyone compares notes. For operators running lean teams, coordinating this process means wrangling availability across six calendars, writing consistent evaluation criteria, keeping candidates warm, and making sure every interviewer actually shows up prepared.

The workflow looks like a natural fit for AI because most of it is language work: writing job-specific interview guides, summarizing candidate responses, synthesizing feedback from multiple reviewers, and drafting offer or rejection emails. None of this requires proprietary data to get started — a model that knows how to structure a behavioral interview question or score a rubric can meaningfully accelerate the process without any integration work.

ChatGPT, Claude, and Gemini can all contribute here today. They're genuinely good at generating role-specific question banks, building scoring rubrics, and drafting candidate communications quickly. You'll get the most out of them if you treat them as a drafting and structuring assistant — paste in a job description and ask for a guide, then edit the output rather than writing from scratch. The constraint is that each task is a separate, stateless conversation.

AI walkthrough

How to do it with AI today

A practical walkthrough using ChatGPT, Claude, and other off-the-shelf LLMs — what they're good at, what you'll have to do by hand.

Tools that work for this
ChatGPT · Claude · Gemini
Step-by-step
1 Start in Claude or ChatGPT: paste your job description and ask the model to generate a structured interview guide covering the role's core competencies — include a behavioral question, a situational question, and a skills question for each competency.
2 Ask the same model to produce a scoring rubric for each question: what does a 1, 3, and 5 look like? Copy this into a Google Doc or Notion page that every interviewer will reference before their session.
3 When a candidate completes a phone screen, paste your raw notes into ChatGPT or Claude and prompt it to extract a structured summary: strengths observed, concerns flagged, and a recommended next-step decision with rationale.
4 Before each panel interview, use the model to draft a brief interviewer prep note: what the previous stage found, what this interviewer should probe given their function, and the two or three questions they should prioritize.
5 After panel interviews, collect written feedback from each interviewer (even a few sentences), paste all of it into Claude, and ask it to synthesize a debrief summary — areas of agreement, disagreement, and the unresolved questions the panel should discuss.
6 Use ChatGPT or Gemini to draft candidate-facing emails at each stage: screening confirmation, interview scheduling, stage-progression message, and offer or rejection templates. Edit for tone and specifics before sending.
7 Track everything in a shared doc or spreadsheet manually — the model has no memory between sessions, so you are the system of record.
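If you'd rather not hand-assemble the debrief prompt in step 5 every time, a few lines of Python can do the concatenation for you. This is a minimal sketch, not part of any tool above — the `Feedback` class and `build_debrief_prompt` function are illustrative names, and the output is just a string you paste into Claude or ChatGPT yourself.

```python
# Sketch: assemble per-interviewer notes into one synthesis prompt.
# All names here are illustrative -- there is no library behind them.

from dataclasses import dataclass

@dataclass
class Feedback:
    interviewer: str
    role: str   # interviewer's function, e.g. "ops" or "finance"
    notes: str  # raw written feedback, even a few sentences

def build_debrief_prompt(candidate: str, feedback: list[Feedback]) -> str:
    """Concatenate each interviewer's write-up into a single prompt."""
    sections = "\n\n".join(
        f"--- {f.interviewer} ({f.role}) ---\n{f.notes}" for f in feedback
    )
    return (
        f"I have feedback from {len(feedback)} interviewers after a panel "
        f"interview with {candidate}. Here it is:\n\n{sections}\n\n"
        "Synthesize the key themes, flag where interviewers disagreed, and "
        "list the top two open questions the debrief should resolve."
    )
```

The win is consistency: every debrief prompt has the same shape, so the model's summaries stay comparable across candidates.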
Prompts you can copy
You are a hiring expert. Here is the job description for a Head of Growth: [paste]. Generate a structured interview guide with 3 competencies, 2 questions each, plus a scoring rubric (1/3/5 anchors) for each question.
Here are my raw notes from a 30-minute phone screen with a candidate for a senior operations role: [paste notes]. Summarize strengths, concerns, and recommend whether to advance to the panel. Keep it under 150 words.
I have feedback from three interviewers after a panel interview. Here it is: [paste]. Synthesize the key themes, flag where interviewers disagreed, and list the top two open questions the debrief should resolve.
Write a candidate email confirming their panel interview on Thursday at 2pm ET. Include who they'll meet (Sarah, ops lead; Marcus, finance), a rough agenda, and a line asking if they have questions beforehand. Keep it friendly and under 100 words.
Based on this offer decision and candidate background [paste], draft a verbal offer email covering role title, start date, comp range, and two sentences on why we're excited to have them. Avoid corporate filler.
Reality check

Where this gets hard

The walkthrough above works — until your numbers change, the LLM hallucinates, or you have to re-paste everything next month.

No persistent candidate record — every session starts blank. You re-paste the job description and context every single time you open a new chat.
Interview notes and feedback live in different places: one interviewer's Notion, another's email, your own Google Doc. Aggregating them for debrief requires manual copy-paste before the model can help.
Scheduling coordination happens entirely outside the LLM — you're still going back and forth over email or Calendly to find time for a six-person panel, then manually linking the invite to the candidate profile.
Rubric and question sets drift between roles. The scoring guide you carefully built for one hire lives in a doc that new interviewers may never find, so you rebuild versions from scratch for the next search.
There's no alert system — if a candidate hasn't heard back in five days or an interviewer hasn't submitted feedback, nothing nudges anyone. The loop stalls silently until someone notices.
Debrief synthesis is only as good as the notes interviewers actually write. With no structured collection mechanism, you're often working from thin, inconsistent inputs when you ask the model to help you decide.
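A stopgap for the no-alerts problem: if your system of record is a hand-maintained spreadsheet, a short script run each morning can at least surface what's overdue. The column names below (`candidate`, `interviewer`, `feedback_due`, `submitted`) are assumptions about your own CSV — adjust them to match your tracker.

```python
# Stopgap sketch: flag stalled loop items from a hand-maintained CSV tracker.
# Column names are assumptions about your own spreadsheet, not a standard.

import csv
from datetime import date, datetime

def overdue_feedback(csv_path: str, today: date) -> list[str]:
    """Return one reminder line per feedback item past due and not submitted."""
    reminders = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            due = datetime.strptime(row["feedback_due"], "%Y-%m-%d").date()
            if row["submitted"].strip().lower() != "yes" and due < today:
                days = (today - due).days
                reminders.append(
                    f"{row['interviewer']} owes feedback on "
                    f"{row['candidate']} ({days} day(s) overdue)"
                )
    return reminders
```

It's still manual duct tape — someone has to run it and send the nudges — but it keeps a candidate from sitting in silence because one reviewer forgot.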

Tired of the friction?

Starch runs the whole workflow on live data — no copy-paste, no hallucinated numbers, no re-prompting next month.

See the Starch version →
Starch alternative

The same workflow on Starch

Starch is an agentic operating system — for this workflow, that means an agent builds the persistent interview loop infrastructure your team actually uses: scheduling that syncs to real calendars, communication drafts that go out from your real inbox, notes that get captured automatically, and tasks that track who owes what.

Connect Google Calendar through Starch's scheduling-sync integration and use the Scheduling app to give candidates a direct booking link with the right meeting type pre-configured — no back-and-forth emails to find panel time.
The Meeting Notes app transcribes every interview in real time and auto-extracts action items and key decisions. When it's time to debrief, every interviewer's session is already archived and searchable — not scattered across inboxes.
Use the Email Agent app to draft and queue candidate communications at each stage — screening confirmations, stage-progression notes, offer letters — connected to your real Gmail or Outlook so they go out from your actual address.
Describe your interview process in plain English — 'build me a candidate tracker with stages for phone screen, panel, and offer, plus a feedback form each interviewer fills out before debrief' — and Starch builds it as a persistent app your team logs into.
The Task Manager app tracks interviewer feedback submissions and follow-up deadlines, and flags what's overdue — so a candidate doesn't wait five days in silence because one reviewer forgot to write up their notes.
Tell Starch: 'After each completed panel, pull feedback from the candidate tracker and draft a debrief summary for the hiring manager.' That automation runs against live data every time — no re-prompting, no manual assembly.
Get closed-beta access →

Run an interview loop on Starch
