How to synthesize customer research interviews with AI

Strategy & Planning · 3 AI tools · 7 steps · 6 friction points

Synthesizing customer research interviews means taking a pile of raw transcripts — often 10 to 30+ calls, each 30 to 60 minutes long — and turning them into something a team can actually act on: themes, ranked pain points, representative quotes, and a clear picture of what customers said versus what they meant. It lands on operators' plates after every product sprint, sales cycle, or fundraise where someone ran discovery calls and promised to 'share what we learned.'

The workflow looks like an AI problem because the hard part is pattern recognition across unstructured text. A human reading 20 transcripts sequentially will anchor on the last three calls, miss quiet-but-consistent signals, and spend four hours doing it. An LLM can read all 20 in one pass, hold the whole corpus in context, and surface themes without the cognitive fatigue. That's a real advantage — not hype, just a genuine fit between what the task requires and what the tools do well.

ChatGPT, Claude, and Gemini can all do meaningful work here today. Claude's longer context window makes it the practical first choice for pasting full transcripts. ChatGPT handles structured extraction well with the right system prompt. Gemini 1.5 Pro's 1M token context can fit the entire corpus in one shot if your transcripts are clean text. All three can identify themes, extract quotes, and draft summaries — the ceiling is more about process and persistence than model capability.
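Before picking a model, it helps to estimate whether your corpus fits in one context window at all. This is a minimal sketch using the common rule of thumb of roughly 4 characters per token for English text — an approximation, not a real tokenizer count, and the 200k-token limit is an assumed default you should swap for your model's actual window:

```python
# Rough check of whether a transcript corpus fits in one context window.
# ~4 characters per token is a common heuristic for English prose, not an
# exact tokenizer count -- treat the result as an estimate only.

def estimated_tokens(text: str) -> int:
    """Approximate token count using ~4 characters per token."""
    return len(text) // 4

def fits_in_context(transcripts: list[str], context_limit: int = 200_000) -> bool:
    """True if the whole corpus (plus prompt overhead) should fit in one pass."""
    total = sum(estimated_tokens(t) for t in transcripts)
    return total + 2_000 < context_limit  # reserve ~2k tokens for instructions

# 20 hour-long calls at roughly 50k characters each overflow a 200k window:
corpus = ["x" * 50_000] * 20
print(fits_in_context(corpus))  # -> False
```

If this returns False, you're in batching territory from the start, which is exactly where run-to-run inconsistency creeps in.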

AI walkthrough

How to do it with AI today

A practical walkthrough using ChatGPT, Claude, and other off-the-shelf LLMs — what they're good at, what you'll have to do by hand.

Tools that work for this
Claude · ChatGPT · Gemini
Step-by-step
1 Export your interview transcripts to plain text. If you recorded on Zoom or Google Meet, export the auto-transcript or paste it into a Notion page or Google Doc. Strip timestamps and reduce speaker labels to 'Interviewer:' and 'Participant:' for cleaner extraction.
2 Open Claude (Sonnet or Opus) and paste in 3-5 transcripts at once. Ask it to identify the top recurring themes, flag any novel insights that appeared only once but seem high-signal, and extract 2-3 verbatim quotes per theme. Start small before pasting the full corpus.
3 Once you've validated the theme list on a subset, feed in the remaining transcripts in batches and ask Claude to reconcile new themes against the ones it already identified. Explicitly tell it to flag when a new theme contradicts a prior one — otherwise it will silently paper over conflicts.
4 Take the merged theme list into ChatGPT with a structured prompt asking it to rank themes by frequency (how many participants mentioned it), intensity (how much of the conversation it occupied), and recency (did it come up more in recent calls?). This produces a prioritized output you can defend in a meeting.
5 For each top theme, go back to Claude and ask for the 5 best supporting quotes across all transcripts, along with the participant archetype each quote came from — job title, company size, or whatever segmentation is relevant. This saves the manual 'find me that thing one person said' work.
6 Draft your synthesis document in whatever format you'll actually share it: a Notion page, a slide deck, a PDF. Paste the structured output from Claude and ChatGPT and write the narrative connective tissue yourself. The LLM gives you the skeleton; the framing and so-what are still yours.
7 Run a final pass asking the LLM to identify what's missing — questions you didn't ask, segments you didn't interview, or assumptions embedded in your themes that haven't been tested. This is genuinely useful for spotting blind spots before you present.
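The cleanup in step 1 is worth scripting before anything touches an LLM — consistent input produces more consistent themes. This is a minimal sketch; the timestamp pattern and the interviewer name are assumptions you'd adjust to match your export format:

```python
import re

# Step-1 cleanup: strip timestamps and collapse speaker labels to
# 'Interviewer:' / 'Participant:'. The timestamp regex and the interviewer
# name below are assumptions -- adapt them to your Zoom/Meet export.

TIMESTAMP = re.compile(r"^\[?\d{1,2}:\d{2}(?::\d{2})?\]?\s*")

def clean_transcript(raw: str, interviewer: str = "Alice") -> str:
    lines = []
    for line in raw.splitlines():
        line = TIMESTAMP.sub("", line).strip()
        if not line:
            continue
        speaker, _, speech = line.partition(":")
        if not speech:                        # no speaker label on this line
            lines.append(line)
        elif speaker.strip() == interviewer:
            lines.append(f"Interviewer:{speech}")
        else:
            lines.append(f"Participant:{speech}")
    return "\n".join(lines)

raw = "[00:01] Alice: What's your biggest pain point?\n[00:04] Bob: Honestly, reporting."
print(clean_transcript(raw))
# Interviewer: What's your biggest pain point?
# Participant: Honestly, reporting.
```

Run every transcript through the same cleanup so batches in steps 2 and 3 are formatted identically.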
Prompts you can copy
Here are 4 customer interview transcripts. Identify the top 5 themes across all of them. For each theme, list: how many participants mentioned it, 2 verbatim quotes, and whether it appeared more with early-stage or growth-stage companies.
Given this list of themes from 15 interviews, rank them by frequency and apparent urgency based on the language participants used. Flag any themes that seem contradictory to each other and explain the contradiction.
From these transcripts, extract every mention of our competitors — what the participant said, which competitor they named, and whether the sentiment was positive, negative, or neutral. Format as a table.
Here are themes from my customer interviews. Play devil's advocate: what important questions did I fail to ask, and what participant segments am I likely missing based on who I interviewed?
Summarize these 6 interview transcripts into a single 400-word synthesis a non-technical founder could read in 3 minutes. Lead with the most surprising finding, then the top 3 patterns, then what we should do next.
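The step-4 ranking can also be made deterministic once the LLM has tagged mentions per theme: tally the counts yourself and score them with fixed weights, so the ordering doesn't drift between runs. A minimal sketch — the weights and sample numbers are invented for illustration:

```python
from dataclasses import dataclass

# Deterministic version of the step-4 ranking. Weights and sample figures
# are made up for illustration; the inputs would come from tallying the
# LLM's per-theme extraction across transcripts.

@dataclass
class Theme:
    name: str
    participants: int      # frequency: how many people raised it
    talk_share: float      # intensity: fraction of conversation, 0..1
    recent_mentions: int   # recency: mentions in the latest batch of calls

def score(t: Theme, total_participants: int) -> float:
    frequency = t.participants / total_participants
    return 0.5 * frequency + 0.3 * t.talk_share + 0.2 * min(t.recent_mentions / 5, 1.0)

themes = [
    Theme("onboarding friction", participants=12, talk_share=0.30, recent_mentions=4),
    Theme("pricing confusion", participants=7, talk_share=0.15, recent_mentions=6),
]
ranked = sorted(themes, key=lambda t: score(t, total_participants=15), reverse=True)
print([t.name for t in ranked])  # -> ['onboarding friction', 'pricing confusion']
```

Fixed weights mean the same inputs always produce the same ranking — the LLM supplies the raw tags, not the final ordering.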
Reality check

Where this gets hard

The walkthrough above works — until your numbers change, the LLM hallucinates, or you have to re-paste everything next month.

No persistent storage — every session starts cold. Next sprint, you paste the same transcripts in again and hope your prompts from last time are still in a doc somewhere.
Context windows hit real limits at scale. 20 hour-long transcripts can exceed even Claude's window, forcing batching that introduces inconsistency across runs.
Themes drift between runs. The five categories Claude surfaced last Tuesday are not guaranteed to match the five it surfaces today on the same input, making longitudinal comparison unreliable.
No connection to your actual customer data. The LLM can't cross-reference what a participant said against their HubSpot deal stage, Stripe subscription tier, or support ticket volume — you do that join manually.
Output format is whatever the LLM decided today. Keeping quotes, themes, and summaries in a consistent structure across multiple synthesis runs requires re-prompting every time and still produces variation.
Nothing routes forward automatically. Action items extracted from interviews live in a chat window, not in your task manager or CRM, until you copy them there yourself.

Tired of the friction?

Starch runs the whole workflow on live data — no copy-paste, no hallucinated numbers, no re-prompting next month.

See the Starch version →
Starch alternative

The same workflow on Starch

Starch is an agentic operating system — the layer where an agent builds and runs the persistent software your research workflow depends on. Instead of re-running prompts against pasted transcripts, you describe what you want once and Starch builds an app that holds your research, connects to your live customer data, and keeps outputs structured and searchable across every sprint.

Connect Notion once — Starch syncs your pages and databases on a schedule, so interview notes and transcripts stored there are always available to the agent without a copy-paste step.
Tell Starch to build a research synthesis app: 'Create an app that reads my Notion interview notes, extracts themes and quotes by participant segment, and shows me a ranked view updated each time new notes are added.' The agent builds it and runs it continuously.
Use the Knowledge Management starter app as your research archive — AI-powered search across every transcript, auto-categorized by theme, with staleness detection so you know which findings are from calls six months ago versus last week.
Cross-reference what participants said against live business data: connect HubSpot from Starch's integration catalog and the agent can tag each theme with the deal stages or customer tiers where it appeared most — a join that's impossible in a raw LLM session.
The Growth Analyst app connects to PostHog and emails you a weekly digest — if your research surfaces a conversion friction point, you can ask Starch to build an automation that flags when that metric moves, so the interview insight routes into something you'll actually see.
Action items extracted from interviews can route directly into a task manager or project management app built in Starch — not sitting in a chat window waiting to be copied somewhere.
Get closed-beta access →
