How to synthesize customer research interviews on Starch

Strategy & Planning · 10 roles covered · 4 Starch apps

Customer research synthesis is the work that happens after the interviews: turning 8 to 20 conversations full of anecdotes, contradictions, and half-formed ideas into something you can actually act on — a clear picture of what customers want, what's blocking them, and which patterns are signal versus noise. Most operators know they should be doing this more rigorously. Most also know that a stack of Otter.ai transcripts sitting in a Google Drive folder is not the same thing as a synthesis.

What this looks like in practice depends on your context — a B2B SaaS founder coding job-to-be-done themes is doing something different from a consumer brand founder looking for language patterns, who is doing something different from a services operator trying to understand churn. The workflow shares the same shape, but the inputs, the outputs, and what 'done' looks like vary.

On Starch, you end up with a searchable knowledge base where every interview lives alongside the themes extracted from it, a recurring digest that surfaces new patterns as more conversations come in, and a presentation-ready synthesis you can drop into a board update or a product review — without spending a weekend manually tagging transcripts. Describe what you want: 'a running summary of customer interviews organized by pain theme, updated each time I add a new transcript.' That's what you get.

Why it matters

Unanalyzed research is just expensive listening. When synthesis doesn't happen, teams ship based on the last loud customer instead of the pattern across twenty. Insights rot fast — an interview from six months ago lands differently than one from last week, but without a system, they blur together. Getting this right means product decisions have evidence behind them, positioning reflects what customers actually say, and you stop rediscovering the same insight every quarter because someone finally wrote it down.

Where this usually goes wrong

The most common mistakes: treating the transcript as the deliverable and never abstracting up to themes; synthesizing once after a research sprint and never updating the picture as new conversations happen; conflating what customers say they want with what they describe as their actual problem; and keeping insights in a format — a long doc, a slide deck, a Notion page nobody revisits — that makes them impossible to query when you need them six weeks later.
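The "queryable format" point is worth making concrete. Starch builds this for you, but the underlying idea can be sketched in a few lines of plain Python (hypothetical data and field names, not Starch's implementation): tag each interview with themes at intake, and any later question becomes a filter instead of a reread.

```python
from dataclasses import dataclass, field

@dataclass
class Interview:
    customer: str
    date: str
    themes: set = field(default_factory=set)   # pain themes tagged at synthesis time
    quotes: list = field(default_factory=list)  # verbatim evidence backing the themes

# Illustrative data only.
interviews = [
    Interview("Acme Co", "2024-03-01", {"onboarding friction", "pricing"},
              ["Setup took our team two weeks."]),
    Interview("Beta LLC", "2024-04-12", {"onboarding friction"},
              ["We almost gave up during setup."]),
]

def by_theme(interviews, theme):
    """Return every interview tagged with the given theme."""
    return [i for i in interviews if theme in i.themes]

matches = by_theme(interviews, "onboarding friction")
print([i.customer for i in matches])  # prints ['Acme Co', 'Beta LLC']
```

Six weeks later, "who complained about onboarding?" is a one-line query with quotes attached, which is exactly what a slide deck or an unrevisited Notion page cannot give you.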

Starch apps used

See this running on Starch

Connect your tools, describe what you want, and the agent builds it. Closed beta is free.

Try it on Starch →
Pick your role

A version of this guide tailored to your role — same recipe, different starting context.

Run 'synthesize customer research interviews' on Starch
