How to synthesize customer research interviews as Solo Media and Creator Founders

Strategy & Planning · For Solo Media and Creator Founders · 2 apps · 10 steps · ~20 min to set up

You do 6-10 listener or reader interviews every quarter trying to figure out why people subscribe, churn, or never convert to paid. The raw files sit in Riverside or Otter.ai. You paste chunks into ChatGPT one at a time, write a messy summary doc in Notion, and then forget to actually use it when planning next quarter's content. Two months later you're making editorial calls based on vibes because nobody synthesized anything — not because the research wasn't done, but because there was no system to pull it into the workflow. The insight that 'sponsors care about click attribution, not open rates' was in interview 4. You never acted on it.

Outcome

What you'll set up

A Notion-connected synthesis app that ingests raw interview transcripts, tags them by theme, and surfaces the top 3 patterns across all interviews — automatically, not manually.
A structured research output you can actually hand to a sponsor pitch or use in an editorial planning session, with direct quotes pulled per theme.
An ongoing system that adds new interviews as you do them, so your customer research compounds instead of going stale in a folder you never revisit.
The Starch recipe

Apps, data, and prompts

The combination of Starch apps, the data sources they pull from, and the prompts you use to drive them.

Data sources & config

Starch connects directly to Notion (scheduled sync) to read your interview transcript database and any linked research pages. PostHog is connected from Starch's integration catalog; the agent queries it live when the growth analysis runs. Riverside and Otter.ai transcripts that live as text files can be pushed into Notion first, or Starch can automate the export from those sites through your browser — no API needed.
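For context on what the scheduled sync actually reads: pulling tagged pages out of a Notion database boils down to one filtered query. A minimal sketch, not Starch's implementation — the `Tags` and `Date` property names and the database ID are assumptions you'd adapt to your own workspace:

```python
# Illustrative sketch -- not Starch's implementation. The property names
# ("Tags", "Date") and the database ID are assumptions for your workspace.

def build_transcript_query(tag: str) -> dict:
    """Filter body for Notion's POST /v1/databases/{id}/query endpoint."""
    return {
        "filter": {
            "property": "Tags",
            "multi_select": {"contains": tag},
        },
        "sorts": [{"property": "Date", "direction": "ascending"}],
    }

# Running it for real needs a Notion integration token, e.g.:
#   requests.post(
#       f"https://api.notion.com/v1/databases/{DATABASE_ID}/query",
#       headers={"Authorization": f"Bearer {NOTION_TOKEN}",
#                "Notion-Version": "2022-06-28"},
#       json=build_transcript_query("listener-research-Q2-2026"),
#   )

print(build_transcript_query("listener-research-Q2-2026")["filter"])
```

The same filter shape works for slicing by interviewee type or medium once you adopt the tagging convention in the walkthrough below.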

Prompts to copy
Read all the interview transcripts in my Notion database tagged 'listener-research-Q2-2026'. Group the responses by theme — monetization frustrations, content format preferences, reasons for upgrading to paid, reasons for churning. For each theme, pull the 3 most representative direct quotes and write a 2-sentence summary of what listeners actually said. Flag any theme that contradicts what I currently believe about my audience.
Compare the themes from my Q2 listener interviews against my PostHog signup and upgrade data from the last 90 days. Tell me which stated listener preferences match what people actually do — and where there's a gap I should investigate.
Run these in Starch → or paste them into your favorite agent
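Under the hood, the second prompt's stated-vs-actual comparison reduces to a share-of-mentions check. A minimal sketch, with hypothetical theme names and counts standing in for real interview tallies and PostHog upgrade events:

```python
# Sketch of the stated-vs-actual cross-check. Theme names and counts
# are hypothetical, not real PostHog output.

def preference_gaps(stated: dict, observed: dict, tolerance: float = 0.15) -> list:
    """Flag themes where the share of interviewees who *said* something
    diverges from the share of users who *did* it by more than `tolerance`."""
    gaps = []
    for theme in stated:
        s_share = stated[theme] / sum(stated.values())
        o_share = observed.get(theme, 0) / max(sum(observed.values()), 1)
        if abs(s_share - o_share) > tolerance:
            gaps.append((theme, round(s_share - o_share, 2)))
    return gaps

stated = {"short solo episodes": 4, "deep-dive interviews": 5}      # interview mentions
observed = {"short solo episodes": 40, "deep-dive interviews": 10}  # upgrade events
print(preference_gaps(stated, observed))
```

A negative gap means people under-report a preference relative to what they do; a positive gap means they over-report it — either way, it's a theme worth a follow-up question next cohort.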
Walkthrough

Step-by-step

1 Connect Notion via scheduled sync in Starch. Point it at the database or folder where you dump interview transcripts — whether those are Otter.ai exports you pasted in, Riverside auto-transcripts, or your own notes from calls.
2 Set a consistent tagging convention in Notion for new interviews: interviewee type (subscriber, churned, never-converted), date, and medium (newsletter, podcast, YouTube). Starch will use these tags to slice the synthesis.
3 Open the Knowledge Management app in Starch and describe your synthesis goal: 'I have 8 listener interviews in my Notion database tagged Q2-2026. Synthesize them by theme: why people subscribe, why they upgrade to paid, why they churn, and what content they say they want more of. Pull direct quotes for each theme.'
4 Review the draft synthesis. Starch will surface themes you expected and some you didn't. Flag any theme as 'contradicts my assumption' so you can track where your editorial instincts are off.
5 Ask Starch to generate a one-page research brief you can actually use: 'Turn this synthesis into a 400-word brief I can paste into a sponsor pitch or share with a guest I'm courting — written for someone who doesn't know my audience yet.'
6 Connect PostHog from Starch's integration catalog (the agent queries it live). Run a cross-check prompt: 'Compare what listeners said about why they upgrade against my actual upgrade events in PostHog over the last 90 days. Where does stated behavior match real behavior, and where does it diverge?'
7 Use the Growth Analyst app to schedule a recurring sanity check: 'Every time I add three or more new interview transcripts to Notion tagged with listener-research, regenerate the synthesis summary and email me a diff — what themes got stronger, what's new, what dropped off.'
8 Build a standing prompt for sponsor-facing claims: 'From my Q2 listener research, extract every quote where someone mentioned why they click or don't click sponsor links. I want specific language I can use when a sponsor asks why my audience converts.' This becomes your negotiation ammo.
9 Set up a lightweight editorial feedback loop: 'After each new episode publishes, check if there are listener replies in Gmail that mention the topics from my research themes. Surface any that confirm or contradict the synthesis.' Starch syncs Gmail on a schedule and can run this check weekly.
10 Archive each quarter's synthesis as a Notion page with a consistent title format (Q2-2026-listener-synthesis). The Knowledge Management app indexes these automatically, so six months from now you can ask 'What did listeners say about paid tiers across all my research?' and get an answer across all quarters, not just the most recent one.
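The diff in step 7 is conceptually simple: compare theme mention counts between the previous synthesis and the re-run. A minimal sketch with hypothetical theme counts (this is not Starch's internal logic):

```python
# Sketch of the step-7 diff: classify themes across two synthesis runs.
# Theme names and counts are hypothetical.

def theme_diff(previous: dict, current: dict) -> dict:
    """Classify themes as new, dropped, stronger, or weaker across runs."""
    return {
        "new": sorted(set(current) - set(previous)),
        "dropped": sorted(set(previous) - set(current)),
        "stronger": sorted(t for t in current
                           if t in previous and current[t] > previous[t]),
        "weaker": sorted(t for t in current
                         if t in previous and current[t] < previous[t]),
    }

prev = {"sponsor fatigue": 3, "publish cadence": 2, "pricing": 4}
curr = {"sponsor fatigue": 5, "publish cadence": 2,
        "back-episode discoverability": 3}
print(theme_diff(prev, curr))
```

That four-bucket summary is exactly what you'd want in the emailed diff: what's new, what dropped off, and which themes got stronger or weaker.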

See this running on Starch

Connect your tools, describe what you want, and the agent builds it. Closed beta is free.

Try it on Starch →
Worked example

Q2 2026 Listener Synthesis — 9 interviews, newsletter + podcast

Sample numbers from a real run
Interviews completed: 9
Transcript words in Notion (9 transcripts, avg. 4,200 words each): 37,800
Themes surfaced by Starch: 6
Contradictions flagged vs. prior assumptions: 2
Sponsor-facing quotes extracted: 14
Hours spent on synthesis (vs. a full Saturday manually the prior quarter): 1.5

You did 9 interviews in April and May — 5 active paid subscribers, 2 people who churned after 3 months, and 2 who opened every email but never converted. The raw transcripts totaled about 38,000 words sitting in Notion. In prior quarters you'd spend a Saturday afternoon skimming them and writing a 600-word doc you barely re-read.

This time you ran the Knowledge Management synthesis in Starch. Six themes came back: content depth vs. frequency tension, sponsor fatigue from non-endemic brands, the specific episode format (solo vs. interview) that drove upgrade decisions, a recurring complaint about inconsistent publish cadence, and two you didn't expect — listeners wanted a private community more than you realized, and three churned subscribers mentioned price wasn't the issue, discoverability of back episodes was.

Starch also flagged a contradiction: you'd assumed the 'deep dive' format was your strongest retention driver, but the synthesis showed that 4 of the 5 paid subscribers said they upgraded after a short solo episode, not a long interview. That single flag changed how you planned the next 8 weeks of content.

The sponsor-facing extract pulled 14 quotes about why listeners click or skip sponsor segments, which you used word-for-word in a pitch to a new sponsor who wanted proof of audience intent. Total time: about 90 minutes from transcript dump to usable brief, versus a full Saturday the quarter before.

Measurement

How you'll know it's working

Hours from last interview completed to synthesis doc ready to use
Number of editorial decisions per quarter that cite research (vs. gut)
Sponsor pitches that include direct audience quotes (and close rate on those pitches vs. pitches without)
Research themes that get confirmed or contradicted by PostHog behavior data
Number of quarters until the research knowledge base is searchable across all past cohorts
Comparison

What this replaces

The other ways teams handle this today, and how the Starch version compares.

ChatGPT + manual copy-paste
Works for one interview at a time but doesn't aggregate across a cohort, doesn't connect to Notion or PostHog, and produces a one-off output you have to store somewhere — the synthesis doesn't compound.
Dovetail or Condens
Purpose-built qualitative research tools with strong tagging and affinity mapping, but they're designed for research teams, cost $30-80/seat/month, and don't connect to your publishing stack or behavioral data — the insight stays siloed in the research tool.
Notion AI
Useful for summarizing a single page, but can't run a cross-database synthesis, doesn't connect to PostHog or Gmail, and has no way to trigger a re-synthesis when new interviews are added.
Otter.ai Summary + Google Docs
Otter summarizes individual calls well, but rolling up 9 interviews into thematic synthesis, extracting quotes by category, and comparing to behavioral data is entirely manual work you still have to do yourself.
On Starch RECOMMENDED

One platform: the Knowledge Management and Growth Analyst apps run on the same connected data. Setup in plain English; numbers stay current via scheduled syncs and live agent queries.

Try it on Starch →
FAQ

Frequently asked questions

My transcripts aren't in Notion — they're in a folder in Google Drive or exported from Riverside. Can Starch still reach them?
Google Drive is in Starch's integration catalog; the agent queries it live. If your transcripts are Riverside exports or Otter.ai files, the easiest path is to paste them into a Notion database first — takes a few minutes — and then Starch syncs Notion on a schedule. Alternatively, Starch can automate the export from Riverside through your browser if you want to skip the manual step.
I only do interviews twice a year. Is this worth setting up for a small research volume?
Yes, specifically because small volume is where manual synthesis is most likely to get skipped. Six interviews a year still produce 20,000+ words of signal. The real value is that Starch indexes every synthesis permanently, so when you're doing your fourth cohort two years from now, you can ask 'what have listeners consistently said about pricing across all my research?' and get an answer without re-reading anything.
Can Starch pull in comments from my Beehiiv or YouTube as additional research input?
Beehiiv and YouTube Studio are browser-reachable, so Starch can automate pulling comment threads or subscriber reply data through your browser — no API needed. You'd describe what you want: 'Go to my Beehiiv dashboard, pull the 50 most recent reply emails from paid subscribers, and add them to my Q3 research Notion database.' That becomes an additional data source alongside your formal interviews.
Will Starch store all my interview transcripts? I have some sensitive conversations with subscribers.
Starch stores the data that powers your apps and synthesis. If that's a concern for specific interviews, you can selectively tag which Notion pages Starch should read — you don't have to give it access to your entire Notion workspace, just the databases you designate. Worth knowing: Starch is not currently SOC 2 Type II certified, so if you're handling research under strict data agreements, that's an honest constraint to factor in.
How is this different from just asking ChatGPT to summarize my interviews?
ChatGPT gives you a one-off answer on whatever you paste. Starch builds a persistent system: new interviews get added to the same Notion database, the synthesis re-runs, you get a diff of what changed, and the research connects to your actual behavioral data in PostHog. The output compounds instead of living in a chat window you'll never find again.
Can Starch write my actual content strategy doc from the research, or just the synthesis?
Both. Once the synthesis exists, you can prompt Starch to go further: 'Based on this listener research, write a 6-week content calendar with episode themes that address the top 3 things listeners said they want more of.' The synthesis is the input; what you build on top of it is up to you.

Ready to run 'synthesize customer research interviews' on Starch?

Request closed-beta access. Everything is free during beta.
