How to synthesize customer research interviews as a small marketing team

Strategy & Planning · For Small Marketing Teams · 3 apps · 11 steps · ~22 min to set up

You did six customer interviews last month. The recordings are in Loom, the notes are split across three Notion pages, and the actual synthesis — the part where you figure out what the three recurring objections are, which messaging is landing, and what the content calendar should reflect — hasn't happened yet. Your team of three is too busy pulling together the HubSpot-to-GA4 pipeline report to sit down for a two-hour synthesis session. So the interviews rot. You write the next campaign brief from gut feel anyway, and six weeks later you're wondering why the nurture sequence isn't converting. The research existed. You just never turned it into anything.

Outcome

What you'll set up

A Notion-connected research synthesizer that ingests interview notes and meeting transcripts, clusters recurring themes, and surfaces the top objections and messaging signals your team actually heard — not what you remember hearing.
An automated digest that ties synthesis findings back to live HubSpot deal stage data and Gmail threads, so you can see whether the objections coming up in interviews match what's stalling deals in your pipeline.
A shareable summary artifact — ready to paste into a campaign brief or hand to the CEO — that took 20 minutes to produce instead of a half-day workshop.
The Starch recipe

Apps, data, and prompts

The combination of Starch apps, the data sources they pull from, and the prompts you use to drive them.

Data sources & config

Starch syncs your Notion data on a schedule — pages, databases, and tags flow in automatically so the agent can read and cluster research notes without manual export. Starch connects directly to HubSpot on a schedule, pulling deal stages, lost reasons, and contact notes so synthesis can be cross-referenced against pipeline reality. Gmail is synced on a schedule for relevant customer email threads. Loom transcripts are pulled through browser automation — no Loom API needed. For weekly digests, the automation runs on a schedule and posts to Slack, which is connected via Starch's integration catalog and queried live when the automation fires.
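All of these synced sources feed a single synthesis pool the agent reads from. As an illustration of the concept only (the function name and data shapes here are hypothetical, not Starch's actual internals), merging source-tagged notes into one date-ordered pool looks roughly like this:

```python
from datetime import date

def build_synthesis_pool(notion_pages, hubspot_notes, transcripts, since):
    """Merge notes from all synced sources into one pool, tagged by origin."""
    pool = []
    for source, items in (
        ("notion", notion_pages),
        ("hubspot", hubspot_notes),
        ("loom", transcripts),
    ):
        for item in items:
            if item["date"] >= since:  # respect the lookback window
                pool.append({"source": source, "date": item["date"], "text": item["text"]})
    # Date order so the agent can reason about recency
    return sorted(pool, key=lambda x: x["date"])

pool = build_synthesis_pool(
    [{"date": date(2026, 2, 3), "text": "Win/loss call notes"}],
    [{"date": date(2026, 1, 10), "text": "Lost-deal note: integration"}],
    [{"date": date(2026, 3, 1), "text": "Loom transcript excerpt"}],
    since=date(2026, 1, 1),
)
```

The point of the pool is that clustering and cross-referencing run over everything at once, regardless of which tool a note originally lived in.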

Prompts to copy
Read all interview notes in my Notion database tagged 'customer-research' from the last 90 days. Cluster them by recurring theme — objections, feature requests, messaging that resonated, and reasons for buying. For each cluster, pull the three most representative direct quotes and give me a confidence count: how many interviews mentioned this theme.
I have six Loom transcripts from customer discovery calls this quarter. Pull the transcripts, identify the top five objections to our pricing page, and tell me which ones also appear as lost-deal reasons in HubSpot.
Every Friday at 9am, summarize any new customer interview notes added to Notion this week. Flag any theme that appeared in three or more interviews and didn't appear in last week's summary. Email the digest to me and post it to the marketing team's Slack channel.
Run these in Starch → or paste them into your favorite agent
Walkthrough

Step-by-step

1 Connect Notion from Starch's scheduled-sync integration. Point it at the database or folder where your team dumps customer interview notes, call summaries, and research docs. Starch syncs the pages on a schedule — no manual export, no copy-paste.
2 Connect HubSpot via Starch's scheduled-sync integration. Starch pulls deals, contacts, lost-deal reasons, and owner notes automatically. This is the data that lets you check whether interview themes actually show up in your pipeline.
3 Connect Gmail via Starch's scheduled-sync integration so the agent can cross-reference customer emails against interview themes — useful for catching objections that come up in follow-up threads but never made it into a Notion note.
4 For any interviews recorded in Loom or stored as video elsewhere, Starch automates the transcript extraction through your browser — no Loom API needed. Paste the Loom URLs into the app and the agent pulls the transcripts and adds them to the synthesis pool.
5 Start with the Knowledge Management app from the App Store. It's already wired for Notion and gives you AI-powered search across all your research docs. Fork it to add a 'customer research' view with filters for interview date, interviewee role, and deal stage at time of interview.
6 Tell Starch: 'Read all notes in my customer-research Notion database from the last 90 days. Cluster by theme — objections, messaging resonance, feature gaps, competitive mentions. For each cluster, show a confidence count and pull three direct quotes.' The agent returns a structured synthesis table, not a wall of bullet points.
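Conceptually, this step groups notes by theme and counts distinct interviews per theme. A minimal Python sketch of that logic, with hypothetical keyword lists standing in for the agent's semantic matching:

```python
from collections import defaultdict

# Hypothetical keyword lists; the real agent matches themes semantically.
THEME_KEYWORDS = {
    "objections": ["too expensive", "integration", "security review"],
    "messaging resonance": ["loved the demo", "clear value"],
    "feature gaps": ["missing", "wish it had"],
}

def cluster_notes(notes):
    """Group interview notes by theme; return per-theme counts and sample quotes."""
    clusters = defaultdict(list)
    for note in notes:
        text = note["text"].lower()
        for theme, keywords in THEME_KEYWORDS.items():
            if any(kw in text for kw in keywords):
                clusters[theme].append(note)
    return {
        theme: {
            # Confidence count = number of distinct interviews mentioning the theme
            "count": len({n["interview_id"] for n in hits}),
            "quotes": [n["text"] for n in hits[:3]],  # representative quotes
        }
        for theme, hits in clusters.items()
    }

notes = [
    {"interview_id": 1, "text": "The integration felt too expensive to maintain."},
    {"interview_id": 2, "text": "We loved the demo, clear value from day one."},
    {"interview_id": 3, "text": "Integration with our CRM worried the team."},
]
result = cluster_notes(notes)
```

The confidence count matters because a theme raised in 9 of 14 interviews deserves a different response than one raised once, loudly.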
7 Run the cross-reference step: 'Compare the top five objection themes from customer interviews against the lost-deal reasons in HubSpot for deals closed-lost in the same period. Tell me where they overlap and where they diverge.' This is the step that separates a research synthesis from a research archive.
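The cross-reference boils down to a set comparison between normalized theme labels. A sketch of the idea (the agent does fuzzy matching in practice; exact string labels here are a simplification):

```python
def cross_reference(interview_themes, lost_deal_reasons):
    """Split interview themes into those confirmed by pipeline data and those not."""
    pipeline = set(lost_deal_reasons)
    return {
        "overlap": [t for t in interview_themes if t in pipeline],
        "interview_only": [t for t in interview_themes if t not in pipeline],
        "pipeline_only": sorted(pipeline - set(interview_themes)),
    }

res = cross_reference(
    ["integration", "time-to-value", "competitor-x", "onboarding-time"],
    ["integration", "time-to-value", "budget"],
)
```

"Overlap" themes are validated by lost revenue; "interview only" themes may be early signals; "pipeline only" reasons suggest questions your interviews aren't asking yet.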
8 Use the Meeting Notes app to handle any interviews happening going forward. It transcribes in real time, extracts action items, and archives to searchable history. New interview transcripts feed back into the same Notion database your synthesis agent is already reading.
9 Build the weekly automation: 'Every Friday at 9am, check for new customer research notes added to Notion this week. If any new theme appears in three or more entries, flag it. Email a three-paragraph summary to me and post it to the #marketing Slack channel.' This runs without anyone remembering to do it.
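The flagging rule in that automation is simple: a theme is newsworthy if it crossed the mention threshold this week and was absent last week. A sketch of that rule (function name and threshold are illustrative):

```python
def weekly_flags(this_week_counts, last_week_themes, threshold=3):
    """Flag themes with >= threshold mentions this week that were absent last week."""
    return sorted(
        theme
        for theme, count in this_week_counts.items()
        if count >= threshold and theme not in last_week_themes
    )

# pricing already appeared last week; onboarding is under the threshold
flags = weekly_flags({"pricing": 4, "onboarding": 2, "sso": 3}, {"pricing"})
```

Keeping last week's themes as the baseline is what makes the digest surface *new* signals instead of repeating the same top clusters every Friday.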
10 Wire the Growth Analyst app to run alongside the synthesis cycle. It connects to PostHog and emails a weekly digest covering traffic and conversion changes. Cross-reading the research synthesis against the Growth Analyst digest tells you whether the messaging gaps your customers are naming actually show up in your funnel numbers.
11 When synthesis is done, tell Starch: 'Turn the top three themes from this week's customer research synthesis into a one-page campaign brief. Format it as: problem statement, supporting quotes, recommended message angle, and one content idea for each channel — email, paid, and organic.' Paste the output directly into your campaign planning doc.

See this running on Starch

Connect your tools, describe what you want, and the agent builds it. Closed beta is free.

Try it on Starch →
Worked example

Q1 2026 Nurture Sequence Rebuild — March 2026

Sample numbers from a real run
Customer interviews synthesized: 14
Recurring objection themes identified: 4
HubSpot closed-lost deals cross-referenced: 37
Overlapping objection themes (interviews + pipeline): 2
Hours saved vs. manual synthesis workshop: 6

The marketing team had 14 customer interviews sitting in Notion from January through March — a mix of win/loss calls, onboarding check-ins, and one round of messaging research a contractor ran in February. Nobody had synthesized them. The team ran the Starch synthesis prompt against the Notion database and got back four clusters: 'integration complexity' (9 of 14 interviews), 'unclear ROI in first 30 days' (7 of 14), 'competitor X comparison' (5 of 14), and 'onboarding time' (4 of 14). They then ran the HubSpot cross-reference: of 37 closed-lost deals in the same period, 'integration concerns' appeared as a lost-deal reason in 22 of them, and 'time to value' in 18. The overlap between interview themes and pipeline data was immediate and specific — the nurture sequence had been leading with feature depth, not time-to-first-value. The team rebuilt the first three emails in the Customer.io sequence around the 30-day ROI narrative that Friday. They didn't need a synthesis workshop. They needed 40 minutes and the right prompt.

Measurement

How you'll know it's working

Time from interview completion to usable synthesis artifact (target: same week, not same quarter)
Percentage of closed-lost deal reasons that appear as top-3 interview objection themes (measures whether research and pipeline are telling the same story)
Number of campaign briefs citing specific customer quotes vs. briefs written from assumption
MQL-to-SQL conversion rate on nurture sequences informed by research synthesis vs. prior quarter
Research coverage: percentage of ICPs interviewed per quarter with notes in the synthesis database
Comparison

What this replaces

The other ways teams handle this today, and how the Starch version compares.

Dovetail or Aurelius
Purpose-built for UX research synthesis and genuinely excellent at tagging and theming, but they don't connect to HubSpot, Gmail, or your Notion campaign docs — so the synthesis stays siloed from your pipeline and content workflow, and someone still has to translate findings into a brief manually.
Notion AI on your existing notes database
Already in your stack and fine for summarizing a single page, but it won't cluster across 14 interviews, cross-reference against HubSpot lost-deal data, or run a weekly automated digest — you're still doing the connective work yourself.
ChatGPT with manual copy-paste
Works for a one-off synthesis if you have time to export and paste everything, but there's no scheduled automation, no HubSpot integration, and no persistent connection to new notes added next week — it's a session, not a system.
Manual synthesis workshop (2-hour team session)
High-quality output when it happens, but a 3-person team running demand gen, content, and lifecycle simultaneously will reschedule this meeting four times before it occurs, and by then the research is stale.
On Starch RECOMMENDED

One platform — knowledge management, meeting notes, growth analyst all running on connected data. Setup in plain English; numbers stay current via scheduled syncs and live agent queries.

Try it on Starch →
FAQ

Frequently asked questions

Our interview notes are inconsistently formatted — some are bullet points, some are paragraphs, some are half-filled templates. Can Starch still synthesize across them?
Yes. The agent reads Notion pages as text regardless of formatting. Inconsistent structure means the clustering output might be slightly less precise on a first pass, but you can prompt Starch to normalize: 'Before clustering, read each note and rewrite it as: interviewee role, top concern mentioned, key quote, outcome.' The agent runs that normalization step first, then synthesizes across the cleaned versions.
We record interviews in Zoom, not Loom. Does that work?
If you export Zoom transcripts and save them to Notion or Google Drive, the agent reads them through the Notion sync or via browser automation on Drive. Starch can also automate through the Zoom web interface directly — no Zoom API required. The practical path most teams take is exporting the .vtt or .txt transcript file into the same Notion database where other notes live, so everything is in one place for synthesis.
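If you want to pre-clean the .vtt exports before dropping them into Notion, stripping the cue numbers and timestamps is a few lines of Python. This is an optional convenience sketch, not something Starch requires:

```python
import re

def vtt_to_text(vtt: str) -> str:
    """Strip WEBVTT headers, cue numbers, and timestamps; keep spoken lines."""
    kept = []
    for line in vtt.splitlines():
        line = line.strip()
        if not line or line == "WEBVTT" or line.isdigit():
            continue  # blank lines, file header, cue numbers
        if re.match(r"\d{2}:\d{2}:\d{2}[.,]\d{3} --> ", line):
            continue  # timestamp lines
        kept.append(line)
    return " ".join(kept)

sample = """WEBVTT

1
00:00:01.000 --> 00:00:04.000
Honestly, the integration scoping scared us.

2
00:00:04.500 --> 00:00:07.000
We needed ROI inside the first month."""
text = vtt_to_text(sample)
```

The output is plain prose, which clusters better than transcript markup and keeps the Notion pages readable for humans too.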
Will Starch store my customer interview transcripts? We have some sensitivity concerns about what customers said on those calls.
Starch is not SOC 2 Type II certified today. That's worth knowing before you put sensitive customer data in. If your interviews contain PII or confidential customer information, check with your legal or security team before connecting transcript data. For many small marketing teams this is a non-issue; for teams in regulated industries or with enterprise data agreements, it's a real constraint and we'd rather you know upfront.
Can this replace the analyst we've been asking the CEO to approve headcount for?
It replaces the synthesis and cross-referencing work that analyst would spend the first two weeks doing — clustering themes, connecting interview data to pipeline data, producing the brief. It doesn't replace strategic judgment about which findings to act on, which customers to discount, or what the right message shift should be. Think of it as doing the prep work so the 45 minutes you spend on research actually produce something, rather than three hours of highlighting and rearranging sticky notes.
We use Customer.io for nurture, not HubSpot. Can Starch pull the sequence performance data too?
Yes. Connect Customer.io from Starch's integration catalog — the agent queries it live when your app runs. You can pull campaign open rates, click rates, and conversion events from Customer.io and include them in the brief alongside the research synthesis. A prompt like 'Show me open rates for each email in the Q1 nurture sequence and flag any step where open rate dropped more than 15 points from the previous email' gives you the performance side to pair with the qualitative research.
How is this different from just asking ChatGPT to synthesize my notes after I paste them in?
Three things: First, there's no manual export — Starch reads from your Notion database directly on a schedule. Second, the cross-reference against HubSpot deal data is something a standalone chat session can't do without you building and pasting that export too. Third, the weekly automated digest means synthesis happens continuously, not only when someone remembers to run it. If new interviews go into Notion this week, the Friday digest picks them up automatically.

Ready to synthesize customer research interviews on Starch?

Request closed-beta access. Everything is free during beta.
