How to synthesize customer research interviews as a small RevOps team

Strategy & Planning · For Small RevOps Teams · 3 apps · 12 steps · ~24 min to set up

After every customer discovery call, your notes live in three places: a Zoom transcript nobody indexed, a Gong clip the AE hasn't shared yet, and a Slack message you typed to yourself on the way to the next meeting. Before the CRO asks 'what are customers actually saying about pricing friction?' you're manually skimming 15 call recordings, copy-pasting quotes into a Google Doc, and trying to remember which deal stage those conversations came from. Cross-referencing what customers said with what's actually in HubSpot — deal stage, company size, segment — takes an afternoon you don't have. The synthesis never gets done properly, so the same objections keep surprising your reps.

Outcome

What you'll set up

A synthesis app that pulls interview notes, call transcripts, and CRM context together and surfaces themes, objections, and rep-ready talk tracks — without you building a spreadsheet every time
An auto-updating knowledge base where every customer insight is tagged by segment, deal stage, and topic so you can answer 'what did enterprise prospects say about onboarding?' in seconds instead of an afternoon of skimming
A recurring digest that flags new interview patterns and connects them to pipeline movement, so the CRO gets data-backed answers before the QBR, not a vague 'customers mentioned pricing a lot'
The Starch recipe

Apps, data, and prompts

The combination of Starch apps, the data sources they pull from, and the prompts you use to drive them.

Data sources & config

Starch syncs your HubSpot data (contacts, companies, deals, owners), your Gmail messages, your Notion pages and databases, and your Apollo.io contacts on a schedule, so interview threads, the call notes your team files in Notion, and sequence-level context all feed directly into the knowledge base. For call transcripts stored on external platforms your team can log into, Starch automates retrieval through your browser — no API needed.

Prompts to copy
Build me a customer research repository that ingests interview notes from Notion and Gmail threads, tags each insight by customer segment, deal stage, and topic category (pricing, onboarding, integrations, support), and lets me search across everything in plain English. When I ask 'what are enterprise customers saying about onboarding friction?' it should return exact quotes with the source and the HubSpot deal it came from.
Every Friday morning, send me a digest of the three most-repeated themes from customer interviews added this week, which segments they came from, and whether any of those themes correlate with deals that stalled or closed. Pull the deal outcome data from HubSpot.
Build me a view inside the Sales Agent CRM app that shows, for each open opportunity, what that account's contacts have said in past interviews — pulled from the knowledge base — so reps walking into a call can see the relevant context without asking me for a research summary.
Run these in Starch → or paste them into your favorite agent
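If you want a mental model of the tagging schema the first prompt describes, here is a minimal keyword-based sketch. All names and keyword lists are hypothetical: Starch builds its tagger from your plain-English description, and this is only an illustration of the shape of the result, not Starch's implementation.

```python
# Hypothetical sketch of the tagging schema from the first prompt.
# Keyword lists and function names are illustrative only.

SEGMENTS = {"SMB", "mid-market", "enterprise"}
DEAL_STAGES = {"discovery", "demo", "proposal", "closed-won", "closed-lost"}

# Keyword hints per topic category (invented for illustration).
TOPIC_KEYWORDS = {
    "pricing": ["price", "pricing", "tier", "packaging", "cost"],
    "onboarding": ["onboarding", "setup", "implementation", "ramp"],
    "integrations": ["integration", "API", "connect", "sync"],
    "support": ["support", "ticket", "response time", "SLA"],
}

def tag_topics(note_text: str) -> list[str]:
    """Return the topic categories whose keywords appear in a note."""
    text = note_text.lower()
    return [
        topic
        for topic, keywords in TOPIC_KEYWORDS.items()
        if any(kw.lower() in text for kw in keywords)
    ]
```

A two-sentence note and a five-paragraph debrief pass through the same function; thin notes simply contribute fewer tagged quotes, which matches how the FAQ below describes inconsistent note-taking.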
Walkthrough

Step-by-step

1 Connect HubSpot — Starch syncs contacts, companies, deals, and owners on a schedule. This is the backbone: every insight you capture will be linkable back to a real deal and a real segment.
2 Connect Notion — Starch syncs your pages and databases on a schedule. If your team files call notes or research docs in Notion, they're immediately available as source material for synthesis.
3 Connect Gmail — Starch syncs messages on a schedule. Customer emails and follow-up threads from discovery calls contain signal that never makes it into formal notes; now it does.
4 For any call recordings or transcripts living on a platform with a web UI (Gong, Chorus, Fireflies, or your own folder structure), Starch automates extraction through your browser — no API needed. Describe the source and Starch handles the retrieval.
5 Start the Knowledge Management app and describe your tagging schema in plain English: segment (SMB / mid-market / enterprise), deal stage at time of interview, and topic category. Starch auto-categorizes incoming content against that schema.
6 Describe the synthesis view you want: 'Show me all pricing-related quotes from mid-market prospects who were in late-stage deals at time of interview, sorted by most recent.' Starch builds that as a saved view you can return to anytime.
7 Wire the weekly digest: tell Starch 'Every Friday at 8am, identify the top three recurring themes from interviews ingested this week, note which segments they came from, and email me a summary with supporting quotes.' It runs automatically.
8 Add the HubSpot deal linkage: tell Starch 'When a new interview note is added, find the matching HubSpot deal or company by name or domain and attach the deal stage and owner to the record.' Now every insight has pipeline context.
9 Build the rep-facing surface inside the Sales Agent CRM app: 'For each open opportunity in HubSpot, show a panel with relevant quotes from past customer interviews with contacts at that account or similar accounts in the same segment.' Reps see research without asking you for it.
10 Set up an alert for new objection patterns: 'If three or more interviews in a two-week window mention the same topic category, send me a Slack message with the quotes and the deal context.' This catches emerging issues before they show up in win/loss data.
11 Before the next QBR, type: 'Summarize what customers said about [topic] this quarter, broken down by segment and deal outcome. Include representative quotes.' Starch generates the slide-ready brief from your own indexed research.
12 Review and refine the tagging schema quarterly — tell Starch to add or rename categories as your product and sales motion evolves. No schema migration, no spreadsheet rebuild.

See this running on Starch

Connect your tools, describe what you want, and the agent builds it. Closed beta is free.

Try it on Starch →
Worked example

Q1 2026 Pricing Friction Analysis — March Synthesis Sprint

Sample numbers from a real run
Interviews ingested (Notion + Gmail): 34
Unique themes auto-tagged: 11
Quotes tagged 'pricing / packaging': 47
Deals linked to those quotes (HubSpot): 19
Deals that stalled at proposal stage: 8
Hours saved vs. manual synthesis: 6

Going into the March QBR, the CRO wanted to know whether pricing confusion was a real blocker or just noise from one vocal prospect. In previous quarters, answering that took you a half-day: digging through Notion folders, searching Gmail for follow-ups, cross-referencing deal IDs. This time, you typed 'Show me all quotes tagged pricing or packaging from Q1, filtered to mid-market deals, and tell me what stage those deals were in when the interview happened' into Starch. In under two minutes it returned 47 quotes across 34 interviews, linked to 19 HubSpot deals. Eight of those deals stalled at proposal stage. The pattern was clear: prospects weren't confused about price — they were confused about which tier included the feature they'd been demoed. That's a packaging and enablement problem, not a price problem. You walked into the QBR with a two-paragraph brief and 12 supporting quotes instead of a shrug. The CRO updated the proposal template the following week.
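The query behind this worked example is essentially a filter plus two set counts: keep quotes tagged pricing or packaging, restrict to the segment you care about, then count distinct linked deals and how many sit at proposal stage. A minimal sketch, with invented records rather than the real dataset, and treating deals still sitting at proposal as stalled, as the example does:

```python
# Minimal sketch of the worked-example query. Field names are
# hypothetical; the real run covered 34 interviews and 47 quotes.

def pricing_friction_report(quotes: list[dict]) -> dict:
    """quotes: [{'topics', 'segment', 'deal_id', 'deal_stage'}]"""
    hits = [
        q for q in quotes
        if q["segment"] == "mid-market"
        and {"pricing", "packaging"} & set(q["topics"])
    ]
    linked = {q["deal_id"] for q in hits if q["deal_id"]}
    stalled = {q["deal_id"] for q in hits if q["deal_stage"] == "proposal"}
    return {"quotes": len(hits), "deals": len(linked), "stalled": len(stalled)}
```

Counting distinct deal IDs rather than quotes is the step that turns 47 quotes into 19 deals; without it, repeated quotes from the same account would inflate the apparent scale of the problem.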

Measurement

How you'll know it's working

Time from last interview to synthesized insight brief (target: same day, not same week)
% of open opportunities in HubSpot with at least one tagged customer insight attached
Number of recurring objection themes identified per quarter vs. themes that surfaced as surprises in win/loss
CRO / VP Sales questions answered from the research base without manual lookup (tracked informally as 'Slack requests closed from Starch')
Deal stage distribution of interviews — are you capturing research at the right moments, or only post-close?
Comparison

What this replaces

The other ways teams handle this today, and how the Starch version compares.

Gong + manual Google Docs synthesis
Gong surfaces call-level signal well but the synthesis step — connecting themes to pipeline data and making it searchable across all accounts — still falls to you manually every time someone asks a cross-deal question.
Dovetail or Grain for research repositories
Purpose-built for UX research workflows; doesn't connect to your CRM, so insights live in a silo disconnected from deal stage, segment, or rep context unless you build that bridge yourself.
HubSpot notes + custom deal properties
HubSpot is the right system of record for deals, not for synthesizing what 34 different contacts said across 11 theme categories — the search and cross-account analysis just isn't built for that.
Notion database with manual tagging
Works fine when it's just you maintaining it; breaks down when you want to query across segments, correlate with deal outcomes, or automatically flag a pattern that's appeared five times this month.
On Starch RECOMMENDED

One platform: the Knowledge Management, Growth Analyst, and Sales Agent CRM apps all running on connected data. Setup in plain English; numbers stay current via scheduled syncs and live agent queries.

Try it on Starch →
FAQ

Frequently asked questions

Our call recordings live in Gong. Can Starch get to them?
If your team can log into Gong through a browser, Starch can automate retrieval through your browser — no Gong API required. You describe what you want pulled (transcript text, call date, account name) and Starch handles the navigation. That said, if you already export transcripts to Notion or email summaries through Gmail, those routes are cleaner because Starch syncs both on a schedule.
Will this replace actually talking to customers and writing up insights?
No. Starch synthesizes what's already been captured — it doesn't do the interviews or write the original notes. The value is that once your team files a note in Notion or sends a follow-up in Gmail, Starch makes it findable, linkable to a deal, and queryable across all your research without you doing anything extra.
Our interview notes aren't consistent — different reps write different amounts. Does the tagging still work?
Yes. You describe the tagging schema in plain English and Starch applies it to whatever text exists. A two-sentence note and a five-paragraph debrief both get tagged. The thinner the note, the fewer quotes it contributes, but it won't break the system — it'll just show up as a lower-confidence data point.
Is Starch SOC 2 certified? We're cautious about what touches customer conversation data.
Starch is not SOC 2 Type II certified today. If your security or legal team requires SOC 2 Type II for any system that processes customer conversation data, that is a real constraint; raise it early in your evaluation.
We use Salesforce, not HubSpot. Can this still work?
Yes. Connect Salesforce from Starch's integration catalog; the agent queries it live when your synthesis app needs deal context. The scheduled-sync depth is deeper for HubSpot (Starch syncs HubSpot data on a schedule), but Salesforce deal and contact data is fully reachable for linking insights to pipeline.
How is this different from just building a Notion database with a good template?
A Notion database requires someone to manually tag every entry, and it can't answer 'which of these themes correlate with deals that stalled?' without you building a formula or exporting to Sheets. Starch auto-tags incoming content, links it to HubSpot deal data, and lets you query across everything in plain English. The maintenance burden is much lower once it's set up.

Ready to synthesize customer research interviews on Starch?

Request closed-beta access. Everything is free during beta.
