How to run a performance review cycle for Small Customer Success Teams

People & HR · For Small Customer Success Teams · 4 apps · 12 steps · ~24 min to set up

Your three-person CS team is overdue for performance reviews and nobody has time to run them properly. You're trying to evaluate account ownership, renewal outcomes, QBR quality, and response times — but the data lives in HubSpot deals, Intercom conversation logs, Gmail threads, and a spreadsheet someone built last quarter. Gainsight or Lattice would solve this if you had a CS-ops person and $80K to spend. Instead you're cobbling together a Google Form, a shared doc, and a manager gut check. You don't have clear metrics on who owns what or how accounts are trending. The review ends up being vibes-based, which means your best CS rep doesn't get recognized and the real gaps don't get fixed.

Outcome

What you'll set up

A structured performance review app that pulls actual account data — renewals closed, response time, expansion signals — from HubSpot and Gmail so reviews are grounded in evidence, not memory
Automated review prompts and self-assessment forms sent to each CS team member on a cadence you define, with responses collected and summarized before your 1:1
A living scorecard per rep that tracks QBR completion rate, renewal rate, ticket volume handled, and portfolio health over time — all without a CS-ops hire
The Starch recipe

Apps, data, and prompts

The combination of Starch apps, the data sources they pull from, and the prompts you use to drive them.

Data sources & config

Starch syncs your HubSpot data on a schedule (deals, contacts, owners) and syncs your Gmail on a schedule (messages and thread activity). Connect Intercom from Starch's integration catalog — the agent queries it live when your review app needs ticket volume or response-time data per rep. Connect Google Calendar via scheduled sync to pull 1:1 and QBR meeting history. All rep scorecards and rubric docs live in the Knowledge Management app; meeting summaries from 1:1s are captured in Meeting Notes and linked back to each rep's review record.

Prompts to copy
Build me a performance review app for a 3-person customer success team. Pull each rep's HubSpot deal data — renewals won, expansions, churn — and their Gmail thread activity by account. Show me a scorecard per rep with those metrics, plus a self-assessment section they fill out before our 1:1.
Create a knowledge base that stores our CS review rubric — what good looks like for QBR delivery, response time, and account health — so every rep can see the standard and we can reference it in reviews.
After each performance 1:1, transcribe and summarize the conversation. Extract agreed action items, tag them to the rep, and add due dates so we can track follow-through at the next review.
Create a task for me to send Q2 self-assessment forms to Jordan and Priya by April 18, urgent, and remind me again on April 16 if I haven't done it.
Run these in Starch → or paste them into your favorite agent
Walkthrough

Step-by-step

1 Connect HubSpot via scheduled sync so Starch has your deal and contact data — who owns which accounts, which renewals closed, which deals are at risk — without you exporting anything manually.
2 Connect Gmail via scheduled sync so Starch can measure email response patterns and account communication volume per rep across your 250 accounts.
3 Connect Intercom from Starch's integration catalog so the agent can query live ticket volume, CSAT scores, and first-response time broken down by CS rep for the review period.
4 Tell Starch: 'Build me a performance review app for a 3-person CS team. Pull each rep's HubSpot renewal rate, expansion deals, and churn events for Q1. Pull their Gmail activity by account. Pull their Intercom ticket volume and average response time. Show me a scorecard per rep.' Starch builds the app.
5 Add your review rubric to the Knowledge Management app — what a strong QBR looks like, your response-time SLA, how you weight renewals vs. expansion — so the standard is documented and every rep can see it before self-assessment.
6 Set up an automation: 'Two weeks before each quarterly review cycle, send each CS rep a self-assessment form with five questions based on our review rubric. Collect their responses and attach them to their scorecard in the review app.' Starch schedules this to repeat quarterly.
7 Before each 1:1, open the rep's scorecard in the Starch app. The renewal rate, expansion pipeline, Intercom response time, and QBR completion count are pulled from live data — you walk in knowing the numbers already.
8 Run the 1:1 with Meeting Notes active. It transcribes in real time, generates a summary of key decisions after the call, and extracts action items with the rep's name attached.
9 Review the extracted action items from Meeting Notes. Use Task Manager to assign follow-through tasks — 'Jordan to build account health template by May 15' — so commitments made in the review don't evaporate.
10 After all three reviews are done, tell Starch: 'Summarize the Q1 performance review cycle. Show me each rep's scores, the action items we agreed on, and flag any patterns — like if response time is down across the board — that I should address as a team issue, not a rep issue.'
11 Store the completed review summaries in Knowledge Management so you have a documented record for comp conversations, promotions, or if you need to make a harder call later.
12 Set a quarterly automation to remind you to kick off the next review cycle 30 days in advance — so this doesn't become a once-a-year scramble at the end of the fiscal year.
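Step 10's "team issue, not a rep issue" flag boils down to a simple check: did a metric move in the same direction for every rep between quarters? Here is a minimal sketch of that logic — illustrative only, not Starch's actual implementation; the rep names and response-time numbers are hypothetical:

```python
def team_pattern(prev: dict, curr: dict) -> str:
    """Classify a quarter-over-quarter metric shift as team-wide or rep-specific.

    prev/curr map rep name -> metric value (e.g. avg first-response hours).
    If every rep moved the same direction, treat it as a systemic issue.
    """
    deltas = {rep: curr[rep] - prev[rep] for rep in curr}
    if all(d > 0 for d in deltas.values()):
        return "team-wide increase"
    if all(d < 0 for d in deltas.values()):
        return "team-wide decrease"
    return "rep-specific"

# Hypothetical avg first-response times (hours), last quarter vs. this quarter
prev = {"Jordan": 3.0, "Priya": 5.5, "Marcus": 2.5}
curr = {"Jordan": 4.0, "Priya": 7.0, "Marcus": 3.0}
print(team_pattern(prev, curr))  # team-wide increase
```

When every rep's response time rises at once, the fix is probably staffing or process, not individual coaching — which is exactly the distinction the step-10 prompt asks the agent to draw.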

See this running on Starch

Connect your tools, describe what you want, and the agent builds it. Closed beta is free.

Try it on Starch →
Worked example

Q1 2026 CS Performance Review — 3-Rep Team

Sample numbers from a real run
Jordan — Renewals closed (Q1): 8
Jordan — Expansion deals sourced: 2
Jordan — Avg. Intercom first-response time: 4 hrs
Priya — Renewals closed (Q1): 6
Priya — Expansion deals sourced: 4
Priya — Avg. Intercom first-response time: 7 hrs
Marcus — Renewals closed (Q1): 5
Marcus — Expansion deals sourced: 1
Marcus — Avg. Intercom first-response time: 3 hrs

Before building this in Starch, your Q1 review for Jordan, Priya, and Marcus took three days to prepare — pulling HubSpot deal exports into a spreadsheet, estimating response times from memory, and asking each rep to email you a self-summary the night before. With the Starch review app in place, you open the scorecard the morning of each 1:1 and the data is already there. Jordan closed 8 renewals and sourced 2 expansion deals — solid renewal performance, but her 4-hour average first-response time in Intercom is creeping up. Priya sourced 4 expansion deals (best on the team) but only closed 6 renewals and her response time is 7 hours — a pattern the Starch summary flagged as potentially indicating she's spending too much time prospecting and not enough on at-risk accounts. Marcus has the fastest response time at 3 hours but sourced only 1 expansion deal all quarter. None of these patterns were visible before. The Knowledge Management rubric gave each rep a concrete standard to self-assess against, and the Meeting Notes summaries from each 1:1 gave you a written record of what you agreed on — including Marcus committing to identify 3 expansion candidates by June 1. The whole prep cycle dropped from 3 days to about 45 minutes.

Measurement

How you'll know it's working

Renewal rate per rep (renewals closed / renewals due, by quarter)
Expansion deals sourced per rep
Average first-response time in Intercom by rep
QBR completion rate (QBRs delivered on time / QBRs scheduled per quarter)
Action-item follow-through rate from prior review cycle
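The two ratio metrics above are straightforward divisions. A quick sketch of how you might compute them yourself from the quarter's counts — an illustrative example, not Starch code; the numbers are made up:

```python
def renewal_rate(closed: int, due: int) -> float:
    """Renewals closed / renewals due for the quarter."""
    return closed / due if due else 0.0

def follow_through_rate(done: int, agreed: int) -> float:
    """Action items completed / action items agreed at the prior review."""
    return done / agreed if agreed else 0.0

# Hypothetical quarter for one rep: 8 of 10 renewals closed,
# 3 of 4 action items from last cycle completed
print(renewal_rate(8, 10))        # 0.8
print(follow_through_rate(3, 4))  # 0.75
```

The zero-denominator guard matters for new reps with no renewals due yet, so the scorecard shows 0 rather than erroring out.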
Comparison

What this replaces

The other ways teams handle this today, and how the Starch version compares.

Gainsight or ChurnZero
Purpose-built for CS performance and health scoring, but costs six figures and requires a CS-ops person to configure — not realistic for a 3-person team without dedicated ops support.
Lattice or Leapsome
Good performance review workflow, but it doesn't connect to your HubSpot deal data or Intercom metrics — you still have to manually pull the CS-specific numbers that make the review meaningful.
Google Sheets + Google Forms
Free and flexible, but every review cycle you're rebuilding the scorecard from scratch and manually importing data from HubSpot; no automation and no institutional memory between cycles.
HubSpot reporting + 1:1 docs in Notion
You're probably already doing this and it works okay for deal data, but it doesn't capture Intercom response times, meeting transcripts, or follow-through tasks in one place — review prep is still a manual assembly job.
On Starch (recommended)

One platform — CRM, knowledge management, and meeting notes, all running on connected data. Setup in plain English; numbers stay current via scheduled syncs and live agent queries.

Try it on Starch →
FAQ

Frequently asked questions

Does Starch store the performance review data, or does it just query it live each time?
Data from your scheduled-sync connections — HubSpot, Gmail, Google Calendar — is synced into Starch on a schedule and stored there. That means your rep scorecards are built from a stable snapshot, not a live query that might return different numbers each time you open the app. Intercom data is queried live from Starch's integration catalog when the app runs — so it reflects the current state at the time of each review. The review summaries and rubric docs you store in Knowledge Management are always there.
Can Starch measure things like QBR quality, not just renewal numbers?
Renewal rates and response times come straight from data. QBR quality is more qualitative — Starch can capture what was said in each QBR 1:1 via Meeting Notes, track whether QBRs were delivered on schedule (using Calendar data), and store your rubric for what a good QBR looks like in Knowledge Management. You'd be rating QBR quality yourself based on that context; Starch surfaces the evidence and stores the record, but it doesn't auto-score judgment calls.
What if our CS team uses Zendesk instead of Intercom?
Zendesk is reachable from Starch's integration catalog — connect it and the agent queries ticket volume and response-time data live when your review app runs. Same pattern as Intercom, different connection.
Is this secure enough for performance data on real employees?
Starch is not SOC 2 Type II certified yet — that's worth knowing. If your company has strict data-handling requirements for HR records, check with your IT or legal team before storing formal performance documentation in Starch. For small teams using this as an internal ops tool rather than an HR system of record, most founders find it fits fine.
Can I set this up to run automatically each quarter without rebuilding it?
Yes. Once you've described the review workflow, Starch saves it and runs it on whatever cadence you set. The self-assessment form goes out automatically, the scorecard refreshes from live data, and you get a reminder to kick off the cycle. You're not rebuilding anything — you're just showing up to the 1:1 with the data already in front of you.
What if one rep owns very different account types than another — enterprise vs. SMB? Can the scorecard account for that?
Tell Starch that when you describe the app: 'Jordan owns our 5 enterprise accounts; Priya and Marcus own the SMB portfolio. Weight renewal rate differently for enterprise — one lost renewal is a bigger deal. Show me separate benchmarks by segment.' The app will reflect that. Natural-language authoring means you can specify the nuances that a generic HR tool would never know to ask about.

Ready to run a performance review cycle on Starch?

Request closed-beta access. Everything is free during beta.
