How to run a performance review cycle as Chief of Staff and Founder's Office

People & HR · For Chief of Staff and Founder's Office · 3 apps · 10 steps · ~20 min to set up

You're the one running performance reviews for a 150-person company, but you didn't build the system — you inherited spreadsheets, a Notion doc someone made in 2023, and a Lattice or Leapsome instance that half the managers actually use. Every cycle, you're chasing 22 managers for self-reviews, manually compiling ratings into a master sheet, cross-referencing comp bands from a locked HR file, and synthesizing patterns for the CEO before the calibration meeting. The whole thing takes four to six weeks and burns 15+ hours of your personal time on coordination overhead that shouldn't require a human. You also have no single view of who's behind, who's flagged, and what the distribution looks like across functions.

Outcome

What you'll set up

A centralized performance review tracker that aggregates submission status, ratings, and calibration notes across all managers and functions — no more master spreadsheet to chase down
An automated reminder and follow-up system that pings managers via Slack or email when their self-reviews or peer feedback submissions are overdue, without you sending those messages manually
A real-time calibration dashboard that surfaces rating distributions by department, flags outliers, and generates a summary brief the CEO can read in five minutes before the calibration session
The Starch recipe

Apps, data, and prompts

The combination of Starch apps, the data sources they pull from, and the prompts you use to drive them.

Data sources & config

Starch syncs your Notion data on a schedule (for any existing review docs or templates already living there), syncs your Slack data on a schedule (for sending reminders and logging confirmations), and syncs your Google Calendar data on a schedule (to anchor reminder timing to the review window milestones you've set). HR headcount data from Paylocity or ADP syncs on a schedule to pull the current employee roster into the tracker automatically. Any performance platform like Lattice or Leapsome that doesn't have a direct sync is reachable from Starch's integration catalog; the agent queries it live when the tracker needs submission status. If your current perf tool has no API, Starch automates it through your browser — no API needed.

Prompts to copy
Build me a performance review cycle tracker with columns for employee name, manager, department, self-review submitted (yes/no), peer feedback submitted (yes/no), manager rating (1–5), calibration status (pending/calibrated/approved), and comp recommendation. Group rows by department and flag any row where self-review is not submitted and today is past the deadline.
Set up an automation that runs every weekday morning during the review window: check the tracker for any manager whose team has outstanding self-reviews, draft a Slack message to that manager with the list of missing submissions, and log that the reminder was sent.
Build me a calibration summary view that shows the rating distribution across all departments as a histogram, flags any department where the average rating is above 4.2 or below 3.0, and generates a two-paragraph narrative summary I can paste into the board prep doc.
Run these in Starch → or paste them into your favorite agent
Walkthrough

Step-by-step

1 Connect Paylocity or ADP so Starch syncs your current employee roster — this becomes the source of truth for who should appear in the review tracker, eliminating the manual step of copying a headcount list into a spreadsheet.
2 Connect Notion so Starch syncs your existing review templates, competency frameworks, and prior-cycle calibration notes — the agent can reference these when generating summaries and flag ratings that diverge significantly from last cycle's.
3 Connect Slack and Google Calendar so Starch can anchor automated reminders to your review-window milestones and send them directly into the channels or DMs where your managers actually live.
4 Tell Starch: 'Build me a performance review cycle tracker with columns for employee name, manager, department, self-review submitted, peer feedback submitted, manager rating, calibration status, and comp recommendation. Pull the employee list from Paylocity and group by department.' Starch builds the app — you don't configure a schema manually.
5 Add your review window dates by typing: 'Set the self-review deadline to May 9, peer feedback deadline to May 16, and calibration session to May 23. Flag any row that's overdue relative to today.' The app becomes deadline-aware without you maintaining a separate timeline doc.
6 Activate the daily reminder automation: 'Every weekday at 9 AM during the review window, check for managers with outstanding submissions. Draft a Slack DM to each manager listing which team members haven't submitted, and log the reminder in the tracker.' You approve the message template once; it runs without you.
7 As managers submit ratings, the tracker updates. Tell Starch: 'Show me a calibration view with rating distribution by department and flag any department where average rating is above 4.2 or below 3.0.' You now have the outlier analysis that used to take a Saturday morning in Excel.
8 Before the calibration meeting, type: 'Generate a two-paragraph narrative summary of this cycle's rating distribution, note which departments skew high or low versus last cycle's Notion notes, and list the five employees flagged for promotion discussion.' Paste directly into the CEO brief.
9 Run the calibration session using the live tracker as the working doc — managers see the same view, ratings get updated in real time, and calibration decisions are logged in the same row as the original submission data.
10 After calibration closes, type: 'Mark all calibrated rows as approved, generate a CSV of final ratings and comp recommendations sorted by department, and archive this cycle's tracker to Notion with a summary of participation rates and timeline adherence.' The cycle closes cleanly, with an audit trail.
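The reminder and outlier logic in steps 6 and 7 reduces to two small pieces: grouping outstanding submissions under their manager, and flagging departments whose average rating falls outside a band. A hedged sketch, under the assumption that tracker rows arrive as plain dicts (not how Starch implements it):

```python
from collections import defaultdict
from statistics import mean

def outstanding_by_manager(rows):
    """Group employees with missing self-reviews under their manager (step 6)."""
    missing = defaultdict(list)
    for r in rows:
        if not r["self_review_submitted"]:
            missing[r["manager"]].append(r["employee"])
    return dict(missing)

def flagged_departments(rows, high=4.2, low=3.0):
    """Departments whose average submitted rating skews high or low (step 7)."""
    by_dept = defaultdict(list)
    for r in rows:
        if r["manager_rating"] is not None:
            by_dept[r["department"]].append(r["manager_rating"])
    return {d: mean(v) for d, v in by_dept.items() if mean(v) > high or mean(v) < low}
```

The output of `outstanding_by_manager` is exactly the per-manager list the 9 AM automation turns into Slack DMs.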

See this running on Starch

Connect your tools, describe what you want, and the agent builds it. Closed beta is free.

Try it on Starch →
Worked example

May 2026 Mid-Year Review Cycle — 150-person company

Sample numbers from a real run
Employees in scope: 148
Managers accountable for submissions: 22
Self-reviews outstanding at Day 5 (caught by automation): 31
Reminder Slack DMs sent without CoS involvement: 19
Departments flagged for calibration (avg rating > 4.2): 3
Hours saved vs. prior manual cycle: 14

It's May 9, the self-review deadline. In prior cycles you would have spent the morning opening a spreadsheet, cross-referencing a Notion roster, and drafting individual Slack messages to the 11 managers whose teams were behind. This cycle, Starch's morning automation ran at 9 AM and found 31 outstanding self-reviews across 19 managers. It drafted and sent DMs to each manager listing the specific direct reports who hadn't submitted — you saw the Slack log in the tracker and approved nothing, because you'd already approved the message template on Day 1. By the calibration session on May 23, the tracker showed a rating distribution across all 148 employees. Engineering skewed high — average 4.3 — while the new GTM team, in its first full cycle, averaged 3.1. Starch flagged both automatically. You typed: 'Generate a calibration brief for the CEO that notes the Engineering and GTM distribution outliers, compares to last cycle's averages from the Notion archive, and lists the seven employees flagged for promotion.' The CEO read a five-paragraph brief before walking into the room. The whole cycle ran in 18 days instead of 28, and your personal coordination hours dropped from roughly 16 to under 3.

Measurement

How you'll know it's working

Submission completion rate by deadline (self-review %, peer feedback %) per department
Days from cycle kick-off to calibration session complete
Rating distribution spread by department (standard deviation, flagged outliers vs. company average)
CoS hours spent on coordination overhead per review cycle
Number of calibration exceptions requiring CEO or CHRO escalation
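As a concreteness check, the first and third metrics can be computed directly from tracker rows. A sketch with illustrative field names, assuming the same dict-shaped rows as above:

```python
from collections import defaultdict
from statistics import pstdev

def completion_rate(rows, field="self_review_submitted"):
    """Share of rows with the given submission field set, per department."""
    done, total = defaultdict(int), defaultdict(int)
    for r in rows:
        total[r["department"]] += 1
        done[r["department"]] += bool(r[field])
    return {d: done[d] / total[d] for d in total}

def rating_spread(rows):
    """Population standard deviation of submitted ratings, per department."""
    by_dept = defaultdict(list)
    for r in rows:
        if r.get("manager_rating") is not None:
            by_dept[r["department"]].append(r["manager_rating"])
    return {d: pstdev(v) if len(v) > 1 else 0.0 for d, v in by_dept.items()}
```

A department with a low spread and a high average is the classic calibration target: everyone rated "exceeds," nobody differentiated.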
Comparison

What this replaces

The other ways teams handle this today, and how the Starch version compares.

Lattice or Leapsome standalone
Purpose-built for performance workflows but lives in a silo — you still manually export data to build the calibration view and board-prep narrative the CEO needs, and it doesn't connect to your Notion, Slack, or headcount system without custom integration work.
Notion + Google Sheets manual stack
Free and flexible, but you are the integration layer — every reminder, every distribution analysis, and every summary brief is produced by a human (usually you) assembling data from multiple tabs.
15Five
Good for continuous feedback and OKR tracking, but calibration exports are flat CSVs and the tool doesn't know what's in your Notion, doesn't send Slack reminders based on your specific deadlines, and can't generate a narrative brief from your own company's data.
HR team running it in ADP or Paylocity directly
Keeps everything in the HRIS, which is clean for compliance, but offers almost no flexibility for the CoS to build a cross-functional calibration view or automate the manager communication layer.
On Starch RECOMMENDED

One platform — project management, task management, and knowledge management, all running on connected data. Setup in plain English; numbers stay current via scheduled syncs and live agent queries.

Try it on Starch →
FAQ

Frequently asked questions

We don't have a formal performance platform — reviews happen in Google Forms and a shared Notion doc. Can Starch still work?
Yes. Starch syncs your Notion data on a schedule and can read Google Sheets or Docs through its integration catalog. The tracker Starch builds becomes your performance platform for the cycle — you don't need to buy and configure Lattice first.
Can Starch send the Slack reminders automatically or does someone have to trigger them?
Fully automatic. You describe the logic once — 'every weekday morning during the review window, check for outstanding submissions and DM the responsible manager' — and Starch runs it on schedule. You can also trigger it manually anytime from the app if you want an off-cycle nudge.
Does Starch store all the employee ratings data? Is that a security concern?
Starch is not SOC 2 Type II certified today. If your company's InfoSec policy requires SOC 2 Type II for any system that handles performance data, confirm that before storing ratings in Starch. Many growth-stage companies at 150 people aren't yet enforcing that bar internally — but make the call with your HR and legal leads.
We use Workday for HR. Can Starch pull the employee roster from there?
Workday is available through Starch's integration catalog — the agent queries it live when your tracker needs the employee list. It's not a scheduled sync like Paylocity or ADP, so it queries on demand rather than keeping a stored snapshot. For a roster pull at cycle kick-off, that works fine.
Can Starch run the calibration session itself — like facilitate the conversation?
Starch builds the calibration view and generates the summary brief, but it doesn't run the meeting. What it does do: give you and your managers a live, shared tracker during the session where rating updates are reflected immediately, so you're not working from a static slide deck that's out of date by slide three.
What if a manager wants to submit feedback through a form rather than directly into the tracker?
Tell Starch to build a form view that maps to the tracker fields — managers fill out the form, and submissions populate the same rows the calibration dashboard reads. You can describe this in natural language and Starch builds it; no form tool configuration required.

Ready to run a performance review cycle on Starch?

Request closed-beta access. Everything is free during beta.
