How to run a performance review cycle as Chief of Staff and Founder's Office
You're the one running performance reviews for a 150-person company, but you didn't build the system — you inherited spreadsheets, a Notion doc someone made in 2023, and a Lattice or Leapsome instance that half the managers actually use. Every cycle, you're chasing 22 managers for self-reviews, manually compiling ratings into a master sheet, cross-referencing comp bands from a locked HR file, and synthesizing patterns for the CEO before the calibration meeting. The whole thing takes four to six weeks and burns 15+ hours of your personal time on coordination overhead that shouldn't require a human. You also have no single view of who's behind, who's flagged, and what the distribution looks like across functions.
What you'll set up
Apps, data, and prompts
The combination of Starch apps, the data sources they pull from, and the prompts you use to drive them.
Starch syncs your Notion data on a schedule (for any existing review docs or templates already living there), syncs your Slack data on a schedule (for sending reminders and logging confirmations), and syncs your Google Calendar data on a schedule (to anchor reminder timing to the review window milestones you've set). HR headcount data from Paylocity or ADP syncs on a schedule to pull the current employee roster into the tracker automatically. Any performance platform like Lattice or Leapsome that doesn't have a direct sync is reachable from Starch's integration catalog; the agent queries it live when the tracker needs submission status. If your current perf tool has no API, Starch automates it through your browser — no API needed.
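The reconciliation at the heart of this setup — joining the synced HR roster against live submission status to find who's behind, grouped by manager — can be sketched as plain logic. This is a minimal illustration, not Starch's actual schema; every field name and record below is a made-up assumption.

```python
# Hypothetical sketch of the roster-vs-submissions join.
# Field names ("employee_id", "manager", "status") are illustrative assumptions.

def outstanding_self_reviews(roster, submissions):
    """Return {manager: [direct reports missing a self-review]}."""
    submitted = {s["employee_id"] for s in submissions if s["status"] == "submitted"}
    missing = {}
    for emp in roster:
        if emp["employee_id"] not in submitted:
            missing.setdefault(emp["manager"], []).append(emp["name"])
    return missing

# Toy data standing in for the synced roster and the live perf-tool query.
roster = [
    {"employee_id": 1, "name": "Ana", "manager": "Lee"},
    {"employee_id": 2, "name": "Bo", "manager": "Lee"},
    {"employee_id": 3, "name": "Cy", "manager": "Max"},
]
submissions = [{"employee_id": 2, "status": "submitted"}]

print(outstanding_self_reviews(roster, submissions))
# {'Lee': ['Ana'], 'Max': ['Cy']}
```

The per-manager grouping is what makes the reminder DMs possible: each manager gets one message listing only their own outstanding reports.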
Step-by-step
See this running on Starch
Connect your tools, describe what you want, and the agent builds it. Closed beta is free.
May 2026 Mid-Year Review Cycle — 150-person company
| Metric | Value |
| --- | --- |
| Employees in scope | 148 |
| Managers accountable for submissions | 22 |
| Self-reviews outstanding at Day 5 (caught by automation) | 31 |
| Reminder Slack DMs sent without CoS involvement | 19 |
| Departments flagged for calibration (avg rating > 4.2) | 3 |
| Hours saved vs. prior manual cycle | 14 |
It's May 9, the self-review deadline. In prior cycles you would have spent the morning opening a spreadsheet, cross-referencing a Notion roster, and drafting individual Slack messages to the 11 managers whose teams were behind. This cycle, Starch's morning automation ran at 9 AM and found 31 outstanding self-reviews across 19 managers. It drafted and sent DMs to each manager listing the specific direct reports who hadn't submitted — you saw the Slack log in the tracker and approved nothing, because you'd already approved the message template on Day 1.

By the calibration session on May 23, the tracker showed a rating distribution across all 148 employees. Engineering skewed high — average 4.3 — while the new GTM team averaged 3.1, its first full cycle. Starch flagged both automatically. You typed: "Generate a calibration brief for the CEO that notes the Engineering and GTM distribution outliers, compares to last cycle's averages from the Notion archive, and lists the seven employees flagged for promotion." The CEO read a five-paragraph brief before walking into the room.

The whole cycle ran in 18 days instead of 28, and your personal coordination hours dropped from roughly 16 to under 3.
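The "flagged both automatically" step is a simple distribution check: compute each department's average rating and flag anything outside a calibration band. A minimal sketch, assuming the scenario's > 4.2 high threshold and a hypothetical low threshold of 3.2; the function name and sample data are illustrative, not Starch's implementation.

```python
# Hypothetical sketch of per-department calibration flagging.
# Thresholds: high > 4.2 (from the scenario); low < 3.2 is an assumed example.
from collections import defaultdict
from statistics import mean

def flag_departments(ratings, high=4.2, low=3.2):
    """ratings: list of (department, score). Returns {dept: (flag, avg)}."""
    by_dept = defaultdict(list)
    for dept, score in ratings:
        by_dept[dept].append(score)
    flags = {}
    for dept, scores in by_dept.items():
        avg = round(mean(scores), 2)
        if avg > high:
            flags[dept] = ("high", avg)
        elif avg < low:
            flags[dept] = ("low", avg)
    return flags

# Toy ratings standing in for the tracker's 148 rows.
ratings = [
    ("Engineering", 4.5), ("Engineering", 4.1),
    ("GTM", 3.0), ("GTM", 3.2),
    ("Sales", 3.8),
]
print(flag_departments(ratings))
# {'Engineering': ('high', 4.3), 'GTM': ('low', 3.1)}
```

Anything returned here is what lands in the calibration brief; departments inside the band stay off the CEO's radar.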
How you'll know it's working
What this replaces
The other ways teams handle this today, and how the Starch version compares.
One platform — project management, task management, and knowledge management, all running on connected data. Setup in plain English; numbers stay current via scheduled syncs and live agent queries.
Try it on Starch →
Frequently asked questions
We don't have a formal performance platform — reviews happen in Google Forms and a shared Notion doc. Can Starch still work?
Can Starch send the Slack reminders automatically or does someone have to trigger them?
Does Starch store all the employee ratings data? Is that a security concern?
We use Workday for HR. Can Starch pull the employee roster from there?
Can Starch run the calibration session itself — like facilitate the conversation?
What if a manager wants to submit feedback through a form rather than directly into the tracker?
Related guides for Chief of Staff and Founder's Office
Vendor and category spend analysis means knowing, at any point in time, where your money is actually going — which vendors are getting paid, how much, how often, and whether that number is creeping up or down relative to last month. Read guide →

Investor Q&A and info requests are the administrative tax on raising capital and maintaining LP relationships. Read guide →

A 13-week cash flow forecast is a rolling, week-by-week view of what hits your account and what leaves it — covering roughly one quarter ahead. Read guide →

An annual operating budget is a forward-looking plan that maps expected revenue against planned spending for the next 12 months, broken into categories you'll actually track — payroll, software, marketing, COGS, facilities. Read guide →

Run a Performance Review Cycle for other operators
The AI stack built for small HR teams. Read guide →

The AI stack built for boutique professional services firms. Read guide →

The AI stack built for small law and accounting practices. Read guide →

The AI stack built for independent clinic owner-operators. Read guide →

Ready to run a performance review cycle on Starch?
Request closed-beta access. Everything is free during beta.