How to run NPS and CSAT surveys with AI

Customer Support · 3 AI tools · 7 steps · 6 friction points

NPS and CSAT surveys sit at the center of most customer feedback loops. You send them after purchases, support resolutions, onboarding milestones, or on a rolling basis to a slice of your customer base. The goal is simple: get a number that tells you whether customers are happy and trending in the right direction. The operational reality is messier — writing questions, distributing surveys, collecting responses, aggregating scores, and actually doing something with the results across your team.

The workflow feels like a natural fit for AI because so much of it is language work. Writing neutral, unbiased survey questions is harder than it looks. Summarizing 200 open-ended responses into themes you can act on is exactly the kind of task where pattern-matching at scale beats human attention. Drafting follow-up emails based on response segments — detractors get one message, promoters get another — is templated enough that an LLM can produce a solid first draft faster than you can.

ChatGPT, Claude, and Gemini can contribute meaningfully at each of those language-heavy steps. You can paste in raw responses and get theme summaries, score distributions, or draft communications in seconds. Where they fall short is the operational layer — they don't connect to your survey tool, your CRM, or your inbox. Every run is a manual copy-paste exercise, and nothing persists between sessions.

AI walkthrough

How to do it with AI today

A practical walkthrough using ChatGPT, Claude, and other off-the-shelf LLMs — what they're good at, what you'll have to do by hand.

Tools that work for this
ChatGPT · Claude · Gemini
Step-by-step
1 Open Claude or ChatGPT and prompt it to write your NPS or CSAT survey questions. Describe your product type, the trigger event (e.g., post-support resolution, post-onboarding), and any question constraints — it will produce a clean 3-5 question survey you can drop into Typeform, Google Forms, or whatever tool you use.
2 Paste your survey questions back into the LLM and ask it to generate follow-up question variants for different segments — one set for detractors (score 0-6), one for passives (7-8), one for promoters (9-10). This gives you conditional logic copy before you build the survey.
3 After responses come in, export your results as a CSV from your survey tool. Paste the raw responses directly into Claude or ChatGPT — typically 50-150 rows before context limits become a problem — and prompt it to summarize themes by score band.
4 Ask the LLM to calculate your NPS score from the pasted data (percent promoters minus percent detractors) or your average CSAT score. Cross-check this against your survey tool's built-in reporting; LLMs occasionally miscount on large paste-ins.
5 Use the LLM to draft segmented follow-up emails: one for detractors asking what went wrong, one for passives with a specific ask, one for promoters requesting a review or referral. Give it your brand voice by pasting in a sample email you've written before.
6 Paste the output drafts into your email tool manually, or copy the summary into a Notion doc or Slack message to share with your team. There's no automated handoff — you're the integration layer between the LLM and your actual systems.
7 Repeat this entire sequence next month. Nothing from this session carries forward — the prompts, the response data, the score history, the themes you identified — it all lives in a browser tab until you close it.
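The score math in steps 3 and 4 is simple enough to verify locally before trusting any LLM arithmetic. A minimal sketch in Python (the sample scores are illustrative; swap in the values from your own export):

```python
from collections import Counter

def segment(score):
    """Map a 0-10 NPS score to its standard band."""
    if score <= 6:
        return "detractor"
    if score <= 8:
        return "passive"
    return "promoter"

def nps(scores):
    """NPS = percent promoters (9-10) minus percent detractors (0-6)."""
    bands = Counter(segment(s) for s in scores)
    return round(100 * (bands["promoter"] - bands["detractor"]) / len(scores))

def csat(scores):
    """CSAT = percent of satisfied responses (4 or 5 on a 1-5 scale)."""
    return round(100 * sum(1 for s in scores if s >= 4) / len(scores))

nps_scores = [10, 9, 9, 8, 7, 6, 3, 10]  # 4 promoters, 2 passives, 2 detractors
print(nps(nps_scores))                    # (4 - 2) / 8 -> 25
print(csat([5, 4, 3, 2, 5]))              # 3 of 5 satisfied -> 60
```

Running this against the same counts your survey tool reports gives you a second opinion that doesn't depend on the LLM counting rows correctly.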
Prompts you can copy
Write a 4-question NPS survey for a B2B SaaS product sent 14 days after a customer completes onboarding. Use neutral language and include one open-ended follow-up question for scores under 7.
Here are 120 CSAT survey responses from our support team, scored 1-5 with open-ended comments. Summarize the top 3 themes for scores of 1-2 and the top 3 themes for scores of 4-5. Format as a table.
Calculate the NPS score from this data: [paste response counts]. Then write a 150-word Slack summary I can share with my team that explains what the number means and what we should follow up on.
Draft three email templates: one for NPS detractors (0-6) asking what we could have done better, one for passives (7-8) asking what would make us a 10, and one for promoters (9-10) asking for a G2 review. Keep each under 100 words.
I have 80 open-ended responses to the question 'What's the one thing we could improve?' Group them into themes and rank the themes by frequency. Flag any responses that suggest churn risk.
Reality check

Where this gets hard

The walkthrough above works — until your numbers change, the LLM hallucinates, or you have to re-paste everything next month.

No live connection to your survey tool, CRM, or inbox — every analysis session starts with a manual CSV export and a paste.
Context window limits cap how many responses you can analyze at once; beyond ~100-150 rows, you're either truncating data or splitting it across multiple sessions and stitching results together by hand.
Score calculations (NPS math, CSAT averages) are correct most of the time but occasionally wrong on large datasets — you need to verify against your survey tool's native numbers before trusting the output.
Nothing persists between sessions. The themes you identified last quarter, the score trend over time, the follow-up email templates you tuned — all of it disappears when you close the tab.
There's no feedback loop into your customer records. A detractor's response never makes it into their contact record; a promoter flagged for a referral ask doesn't get tagged in your CRM without you doing it manually.
Distribution and follow-up are entirely on you. The LLM writes the emails; you send them one by one, or paste them into a bulk send tool yourself, with no scheduling or segmentation logic connecting the two.
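When an export does exceed that ~100-150 row ceiling, splitting it into fixed-size batches before pasting at least keeps each session inside the context window; you still stitch the per-batch summaries together by hand. A rough sketch, assuming your responses are already loaded as a plain Python list:

```python
def chunk(rows, size=100):
    """Split rows into batches small enough to paste into one LLM session."""
    return [rows[i:i + size] for i in range(0, len(rows), size)]

# 250 responses become three pasteable batches: 100, 100, and 50 rows.
batches = chunk(list(range(250)), size=100)
print([len(b) for b in batches])  # [100, 100, 50]
```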

Tired of the friction?

Starch runs the whole workflow on live data — no copy-paste, no hallucinated numbers, no re-prompting next month.

See the Starch version →
Starch alternative

The same workflow on Starch

Starch is an agentic operating system — an agent builds and runs the persistent software your NPS and CSAT workflow actually needs, connected to your live customer data and inbox, so you're not repeating the same manual sequence every survey cycle.

Connect Gmail or Outlook from Starch's integration catalog; the agent handles sending segmented survey follow-ups to detractors, passives, and promoters automatically — no manual copy-paste into a bulk email tool.
Tell Starch in plain English: 'Build me a dashboard that tracks NPS and CSAT scores over time, tags customers by score band, and shows me open-ended response themes by month.' The agent builds it and keeps it current.
Starch's CRM app stores contact records enriched with survey scores and response history — so when you ask 'which customers gave us a detractor score in the last 60 days and haven't been followed up with?' you get a real answer, not a manual spreadsheet join.
Connect your survey data through Starch's integration catalog alongside Gmail; describe the automation you want — 'when a detractor response comes in, log it to the CRM contact and draft a follow-up email for my review' — and the agent builds that workflow once, then runs it continuously.
Customer Support Agent (coming soon) will close the loop further — correlating support ticket resolution with CSAT scores automatically, so you can see which ticket categories are driving your score down without pulling data from two tools by hand.
Score history, response themes, and follow-up status all persist in Starch across survey cycles. Next quarter's analysis starts from where this quarter ended, not from a blank prompt.
Get closed-beta access →
Toolkit

Starch apps for this workflow

Pick your role

See this workflow by operator

Run NPS and CSAT surveys on Starch
