How to run a retrospective or post-mortem with AI

Strategy & Planning · 3 AI tools · 7 steps · 6 friction points

Retrospectives and post-mortems are structured conversations about what just happened — a sprint, a product launch, a customer churn event, a production incident. The goal is honest diagnosis: what worked, what didn't, what changes before the next cycle. Most operators run them quarterly at best, after a major event, or when something breaks badly enough that ignoring it stops being an option.

The workflow looks like an AI problem because it's fundamentally a pattern-matching and synthesis task. You have raw material — notes, Slack threads, tickets, timelines — and you need to turn that into a structured narrative with root causes and action items. That's exactly what LLMs are good at: taking unstructured text and producing organized, readable output without the facilitator having to stare at a blank doc.

ChatGPT, Claude, and Gemini can contribute meaningfully here. Paste in your incident timeline and they'll draft a five-whys analysis. Feed them a sprint's worth of standup notes and they'll identify recurring blockers. Ask them to structure a blame-free retrospective in a specific format — Start/Stop/Continue, DACI, 4Ls — and the output is usable within seconds. The limitation isn't the quality of the reasoning. It's everything around the prompt.

AI walkthrough

How to do it with AI today

A practical walkthrough using ChatGPT, Claude, and other off-the-shelf LLMs — what they're good at, what you'll have to do by hand.

Tools that work for this
Claude · ChatGPT · Gemini
Step-by-step
1 Collect your raw material manually: export Slack thread history, copy-paste meeting notes, pull incident tickets from Jira or Linear, and consolidate everything, along with whatever timeline you have, into a single document.
2 Open Claude or ChatGPT and paste the raw material into the context window along with a prompt specifying the retrospective format you want — Start/Stop/Continue, 5 Whys, or a post-mortem template with sections for timeline, impact, root cause, and action items.
3 Ask the model to extract a factual timeline of events first, then separately ask for a root cause analysis. Running these as separate prompts usually produces cleaner output than asking for everything at once.
4 Review the draft critically. LLMs will produce plausible-sounding root causes that may not reflect what actually happened — you need someone who was in the room to validate each claim before it goes into any document people will act on.
5 Use a follow-up prompt to generate action items with clear owners and due dates based on the root causes identified. Ask the model to format these as a table: owner, action, deadline, success metric.
6 Copy the structured output into Notion, Google Docs, or wherever your team stores documentation. This step is entirely manual — nothing transfers automatically.
7 Next quarter, repeat every step from scratch. The model has no memory of last quarter's retro, so pattern-spotting across multiple cycles requires you to manually bring in prior outputs each time.
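If you'd rather script the two-pass approach from step 3 than paste into a chat window each time, the structure is simple enough to sketch. This is a minimal illustration, not a fixed API: the prompt templates are examples, and `call_llm` is a placeholder you'd wire to whichever provider your team uses.

```python
# Sketch of the two-pass flow from step 3: extract a factual timeline
# first, then run root-cause analysis on that timeline in a second pass.
# Separate passes usually produce cleaner output than one big prompt.

TIMELINE_PROMPT = (
    "From the raw material below, extract a factual timeline of events.\n"
    "One line per event: timestamp, actor, what happened. No analysis.\n\n"
    "{raw}"
)

ROOT_CAUSE_PROMPT = (
    "Using only this timeline, run a 5 Whys root cause analysis.\n"
    "Flag any 'why' you cannot support from the timeline as speculation.\n\n"
    "{timeline}"
)

def run_retro(raw_material, call_llm):
    """Run the two passes in order.

    call_llm is any function that takes a prompt string and returns the
    model's text response -- plug in your provider's client here.
    """
    timeline = call_llm(TIMELINE_PROMPT.format(raw=raw_material))
    analysis = call_llm(ROOT_CAUSE_PROMPT.format(timeline=timeline))
    return timeline, analysis
```

The second prompt deliberately sees only the extracted timeline, not the raw material, which keeps the root-cause pass anchored to facts a human has a chance to verify in step 4.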
Prompts you can copy
Here is a Slack thread and incident timeline from our production outage on April 10. Draft a post-mortem with sections: executive summary, timeline, root cause analysis (use 5 Whys), customer impact, and action items with suggested owners.
Here are standup notes from the last two-week sprint. Identify the top 3 recurring blockers, classify each as a process, tooling, or communication issue, and draft a Start/Stop/Continue retrospective summary the team can review asynchronously.
Here is our previous post-mortem from January and the one from March. Compare the action items from January — which ones appear to still be unresolved based on the March report? List any repeated root causes across both.
Draft a blame-free post-mortem template for a SaaS team that experienced a 4-hour API outage. Include placeholders for timeline, contributing factors, what we did well, what we'd do differently, and 3-5 specific action items.
Given these action items from our last retrospective, write a 5-minute verbal summary I can open the next retro with — covering what we committed to, what we shipped, and what's still open.
Reality check

Where this gets hard

The walkthrough above works — until your numbers change, the LLM hallucinates, or you have to re-paste everything next month.

No connection to your actual incident data — you copy-paste from Jira, Linear, PagerDuty, or Slack manually before every session, and any ticket opened after you exported is missing.
Cross-retro pattern analysis requires you to manually feed prior documents into each new session; the model has no memory, so spotting recurring root causes across quarters is a manual archaeology project.
Action items live in the chat window and nowhere else — you copy them into Notion or a doc, they get separated from the retro that generated them, and ownership fades by the following week.
Output structure drifts between runs. The five-whys format you carefully prompted in March looks different in June unless you re-paste your exact system prompt every time.
Large incident timelines with many Slack messages and tickets can strain context windows, forcing you to summarize before pasting, which means pre-processing the very data you wanted the model to handle.
Nothing about the workflow persists or improves automatically. There's no app tracking whether last quarter's action items were closed before the next retro runs.
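If you want a rough sense of whether your material will fit before pasting, a quick heuristic helps. The sketch below uses an approximate 4-characters-per-token ratio for English prose, not any provider's real tokenizer, and a simple chunker that splits on blank lines; both are assumptions for illustration.

```python
def rough_token_count(text: str) -> int:
    """Very rough estimate: ~4 characters per token for English prose.
    Real tokenizers differ; treat this as a sanity check only."""
    return len(text) // 4

def split_into_chunks(text: str, max_tokens: int = 6000) -> list[str]:
    """Split raw material on blank lines so each chunk stays under budget.
    A single block larger than the budget still becomes its own chunk."""
    char_budget = max_tokens * 4  # convert the token budget back to characters
    chunks, current, size = [], [], 0
    for block in text.split("\n\n"):
        if current and size + len(block) > char_budget:
            chunks.append("\n\n".join(current))
            current, size = [], 0
        current.append(block)
        size += len(block) + 2  # +2 for the blank-line separator
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

Chunking keeps each paste under budget, but it also illustrates the friction point above: once you split the material, the model never sees the whole incident at once, so cross-chunk patterns are on you to spot.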

Tired of the friction?

Starch runs the whole workflow on live data — no copy-paste, no hallucinated numbers, no re-prompting next month.

See the Starch version →
Starch alternative

The same workflow on Starch

Starch is an agentic operating system — an agent builds and runs the software your team actually needs, connected to your live data, so retrospectives become a persistent process rather than a recurring copy-paste session.

Use the Meeting Notes app to capture your retrospective session in real time — transcription, decisions, and action items extracted automatically, then archived in searchable history so last quarter's retro is one search away.
Connect Notion, Jira, Linear, or Slack from Starch's integration catalog; the agent queries live when the retro runs, so your incident tickets and standup notes are already in context — no manual export required.
Use the Knowledge Management app as the persistent home for every retrospective output — auto-categorized, stale-detection enabled, searchable across retros so you can ask 'what root causes have we logged more than twice this year' and get an answer.
Describe the retro workflow you want in plain English — 'after each sprint, pull open and closed tickets from Linear, extract blockers, compare against last sprint's action items, and draft a Start/Stop/Continue summary' — and Starch builds the automation that runs it on schedule.
Action items flow into the Project Management app directly — assigned to the right person with a due date, tracked on a kanban board, and visible in the next retro so you start every session knowing what you committed to last time and whether it shipped.
Starch connects to 3,000+ apps through its integration catalog, plus any website through browser automation — so whether your team runs on Jira, Asana, ClickUp, or something bespoke, the retro agent reaches the data where it actually lives.
Get closed-beta access →
Toolkit

Starch apps for this workflow

Run a retrospective or post-mortem on Starch
