How to create a sales enablement content library for small RevOps teams

Sales & CRM · For Small RevOps Teams · 3 apps · 12 steps · ~24 min to set up

You're a two-person RevOps team supporting 30 reps, and your 'sales enablement content library' is a Notion page nobody updates, a Google Drive folder with 47 versions of the same deck, and a Slack channel where reps ask you where the latest one-pager is — every single week. You spend real time triaging those requests instead of working on the quota model or attribution. When a rep asks for a battlecard on a new competitor, you're pulling data from HubSpot deal notes, Apollo sequences, and Gmail threads manually, then building something in Google Slides before the forecast call. There's no single source of truth for what content exists, what's stale, and what actually moves deals.

Outcome

What you'll set up

A living content library connected to HubSpot, Apollo, and Gmail so every piece of sales collateral — battlecards, one-pagers, email sequences, objection guides — is findable in one place, tagged by deal stage and persona, and flagged automatically when it goes stale
A content-to-pipeline attribution view that tells you which collateral is being used at which deal stages, so you can stop guessing which one-pager actually helps close and start building more of what works
A lightweight automation that surfaces the right content to the right rep at the right deal stage, pulling context from live HubSpot deal data and Apollo sequence activity instead of requiring you to manually field 'can you send me the X deck' Slack requests all day
The Starch recipe

Apps, data, and prompts

The combination of Starch apps, the data sources they pull from, and the prompts you use to drive them.

Data sources & config

Starch syncs your HubSpot data on a schedule (contacts, companies, deals, owners) and syncs your Gmail data on a schedule (message threads, labels) for deal-level email context. Connect Apollo.io from Starch's integration catalog — the agent queries sequence and contact activity live when attribution runs. Connect Slack from Starch's integration catalog for rep-facing content delivery. Connect Notion from Starch's scheduled sync to pull in any existing documentation or wiki pages as seed content for the library.

Prompts to copy
Build me a sales content library where every asset — battlecards, one-pagers, email templates, objection guides — is tagged by deal stage (Discovery, Technical Eval, Legal, Closed Won), competitor, and buyer persona. Surface which content was last updated and flag anything older than 90 days. Let reps search by deal stage or persona and get the top 3 most relevant assets back.
Build me a content attribution dashboard that cross-references which assets reps shared (from Gmail threads) with the deal stage in HubSpot at the time of sharing, and shows which assets appear in Closed Won vs. Closed Lost deals over the last 6 months.
Every Monday at 7am, scan HubSpot for deals that moved to Technical Eval or Legal in the last 7 days, identify the rep owner, and send them a Slack message with the 3 most relevant content assets for that stage — pulling from the knowledge base.
Run these in Starch → or paste them into your favorite agent
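The staleness rule in the first prompt is simple enough to sketch in code. Below is an illustrative Python version of the 90-day check the agent applies; the asset names, dates, and field names are hypothetical, not Starch's actual data model:

```python
from datetime import date, timedelta

STALE_AFTER_DAYS = 90  # matches the 90-day threshold in the prompt above

def flag_stale(assets, today):
    """Return assets last updated more than STALE_AFTER_DAYS before today."""
    cutoff = today - timedelta(days=STALE_AFTER_DAYS)
    return [a for a in assets if a["last_updated"] < cutoff]

# Hypothetical library entries, tagged the way the first prompt describes
library = [
    {"name": "Security one-pager", "stage": "Technical Eval",
     "last_updated": date(2025, 6, 1)},
    {"name": "Pricing FAQ", "stage": "Legal",
     "last_updated": date(2025, 12, 15)},
]

stale = flag_stale(library, today=date(2026, 1, 5))
# The security one-pager (last updated 2025-06-01) is flagged; the pricing FAQ is not
```

The same cutoff logic drives both the library's stale badge and the monthly Slack summary in step 9.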
Walkthrough

Step-by-step

1 Connect HubSpot to Starch — Starch syncs your deals, contacts, companies, and owner data on a schedule. This is the deal-stage backbone the content library keys off of.
2 Connect Gmail to Starch — Starch syncs your message threads on a schedule so the attribution layer can detect when a rep forwarded a one-pager or shared a deck link inside an active deal thread.
3 Connect Apollo from Starch's integration catalog so the agent can query sequence enrollment and step-level activity live, adding outbound touchpoints to the attribution picture.
4 Start the Knowledge Management app and describe your content taxonomy: 'Organize sales assets by deal stage (Discovery, Technical Eval, Legal, Closed Won), buyer persona (IT buyer, CFO, End User), and asset type (battlecard, one-pager, email template, objection guide). Flag anything not updated in 90 days. Pull seed content from our Notion wiki.'
5 Upload or paste your existing sales assets into the knowledge base — battlecards, deck links, email templates — and let Starch auto-categorize them against the taxonomy you described. Fix anything that lands in the wrong bucket.
6 Build the attribution dashboard by telling Starch: 'Show me which content assets from the library were shared by reps in Gmail threads, cross-referenced with HubSpot deal stage at time of send and final deal outcome (Closed Won / Closed Lost) for the last 6 months.'
7 Review the attribution output with your CRO or sales leader — identify two or three assets that consistently appear in Closed Won deals at Technical Eval and two or three that don't move the needle. This becomes your 'double down / deprecate' list.
8 Set up the Monday content-delivery automation: 'Every Monday at 7am, pull all HubSpot deals that moved to Technical Eval or Legal in the past 7 days. For each deal, look up the rep owner and send them a Slack DM with the top 3 assets tagged for that stage, formatted as bullet links.'
9 Add a staleness automation: 'Every first of the month, scan the content library for assets tagged as last updated more than 90 days ago. Post a summary in our RevOps Slack channel with asset name, owner, and days since last update.'
10 Wire a lightweight intake form — describe it to Starch as: 'Build me a form where any rep can request a new content asset. Fields: asset type, target persona, deal stage it's needed for, any competitive context, and urgency. When submitted, create a task in the project management app assigned to RevOps.' This replaces the ad-hoc Slack requests.
11 At the end of each quarter, run the attribution report again and have Starch generate a summary: 'Show me which asset categories had the highest share rate by deal stage this quarter, and which stages have the weakest content coverage based on deal volume vs. asset count.' Bring that to your QBR instead of a gut-feel answer.
12 Publish the content library to all 30 reps through the Starch app — they search by deal stage or persona and get results without pinging you on Slack.
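Under the hood, the attribution work in steps 6 and 8 amounts to a join: each Gmail content share is matched to the HubSpot stage the deal was in at the time of the send, then grouped by final outcome. Here is a minimal Python sketch with hypothetical record shapes (Starch's actual internals may differ):

```python
from collections import Counter
from datetime import date

def attribute_shares(shares, deals):
    """Count asset appearances by (asset, stage at time of share, final outcome).

    shares: list of {"asset", "deal_id", "sent" (date)}
    deals:  {deal_id: {"stage_history": [(entered_date, stage), ...], "outcome": str}}
    """
    counts = Counter()
    for share in shares:
        deal = deals.get(share["deal_id"])
        if deal is None:
            continue
        # Stage in effect when the email went out:
        # the latest stage entered on or before the send date
        stage = None
        for entered, name in sorted(deal["stage_history"]):
            if entered <= share["sent"]:
                stage = name
        if stage:
            counts[(share["asset"], stage, deal["outcome"])] += 1
    return counts

# Hypothetical sample data
deals = {
    "D1": {"stage_history": [(date(2025, 7, 1), "Discovery"),
                             (date(2025, 8, 1), "Technical Eval")],
           "outcome": "Closed Won"},
}
shares = [
    {"asset": "ROI calculator", "deal_id": "D1", "sent": date(2025, 8, 10)},
    {"asset": "Intro deck", "deal_id": "D1", "sent": date(2025, 7, 5)},
]

counts = attribute_shares(shares, deals)
# The ROI calculator is credited at Technical Eval; the intro deck at Discovery
```

The dashboard in step 6 is essentially this table aggregated across all deals, split by Closed Won vs. Closed Lost.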

See this running on Starch

Connect your tools, describe what you want, and the agent builds it. Closed beta is free.

Try it on Starch →
Worked example

Q1 2026 Technical Eval content audit and rep rollout

Sample numbers from a real run
Assets imported from Notion wiki and Google Drive: 34
Assets auto-categorized correctly by Starch on first pass: 28
Assets flagged as stale (>90 days old): 11
Closed Won deals in the last 6 months with an attributable content share (via Gmail): 19
Unique assets appearing in ≥3 Closed Won deals at Technical Eval stage: 4
Hours/week saved on 'can you send me X' Slack requests (est.): 3

In January 2026, the RevOps team imports 34 existing sales assets from Notion and Google Drive into the Starch knowledge base. Starch auto-categorizes 28 of them correctly on the first pass — the 6 it got wrong were ambiguously named files like 'final_FINAL_v3.pdf' that needed manual tagging anyway. The staleness scan flags 11 assets as not updated since before Q3 2025, including the primary security one-pager that reps are still sending to enterprise prospects. The attribution dashboard, pulling from 6 months of Gmail thread data cross-referenced with HubSpot deal stages, shows that 4 specific assets — a technical architecture diagram, a competitor comparison matrix, a pricing FAQ, and an ROI calculator template — appear in 3 or more Closed Won deals at the Technical Eval stage. None of them were prominently surfaced in the old Notion page. The Monday automation goes live in week two: 30 reps start receiving stage-specific content in Slack at the start of each week. By the end of Q1, the team estimates 3 hours per week saved on inbound Slack content requests, which they redirect to rebuilding the territory model for the CRO's Q2 re-org.

Measurement

How you'll know it's working

Content-to-pipeline attribution rate: % of Closed Won deals with at least one tracked content share in Gmail at the relevant deal stage
Content staleness rate: % of active assets last updated >90 days ago, tracked monthly
Rep content adoption rate: % of reps who accessed the library at least once per week (pulled from Starch app activity)
Intake request volume: number of ad-hoc 'send me X' Slack requests per week, as a proxy for library findability
Asset coverage by deal stage: number of tagged assets per stage relative to average deal volume in that stage — identifies coverage gaps
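The last metric, asset coverage by deal stage, is a simple ratio of tagged assets to deal volume. A short sketch with hypothetical counts (the stage names match the taxonomy above; the numbers are made up):

```python
def coverage_by_stage(assets_by_stage, deal_volume_by_stage):
    """Assets per deal, by stage — low ratios indicate coverage gaps."""
    return {
        stage: round(assets_by_stage.get(stage, 0) / volume, 2)
        for stage, volume in deal_volume_by_stage.items()
        if volume > 0
    }

# Hypothetical counts from library tags and HubSpot deal volume
ratios = coverage_by_stage(
    assets_by_stage={"Discovery": 12, "Technical Eval": 9, "Legal": 2},
    deal_volume_by_stage={"Discovery": 40, "Technical Eval": 25, "Legal": 18},
)
# Legal comes out around 0.11 assets per deal vs. 0.3+ elsewhere,
# flagging it as the weakest-covered stage
```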
Comparison

What this replaces

The other ways teams handle this today, and how the Starch version compares.

Notion + Google Drive + Slack (current state)
Free and already in place, but there's no deal-stage tagging, no staleness detection, no attribution, and no automation — just a folder someone has to maintain manually and reps have to know to check.
Highspot or Seismic
Purpose-built for sales content management and have strong analytics, but they cost $20k–$50k+/year, require a dedicated admin to set up governance, and don't connect to your HubSpot or Apollo data to do attribution without a separate integration project.
Guru
Good for knowledge management and has a browser extension reps actually use, but attribution to deal outcomes requires manual tagging discipline from reps — it won't pull that from HubSpot or Gmail automatically.
HubSpot Documents + Sales Hub
Native HubSpot document tracking gives you link-open data per contact, but you need Sales Hub at a tier that most teams your size don't pay for, and it doesn't aggregate across the whole library or flag stale content.
On Starch RECOMMENDED

One platform — the Knowledge Management, CRM, and Sales Agent apps all running on connected data. Setup in plain English; numbers stay current via scheduled syncs and live agent queries.

Try it on Starch →
FAQ

Frequently asked questions

We use Salesforce, not HubSpot. Can Starch still pull deal-stage data for attribution?
Yes. Connect Salesforce from Starch's integration catalog — the agent queries your opportunities and stage data live when the attribution app runs. Unlike HubSpot, which syncs on a schedule, Salesforce data is fetched on demand; for a content attribution use case where you run reports periodically rather than continuously, live queries work well.
Will Starch actually read the content of our PDFs and decks to categorize them, or does it just use the file name?
The Knowledge Management app uses AI to read and categorize document content, not just file names. That said, the quality of auto-categorization depends on how clearly you describe your taxonomy upfront. The more specific your prompt — 'tag assets by deal stage, persona, and asset type using these exact labels' — the better the first-pass accuracy. You'll still want to do a quick review pass on the initial import.
We don't have formal HubSpot email integration — reps use Gmail but log calls manually. Will attribution still work?
The attribution layer reads directly from Gmail threads synced by Starch, not from HubSpot's logged activity. So if your reps are sending assets via Gmail (which they almost certainly are), Starch can detect those shares from the raw thread data even if they're never logged in HubSpot. You'll miss phone calls and in-person shares, but email-based attribution will work regardless of how disciplined reps are about CRM logging.
Is this going to require me to migrate reps to a new tool they won't use?
Reps don't need to log into Starch to get value from this. The Monday automation pushes relevant content to them in Slack, which is where they already are. The library search is available if they want it, but the push model means the workflow mostly runs in the background without asking reps to change habits.
We're not SOC 2 certified — is Starch?
Starch is not currently SOC 2 Type II certified. If your company has strict security review requirements before adding a new tool to a workflow that touches HubSpot, Gmail, and Apollo data, that's worth flagging to your security team upfront. It's on Starch's roadmap.
What happens to our content library if an asset lives in Google Drive and the link breaks?
Starch stores metadata, tags, and search context for your assets — it doesn't duplicate all file contents into a separate vault. If a Google Drive link goes stale, the entry in the library will still exist but the link will be dead. The staleness automation can flag this if you include 'check for broken links' in the automation description, but the underlying file management still lives wherever you store your files today.

Ready to create a sales enablement content library on Starch?

Request closed-beta access. Everything is free during beta.
