How to set up pipeline attribution for small marketing teams

Marketing & Growth · For Small Marketing Teams · 2 apps · 11 steps · ~22 min to set up

Your HubSpot deals live in one tab, GA4 sessions in another, and Meta Ads spend in a third. Every Monday you spend 90 minutes manually joining these in a Google Sheet — matching UTM sources to deal stages, guessing at which LinkedIn campaign touched the enterprise lead that closed last week, and hoping the formula you wrote two months ago still works. When the CEO asks why MQL volume dropped 18% in March, you don't have a clean answer because your attribution model is a spreadsheet held together with VLOOKUP and optimism. You have no BI tool budget, no data engineer, and three people covering five channels simultaneously.

Outcome

What you'll set up

A live pipeline attribution dashboard that joins HubSpot deal stages against ad spend from Google, Meta, and LinkedIn — so you always know which channels are contributing to pipeline, not just clicks
A weekly automated report that surfaces MQL-to-SQL conversion by source, cost-per-opportunity by channel, and any deals that entered or stalled in the pipeline since last Monday — delivered without you building it by hand
A reusable attribution model you describe once and can interrogate in plain English — 'which campaign sourced the most enterprise deals this quarter?' — instead of rebuilding the pivot table every time someone asks
The Starch recipe

Apps, data, and prompts

The combination of Starch apps, the data sources they pull from, and the prompts you use to drive them.

Data sources & config

Starch syncs your HubSpot data on a schedule — contacts, companies, deals, and owners — so deal-stage history and source fields are always current. Google Ads, Meta Ads, and LinkedIn Ads connect from Starch's integration catalog; the agent queries them live when the dashboard or weekly automation runs. Gmail connects on a scheduled sync to capture any deal-related email context. Slack connects from Starch's integration catalog to deliver the weekly digest. No BI tool, no manual export, no ETL pipeline to maintain.

Prompts to copy
Build me a pipeline attribution dashboard that pulls HubSpot deals — including deal stage, close date, deal source, and owner — and joins them against Google Ads, Meta Ads, and LinkedIn Ads spend by UTM campaign. Show me cost-per-opportunity and cost-per-MQL by channel, updated weekly.
Every Monday at 8am, send me a Slack message summarizing: new MQLs by source this week, MQL-to-SQL conversion rate by channel vs. the prior 4-week average, which campaigns had the highest cost-per-opportunity, and any deals that moved from MQL to SQL or went dark.
Add a view that shows the last 90 days of deal source data broken down by company size — under 50 employees, 50-200, and 200-plus — so I can see which channels are actually bringing in our ICP vs. everyone else.
Run these in Starch → or paste them into your favorite agent
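Under the hood, the first prompt amounts to a join on campaign identifiers. A minimal sketch of that logic in plain Python, using hypothetical field names (`utm_campaign`, `stage`, `spend`) rather than Starch's or HubSpot's actual schema:

```python
# Hypothetical sketch: deals keyed by UTM campaign, joined against
# per-campaign ad spend to get cost-per-opportunity by campaign.
from collections import defaultdict

deals = [
    {"id": 1, "utm_campaign": "li-abm-q1", "stage": "opportunity"},
    {"id": 2, "utm_campaign": "li-abm-q1", "stage": "mql"},
    {"id": 3, "utm_campaign": "goog-brand", "stage": "opportunity"},
]
spend = {"li-abm-q1": 2920.0, "goog-brand": 920.0}

# Count deals that reached the opportunity stage, per campaign.
opps = defaultdict(int)
for d in deals:
    if d["stage"] == "opportunity":
        opps[d["utm_campaign"]] += 1

# Cost per opportunity = total campaign spend / opportunities sourced.
cost_per_opp = {c: spend[c] / n for c, n in opps.items() if c in spend}
print(cost_per_opp)  # {'li-abm-q1': 2920.0, 'goog-brand': 920.0}
```

The same shape extends to cost-per-MQL by counting deals at the MQL stage instead.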
Walkthrough

Step-by-step

1 Connect HubSpot — Starch syncs your deals, contacts, companies, and owners on a schedule. Make sure your deal records include the 'Deal Source' or UTM-origin field you're already using; Starch will discover the schema automatically.
2 Connect Google Ads, Meta Ads, and LinkedIn Ads from Starch's integration catalog. The agent queries spend, impressions, clicks, and conversions live when your attribution app runs — no export needed.
3 Connect Slack from Starch's integration catalog so the weekly attribution digest can post directly to your #marketing or #revenue channel.
4 Open Starch and describe your attribution dashboard in plain English: deal stages, source fields, ad spend dimensions, and the time window you care about (last 30 days, last quarter, rolling 90 days).
5 Starch builds the app. Review the initial output — check that deal source values from HubSpot are mapping correctly to the campaign names coming from your ad platforms. If a UTM is inconsistent, tell Starch how to normalize it.
6 Add a cost-per-opportunity calculation: tell Starch to divide total ad spend per campaign by the number of deals where that campaign appears as the deal source or first-touch UTM.
7 Build the MQL-to-SQL conversion view. Describe the deal stages that represent MQL and SQL in your HubSpot setup — Starch maps them and calculates the conversion rate by source over whatever window you specify.
8 Set up the Monday 8am automation. Describe what you want in the digest — new MQLs by source, conversion rate vs. prior 4-week average, top and bottom campaigns by cost-per-opportunity, and deals that went stale.
9 Add the ICP segmentation layer: tell Starch to break down deal source performance by company size using the HubSpot company field. This tells you whether a channel is bringing in 500-person enterprises or 10-person startups.
10 Share the dashboard URL with your CEO and revenue ops lead. When they ask a follow-up question — 'what happened to LinkedIn MQLs in March?' — you can answer by querying the app directly instead of rebuilding a spreadsheet.
11 After two weeks, review which deal sources are inconsistently tagged in HubSpot. Use Starch to flag deals missing a source field so you can clean upstream data and keep the attribution model accurate going forward.
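The conversion view in step 7 reduces to a per-source ratio: of the deals that ever hit MQL, how many went on to SQL. A sketch under assumed field names (`source`, `reached_mql`, `reached_sql`); Starch derives the equivalent from the stages you describe:

```python
# Sketch of MQL-to-SQL conversion rate by deal source.
def conversion_by_source(deals):
    """Share of deals per source that reached SQL, out of those that hit MQL."""
    stats = {}
    for d in deals:
        s = stats.setdefault(d["source"], {"mql": 0, "sql": 0})
        if d["reached_mql"]:
            s["mql"] += 1
        if d["reached_sql"]:
            s["sql"] += 1
    return {src: s["sql"] / s["mql"] for src, s in stats.items() if s["mql"]}

deals = [
    {"source": "linkedin", "reached_mql": True, "reached_sql": True},
    {"source": "linkedin", "reached_mql": True, "reached_sql": False},
    {"source": "meta", "reached_mql": True, "reached_sql": False},
]
print(conversion_by_source(deals))  # {'linkedin': 0.5, 'meta': 0.0}
```

Comparing this week's ratio against a rolling 4-week average, as in step 8, is just the same calculation over two time windows.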

See this running on Starch

Connect your tools, describe what you want, and the agent builds it. Closed beta is free.

Try it on Starch →
Worked example

Q1 2026 pipeline attribution review — March close

Sample numbers from a real run
Google Ads spend (Q1): $18,400
Meta Ads spend (Q1): $9,200
LinkedIn Ads spend (Q1): $14,600
Pipeline sourced — Google Ads: $312,000
Pipeline sourced — Meta Ads: $87,000
Pipeline sourced — LinkedIn Ads: $241,000
Cost per opportunity — Google Ads: $920
Cost per opportunity — LinkedIn Ads: $1,460
Cost per opportunity — Meta Ads: $1,533

Going into the Q1 board deck, the team needed to explain why they'd shifted $5,000 from Meta to LinkedIn in February. Before Starch, this answer lived across three exports and a two-hour Google Sheets session. With the attribution dashboard running, the answer was already there: Meta's $9,200 in Q1 spend produced $87,000 in pipeline at a cost-per-opportunity of $1,533 — above LinkedIn's $1,460 and two-thirds higher than Google's $920. LinkedIn's $241,000 in sourced pipeline also skewed heavily toward companies in the 100-500 employee range, which matched the ICP. The Monday digest on March 4th flagged that 6 deals sourced from Meta had stalled at MQL for more than 21 days, which prompted the team to move budget before the quarter closed rather than discovering it in the postmortem.
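The sample numbers above are internally consistent, and the relationships are worth seeing explicitly. A quick check in plain Python (values copied from the table) recovers the implied opportunity counts and the pipeline returned per ad dollar:

```python
# Values from the worked example; dividing spend by cost-per-opportunity
# gives the implied number of opportunities each channel sourced.
channels = {
    "google":   {"spend": 18_400, "pipeline": 312_000, "cpo": 920},
    "meta":     {"spend": 9_200,  "pipeline": 87_000,  "cpo": 1_533},
    "linkedin": {"spend": 14_600, "pipeline": 241_000, "cpo": 1_460},
}

for name, c in channels.items():
    opps = c["spend"] / c["cpo"]          # implied opportunity count
    roi = c["pipeline"] / c["spend"]      # pipeline dollars per ad dollar
    print(f"{name}: ~{opps:.0f} opportunities, ${roi:.1f} pipeline per $1 spent")
```

Google comes out to roughly 20 opportunities at ~$17 of pipeline per dollar, versus about 6 opportunities and under $10 per dollar for Meta, which is the gap the budget shift was responding to.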

Measurement

How you'll know it's working

Cost per MQL by channel (Google Ads, Meta Ads, LinkedIn Ads) — week over week
MQL-to-SQL conversion rate by deal source — rolling 4-week average vs. current week
Pipeline sourced by channel per quarter — dollar value of deals where that channel appears as first-touch or deal source
Deal source coverage rate — percentage of open HubSpot deals with a populated, normalized source field
Days from MQL to SQL by source — to identify which channels produce leads that actually convert quickly vs. drag through the pipeline
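The coverage-rate metric above is simple to state precisely: the share of open deals whose source field is populated after normalization. A sketch with an illustrative `source` field name:

```python
# Deal source coverage rate: populated, non-blank source fields
# divided by total open deals.
def coverage_rate(open_deals):
    tagged = sum(1 for d in open_deals if (d.get("source") or "").strip())
    return tagged / len(open_deals) if open_deals else 0.0

deals = [{"source": "google"}, {"source": " "}, {"source": None}, {"source": "meta"}]
print(f"{coverage_rate(deals):.0%}")  # 50%
```

Tracking this number weekly is what makes step 11's cleanup loop measurable rather than vibes-based.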
Comparison

What this replaces

The other ways teams handle this today, and how the Starch version compares.

HubSpot Attribution Reports (native)
HubSpot's built-in attribution only sees what happens inside HubSpot — it can't join against actual ad spend from Meta or Google, so you can't calculate true cost-per-opportunity without a separate export.
Google Looker Studio (free BI)
Looker Studio can pull GA4 and Google Ads natively, but connecting HubSpot requires a third-party connector, LinkedIn Ads data is patchy, and you still have to build and maintain the blended data source yourself — which is the hour you're trying to get back.
Supermetrics + Google Sheets
Supermetrics pulls ad data cleanly, but you're still maintaining the Sheet logic, the HubSpot join is manual, and you pay ~$500-800/year for a tool that doesn't answer questions — it just moves data.
Segment + dbt + Metabase (full stack)
Genuinely powerful if you have a data engineer; if you're a 3-person marketing team without one, you'll spend 6 weeks standing it up and still need someone to write the SQL every time the CEO asks a new question.
On Starch RECOMMENDED

One platform — a sales agent CRM and a growth analyst, all running on connected data. Setup in plain English; numbers stay current via scheduled syncs and live agent queries.

Try it on Starch →
FAQ

Frequently asked questions

We don't have consistent UTM tagging across all our campaigns. Will the attribution model still work?
Partially, and Starch will make the gap visible rather than hiding it. HubSpot's deal source field and any UTM data that did get captured will feed the model. Starch can flag deals missing source data so you know exactly where your coverage holes are — which is more useful than a clean-looking report built on incomplete inputs. Fixing the upstream tagging discipline is still on you, but at least you'll see the scope of the problem.
Does Starch store our ad spend data, or does it query it fresh each time?
Google Ads, Meta Ads, and LinkedIn Ads connect from Starch's integration catalog and are queried live when your dashboard or automation runs — the data isn't stored in Starch's database between runs. HubSpot deal and contact data does sync on a schedule and lives in Starch, so deal history is always available. For most attribution use cases this combination works well; if you need a multi-year archived data warehouse, that's outside what Starch does today.
Can Starch handle attribution across both paid and organic sources — like organic search or direct referrals from content?
Yes, if the source data exists somewhere Starch can reach. HubSpot's deal source field captures organic, referral, and direct alongside paid. If you're using GA4 or Amplitude for session-level source data, those connect from Starch's integration catalog too. Tell Starch which source categories you care about and it builds the attribution view to include them — you're not limited to paid channels.
Is Starch SOC 2 certified? We have a security review before adding any new tools to our stack.
Not yet — Starch is not SOC 2 Type II certified as of today. If that's a hard requirement for your company's vendor approval process, it's worth flagging before you start the setup. It's on the roadmap.
Can we build one attribution view for the CEO and a more detailed one for the marketing team, without duplicating all the setup?
Yes. You build the underlying connections once — HubSpot, ad platforms, Slack — and then describe two different surfaces on top of that data. Tell Starch: 'build a one-page executive summary showing total pipeline sourced and cost-per-opportunity by channel this quarter' and separately, 'build a detailed attribution breakdown by campaign, deal stage, and company size for the marketing team.' Both pull from the same connected data; they're just different views you describe in plain English.

Ready to set up pipeline attribution on Starch?

Request closed-beta access. Everything is free during beta.
