A growth-stage consumer brand was about to pull $800K out of TV. Their attribution data said it wasn't working. Their MMM said the opposite.
Here's what happened.
The Situation
A DTC brand spending across TV, paid social, paid search, and email had been running for three years. Revenue was growing, but margins were tightening. The CFO wanted to find $1M in marketing cuts before the next financial year.
The marketing team pulled up their attribution dashboard. The numbers were clear: TV showed the lowest tracked ROAS of any channel. Paid search and paid social were driving the overwhelming majority of attributed conversions.
The recommendation on the table: cut TV by 80%, redirect to paid search and paid social.
Before they pulled the trigger, they decided to run an MMM.
What Attribution Was Showing
| Channel | Attributed Revenue | Reported ROAS |
|---|---|---|
| Paid Search | $4.2M | 8.1x |
| Paid Social | $2.8M | 4.3x |
| Email | $1.1M | 12.4x |
| TV | $0.3M | 0.4x |
On paper, TV looked like a clear cut. Low ROAS, high spend, easy saving.
What the Bayesian MMM Found
We built a Bayesian media mix model using PyMC-Marketing, with geometric adstock to capture carryover effects and Hill saturation to model diminishing returns. The model told a completely different story.
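The two transforms at the heart of the model are easy to sketch outside the library. Below is a minimal NumPy illustration of geometric adstock (carryover) and Hill saturation (diminishing returns). The parameter values are invented for demonstration; they are not the fitted posteriors from the actual model.

```python
import numpy as np

def geometric_adstock(x, alpha, l_max=8):
    """Carryover: spend in week t keeps influencing weeks t+1, t+2, ...
    with geometric decay alpha (0 < alpha < 1), up to l_max weeks back."""
    weights = alpha ** np.arange(l_max)
    return np.convolve(x, weights)[: len(x)]

def hill_saturation(x, k, s):
    """Diminishing returns: response rises toward 1 as spend grows,
    hitting half of its maximum at x = k; s controls steepness."""
    return x**s / (x**s + k**s)

# A single TV flight in week 2 keeps contributing for weeks afterwards
spend = np.zeros(10)
spend[2] = 100.0
carried = geometric_adstock(spend, alpha=0.6)
print(np.round(carried[2:6], 1))  # decaying tail: 100.0, 60.0, 36.0, 21.6
response = hill_saturation(carried, k=50.0, s=1.5)
```

This lagged, decaying response is exactly what last-click attribution cannot see: by the time the purchase happens, the TV flight that caused it is weeks in the past.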
TV was driving branded search. Every TV flight correlated with a measurable spike in branded search volume — typically 18–25% above baseline in the two weeks following a campaign. Paid search was capturing that demand and getting the attribution credit. TV created it.
Paid search ROAS was inflated. Once the model removed the TV-driven demand from paid search attribution, paid search ROAS dropped from 8.1x to 4.9x. Still good — but not as dominant as it appeared.
TV had a long payback window. TV's revenue contribution wasn't showing up in the week of the campaign. It was showing up 3–6 weeks later, through brand recall and delayed purchase behaviour. The adstock parameters in our model captured this lag — last-click attribution missed the connection entirely.
The true MMM decomposition:
| Channel | MMM-Attributed Revenue | Share of Revenue |
|---|---|---|
| Paid Search | $2.6M | 26% |
| TV | $2.4M | 24% |
| Paid Social | $2.1M | 21% |
| Baseline | $1.8M | 18% |
| Email | $1.1M | 11% |
TV wasn't the weakest channel. It was the second-largest revenue driver in the business — completely invisible in the attribution data.
What the Budget Optimizer Showed
Using the ResponseOptix dashboard — our budget scenario optimizer — we modelled what would happen under three scenarios:
1. Cut TV by 80%, redirect to paid search → projected revenue decline of 11% as branded search demand dried up over two quarters
2. Maintain current allocation → flat revenue, margins improve modestly
3. Optimised reallocation (no TV cuts) → $620K in savings from reducing saturated paid search and underperforming paid social, revenue holds flat
The board approved scenario 3.
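The mechanics of this comparison can be sketched with a toy response model. Everything below is illustrative: the channel parameters are invented, and the real optimizer also modelled the TV-to-branded-search interaction, which this sketch ignores. It only shows how alternative allocations are projected through saturating response curves.

```python
import numpy as np

def hill(x, k):
    """Simple saturating response: half the channel's ceiling at spend k."""
    return x / (x + k)

# Hypothetical per-channel curves (NOT the fitted model):
# beta = revenue ceiling, k = half-saturation spend; order: TV, search, social
beta = np.array([3.0e6, 4.0e6, 2.5e6])
k = np.array([400e3, 300e3, 350e3])

def projected_revenue(spend):
    return float(np.sum(beta * hill(spend, k)))

current = np.array([500e3, 600e3, 400e3])
scenarios = {
    "cut TV 80%, boost search": np.array([100e3, 1000e3, 400e3]),
    "maintain":                 current,
    "trim saturated channels":  np.array([500e3, 450e3, 300e3]),
}
for name, spend in scenarios.items():
    print(f"{name:26s} spend ${spend.sum()/1e6:.2f}M "
          f"-> revenue ${projected_revenue(spend)/1e6:.2f}M")
```

Even this toy model reproduces the qualitative result: because paid search is already past its half-saturation point, moving TV budget into it buys back less revenue than the TV spend was generating.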
What They Did Instead
- Reduced branded paid search spend by 30%, where the saturation curve showed significant diminishing returns
- Maintained TV at current levels, shifting flights to align with high-conversion periods
- Cut underperforming paid social placements where the model showed near-zero incremental contribution

The Outcome
- $620K in savings achieved without cutting TV; revenue held flat in the following quarter
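The diminishing-returns call comes from reading the marginal return off the fitted saturation curve: what the next dollar buys at the current spend level. A short sketch, with invented branded-search parameters rather than the model's fitted values:

```python
import numpy as np

def marginal_roas(spend, beta, k, eps=1.0):
    """Revenue gained per extra dollar at a given spend level, on a
    beta * x / (x + k) response curve (hypothetical parameters)."""
    hill = lambda x: x / (x + k)
    return beta * (hill(spend + eps) - hill(spend)) / eps

beta, k = 4.0e6, 300e3  # invented branded-search curve, not fitted values
for spend in (100e3, 300e3, 600e3):
    print(f"at ${spend/1e3:.0f}K/mo, marginal ROAS ~ {marginal_roas(spend, beta, k):.1f}x")
```

Average ROAS can still look healthy while the marginal ROAS has already fallen below breakeven, which is why the model recommended trimming the top of the branded search budget rather than the whole channel.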
The Lesson
The brand didn't have a TV problem. They had a measurement problem.
Their attribution tool was doing exactly what it was designed to do: report on trackable, last-touch conversions. The problem is that TV doesn't work that way. Brand-building channels rarely do.
When your measurement system can't see a channel's contribution, it looks like the channel isn't working. So you cut it. And then you wonder why the channels that were supposedly working suddenly start underperforming.
This is the cycle that bad measurement creates. And it plays out in marketing teams everywhere, every quarter.
What Good Measurement Changes
With a Bayesian MMM and the ResponseOptix optimizer in place, budget decisions look different:
- Cuts are made based on true incremental contribution — not last-touch attribution
- Channel interdependencies are modelled before spend is shifted
- Budget scenarios are stress-tested before they're executed
- The team can defend recommendations with evidence, not just dashboards
The brand now runs their model quarterly. Their team updates it with new data. They own the platform — it runs in their environment, not a vendor's.
Is Your Attribution Data Telling the Full Story?
In 45 minutes, we'll walk through your current measurement setup and show you where the gaps are costing you.
Book a Free MMM Strategy Session