
From Science to Speculation: 4 Reasons Performance Models Don’t Work for TV

Marketers have always craved performance. Instantaneous results are what make us tick. No matter how much the landscape shifts – new channels, evolving technology, changing consumer habits – the pressure to deliver quantifiable results never fades. And when ad budgets are on the line, those results need to be measurable. A recent study confirms what most marketers already know: they’re willing to spend, but only if they can prove it’s working. That’s why Return on Ad Spend (ROAS) has become the top metric for evaluating marketing investments this year. So when a vendor promises to deliver results with “radical transparency” – and only charges you for a pre-defined set of business outcomes – it’s only natural for marketers to find the model appealing: it aligns with their objectives.

But the desire for clear, quantifiable results runs into a major problem—attribution is messy, often misleading, and easily manipulated. Fraud? In ad tech? You don't say? Performance advertising operates in a world of self-serving metrics, where numbers are stretched, methodologies are opaque, and few stop to question the math. Marketers, lacking the time or expertise to dig deeper, take reports at face value and declare victory. Vendors, of course, have a vested interest in maintaining the illusion of success: it’s how they get paid. The result? Advertisers are being fed results built on inflated conversions, black-box algorithms, and statistical sleight of hand. It’s time to face the truth: performance advertising is far from a hard science.

Take pay-for-performance pricing in the connected and streaming TV space. In theory, these models promise advertisers that they’ll only pay when a specific, measurable action (or “a Tangible Business Outcome”) – like a purchase or sign-up – occurs as a result of their TV ad airing. The model seems sound enough: instead of paying for every impression, advertisers invest in real outcomes, with advanced data tracking linking exposures to consumer behaviors. It blends the wide reach of streaming TV with the precision and accountability of digital marketing, promising that ad spend directly correlates with tangible results. But does it deliver? Not really. I’ve shared my thoughts before on the glaring issues with the pay-for-performance model in TV, particularly when advertisers don’t know the full details of how a vendor attributes results. But that hasn’t stopped advertisers from being blinded by the promise of only paying for performance. After all, it works so well on Google and Meta – why not TV? Let’s take a closer look at the issues with this approach once and for all.


1. Device Graph Issues

The accuracy of TV attribution hinges on how ad exposures are linked to consumer actions. Many providers rely on device graphs that use IP-matching to connect ad viewers to purchases, but without careful filtering of communal and commercial IPs, attribution numbers can be wildly inflated.

There are several situations where a group of different responders may be served CTV/streaming TV impressions from the same IP address, known as a communal IP. The figure above illustrates some of these examples.

In some cases, improperly handled IP matching has overstated results by as much as 10x. Mobile networks and certain ISPs assign the same IP address to multiple users, meaning a purchase attributed to a TV ad may have come from someone who never saw it. And the longer the view-through window, the greater the likelihood of mismatches. Agencies that benefit from these loose methodologies have little incentive to correct them.

The above figure shows how a responder may be falsely attributed to TV. Person A sees an ad served over a communal IP. Person B, who never saw the ad, later responds through the same communal IP and is falsely credited as an incremental TV conversion by a system that relies purely on IP matching.
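To make the failure mode concrete, here is a minimal sketch of the difference between naive IP matching and a version that filters out likely communal IPs. All IPs, device names, and the device-count threshold are hypothetical and purely illustrative; real device graphs use far more signals than this.

```python
from collections import defaultdict

# Hypothetical exposure and conversion logs, keyed by (IP, device).
exposures = [
    ("203.0.113.7", "tv-roku-1"),    # household TV
    ("198.51.100.9", "tv-fire-2"),   # TV behind a shared (communal) IP
]
conversions = [
    ("203.0.113.7", "phone-a"),      # same household as the TV exposure
    ("198.51.100.9", "phone-b"),     # never saw the ad; same shared IP
    ("198.51.100.9", "phone-c"),
    ("198.51.100.9", "phone-d"),
]

def naive_ip_attribution(exposures, conversions):
    """Credit every conversion whose IP appears in the exposure log."""
    exposed_ips = {ip for ip, _ in exposures}
    return [c for c in conversions if c[0] in exposed_ips]

def filtered_ip_attribution(exposures, conversions, max_devices_per_ip=2):
    """Drop IPs with too many distinct converting devices -- a crude
    proxy for communal/commercial IPs (CGNAT, offices, campuses)."""
    devices_per_ip = defaultdict(set)
    for ip, device in conversions:
        devices_per_ip[ip].add(device)
    communal = {ip for ip, devs in devices_per_ip.items()
                if len(devs) > max_devices_per_ip}
    exposed_ips = {ip for ip, _ in exposures} - communal
    return [c for c in conversions if c[0] in exposed_ips]

print(len(naive_ip_attribution(exposures, conversions)))     # credits 4
print(len(filtered_ip_attribution(exposures, conversions)))  # credits 1
```

In this toy data, naive matching credits four conversions to TV; filtering the shared IP leaves one, the same 4x-style inflation described above.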

2. View-through vs. Incrementality

Tracking view-through conversions in streaming TV is possible, but it doesn’t automatically prove that the ad drove the sale. The key question is incrementality: how many of those conversions would have happened anyway? Without rigorous controls, such as a holdout group that establishes an organic baseline, advertisers risk paying for results driven by other marketing efforts or for conversions that would have happened organically, not by their TV spend.
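The arithmetic behind that question is simple, which is what makes ignoring it so convenient. Here is a sketch with illustrative numbers (all counts invented): a holdout group estimates the organic baseline, and only the lift above it is incremental.

```python
# Illustrative numbers only; a real test requires a randomized holdout.
exposed_users, exposed_conversions = 100_000, 1_200   # saw the TV ad
control_users, control_conversions = 100_000, 1_000   # holdout, no ad

# Baseline: conversions the exposed group would have produced anyway,
# estimated from the control group's conversion rate.
baseline_conversions = control_conversions * exposed_users // control_users

# Only the lift over that baseline is incremental to the TV ad.
incremental = exposed_conversions - baseline_conversions
print(incremental)  # 200
```

A vendor billing per conversion on all 1,200 view-through conversions would, in this scenario, be charging for six times the ad’s actual incremental impact.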

3. Transparency—or Lack Thereof

Multi-touch attribution (MTA) models that assign fractional credit to CTV ads rely on a series of assumptions and data-cleaning decisions, many of which are hidden from advertisers. If a vendor is charging on a pay-per-conversion basis but won’t fully disclose how it attributes sales, that should be a major red flag. Promises of “radical transparency” are only valid if the vendor fully discloses data and measurement methodologies.

4. Cross-Device Challenges

Consumers often engage with content on multiple devices—TV screens, smartphones, tablets, and desktops—while encountering ads across various channels. This makes it difficult for an MTA model to pinpoint which exposure truly influenced a conversion. For example, a user might hear a terrestrial radio ad while driving, search for the product on their desktop an hour later, then encounter a retargeting TV ad on a streaming device before purchasing on their phone. The initial terrestrial radio touchpoint may be ignored entirely. Instead, the model could incorrectly credit the more trackable TV ad. This highlights how cross-device and cross-channel interactions can lead to false precision and obscure the true drivers of performance.
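The journey above can be sketched in a few lines. The touchpoint names and the even-split ("linear") credit rule are illustrative assumptions, not any particular vendor’s model; the point is that an untracked touch simply vanishes from the math.

```python
# Hypothetical journey: the terrestrial radio touch is untracked,
# so the attribution model never sees it.
true_journey = ["radio_ad", "desktop_search", "ctv_ad", "purchase_phone"]
observed_journey = [t for t in true_journey if t != "radio_ad"]

def linear_mta_credit(touchpoints):
    """Split conversion credit evenly across the ad touchpoints
    the model can observe (the final purchase event gets no credit)."""
    ad_touches = [t for t in touchpoints if t != "purchase_phone"]
    return {t: 1 / len(ad_touches) for t in ad_touches}

print(linear_mta_credit(observed_journey))
# {'desktop_search': 0.5, 'ctv_ad': 0.5} -- the radio ad that
# started the journey receives zero credit.
```

The model reports precise-looking fractions, but the precision is false: the trackable CTV ad absorbs credit that partly belongs to an invisible channel.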

Conclusion

At the core of the challenge is the fact that many vendors don’t see any reason to change what they’re doing. Why would they? After all, pay-for-performance is easy to sell and, unless CEOs and boards demand detail, easy to defend. Vendors charging per conversion have little reason to establish a true baseline for conversions that would have happened anyway. In fact, they may have a financial incentive to underestimate that baseline, making the impact of TV ads seem larger than it really is. If you’re paying the originally quoted price per conversion, there’s a good chance you’re overpaying. That sounds less like science and more like deception.

Learn how Tatari measures the true incremental impact of your TV campaigns and how we optimize based on your KPIs.



Philip Inghelbrecht

I'm CEO at Tatari. I love getting things done.
