
Setting Up Effective Incrementality Testing For TV and Radio Campaigns

Devin McGaughey, Chief Solutions Officer, Streaming+ at Tinuiti

TV and audio incrementality have traditionally been difficult to measure, but technological shifts have produced a strong playbook for testing incrementality across both media.

Incrementality testing helps attribute conversions to TV and audio ads by isolating two audiences – a treatment group that receives the brand’s ad and a control that doesn’t – and comparing the behavior of each audience via either discrete timed tests or in an always-on fashion. 
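To make that comparison concrete, here is a minimal sketch (in Python, with purely hypothetical numbers) of how incremental lift is typically computed from the two groups’ response rates; the exact metrics a measurement partner reports may differ.

```python
# Hypothetical numbers: conversions and audience sizes for the two groups.
treatment_conversions, treatment_size = 1_200, 500_000  # saw the brand's ad
control_conversions, control_size = 800, 500_000        # held out

treatment_rate = treatment_conversions / treatment_size
control_rate = control_conversions / control_size        # baseline behavior

incremental_rate = treatment_rate - control_rate          # absolute effect of the ad
relative_lift = incremental_rate / control_rate
incremental_conversions = incremental_rate * treatment_size

print(f"Relative lift: {relative_lift:.0%}, "
      f"incremental conversions: {incremental_conversions:,.0f}")
```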

With either approach, there are a few things to keep in mind.

For one thing, if the treatment group for a sneaker brand is 18-24-year-olds and the control group only contains people over 65, you’ll see divergence in response rates regardless of the ad. Ensure the audience composition is consistent across both groups.

Some products also have longer consideration cycles (pizza delivery vs. mattresses), and some marketing channels have longer response cycles (brand search vs. billboards). Incrementality assessments should use appropriate lag windows. 
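As an illustration of the lag-window point, the sketch below counts a conversion only if it lands inside a channel-appropriate window after exposure. The window lengths and channel names are assumptions for the example, not recommended benchmarks.

```python
from datetime import datetime, timedelta

# Assumed lag windows per channel -- tune these to the product's consideration cycle.
LAG_WINDOWS = {"streaming_tv": timedelta(days=14), "streaming_audio": timedelta(days=7)}

def counts_as_incremental(exposure_time: datetime, conversion_time: datetime, channel: str) -> bool:
    """True if the conversion falls within the channel's lag window after exposure."""
    elapsed = conversion_time - exposure_time
    return timedelta(0) <= elapsed <= LAG_WINDOWS[channel]
```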

Marketers must also decide if they are interested in total campaign incrementality or the incrementality of individual placements within a campaign. To evaluate the incrementality of TV overall, brands should ensure the control group remains unexposed to the entire TV campaign.

Finally, direct IO media bought through a network sales team and delivered via the network’s SSP has different constraints than DSP media, which is transacted programmatically through automated systems. Direct IO’s manual processes limit targeting and provide less control over group construction at scale.

While there are many ways to evaluate incrementality, there are five basic techniques suited to TV and audio.

1. Geo holdouts: Run media in one set of geographies (treatment), withhold media from another (control) and compare the overall change in volume.

While this can capture “all-in” channel effects (like spillover into other media), watch out for control group bias. DMAs vary in demographic makeup and baseline response, and with only roughly 210 DMAs in the US, random chance alone can leave treatment and control markets imbalanced. Consult an econometrician to study, model and predict causal outcomes.
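One rough way to read a geo holdout, sketched below, is a simple pre/post, treatment-versus-control comparison of market-level sales (a basic difference-in-differences). Real geo tests typically rely on matched-market selection or synthetic-control modeling rather than raw averages, and the figures here are hypothetical.

```python
import statistics

# {DMA: (pre-period sales, post-period sales)} -- hypothetical data
treatment_dmas = {"Minneapolis": (100, 130), "Denver": (90, 118)}
control_dmas = {"Portland": (95, 100), "Kansas City": (85, 92)}

def avg_change(dmas):
    """Average pre-to-post change in sales across a set of DMAs."""
    return statistics.mean(post - pre for pre, post in dmas.values())

# Treatment change minus control change = estimated incremental effect per DMA.
incremental_per_dma = avg_change(treatment_dmas) - avg_change(control_dmas)
print(f"Estimated incremental sales per DMA: {incremental_per_dma:.1f}")
```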

2. Audience holdouts – PSA: Isolate treatment and control audiences within a network placement (the network or DSP typically creates the audiences), serve the brand’s ad to the treatment group and a different ad (e.g., an unrelated PSA) to the control group. Then compare the response rates of the two groups.


Audience holdouts deliver large sample sizes, which reduce chance imbalances between treatment and control, and they produce quicker, more cost-effective, apples-to-apples comparisons than geo holdouts.

At the same time, be wary of PSA costs and network capabilities. Most network-led audience holdouts require the brand to pay for the control group’s impressions, since the PSAs displace other paid ads. Networks also construct the audiences, so vet their methodology.
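Because audience holdouts produce large, comparable samples, a standard two-proportion z-test is one way to check whether the gap between the exposed and PSA groups is bigger than chance. This is a generic statistical sketch with hypothetical numbers, not any network’s reporting methodology.

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical results: brand-ad group vs. PSA control group.
z = two_proportion_z(conv_a=2_400, n_a=1_000_000, conv_b=2_050, n_b=1_000_000)
print(f"z = {z:.2f}")  # |z| > 1.96 is roughly significant at the 95% level
```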

3. Audience holdouts – Ghost bidding: An ad server delivers true campaign ads to 70% of winning bids (the treatment group) and withholds the true ad from the remaining 30% of winning bids. It also records the end user ID to which the ad would have gone (the control group). Instead of the true campaign ad, the control group sees the “next best ad” that wins the auction. The brand then compares the response rates of the two groups.

This approach provides the most reliable control groups. If a brand is targeting plumbers in Minnesota, the DSP constructs the control group from the exact same audience, because viewers are routed to the control or treatment arm only after all targeting criteria are met.

However, results may be a better gauge of how a specific audience segment performs than of the streaming campaign’s overall incrementality. Work with a DSP that can randomize ad serving at the campaign level rather than the placement level, or a partner that can scrub the data after the test completes to create a synthetic campaign-level control group.
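The sketch below shows one way a DSP might implement the post-targeting split: a deterministic hash of the user ID routes each qualified impression to treatment or control, using the 70/30 split from the example above. The function and field names are assumptions, not a description of any specific DSP.

```python
import hashlib

TREATMENT_SHARE = 0.70  # 70% see the brand's ad; 30% are logged as ghost-bid controls

def assign_arm(user_id: str, experiment_id: str) -> str:
    """Deterministically bucket a user after all targeting criteria are met."""
    digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "treatment" if bucket < TREATMENT_SHARE else "control"
```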

4. Always-on synthetic incrementality: Use impression delivery records and household graph data to construct synthetic control groups. For example, pull impression records from other similarly targeted campaigns, randomly sample them to create a control group and then compare the response rates.

This is the only way to achieve always-on incrementality measurement for direct IO media. But it is not a randomized experiment, so the synthetic control group should be behaviorally matched to be as similar as possible to the treatment group. Match the groups on geography, impression timing and audience targeting, and work with an agency that has a robust econometric team and deep experience in bias elimination.
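As a rough illustration of behavioral matching, the sketch below samples a synthetic control from a pool of impression records so that its mix of geography, daypart and audience segment mirrors the exposed group. The record fields and matching keys are assumptions; production approaches use far more rigorous econometric matching.

```python
import random
from collections import defaultdict

def build_synthetic_control(exposed, candidate_pool, keys=("dma", "daypart", "segment")):
    """Sample one matched control record per exposed record, stratified on `keys`."""
    by_stratum = defaultdict(list)
    for record in candidate_pool:
        by_stratum[tuple(record[k] for k in keys)].append(record)

    control = []
    for record in exposed:
        matches = by_stratum.get(tuple(record[k] for k in keys))
        if matches:
            control.append(random.choice(matches))
    return control
```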

5. Portfolio modeling: Ingest a brand’s marketing portfolio data to build a regression-based time series model. A marketing mix model (MMM) can illuminate how changes in channel investment relate to KPI changes while accounting for lags, saturation, seasonality and other factors.

This marketing-channel-level view of TV or audio is a necessary complement to placement-level analysis, which is typically so granular that it doesn’t account for longer response lag times or external factors like seasonality.

To ensure effectiveness, create robust and automated data pipelines to ingest data from marketing platforms (e.g., Facebook, Google). Your MMM should include priors from channel incrementality tests to help distinguish a channel’s incremental signal from observational noise.
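To show the shape of such a model, here is a toy MMM on synthetic data: spend is transformed with adstock (carryover) and a saturation curve, then regressed against the KPI alongside a seasonality term. Real MMMs are typically Bayesian and fold in incrementality-test priors; plain OLS is used here purely for illustration.

```python
import numpy as np

def adstock(spend, decay=0.5):
    """Carry a share of each period's spend effect into later periods."""
    out, carry = np.zeros(len(spend)), 0.0
    for t, x in enumerate(spend):
        carry = x + decay * carry
        out[t] = carry
    return out

def saturate(x, half_sat):
    """Simple diminishing-returns curve."""
    return x / (x + half_sat)

rng = np.random.default_rng(0)
weeks = 104
tv_spend = rng.uniform(0, 100, weeks)
audio_spend = rng.uniform(0, 50, weeks)
seasonality = np.sin(2 * np.pi * np.arange(weeks) / 52)

X = np.column_stack([
    np.ones(weeks),                              # baseline
    saturate(adstock(tv_spend), half_sat=50),    # transformed TV
    saturate(adstock(audio_spend), half_sat=25), # transformed audio
    seasonality,
])
kpi = X @ np.array([100, 40, 15, 10]) + rng.normal(0, 5, weeks)  # synthetic KPI

coefs, *_ = np.linalg.lstsq(X, kpi, rcond=None)
print(dict(zip(["base", "tv", "audio", "seasonality"], np.round(coefs, 1))))
```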

Incrementality testing can help advertisers allocate marketing dollars to the most efficient channels. The potential pitfalls? Poor data quality and a weak measurement strategy. To avoid them, work with partners skilled at both placement-level and total-portfolio incrementality measurement – and confirm they offer both discrete and always-on solutions.

“On TV & Video” is a column exploring opportunities and challenges in advanced TV and video.

Follow Tinuiti and AdExchanger on LinkedIn.
