TV and audio incrementality has traditionally been difficult to measure, but technological shifts have produced a strong playbook for testing incrementality across both channels.
Incrementality testing helps attribute conversions to TV and audio ads by isolating two audiences – a treatment group that receives the brand’s ad and a control group that doesn’t – and comparing the behavior of each, either through discrete, timed tests or on an always-on basis.
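At its simplest, the comparison reduces to a lift calculation: the difference in conversion rates between the two groups, read against the control baseline. A minimal sketch using hypothetical counts:

```python
# Minimal sketch of an incrementality lift calculation.
# All counts are hypothetical, not real campaign data.

treatment_users = 500_000        # users exposed to the brand's ad
treatment_conversions = 6_000

control_users = 500_000          # matched users who never saw the ad
control_conversions = 5_000

treatment_rate = treatment_conversions / treatment_users   # 1.2%
control_rate = control_conversions / control_users         # 1.0%

absolute_lift = treatment_rate - control_rate    # extra conversions per exposed user
relative_lift = absolute_lift / control_rate     # lift over the organic baseline
incremental = absolute_lift * treatment_users    # conversions the ads actually drove

print(f"Absolute lift: {absolute_lift:.2%}")           # 0.20%
print(f"Relative lift: {relative_lift:.0%}")           # 20%
print(f"Incremental conversions: {incremental:,.0f}")  # 1,000
```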
With either approach, there are a few things to keep in mind.
For one thing, if the treatment group for a sneaker brand is 18-24-year-olds and the control group only contains people over 65, you’ll see divergence in response rates regardless of the ad. Ensure the audiences are consistent across both groups – a quick demographic balance check, sketched below, can catch this before any results are read.
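That balance check can be as simple as a chi-square test on the two groups’ demographic counts. A minimal sketch; the age buckets and counts are hypothetical:

```python
# Sketch: check that treatment and control share a similar demographic
# makeup before comparing response rates. All counts are hypothetical.
from scipy.stats import chi2_contingency

age_buckets = ["18-24", "25-44", "45-64", "65+"]
treatment_counts = [120_000, 210_000, 150_000, 20_000]
control_counts = [118_500, 212_300, 148_900, 20_300]

# Null hypothesis: both groups draw from the same age distribution.
chi2, p_value, dof, _ = chi2_contingency([treatment_counts, control_counts])

if p_value < 0.05:
    print("Warning: groups differ demographically; rebalance before testing.")
else:
    print(f"No significant imbalance detected (p = {p_value:.2f}).")
```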
Some products also have longer consideration cycles than others (mattresses vs. pizza delivery), and some marketing channels have longer response cycles (billboards vs. brand search). Incrementality assessments should use lag windows appropriate to both, as sketched below.
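Concretely, the conversion join can apply a different lookback per channel. A minimal sketch; the window lengths are illustrative assumptions, not recommendations:

```python
# Sketch: count only conversions that land within a channel-appropriate
# lag window after exposure. Window lengths are illustrative assumptions.
from datetime import datetime, timedelta

LAG_WINDOWS = {
    "brand_search": timedelta(days=1),    # short response cycle
    "streaming_tv": timedelta(days=14),   # longer consideration cycle
}

def in_window(exposed_at: datetime, converted_at: datetime, channel: str) -> bool:
    """True if the conversion falls inside the channel's lag window."""
    delta = converted_at - exposed_at
    return timedelta(0) <= delta <= LAG_WINDOWS[channel]

# A conversion 10 days after a streaming TV exposure still counts...
print(in_window(datetime(2024, 3, 1), datetime(2024, 3, 11), "streaming_tv"))  # True
# ...but the same gap after a brand search exposure would not.
print(in_window(datetime(2024, 3, 1), datetime(2024, 3, 11), "brand_search"))  # False
```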
Marketers must also decide whether they are interested in total campaign incrementality or the incrementality of individual placements within a campaign. To evaluate the incrementality of TV overall, brands should ensure the control group remains unexposed to the entire TV campaign.
Finally, direct IO media, bought through a network sales team and delivered via the network’s SSP, has different constraints than DSP media, which is transacted programmatically through automated systems. Direct IO’s manual processes limit targeting and provide less control over group construction at scale.
While there are many ways to evaluate incrementality, there are five basic techniques suited to TV and audio.
1. Geo holdouts: Run media in one set of geographies (treatment), withhold media from another (control) and compare the overall change in volume.
While this can capture “all-in” channel effects (like spillover into other media), watch out for control group bias. DMAs vary in demographic makeup and baseline response, and with only about 200 DMAs to assign, random chance alone can leave treatment and control imbalanced. Consult an econometrician to study, model and predict causal outcomes.
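As a back-of-envelope illustration, the geo comparison reduces to a difference-in-differences on pre/post volume. The figures below are hypothetical, and a real test would layer econometric modeling on top:

```python
# Sketch: naive difference-in-differences for a geo holdout.
# Sales totals per DMA group are hypothetical; a production analysis
# would model DMA-level covariates instead of using raw totals.

treatment_pre, treatment_during = 1_000_000, 1_120_000   # DMAs receiving media
control_pre, control_during = 980_000, 1_010_000         # held-out DMAs

# Growth each group saw on its own baseline.
treatment_growth = (treatment_during - treatment_pre) / treatment_pre   # 12.0%
control_growth = (control_during - control_pre) / control_pre           # ~3.1%

# Difference-in-differences: lift attributable to the media, assuming the
# held-out DMAs capture the market-wide trend.
did_lift = treatment_growth - control_growth
print(f"Estimated geo lift: {did_lift:.1%}")   # ~8.9%
```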
2. Audience holdouts – PSA: Isolate treatment and control audiences within a network placement (the network or DSP typically creates the audiences), serve the brand’s ad to the treatment group and another ad (e.g., an unrelated PSA) to the control group. Then, compare differences in response rates between the exposed and unexposed audiences.
Large sample sizes reduce chance imbalances between treatment and control, producing quicker, more cost-effective, apples-to-apples comparisons than geo holdouts (the response-rate comparison itself is sketched after this section).
At the same time, be wary of PSA cost and network capabilities. Most network-led audience holdouts require the brand to buy the control group’s impressions, too, because those impressions displace ads that would otherwise fill the slots. Networks also set up the audiences, so vet their methodology.
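With exposure logged at the user level, the exposed-vs.-unexposed comparison is a standard two-proportion test. A minimal sketch on hypothetical counts:

```python
# Sketch: significance test for a PSA audience holdout.
# Conversion counts are hypothetical.
from math import sqrt
from statistics import NormalDist

exposed_n, exposed_conv = 2_000_000, 26_000    # saw the brand's ad
control_n, control_conv = 2_000_000, 24_000    # saw the PSA instead

p1 = exposed_conv / exposed_n
p2 = control_conv / control_n
p_pool = (exposed_conv + control_conv) / (exposed_n + control_n)

# Standard two-proportion z-test with pooled variance.
se = sqrt(p_pool * (1 - p_pool) * (1 / exposed_n + 1 / control_n))
z = (p1 - p2) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"Relative lift: {(p1 - p2) / p2:.1%}, z = {z:.2f}, p = {p_value:.4f}")
```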
3. Audience holdouts – Ghost bidding: An ad server delivers the true campaign ad to 70% of winning bids (the treatment group) and withholds it from the remaining 30%, recording the user IDs that would have been served (the control group). Instead of the campaign ad, the control group sees the “next best ad” that wins the auction. The brand then compares the response rates of the two groups (a stripped-down sketch of this assignment logic appears after this section).
This approach provides the most reliable control groups. If a brand is targeting plumbers in Minnesota, the DSP will use the exact same audience to construct the control group: viewers are routed to the control or treatment arm only after all targeting criteria are met.
However, results may be a better gauge of how a specific audience segment performs than of the streaming campaign’s overall incrementality. Work with a DSP that can randomize ad serving at the campaign rather than the placement level, or a partner that can scrub the data after test completion to create a synthetic campaign-level control group.
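That stripped-down sketch: a deterministic 70/30 split applied only after targeting has matched. The split share mirrors the description above; the identifiers and hashing scheme are illustrative assumptions, not any specific DSP’s implementation:

```python
# Sketch: ghost-bidding assignment inside an ad server. The 70/30 split
# follows the description above; identifiers and the hashing scheme are
# illustrative assumptions.
import hashlib

TREATMENT_SHARE = 0.70
ghost_log = []  # control-arm (user_id, campaign_id) pairs for later comparison

def assign_arm(user_id: str, campaign_id: str) -> str:
    """Deterministically split winning bids into treatment and control.

    Called only AFTER all targeting criteria have matched, so both arms
    draw from exactly the same eligible audience.
    """
    digest = hashlib.sha256(f"{campaign_id}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform in [0, 1]
    return "treatment" if bucket < TREATMENT_SHARE else "control"

def serve(user_id: str, campaign_id: str) -> str:
    if assign_arm(user_id, campaign_id) == "treatment":
        return "brand_creative"                 # deliver the true campaign ad
    ghost_log.append((user_id, campaign_id))    # record the would-be impression
    return "next_best_ad"                       # control sees the next auction winner

print(serve("user-123", "campaign-abc"))
```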
4. Always-on synthetic incrementality: Use impression delivery records and household graph data to construct synthetic control groups. For example, use impression records from other, similarly targeted campaigns, randomly sample them to create a control group, and then compare response rates.
This is the only way to achieve always-on incrementality measurement for direct IO media. But it is not a randomized experiment, so the synthetic control group should be behaviorally matched to be as similar as possible to the treatment group: match the groups on geography, impression timing and audience targeting (a sketch of that matching step follows). Work with an agency that has a robust econometric team and deep experience in bias elimination.
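The sketch below exact-matches each exposed household on geography and audience segment, then takes the nearest unexposed household by impression timing; the record layout and matching rule are illustrative assumptions:

```python
# Sketch: build a synthetic control by matching each exposed household to
# an unexposed one on geography, audience segment and impression timing.
# The record layout and matching rule are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Household:
    hh_id: str
    dma: str          # geography
    segment: str      # audience-targeting bucket
    hour_seen: int    # hour of the actual (or candidate) impression

def match_control(exposed: Household, pool: list) -> Optional[Household]:
    """Exact-match on DMA and segment, then nearest impression hour."""
    candidates = [h for h in pool
                  if h.dma == exposed.dma and h.segment == exposed.segment]
    if not candidates:
        return None   # better left unmatched than given a biased match
    return min(candidates, key=lambda h: abs(h.hour_seen - exposed.hour_seen))

exposed_hh = Household("hh-1", "minneapolis", "sports_fans", hour_seen=20)
pool = [Household("hh-7", "minneapolis", "sports_fans", hour_seen=21),
        Household("hh-9", "duluth", "sports_fans", hour_seen=20)]
print(match_control(exposed_hh, pool).hh_id)   # hh-7: same DMA/segment, closest hour
```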
5. Portfolio modeling: Ingest a brand’s marketing portfolio data to build a regression-based time series model. A marketing mix model (MMM) can illuminate how changes in channel investment relate to KPI changes while accounting for lags, saturation, seasonality and other factors.
This marketing-channel-level view of TV or audio is a necessary complement to placement-level analysis, which is typically so granular that it doesn’t account for longer response lag times or external factors like seasonality.
To ensure effectiveness, create robust, automated data pipelines to ingest data from marketing platforms (e.g., Facebook, Google). Your MMM should include priors from channel incrementality tests to help distinguish a channel’s incremental signal from observational noise; a toy illustration of the two core MMM transforms, adstock and saturation, follows.
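Here is that toy illustration: geometric adstock captures the lagged response and a saturation curve captures diminishing returns. Parameter values are illustrative assumptions, not fitted estimates:

```python
# Toy sketch of two MMM transforms: geometric adstock (carryover lag) and
# a saturating response curve. Parameter values are illustrative; a real
# MMM fits them, often with priors taken from incrementality tests.
import numpy as np

def adstock(spend: np.ndarray, decay: float = 0.6) -> np.ndarray:
    """Carry a decaying share of each week's effect into later weeks."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for t, x in enumerate(spend):
        carry = x + decay * carry
        out[t] = carry
    return out

def saturate(x: np.ndarray, half_sat: float = 50.0) -> np.ndarray:
    """Diminishing returns: the effect flattens as spend grows."""
    return x / (x + half_sat)

weekly_tv_spend = np.array([0, 80, 80, 80, 0, 0, 0, 0], dtype=float)  # $ thousands

# The transformed series (not raw spend) enters the regression, so the
# model sees both the lagged response and diminishing returns.
print(np.round(saturate(adstock(weekly_tv_spend)), 3))
```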
Incrementality testing can help advertisers allocate marketing dollars to the most efficient channels. The potential downside? Poor data quality and a weak measurement strategy. To avoid these pitfalls, work with partners skilled at placement-level and total-portfolio incrementality measurement – and confirm they offer both discrete and always-on solutions.
“On TV & Video” is a column exploring opportunities and challenges in advanced TV and video.