Most advertising dashboards show positive results. ROAS is above target. Conversions are tracking. Cost per acquisition looks efficient. And yet, when the fundamental question is asked directly, most marketing teams cannot answer it with confidence: would those customers have purchased without the advertising?
This is the incrementality question. It is the only question that reveals whether advertising is generating new revenue or simply claiming credit for revenue that would have occurred anyway.
Incrementality testing is the measurement methodology that answers it. For any brand making significant advertising investment decisions, understanding what incrementality testing is, how it works, and why standard attribution cannot replace it is essential to making those decisions accurately.
Incrementality testing measures the causal impact of advertising by comparing conversion outcomes between an audience that was exposed to a campaign and an audience that was not.
The exposed group sees the advertising as usual. The holdout group, a defined segment of the same target audience, does not see the campaign during the test period. All other conditions remain constant. At the end of the measurement window, the conversion rates of the two groups are compared. The difference is the incremental lift: the proportion of conversions caused by the advertising rather than occurring organically.
The output of an incrementality test is not a ROAS number or an attributed conversion count. It is a lift percentage that represents the true causal contribution of the advertising to conversion behavior.
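The arithmetic behind that lift percentage is straightforward. The sketch below is a minimal illustration in Python; the group sizes and conversion counts are hypothetical.

```python
def incremental_lift(exposed_conversions, exposed_size,
                     holdout_conversions, holdout_size):
    """Relative lift: the extra conversion rate in the exposed group,
    measured against the organic baseline set by the holdout."""
    cr_exposed = exposed_conversions / exposed_size
    cr_holdout = holdout_conversions / holdout_size
    return (cr_exposed - cr_holdout) / cr_holdout

# Hypothetical test: 100,000 users per group.
# Exposed group converts at 2.4%, holdout at 2.0%.
lift = incremental_lift(2_400, 100_000, 2_000, 100_000)
print(f"Incremental lift: {lift:.0%}")  # Incremental lift: 20%
```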
Attribution models are designed to assign credit to advertising touchpoints within a customer’s conversion journey. Last-click attribution assigns full credit to the final touchpoint before conversion. First-click assigns it to the first. Data-driven models use algorithms to distribute credit across multiple touchpoints based on historical patterns.
Every attribution model answers the same question: which touchpoints were present when the customer converted? None of them answer the question that determines whether advertising is genuinely effective: which touchpoints caused the conversion?
A customer who has already decided to purchase, who has visited a product page multiple times, added items to a cart, and searched for the brand directly, is going to convert. When a retargeting ad appears in that customer’s journey before the purchase completes, the attribution model records the ad as a conversion driver. But the customer was already converting; the ad was merely present. The attribution model cannot distinguish presence from causation.
This structural limitation means that every attribution model, regardless of its sophistication, consistently over-credits campaigns that reach high-intent audiences already in the conversion funnel. Retargeting campaigns and branded search campaigns show strong attributed ROAS not because they are generating the most incremental revenue, but because they are efficiently accompanying audiences who were already converting.
When a single customer is reached by campaigns across multiple platforms before converting, each platform claims credit for the conversion independently. A customer exposed to a Meta campaign on Monday, a Google display ad on Wednesday, and a branded search result on Friday generates three separate attributed conversions across three platform dashboards, all from a single purchase event.
When total attributed conversions across all active platforms are compared against actual backend conversions for the same period, the combined platform total almost always exceeds actual conversions by a significant margin. Every budget allocation decision built on platform-reported attribution is built on a figure that has been inflated by simultaneous credit claims across competing platforms.
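A quick reconciliation makes the inflation concrete. The sketch below compares summed platform-reported conversions against deduplicated backend conversions; all figures are hypothetical.

```python
# Hypothetical month: each platform independently reports attributed conversions.
platform_reported = {"Meta": 1_200, "Google Display": 900, "Branded Search": 1_500}
backend_conversions = 2_000  # deduplicated purchases in the order system

reported_total = sum(platform_reported.values())  # 3,600
inflation = reported_total / backend_conversions  # 1.8

print(f"Platforms claim {reported_total} conversions against "
      f"{backend_conversions} actual purchases ({inflation:.1f}x over-count).")
```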
The standard incrementality test uses a holdout group methodology:

1. Define the full target audience for the campaign.
2. Randomly split that audience into an exposed group and a holdout group.
3. Run the campaign to the exposed group while withholding it from the holdout, keeping every other condition constant across both groups.
4. At the end of the measurement window, compare the conversion rates of the two groups.
5. Calculate the incremental lift from the difference.
An incrementality test produces one of three outcomes:
Positive incremental lift: The exposed group converts at a meaningfully higher rate than the holdout. The campaign is generating conversions that would not have occurred organically. The advertising is working in the truest sense.
Minimal incremental lift: The conversion rates between exposed and holdout groups are similar. The campaign is reaching audiences who were already converting. The attributed ROAS is overstating actual impact. Budget reallocation is indicated.
Negative incremental lift: In rare cases, the holdout group outperforms the exposed group, indicating that the advertising is actively interfering with organic conversion behavior. This typically occurs when high-frequency campaigns create negative brand associations among audiences already predisposed to convert.
For an incrementality test to produce statistically reliable results, four conditions must be met:

1. Random assignment: users or markets must be assigned to the exposed and holdout groups randomly, so the groups are comparable before the campaign runs.
2. Sufficient sample size: both groups must be large enough that the expected lift is detectable above normal conversion-rate variance.
3. Adequate duration: the test must run long enough to cover the typical consideration and conversion cycle for the product.
4. Clean separation: the holdout group must receive no exposure to the campaign, with no leakage between groups and no overlapping campaigns contaminating the comparison.
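To judge whether an observed lift clears statistical noise, one standard approach is a two-proportion z-test on the exposed and holdout conversion rates. The sketch below uses only the Python standard library; the group sizes and conversion counts are hypothetical.

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical test: exposed 2,400/100,000 vs holdout 2,000/100,000.
z, p = two_proportion_ztest(2_400, 100_000, 2_000, 100_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # well past conventional significance thresholds
```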
Retargeting campaigns consistently show the largest gap between attributed performance and incremental performance. Because retargeting targets audiences already in the purchase funnel, a significant proportion of attributed conversions occur among users who would have converted without the retargeting exposure.
Incrementality tests on retargeting campaigns frequently reveal that only 20 to 40 percent of attributed conversions are genuinely incremental. The remaining 60 to 80 percent represent organic conversions that the retargeting campaign was present for but did not cause. Brands allocating significant budget to retargeting based on attributed ROAS are often over-investing in the campaign category with the lowest actual incremental impact.
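To translate that finding into reported performance, one rough adjustment is to discount the attributed ROAS by the tested incremental fraction, assuming revenue per conversion is similar for incremental and organic purchases. All figures below are hypothetical.

```python
attributed_roas = 8.0        # hypothetical platform-reported ROAS for retargeting
incremental_fraction = 0.30  # hypothetical test result: 30% of attributed
                             # conversions were genuinely incremental

# Assumes similar revenue per conversion for incremental and organic purchases.
incremental_roas = attributed_roas * incremental_fraction
print(f"Attributed ROAS {attributed_roas:.1f} -> "
      f"incremental ROAS {incremental_roas:.1f}")  # 8.0 -> 2.4
```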
Branded search campaigns target users who have already searched for the brand by name, indicating strong existing purchase intent. Like retargeting, branded search frequently shows strong attributed metrics and weak incremental lift, because the users being reached were already actively seeking the brand before the ad appeared.
Incrementality testing on branded search often reveals that organic search results would have captured a high proportion of these conversions at zero additional cost. The incremental contribution of the paid branded search investment is frequently lower than attributed metrics suggest.
Prospecting campaigns targeting audiences with no prior brand exposure typically show weaker attributed metrics but stronger incremental lift. These campaigns are reaching audiences who would not have converted organically within the measurement window. The conversion rate is lower. The attribution model gives them less credit. The incrementality test reveals that a higher proportion of their attributed conversions represent genuine new revenue generation.
Brands optimizing purely toward attributed ROAS systematically under-invest in the campaign categories producing the highest actual incremental impact.
Incrementality testing and A/B testing are both controlled measurement methodologies, but they answer different questions.
A/B testing compares the performance of two variants, such as two creative executions or two landing pages, against each other. It answers the question: which variant performs better?
Incrementality testing compares outcomes between an exposed group and a holdout group with no exposure. It answers the question: does the advertising produce conversions that would not have occurred organically?
Both methodologies are valuable. A/B testing optimizes campaign elements within an active campaign. Incrementality testing validates whether the campaign itself is generating genuine revenue impact.
The campaigns showing the strongest attributed ROAS are the most important candidates for incrementality testing, because they carry the highest risk of over-attribution. Begin with retargeting and branded search campaigns, where the gap between attributed and incremental performance is structurally most likely to be significant.
A single incrementality test produces a single data point. Integrating incrementality measurement into a regular quarterly testing cadence produces a longitudinal view of which campaigns are generating genuine impact over time, accounting for seasonal variation and evolving audience behavior.
Incrementality test results should directly inform budget allocation decisions. Campaigns with high attributed ROAS and low incremental lift are candidates for budget reduction. Campaigns with lower attributed metrics and high incremental lift are candidates for increased investment. The goal is to shift spend toward campaigns generating the highest proportion of genuinely new revenue.
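A minimal sketch of that decision rule, assuming each campaign carries a platform-reported ROAS and a tested incremental lift; the thresholds and campaign figures are hypothetical and would be set per business.

```python
# Hypothetical campaign results: (attributed ROAS, tested incremental lift).
campaigns = {
    "Retargeting":    (8.0, 0.05),
    "Branded Search": (6.5, 0.08),
    "Prospecting":    (2.1, 0.35),
}

ROAS_THRESHOLD = 4.0   # hypothetical "looks strong in the dashboard" cutoff
LIFT_THRESHOLD = 0.15  # hypothetical "genuinely incremental" cutoff

for name, (roas, lift) in campaigns.items():
    if roas >= ROAS_THRESHOLD and lift < LIFT_THRESHOLD:
        action = "candidate for budget reduction"
    elif roas < ROAS_THRESHOLD and lift >= LIFT_THRESHOLD:
        action = "candidate for increased investment"
    else:
        action = "hold and retest next quarter"
    print(f"{name}: {action}")
```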
Combine With Marketing Mix Modeling for a Complete View
Incrementality testing operates at the campaign level, measuring the lift from specific campaigns over defined time periods. Marketing mix modeling operates at the portfolio level, measuring the contribution of all marketing activity to overall business outcomes over extended periods. Used together, the two methodologies provide a comprehensive and accurate picture of advertising effectiveness across the full marketing investment.
For multi-location brands running advertising across geographic markets, incrementality testing offers an additional strategic application: geographic holdout testing.
By withholding advertising from specific geographic markets while maintaining campaigns in comparable markets, multi-location brands can measure the incremental impact of advertising at the market level. This approach is particularly valuable for brands with large numbers of physical locations, where online advertising impact on in-store conversion behavior is difficult to measure through standard digital attribution.
Geographic holdout tests can also reveal significant variation in advertising incrementality across markets, identifying regions where campaigns are generating strong genuine impact and regions where organic conversion behavior is strong enough that advertising investment produces minimal additional lift.
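A minimal sketch of the market-level comparison, assuming backend conversions are available per market and that test and holdout markets have been matched for size and baseline behavior; market names and figures are hypothetical. A production analysis would typically use matched-market or synthetic-control methods rather than a raw average.

```python
# Hypothetical weekly conversions per market during the test window.
test_markets = {"Austin": 410, "Denver": 385}          # advertising live
holdout_markets = {"Portland": 330, "Charlotte": 345}  # advertising withheld

avg_test = sum(test_markets.values()) / len(test_markets)
avg_holdout = sum(holdout_markets.values()) / len(holdout_markets)
geo_lift = (avg_test - avg_holdout) / avg_holdout

print(f"Market-level incremental lift: {geo_lift:.0%}")  # about 18%
```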