The Incrementality Glossary

2-cell test

An experiment structure where test regions are divided into two groups: a treatment (exposed) group that receives marketing exposure, and a control (holdout) group that does not. By comparing outcomes between the two, you can isolate the true incremental impact of the marketing activity.

3-cell test

An experiment structure that divides test regions into three groups: treatment A, treatment B, and a control/holdout. This lets you compare two different tactics against each other and against a no-marketing baseline simultaneously — useful for testing two creative approaches, channels, or spend levels at once.

A/B test

An experiment that compares two groups to measure relative performance between them. Importantly, an A/B test does not show incrementality. It only tells you which variant performed better, not whether either variant drove results beyond what would have happened organically with no marketing at all.

Accuracy

How close a measurement is to the true underlying value. A model can be accurate on average but inconsistent, or consistently off in one direction. In measurement, both accuracy and precision matter.

Believed spend impact

Your prior assumption about how much lift a marketing tactic will generate on your primary KPI. This is used as an input when designing a test to determine how much statistical power you'll need to reliably detect a real effect if one exists.

Causal Attribution

The daily performance layer of Haus’ Causal Marketing Platform. Haus calibrates marketing performance metrics against your GeoLift experiment results, so the signals you act on are rooted in causal truth.

Causal inference

The process of identifying cause-and-effect relationships: going beyond observed correlations to explain why a campaign over- or under-performed. This is the foundational concept underpinning Haus’s approach.

Causal MMM (cMMM)

Haus’ cMMM measurement model calibrates results with experiment data, which reduces conflicting signal and produces a more accurate, trustworthy, causal view of how marketing drives results.

Learn more about Causal MMM here.

Confidence interval

A range of values, constructed from sample data, that is likely to contain the true value of the effect being measured. A narrow confidence interval means your estimate is precise; a wide one means there is more uncertainty in the result.
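
For readers who think in code, here is a minimal sketch of how a 95% confidence interval around a lift estimate can be constructed. The numbers are hypothetical, and a simple normal approximation is assumed; real experiment analyses are more involved.

```python
from statistics import NormalDist

# Hypothetical lift estimate and standard error from an experiment.
lift_estimate = 200.0   # incremental conversions
std_error = 80.0        # uncertainty in that estimate

z = NormalDist().inv_cdf(0.975)      # ~1.96 for a 95% interval
low = lift_estimate - z * std_error
high = lift_estimate + z * std_error
print(f"95% CI: [{low:.1f}, {high:.1f}]")  # → 95% CI: [43.2, 356.8]
# The interval excludes zero, so this lift would be statistically
# distinguishable from zero at the 5% significance level.
```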

Confounding variable

A variable that influences both the marketing exposure and the outcome you are measuring, which can create a misleading appearance of a causal relationship. This is why controlled experiments are necessary for results you can trust.

Connected TV (CTV)

As an advertising channel, this refers to any screen that is connected to the internet. It combines the reach and impact of traditional TV with the targeting precision of digital advertising.

Control (holdout) group

The regions or audiences that are intentionally excluded from a marketing campaign and serve as the control group in a test. Comparing outcomes in the holdout to the treatment (exposed) group is how you measure lift.

Conversion lift testing

A form of experiment run directly by ad platforms. This relies on exposure at the individual user level instead of varying exposure by geography, like a geo-based test. However, this type of test methodology often misses the cross-platform and multi-channel exposure that occurs throughout a buyer's experience with a brand.

Conversions

Actions taken by consumers that your business cares about — such as a purchase, sign-up, app install, or lead form submission — that occur after ad exposure. Many businesses track multiple conversion event types across different stages of the funnel.

Cost per acquisition (CPA)

Total ad spend divided by the number of customers acquired. CPA measures efficiency, but does not account for whether those acquisitions were actually caused by the ad — some customers may have converted regardless of exposure.

Cost per incremental acquisition (CPI / CPIA)

Total ad spend divided by the number of truly incremental conversions — those that would not have happened without the marketing. Because it filters out organic activity, CPI is a more rigorous efficiency metric than CPA, and requires an incrementality test to calculate.
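
As a sketch with hypothetical numbers, CPA and CPI differ only in the denominator, but the difference can be large:

```python
# Hypothetical campaign numbers to contrast CPA with CPI.
spend = 50_000.0     # total ad spend ($)
attributed = 1_000   # all conversions attributed to the campaign
incremental = 400    # conversions an incrementality test showed the ads caused

cpa = spend / attributed    # $50.00 per attributed conversion
cpi = spend / incremental   # $125.00 per truly incremental conversion
```

In this example, 600 of the attributed conversions would have happened anyway, so the channel's true cost per new customer is 2.5x what CPA alone suggests.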

Decision threshold

The benchmark your results are tested against — most commonly zero. In an incrementality test, the core question is whether the measured lift is statistically distinguishable from zero. If it crosses the threshold with sufficient confidence, you can conclude the marketing had a real causal effect.

Designated market area (DMA)

A geographic region defined by Nielsen representing a distinct US advertising market, with historical roots in television broadcast license territories. DMAs are commonly used as the unit of geography in geo-based incrementality tests because they function as relatively self-contained markets.

Difference in differences (DiD)

A statistical method that estimates a causal effect by comparing how much the treatment group changed before versus after exposure against how much the control group changed over the same period, differencing out trends shared by both groups.
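
A minimal two-period sketch, using hypothetical weekly conversion counts:

```python
# Hypothetical weekly conversions before and after a campaign launch.
treat_pre, treat_post = 1000.0, 1300.0   # treatment group
ctrl_pre, ctrl_post = 900.0, 1000.0      # control group, same period

# Subtract the control group's change (the shared trend)
# from the treatment group's change.
did = (treat_post - treat_pre) - (ctrl_post - ctrl_pre)
print(did)  # → 200.0 incremental conversions
```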

Efficiency testing

A type of test designed to find the optimal level of marketing spend. It provides insight into how much you should scale up a certain channel, or where the point of diminishing returns starts.

False negative

When a test concludes there is no effect even though a real effect exists. Also called a Type II error. False negatives are more likely when a test is underpowered: for example, when the test runs too short, the holdout is too small, or the expected lift is smaller than the test was designed to detect.

False positive

When a test concludes there is a significant effect even though no real effect exists. Also called a Type I error. The risk of false positives is controlled by your significance threshold. The lower you set it, the less likely you are to mistake random noise for a real signal, but this will increase the risk of false negatives. A trustworthy signal requires balance between these.

First-click attribution

An attribution model that assigns 100% of conversion credit to the very first marketing touchpoint a customer interacted with. This approach tends to overvalue top-of-funnel channels and ignores everything that happened between first touch and conversion.

First party data

Data collected directly by your business from its own customers and platforms — such as purchase history, email lists, website behavior, and CRM records. Because you own and control it, first party data is generally higher quality and more privacy-compliant than data sourced from third parties.

Geo-based test

A causal testing methodology where geographic regions are used as the unit of randomization to measure the true causal effect of advertising. One set of regions is exposed to the marketing treatment while another set is held out as the control.

GeoLift ColdStart

An experiment type designed to measure the incremental impact of activating a brand-new marketing channel, product launch, or market for the first time. This test type solves the “cold start” measurement problem and helps marketers measure the previously unmeasurable. It isolates whether the new activity drives outcomes beyond your existing activity.

Halo effect

A situation where a marketing campaign driving consumers to a specific sales channel ultimately drives sales in another channel. For example, you're running ads to drive purchases on your site, but consumers end up converting on Amazon or in retail locations.

Haus Market Area (HMA)

A geographic framework for experiment design that works within each individual ad platform as well as across platforms.

Impressions

The total number of times an ad was served and seen by consumers. Impressions measure reach and exposure volume, but not whether the ad drove any action.

Incrementality

The additional business outcomes caused by your marketing that would not have happened otherwise. Incrementality experiments show you what would have happened without your marketing by comparing business outcomes between exposed and unexposed groups.

Learn more about incrementality at our Incrementality 101 hub.

Incrementality test

A controlled experiment that measures how much of your primary KPI was caused by marketing investments. It answers the question of what additional conversions occurred due to marketing efforts such as ad campaigns.

Incremental ROAS (iROAS)

A metric that measures how much incremental revenue is generated for every dollar of ad spend: revenue that would not have happened without the ads that ran.
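
In its simplest form the calculation is a single ratio; the numbers below are hypothetical:

```python
# Incremental revenue comes from a lift experiment, not platform attribution.
incremental_revenue = 150_000.0  # revenue the ads actually caused ($)
spend = 50_000.0                 # total ad spend ($)

iroas = incremental_revenue / spend  # 3.0: $3 incremental revenue per $1 spent
```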

Last-click attribution

An attribution model that assigns 100% of conversion credit to the final marketing touchpoint before a conversion. This approach tends to overvalue bottom-of-funnel channels like branded search and ignores the role earlier touchpoints played in driving the decision.

Lift %

The percentage increase in your primary KPI in the treatment (exposed) group relative to the control (holdout) group. Lift % expresses the relative size of the incremental effect.

Lift amount

The raw volume of incremental conversions — the absolute number of additional outcomes (purchases, sign-ups, etc.) that occurred as a result of the marketing, beyond what the control (holdout) group produced.
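
Lift amount and lift % are two views of the same treatment-versus-control comparison; a sketch with hypothetical counts:

```python
treated = 1_200.0  # conversions in the treatment (exposed) group
control = 1_000.0  # conversions expected with no marketing (from the holdout)

lift_amount = treated - control          # 200.0 incremental conversions
lift_pct = lift_amount / control * 100   # 20.0% relative lift
```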

Lift likelihood

The probability that the measured lift is real and not due to random chance. A high lift likelihood means you can be confident the effect you observed reflects a genuine causal impact of your marketing.

Linear TV (LTV)

Traditional television advertising delivered via broadcast, cable, or satellite where all viewers watch the same content at the same scheduled time. Linear TV offers mass reach but limited targeting and measurement capabilities compared to CTV/OTT.

Lower funnel metrics

Measurements that track actions that happen at the point of conversion. This includes metrics such as ROAS, iROAS, and CPA. These metrics are critical to understanding whether your advertising is driving revenue for your business.

Marginal returns

The incremental output, such as revenue, conversions, or new customers, generated by the next dollar of marketing spend, as opposed to the average return across all dollars already invested. Where iROAS measures the average causal return across a channel's total spend, marginal returns answer a sharper question: What will the next dollar actually do? That's the question that drives budget decisions, since past spend is sunk and only the next dollar is allocatable.

Marginal returns typically decline as spend scales because the most efficient inventory and most responsive audiences get bought first. This is the concept of diminishing returns. This curvature is what return curves visualize in a Causal MMM, and it's what spend increase and spend decrease experiments are designed to surface in-market. Check out our Open Haus podcast episode for more about the important distinction between average and marginal returns.
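
To make the average-versus-marginal distinction concrete, here is a toy response curve. The square-root shape is purely illustrative of diminishing returns, not a claim about any real channel:

```python
import math

def revenue(spend):
    """Toy diminishing-returns curve: revenue grows with the square root of spend."""
    return 400 * math.sqrt(spend)

spend = 10_000.0
average_return = revenue(spend) / spend                   # ~4.0x across all dollars
step = 1.0
marginal_return = revenue(spend + step) - revenue(spend)  # ~2.0x for the next dollar
```

At this spend level, the channel still "averages" a 4x return, but the next dollar only returns about 2x, and it is the marginal figure that should drive the budget decision.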

Marketing mix model (MMM)

A measurement model that incorporates all marketing levers, such as price, promotions, product launches, and distribution strategy. Often interchangeable with Media Mix Model, but focused on a wider range of inputs rather than just paid digital channels. Results are used to understand which channels are most effective and to allocate budget.

Check out Haus’ Causal MMM for an experiment-calibrated approach.

Match market test

An incrementality experiment where geographies are carefully paired based on similarity (e.g. size, demographics, baseline conversion rate) before the test begins. One market in each pair receives the treatment; the other serves as the control. Pairing improves the precision of your lift estimate.

Media mix model (MMM)

A model that emphasizes the paid media channels being used in a marketing strategy. Often interchangeable with Marketing Mix Model, but emphasizes the paid digital channels. Results are used to understand which channels are most effective and to allocate budget.

Check out Haus’ Causal MMM for an experiment-calibrated approach.

Multi-touch attribution (MTA)

A measurement methodology that assigns fractional credit for a conversion across all marketing touch points that a customer interacted with leading up to a purchase.

Check out how Haus is bringing causality to attribution.

Natural experiments

A situation where a real-world event creates conditions that resemble a controlled experiment. For example, if a website outage for a DTC brand affects only a specific region, you can use that scenario to understand how the outage impacted sales.

Out of home (OOH)

Advertising that reaches consumers outside their homes — including billboards, transit ads, bus shelters, and digital signage. OOH is a broad-reach, awareness-oriented channel that is difficult to measure with traditional digital attribution.

Over the top (OTT)

TV advertising delivered via internet-connected devices — such as Roku, Sling, or smart TVs — outside of traditional cable and satellite infrastructure. OTT enables more precise targeting and measurement than linear broadcast TV.

Performance Max (PMax)

Google's automated campaign type that consolidates Search, Shopping, Display, YouTube, and other Google inventory into a single campaign. PMax uses machine learning to allocate spend across placements, which can make channel-level measurement and control more challenging.

Power analysis

A calculation performed before a test launches to determine the sample size, test duration, or holdout size needed to reliably detect an effect of a given magnitude. Running a power analysis helps ensure your test is designed to answer the question you're asking with confidence.
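
As a back-of-envelope sketch, the classic two-sample z-test approximation shows how the required sample size scales. Real geo experiments use more sophisticated designs, so treat this only as intuition; all numbers are hypothetical:

```python
from statistics import NormalDist

def sample_size_per_group(effect, sd, alpha=0.05, power=0.8):
    """Approximate observations per group for a two-sample z-test.

    effect: smallest lift you want to reliably detect (same units as sd)
    sd:     standard deviation of the outcome
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # significance threshold
    z_beta = z.inv_cdf(power)           # desired power
    return 2 * ((z_alpha + z_beta) * sd / effect) ** 2
```

For example, detecting a lift of 10 units when the outcome's standard deviation is 50 needs roughly 392 observations per group, and halving the detectable lift quadruples that requirement.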

Precision

The consistency of a measurement. A precise measurement produces similar results when repeated, which means there is less uncertainty around the estimate. In other words, precision tells us how noisy or stable a measurement is.

Primary key performance indicator (primary KPI)

The main business metric you are trying to move and measure lift on — for example, purchases, revenue, or new customer acquisitions. Your primary KPI is the core outcome your incrementality test is designed to evaluate.

Pure holdout

An experiment type with one treatment (exposed) region and one holdout (control) region, used to measure the incremental impact of an ongoing marketing tactic. It answers: what would have happened if we had run no marketing here at all?

Randomized controlled experiments

An experiment methodology that is considered the gold standard of causal inference. An audience is randomly divided into two groups — an exposed group and a control group. The difference between these two groups is measured and attributed to the treatment the exposed group received.

ROAS (Return on Ad Spend)

Total revenue attributed to advertising divided by total ad spend. Like CPA, ROAS is an attribution-based metric and does not account for organic conversions that would have happened anyway. Incremental ROAS — calculated using lift from a controlled experiment — is a more meaningful measure of true marketing efficiency.

Looking for a stronger, incremental metric? Check out iROAS.

Secondary key performance indicator (secondary KPI)

A supporting metric you want to observe lift on in addition to your primary KPI — for example, tracking site visits or add-to-cart events alongside purchases. Secondary KPIs provide additional signal but are not the main basis for test decisions.

Significance level

The threshold of probability below which you conclude a result is unlikely to be due to chance. Commonly set at 5% (p < 0.05), meaning you accept a 5% risk of incorrectly declaring a result significant. A lower significance level reduces false positives but requires more data or a larger effect to achieve.

Spend decrease

An experiment result that measures the incremental impact of reducing spend on an existing channel. Used to understand diminishing returns — specifically, how much volume you lose when you pull back budget, and whether that budget is worth keeping.

Spend increase

An experiment result that measures the incremental impact of increasing spend on an existing channel. Used to understand whether additional investment in a channel continues to drive proportional returns or whether you've reached a point of diminishing effectiveness.

Statistical significance

A result is statistically significant when it is unlikely to have occurred by random chance, given your chosen significance threshold. Statistical significance tells you the effect is unlikely to be noise; it does not, on its own, tell you the effect is large or commercially meaningful.

Synthetic control (SC)

A statistical method that constructs a weighted combination of control regions to create a close approximation of what the treatment region would have looked like without marketing. It's particularly useful when a clean one-to-one control region doesn't exist.
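
A toy sketch of the counterfactual step, assuming the weights have already been fit to pre-period data (the fitting itself, typically a constrained regression, is omitted; all numbers are hypothetical):

```python
# Weights for three control regions, previously fit on pre-period data.
weights = {"region_a": 0.5, "region_b": 0.3, "region_c": 0.2}
post_period = {"region_a": 1000.0, "region_b": 800.0, "region_c": 1200.0}

# The weighted combination approximates what the treated region
# would have looked like without marketing.
counterfactual = sum(weights[r] * post_period[r] for r in weights)  # 980.0
observed_treated = 1100.0
lift = observed_treated - counterfactual  # 120.0 incremental conversions
```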

Synthetic difference in differences (SDID)

A hybrid statistical method that combines the synthetic control approach with difference-in-differences estimation. It adjusts for both pre-existing differences between regions and time trends, producing more precise incrementality estimates.

Test and roll

A testing philosophy that prioritizes speed and learning over perfection. Rather than waiting for statistically ideal conditions, Test & Roll focuses on running a series of faster, directionally useful experiments to quickly identify large opportunities and eliminate clear losers.

Third party data

Data collected by an external platform or vendor about users on behalf of advertisers — for example, audience segments built by Facebook from its own user behavior. Third-party data is useful for targeting but is increasingly limited by privacy regulations and platform policies, and you have less control over how it's collected or used.

Treatment (exposed) group

The regions or audiences that are actively exposed to the marketing being tested. The treatment group's outcomes are compared to the holdout group’s outcomes to calculate incrementality.

Upper funnel metrics

Measurements that track audience awareness. This is typically the first step to driving conversions. Once an audience is aware of your brand, they can then consider it and potentially purchase it. Upper-funnel metrics include impressions, reach, brand awareness, and share of voice.

View-through conversion

A conversion credited to an ad that a user saw but did not click on. View-through conversions are often counted in platform reporting but are highly susceptible to over-attribution because many viewers may have converted regardless of seeing the ad, making incrementality testing especially important for channels that rely heavily on this metric.

How confident are you in what’s actually driving your growth?

Make better ad investment decisions with Haus.