How long should you run an incrementality test for?

We offer heuristics for determining test duration based on data from thousands of Haus tests and advice from Haus Measurement Strategists.

Mar 10, 2026

As a marketing leader, you're contending with lofty pipeline goals, tight deadlines, and financial stakeholders to appease. Patience might be a virtue, but for the fast-moving marketer, it can feel like an unrealistic luxury.

That’s why teams are often so curious about test duration when they get into incrementality testing. Holding out audiences comes with opportunity costs, so the pressure to wrap up an experiment and "get back to business" is intense.

Haus Solutions Consultant Tarek Benchouia has seen this tension in action, but he advises teams to stay the course.

“We as marketers move very quickly,” says Tarek. “To hold tight for six weeks and let an experiment run is a very unnatural feeling. But the quantity of insight you get from a six‑week test really outweighs any of the feelings and nerves around being locked into an experiment.”

That’s why Tarek says the first thing teams should do before running an incrementality test is ask themselves this question: How long do I need to run in order to get a confidence band that’s narrow enough for the test result to be sufficiently powered and statistically significant?

In other words, how long do you need to run a test in order to trust the results?

As you go to answer that question, consider these four factors:

  • Your brand’s consideration cycle: Big, infrequent purchases (e.g. furniture) require longer studies; impulse purchases (e.g. ecommerce, CPG) are better suited for shorter tests.
  • Your other measurement sources: Reference post-purchase survey (PPS) data, MMM ad stock curves, and historical test results to pinpoint time-to-impact. (Multi-touch attribution is not a reliable source here: by the time you have a user’s click ID, they are already late in their buying journey.)
  • Funnel dynamics and channel behavior: Results materialize differently depending on where your ad dollars are being allocated and the objective you set.
  • Ideal test power: Smaller holdouts require more time. If you hold back only a small percentage of your audience, you effectively need a longer runtime to reach statistical significance.

We’ll dive deeper into these different factors in the sections below, then offer some tactical advice on test duration baselines by channel.

The mathematics of patience: What is statistical power?

Your optimal incrementality test duration is primarily a function of statistical power. Power is the probability that your test will detect a lift if one actually exists. To achieve high power, you need a sufficient volume of conversions in both your treatment and control groups.

Three levers control this volume:

  • Holdout Size: A 50/50 split collects data faster than a 90/10 split.
  • Time: The longer you run, the more data you collect.
  • Budget/Volume: High-spending accounts accumulate data faster.

If you cannot increase your budget or the size of your holdout group, time becomes your only lever. You must run the test until you have captured enough conversion events to distinguish the signal (ad impact) from the noise (random background variance).
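To make the signal-versus-noise arithmetic concrete, here is a back-of-envelope duration estimate using a standard two-proportion z-test. This is a generic sketch with made-up inputs (conversion rates, traffic, lift target), not Haus’ actual power methodology:

```python
from statistics import NormalDist

def required_days(baseline_cvr, rel_lift, daily_visitors, holdout_share=0.5,
                  alpha=0.05, power=0.8):
    """Rough days needed for a two-proportion z-test to detect rel_lift.

    A back-of-envelope sketch with hypothetical inputs, not Haus'
    actual power calculation. Assumes independent users.
    """
    p1 = baseline_cvr
    p2 = baseline_cvr * (1 + rel_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired power
    n_per_group = ((z_alpha + z_beta) ** 2
                   * (p1 * (1 - p1) + p2 * (1 - p2))
                   / (p2 - p1) ** 2)
    # The smaller arm gates the test: with a 90/10 split, the 10% holdout
    # must still accumulate n_per_group conversions' worth of traffic.
    smaller_arm = min(holdout_share, 1 - holdout_share)
    return n_per_group / (daily_visitors * smaller_arm)

# Detecting a 10% relative lift on a 2% conversion rate with 20k daily visitors:
print(round(required_days(0.02, 0.10, daily_visitors=20_000)))
```

Note how the three levers show up directly: a bigger lift target or more traffic shortens the test, while a lopsided split lengthens it.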

“The longer you run that holdout, the tighter and tighter the confidence band is gonna be on the result,” says Tarek. But of course, you can’t run the test indefinitely; that comes with opportunity costs. So Tarek advises teams to ask next where this media sits in the funnel.

How does test duration vary by channel? 

Different channels operate at different velocities, which changes how long you need to wait for results. Below are working baselines the Haus team uses as a starting point by channel. You should still adjust around your own consideration cycle and power targets, but these ranges reflect thousands of experiments across the platform.

What’s the ideal test duration for lower-funnel channels (e.g. search and retargeting)? 

Channels that capture existing demand, such as branded search or bottom‑funnel social, tend to resolve quickly. Users are already close to purchase, so the gap between ad exposure and conversion is short.

In our branded search meta-analysis, the median test ran for 14 days with no post‑treatment window (PTW), and only 27% of tests added a PTW at all. When a PTW was included, it increased DTC lift by less than 10% on average, confirming that most of branded search’s value shows up during the active test window itself.

That’s why Tarek’s team is comfortable treating 2 or 3 weeks as a typical full duration (active + PTW) for true demand‑capture tactics in most businesses.

What’s the ideal test duration for Meta Conversion campaigns? 

Tarek advises a slightly longer duration (active + PTW) for Meta Conversion campaigns at 3 to 4 weeks. That’s because social platforms sit higher in the funnel. Users are discovering products, not actively searching for them, which stretches the time between impression and conversion.

This advice is borne out by our Meta Report, where we found that the average experiment ran 18.6 active days plus an 8.8‑day PTW, for a total of 27.4 days.

What’s the ideal test duration for Meta Traffic, Reach, and Awareness campaigns? 

In our Meta Report, we found that upper‑funnel Meta tests (Traffic, Reach, Awareness) ran even longer, averaging 34.4 days to capture the full impact of awareness‑driving campaigns. This builds in a bit more time for the platform to stabilize delivery. Plus, it allows time for post‑treatment effects to show up, which become more important as you move up‑funnel.

What’s the ideal test duration for Pinterest and Snap campaigns? 

Tarek advises 4-5 weeks for an incrementality test of this media. The extra length is especially important for prospecting and driving mid-funnel behavior.

What’s the ideal test duration for upper-funnel formats like YouTube and TV? 

Video formats typically require the longest duration. A YouTube or CTV impression might happen weeks before the purchase, so short attribution windows dramatically understate impact.

In our guide to YouTube incrementality testing, we recommend 3–4 weeks of active testing followed by a 2‑week PTW, for a total of ~5–6 weeks. Across YouTube experiments on Haus, including that PTW increased incremental ROAS by 79% on average — almost doubling the apparent impact relative to in‑flight results alone.

Tarek extends the same logic across upper‑funnel video and brand campaigns more broadly:

  • YouTube and TikTok video: 4-6 weeks total (treatment + PTW)
  • Meta reach campaigns: 6-8 weeks total
  • CTV / OTT: 4–6+ weeks total, often with a longer PTW for high‑AOV or retail‑heavy businesses
  • True brand campaigns (multi‑channel brand building): 2–6 months total duration, often in a smaller “playpen” of geos so you can keep other tests running elsewhere

When is a post-treatment window necessary?

If you stop measuring the day your campaign ends, you are throwing away data.

In incrementality testing, the post-treatment window, also known as a cooldown or observation window, is the period after the active test where you stop spending but continue measuring. By adding a PTW, you turn a standard holdout experiment into a longitudinal study. Aggregated data consistently shows the value of this phase.

These longer windows become especially important around promotional periods. In our Cyber Week incrementality report, we found that during BFCM:

  • Around 41% of incremental value for some campaigns appeared in the post‑treatment window
  • Delayed effects were nearly 3x larger than in evergreen PTWs
  • CTV in particular showed a median 344% improvement in efficiency when the PTW was included

CTV and other view-based channels are designed to create net‑new demand, not just capture it, so building in a meaningful PTW is non‑negotiable if you want to see their real impact.

Across Haus customers, Tarek has seen brands steadily push toward longer tests as they mature their programs. The reason is simple: longer tests both tighten confidence intervals and let you see how incrementality evolves across different time periods (e.g., pre‑promo vs. promo vs. post‑promo), instead of giving you a single, narrow snapshot.

How does consideration cycle affect how long you run an incrementality test?

Beyond statistical power, your incrementality test duration must respect your business reality. A test cannot force a customer to buy faster than they naturally do.

If you are selling enterprise software with a 90-day sales cycle, a two-week test will return a "zero lift" result more often than not. This isn't because the ads aren't working; it's because the test concluded before the harvest began.

Verticals differ significantly in how they accumulate lift. Subscription apps and digital services often benefit from faster feedback loops. For a fitness app measuring the impact of a New Year's campaign, the conversion — an app install and trial start — happens within hours or days of the ad impression. In these cases, 7–10 day tests can often be viable if transaction volume is high enough.

Contrast this with a furniture retailer. A customer might click an ad for a new sofa, visit the showroom three days later, measure their living room the next weekend, and finally purchase online two weeks after the initial click. A 7-day test here would register zero revenue for that user, completely missing the campaign's contribution.

For health and wellness brands, where consideration involves research and consultation, testing frameworks often require months rather than weeks. Guidelines for these verticals suggest observation periods that extend 4 to 8 weeks beyond campaign exposure.

Conversely, for low-AOV impulse purchases or food delivery, running an 8-week test is wasteful. You likely captured the necessary signal in the first 14 days, and continuing the test only increases the risk of contamination from external factors.

What are the risks of incorrect incrementality test duration?

Getting the timeline wrong introduces specific risks that can invalidate your investment in testing.

The risk of stopping too early is false negatives. You might conclude a channel is unprofitable simply because the conversions hadn't matured yet. This error leads brands to pull budget from high-performing upper-funnel channels because they look inefficient in short windows.

The risk of running too long is contamination. The longer a test runs, the more likely an external event will break your control group. A competitor launches a massive promo, a snowstorm hits the East Coast, or your own team launches a site-wide sale. These variables affect treatment and control groups differently, muddying the causal link.

There is also the issue of cookie decay in user-level tests. Cookie decay is less of an issue for geo-based testing, which relies on stable geographic definitions rather than fragile browser tracking. However, over a long user-level test (e.g., 60 days), the ability to track the control group degrades as cookies expire or users switch devices.

How do you work correct test timing into your experiment design process?

Determining your test length should be part of the design phase, not a decision made mid-flight.

Start by analyzing your historical time-to-conversion metrics. If 90% of your conversions happen within 7 days of a click, a 14-day test with a 7-day PTW might be sufficient. If your curve is flatter, you need to budget more time.
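A quick way to ground this analysis: pull historical click-to-purchase lags and check where the conversion curve flattens. The lags below are invented for illustration, and this is only a sketch of the idea:

```python
# Days from first ad click to purchase, pulled from historical order
# data. These numbers are invented for illustration.
conversion_lags = [0, 1, 1, 2, 2, 3, 3, 4, 5, 5, 6, 7, 8, 10, 14, 21]

# Share of conversions that land within a week of the click.
within_7_days = sum(lag <= 7 for lag in conversion_lags) / len(conversion_lags)
print(f"{within_7_days:.0%} of conversions arrive within 7 days")  # 75%
```

If that share were 90% or higher, a 14-day test with a 7-day PTW is plausibly sufficient; a flatter curve like this one argues for a longer window.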

Also, consider the holdout size. If you are nervous about opportunity cost and only want to hold out 5% of your audience, be prepared for a longer test. If you are willing to run a 50/50 split, you can achieve statistical significance much faster.
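The arithmetic behind that tradeoff is simple: time to accumulate a fixed number of holdout conversions scales inversely with the holdout’s share of traffic. The target and daily-volume figures below are assumptions for illustration only:

```python
# Conversions needed in the holdout for adequate power, and total
# daily conversions across both arms. Both figures are assumptions.
target_holdout_conversions = 5_000
daily_conversions = 400

def days_to_power(holdout_share):
    # The holdout accumulates conversions at holdout_share of the daily rate.
    return target_holdout_conversions / (daily_conversions * holdout_share)

print(days_to_power(0.50))  # 50/50 split: 25.0 days
print(days_to_power(0.05))  # 95/5 split: 250.0 days
```

Under these assumptions, shrinking the holdout from 50% to 5% stretches the required runtime tenfold, which is exactly the opportunity-cost tradeoff to weigh up front.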

Finally, communicate these timelines to finance and leadership stakeholders early. When they understand that the "extra" weeks are required to capture the full ROI (and typically make the numbers look better), they are usually willing to wait.

If you're working with a measurement platform that has a robust service component, like Haus, you won't need to worry about carefully calculating the test duration yourself; a dedicated Measurement Strategist will work with you to get the timing right based on their experiences with similar brands and previously successful testing roadmaps.

What are the benefits of correctly timing your incrementality tests?

The "right" duration for an incrementality test balances statistical necessity with operational reality. It is long enough to capture the full purchase cycle and achieve significance, but short enough to minimize contamination and opportunity costs.

Most brands eventually move away from ad-hoc duration guessing toward a standardized testing cadence.

By understanding the interplay between active runtime and the post-treatment window, you can build a measurement practice that provides confidence without slowing down your marketing operations. Haus helps teams design these parameters precisely, using historical data to calculate the exact power and opportunity cost before a test ever launches, so you can trust the results that follow.

FAQs about incrementality test duration

What is the minimum duration for an incrementality test?
Most platforms and experts recommend a minimum of 14 days for incrementality tests. A 14-day window accounts for weekly cyclicality (people shop differently on Mondays vs. Saturdays) and basic conversion lag. Tests shorter than two weeks often produce noisy, unreliable data that underreports true lift.

Does a post-treatment window count toward test duration?
Yes, the total duration of your experiment includes both the active ad delivery period and the post-treatment window. While you aren't spending media dollars during the PTW, you must wait for this period to close before analyzing the final results. Ignoring this window often leads to underestimating the channel's value.

Can I stop a test early if I see significant results?
It is generally risky to stop a test early, even if initial results look promising ("peeking"). Early wins can be false positives driven by short-term variance. It is better to stick to the pre-calculated duration required to achieve statistical power, enabling valid and repeatable results.

How does seasonality affect test duration?
Strong seasonality can force you to run shorter tests to avoid overlapping with major peaks (like Black Friday), where control groups might behave unpredictably. Alternatively, if you must test during a volatile period, you may need to extend the duration or increase the holdout size to separate the ad signal from the strong background noise of seasonal shopping.
