3 Case Studies on Incrementality Testing in Marketing

Incrementality testing helps marketers measure the true impact of their campaigns by comparing results from test and control groups. This method reveals which marketing efforts genuinely drive conversions or revenue, avoiding misleading attribution models.

Key takeaways from the article:

  • Shinola (E-commerce): Incrementality testing showed a 14.3% increase in online conversions from Facebook ads, uncovering performance that had been underreported by 413%.
  • ServicePro (Local Services): Geographic testing revealed 40% of leads were incremental, delivering a 400% ROI on Google Ads.
  • SaaS Company: Display ads had minimal impact, with only a 1.2% increase in subscriptions and a 62% lower ROAS than reported, leading to budget reallocation.

Quick Comparison

Case Study | Problem | Methodology | Key Result
Shinola | Misleading ad performance data | Geo-matched audience test | 14.3% conversion lift; 413% underreporting
ServicePro | Lead attribution issues | Geographic testing | 40% incremental leads; 400% ROI
SaaS Company | Overstated display ad metrics | Holdout testing | Minimal impact; shifted 80% of ad budget

Incrementality testing ensures marketing budgets are spent on strategies that truly drive results, making it a critical tool for modern marketers.

Case Study 1: E-commerce Social Media Ad Testing

Problem: Rising Ad Costs

Shinola, a luxury goods retailer, faced a challenge with increasing advertising costs. They suspected that relying on last-click attribution was giving them misleading insights. To address this, they needed solid data to fine-tune their budget allocation and prove the return on investment (ROI) of their advertising efforts [1].

Method: Testing with Geo-Matched Audiences

To tackle the issue, Shinola used a geo-matched market testing approach at the zip-code level. This method included:

  • Running tests independently of platform-reported metrics.
  • Comparing performance between awareness campaigns and DABA (Dynamic Ads for Broad Audiences) ads.
  • Measuring the impact on conversions over a four-week period.
Test Component | Details
Control Group | Zip codes without ad exposure
Test Group | Matched zip codes with ad exposure
Duration | Several weeks to ensure reliable results
Primary Metric | Incremental conversion lift
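
For readers who want to see the mechanics, here is a minimal sketch of a geo-matched lift calculation. It is illustrative only, not Shinola's actual implementation, and it assumes you have audience sizes and conversion counts per zip code for matched test and control cells.

```python
# Hypothetical geo-matched lift calculation (illustrative only).
from dataclasses import dataclass

@dataclass
class GeoCell:
    zip_code: str
    audience: int      # people reachable in this zip code
    conversions: int   # conversions observed during the test window

def incremental_lift(test: list[GeoCell], control: list[GeoCell]) -> float:
    """Relative conversion lift of exposed zip codes vs. matched holdouts."""
    test_rate = sum(c.conversions for c in test) / sum(c.audience for c in test)
    control_rate = sum(c.conversions for c in control) / sum(c.audience for c in control)
    return (test_rate - control_rate) / control_rate

# Made-up numbers for two matched pairs of zip codes:
exposed = [GeoCell("48201", 10_000, 240), GeoCell("48226", 8_000, 180)]
holdout = [GeoCell("48202", 10_000, 210), GeoCell("48227", 8_000, 158)]
print(f"Incremental lift: {incremental_lift(exposed, holdout):.1%}")  # ~14.1%
```

The same structure scales to any number of matched cells; the key design choice is that exposure is assigned at the zip-code level, so platform attribution never enters the calculation.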

Results

The testing revealed actionable insights that helped Shinola:

  • Pinpoint audience segments with the most growth potential.
  • Use data to refine their social media ad strategies and improve ROI.

This approach not only validated their advertising efforts but also highlighted gaps in platform-specific measurement, allowing them to better allocate their budget and reduce risk.

Case Study 2: Local Service Google Ads Testing

Problem: Lead Attribution

ServicePro, like Shinola, faced difficulties in tracking the true impact of their advertising efforts. Their challenge was figuring out if their Google Ads campaigns were genuinely driving new business or if they were simply pulling in leads that would have come through organic search anyway. With a substantial monthly ad budget, they needed proof that their investment was paying off and a clear understanding of how their paid search efforts influenced results.

Method: Geographic Testing

To tackle this, ServicePro used geographic testing, employing a similar strategy to the one in Case Study 1. Here’s how they structured their test:

Test Component | Details
Test Duration | 3 months
Control Regions | Markets where ads were paused
Test Regions | Markets with active campaigns
Key Metrics | Lead volume, conversion rates, revenue
Control Factors | Demographics, market size, seasonal trends

They used precise geo-targeting to ensure no overlap between the test and control areas, keeping the results clean and reliable.
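
To make the arithmetic behind this kind of regional split concrete, here is a hypothetical sketch (not ServicePro's actual tooling). It assumes lead totals are aggregated for comparable groups of active and paused markets over the full test window, and it follows the article's framing of ROI as incremental revenue relative to spend.

```python
# Hypothetical region-level incrementality math (illustrative only).

def incremental_lead_share(test_leads: int, control_leads: int) -> float:
    """Share of test-region leads above the organic baseline.

    Control regions (ads paused) estimate the organic baseline; leads in the
    active regions above that baseline are treated as incremental.
    """
    return (test_leads - control_leads) / test_leads

def roi_percent(incremental_revenue: float, ad_spend: float) -> float:
    """ROI expressed as incremental revenue as a percentage of ad spend."""
    return incremental_revenue / ad_spend * 100

# Made-up totals for two comparable market groups:
print(f"Incremental leads: {incremental_lead_share(500, 300):.0%}")  # 40%
print(f"ROI: {roi_percent(100_000, 25_000):.0f}%")                   # 400%
```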

Outcomes

The results after three months of testing were striking:

  • 40% of the leads generated during the campaign were incremental, meaning these customers wouldn’t have converted through organic search alone [1][2].
  • With a $25,000 monthly ad spend, the campaigns brought in $100,000 in incremental revenue, delivering a 400% ROI [4][2].
  • Based on these findings, ServicePro adjusted their strategy. They continued investing in Google Ads, increased their focus on SEO, and developed more targeted ad creatives.
  • They also adopted a hybrid attribution model to better measure and optimize their efforts.

This case highlights how incrementality testing can be applied effectively outside of social media, proving its usefulness across various marketing channels.

Case Study 3: SaaS Display Ad Testing

Problem: Poor Display Ad Performance

A B2B SaaS company was struggling with low returns from display ads. They suspected that their attribution models were overstating the ads’ actual performance, creating a misleading picture of success [1][4].

Method: Subscription Holdout Testing

To tackle this, the company set up a holdout test, similar to the approach used in ServicePro’s Google Ads test. Here’s how they structured it:

Test Component | Implementation Details
Control Group | A segment of the audience excluded from display ads
Test Group | A segment exposed to regular display ad campaigns
Key Metrics | Subscription rates and return on ad spend (ROAS)

Using their CRM data, they analyzed subscription behavior in both groups, factoring in typical SaaS conversion cycles. This approach followed established industry testing practices [1][4][3].
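
A simplified sketch of how such a holdout comparison could be scored is shown below. It is hypothetical rather than the company's actual pipeline, and it assumes you know audience sizes, subscriber counts, average revenue per subscription, and ad spend.

```python
# Hypothetical holdout analysis for a SaaS display campaign (illustrative only).

def subscription_lift(test_subs, test_size, control_subs, control_size):
    """Relative increase in subscription rate for the exposed segment."""
    test_rate = test_subs / test_size
    control_rate = control_subs / control_size
    return (test_rate - control_rate) / control_rate

def incremental_roas(test_subs, test_size, control_subs, control_size,
                     revenue_per_sub, ad_spend):
    """Revenue from incremental subscriptions per dollar of ad spend."""
    expected_baseline = (control_subs / control_size) * test_size
    incremental_subs = test_subs - expected_baseline
    return incremental_subs * revenue_per_sub / ad_spend

# Made-up inputs: a small lift and an iROAS far below what a platform might report.
lift = subscription_lift(1_012, 100_000, 1_000, 100_000)
iroas = incremental_roas(1_012, 100_000, 1_000, 100_000,
                         revenue_per_sub=500, ad_spend=20_000)
print(f"Subscription lift: {lift:.1%}, incremental ROAS: {iroas:.2f}")  # 1.2%, 0.30
```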

Outcomes

The test produced some eye-opening results about the impact of display advertising:

  1. Minimal Impact on Subscriptions: The test found only a slight subscription increase (+1.2%) and a 62% lower ROAS compared to reported metrics [1][4].
  2. Strategic Changes:

    • Shifted 80% of the display ad budget to content marketing and high-intent retargeting efforts.
    • Focused on reallocating funds to better-performing channels.

These findings echoed the lessons from Case Study 2’s geographic testing, highlighting the importance of incrementality testing across different channels [1][4][3].



Testing Guidelines

These guidelines break down the key lessons from the case studies into practical steps marketers can follow.

Setting Up Test Groups

To create reliable test groups, you need a well-thought-out approach. The goal is to keep test and control groups separate while ensuring they remain comparable.

Component | Best Practice | Pitfall
Sample Size | Use statistical power tools to calculate the minimum size | Testing with too little data
Group Division | Randomly assign groups, using geographic or audience splits | Allowing overlap or cross-contamination
Test Duration | Run tests for at least 2-4 weeks | Stopping too early without enough data
Control Setup | Keep all variables constant except the one being tested | Changing multiple variables at once

For geographic testing, choose areas with similar demographics and economic conditions to ensure consistency.
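
As an example of what "statistical power tools" can mean in practice, the following sketch uses the statsmodels library to estimate the minimum audience size per group. The baseline and target conversion rates are assumptions you would replace with your own numbers.

```python
# Hypothetical pre-test power calculation (illustrative only).
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.020   # assumed conversion rate without the campaign
target_rate = 0.023     # smallest lift worth detecting (+15% relative)

effect = proportion_effectsize(target_rate, baseline_rate)
n_per_group = NormalIndPower().solve_power(
    effect_size=effect,
    alpha=0.05,          # significance threshold used later in the analysis
    power=0.8,           # 80% chance of detecting a real effect of this size
    alternative="two-sided",
)
print(f"Minimum sample size per group: {n_per_group:,.0f}")
```

If the required sample is larger than your available audience, either extend the test duration or accept that only larger lifts will be detectable.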

Measuring Results

Tracking the right metrics is critical for accurate measurement. Focus on incremental lift – the additional value your marketing generates beyond what would happen naturally.

To measure lift:

  • Compare conversions between test and control groups.
  • Validate results with statistical significance (p ≤ 0.05).
  • Calculate incremental ROAS (Return on Ad Spend).
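
A compact sketch of these three steps, with made-up numbers and a standard two-proportion z-test, might look like this:

```python
# Hypothetical lift, significance, and incremental ROAS check (illustrative only).
from statsmodels.stats.proportion import proportions_ztest

# Made-up conversion counts and group sizes (equal-sized test and control groups).
conversions = [460, 400]
group_sizes = [20_000, 20_000]

stat, p_value = proportions_ztest(conversions, group_sizes)
test_rate = conversions[0] / group_sizes[0]
control_rate = conversions[1] / group_sizes[1]
lift = (test_rate - control_rate) / control_rate

# Incremental ROAS: revenue from the extra conversions divided by ad spend.
# (The raw difference works here because both groups are the same size.)
revenue_per_conversion = 150
ad_spend = 6_000
incremental_revenue = (conversions[0] - conversions[1]) * revenue_per_conversion
iroas = incremental_revenue / ad_spend

print(f"Lift: {lift:.1%}, p-value: {p_value:.3f}, incremental ROAS: {iroas:.2f}")
```

With these inputs the lift is 15%, the p-value clears the 0.05 threshold, and every ad dollar returns roughly $1.50 in incremental revenue.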

Common Mistakes to Avoid

Some common errors can derail your testing efforts. Here’s how to steer clear of them:

1. Insufficient Test Duration

Tests need to run for at least 2-4 weeks to account for weekly trends and delayed conversions [4]. For instance, ServicePro extended its test to 3 months (Case Study 2) to account for seasonal changes.

2. External Variable Interference

Minimize the impact of outside factors like seasonality or competitor actions by:

  • Avoiding major holiday periods.
  • Closely monitoring competitor campaigns.
  • Choosing control groups with similar market conditions.

3. Flawed Data Analysis

Focus on statistically significant results (p ≤ 0.05) rather than raw numbers. A small, validated improvement is more reliable than a large, unproven one.

For deeper analysis, Google Analytics can track the underlying performance metrics, while experimentation and product-analytics platforms such as Optimizely or Mixpanel provide more granular, test-level insights.

Conclusion

The three case studies highlight how incrementality testing has become a cornerstone of modern marketing measurement. By uncovering the actual impact of marketing efforts, businesses can make smarter, data-backed decisions to boost their ROI.

Future of Marketing Measurement

With stricter privacy rules and the decline of third-party cookies, incrementality testing is becoming a go-to method for reliable measurement.

Here are four trends shaping the future:

Trend | Impact | Business Advantage
AI Integration | Automates complex data analysis | Delivers precise, scalable testing
Privacy-First Analytics | Reduces dependency on third-party data | Ensures long-term reliability
Cross-Channel Testing | Offers a complete view of marketing impact | Optimizes budget allocation
Real-Time Insights | Speeds up decision-making | Enhances campaign flexibility

Growth-onomics Services

These insights align closely with the approach Growth-onomics takes, which reflects the methods proven effective in these case studies. Growth-onomics offers:

  • Advanced analytics for precise measurement
  • Custom testing frameworks aligned with business objectives
  • Performance optimization across channels
  • Data-driven strategies for smarter budget allocation

For example, ServicePro achieved a 400% ROI through geo-testing [2]. This demonstrates how thorough testing can help businesses maximize their marketing investments and achieve consistent growth.

FAQs

How to measure incrementality in marketing?

To measure incrementality in marketing, you need to compare two groups: one exposed to your marketing efforts (test group) and one that isn’t (control group). This method helps pinpoint the actual impact of your campaigns by showing the additional results generated beyond what would have happened naturally.

For example, in Case Study 2, ServicePro used geographic comparisons to assess impact, while the SaaS company in Case Study 3 compared subscription rates between exposed and unexposed groups.

To calculate incremental ROAS (Return on Ad Spend), divide the incremental revenue (the difference between test and control groups) by the campaign spend. In Case Study 3, the SaaS company discovered that display ads drove only 1.2% more subscriptions, even though the reported metrics seemed higher [1][4].

Accurate measurement requires:

  • Large enough sample sizes
  • Well-matched test and control groups
  • Tests that span complete business cycles
  • Starting with a single primary metric for clarity

An incremental ROAS greater than 1.0 means your campaigns are generating profit. This insight helps you allocate your budget wisely and focus on strategies that genuinely drive new results, instead of just capturing conversions that would have happened anyway.
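
As a quick worked example with made-up figures (not drawn from the case studies):

```python
# Hypothetical incremental ROAS calculation (illustrative only).
test_revenue = 120_000     # revenue from the exposed group
control_revenue = 90_000   # revenue from a comparable holdout group
campaign_spend = 20_000

incremental_roas = (test_revenue - control_revenue) / campaign_spend
print(f"Incremental ROAS: {incremental_roas:.2f}")  # 1.50 -> above 1.0, profitable
```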
