The 100 Conversions Rule: Where It Came From and Why You Should Ignore It

Stuart Brameld, Founder at Growth Method

If you've spent any time in the A/B testing world, you've probably heard the magic number: 100 conversions per variation. It's repeated in blog posts, testing tools, and marketing courses like gospel. But here's the uncomfortable truth—this rule is complete rubbish.

The "100 conversions per variation" guideline has become one of the most persistent myths in conversion rate optimisation, and it's leading marketers down the wrong path. It's time we called it out for what it is: an oversimplified metric that can seriously damage your testing programme.

Where Did This Rule Actually Come From?

The origins of this rule aren't rooted in statistical science—they're rooted in convenience. Let's trace back how we got here:

Google Optimize's Vague Guidance

The trail leads back to Google Optimize (now discontinued), which recommended "at least a few hundred conversions per variant" in their documentation. Notice the vague language? "A few hundred" somehow got interpreted as "around 100" by the testing community. It's like a game of Chinese whispers, but with your marketing budget at stake.

A/B Testing Vendors Jumped on the Bandwagon

Once this simplified metric was out there, A/B testing vendors and marketing blogs ran with it. Why? Because it's easy to communicate and sounds authoritative. "Just get 100 conversions and you're sorted" is much simpler than explaining statistical power calculations.

The problem is that simplicity doesn't equal accuracy. And in A/B testing, accuracy is everything.

Why 100 Conversions Per Variation Doesn't Work

Let's break down why this rule is fundamentally flawed:

It Ignores Your Baseline Conversion Rate

A page converting at 1% needs a far larger sample to detect the same relative improvement than one converting at 10%. The 100-conversion rule treats them identically, which makes no statistical sense.

Consider these scenarios:

  • Scenario A: Your current conversion rate is 15%, and you want to detect a 10% improvement

  • Scenario B: Your current conversion rate is 2%, and you want to detect a 25% improvement

Both might hit 100 conversions, but they require completely different sample sizes to reach statistical significance.
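
To see just how different, here's a minimal sketch using the standard two-proportion z-test approximation (the same sort of formula that sits behind most online sample size calculators). The 80% power and 95% confidence settings match the ones recommended later in this article; exact figures will vary slightly from tool to tool.

```python
from math import ceil, sqrt
from scipy.stats import norm

def sample_size_per_variation(baseline, relative_mde, power=0.80, alpha=0.05):
    """Visitors needed per variation for a two-sided, two-proportion z-test."""
    p1 = baseline
    p2 = baseline * (1 + relative_mde)      # the conversion rate you hope to reach
    z_alpha = norm.ppf(1 - alpha / 2)       # 1.96 for a 5% significance level
    z_beta = norm.ppf(power)                # 0.84 for 80% power
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * pooled * (1 - pooled))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Scenario A: 15% baseline, detect a 10% relative improvement
n_a = sample_size_per_variation(0.15, 0.10)
# Scenario B: 2% baseline, detect a 25% relative improvement
n_b = sample_size_per_variation(0.02, 0.25)

print(n_a, round(n_a * 0.15))   # ~9,300 visitors, ~1,400 conversions per variation
print(n_b, round(n_b * 0.02))   # ~13,800 visitors, ~280 conversions per variation
```

Run as written, Scenario A works out at roughly 1,400 conversions per variation and Scenario B at roughly 280. Neither looks anything like a flat 100.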

It Doesn't Account for Effect Size

The minimum detectable effect (MDE) is crucial for determining sample size. If you're looking for a 5% improvement versus a 50% improvement, you need vastly different amounts of data: the smaller the effect you want to detect, the larger the sample you need. The 100-conversion rule completely ignores this.

Statistical Power Gets Thrown Out the Window

Proper A/B testing requires understanding statistical power, typically set at 80%. That means an 80% chance of detecting a real effect of the size you planned for, if one exists. The 100-conversion rule doesn't consider power at all, leaving you flying blind.
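
A quick simulation makes the cost of ignoring power concrete. The sketch below assumes a 3% baseline and a genuine 10% relative lift (both illustrative numbers, not figures from this article): with only enough traffic for roughly 100 conversions per arm, the test spots the real improvement only about one time in ten.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

def detection_rate(n_per_arm, p_control, p_variant, sims=2_000, alpha=0.05):
    """Share of simulated tests that call a genuinely better variant significant."""
    hits = 0
    for _ in range(sims):
        conv_a = rng.binomial(n_per_arm, p_control)
        conv_b = rng.binomial(n_per_arm, p_variant)
        pooled = (conv_a + conv_b) / (2 * n_per_arm)
        se = np.sqrt(2 * pooled * (1 - pooled) / n_per_arm)
        if se == 0:
            continue
        z = (conv_b - conv_a) / n_per_arm / se
        if 2 * (1 - norm.cdf(abs(z))) < alpha:   # two-sided z-test on proportions
            hits += 1
    return hits / sims

p_a, p_b = 0.03, 0.033   # a real 10% relative lift on a 3% baseline

# ~100 conversions per arm is roughly 3,300 visitors per arm at a 3% baseline
print(detection_rate(3_300, p_a, p_b))    # ~0.10: the real lift is missed 9 times in 10
# A properly powered test needs tens of thousands of visitors per arm
print(detection_rate(55_000, p_a, p_b))   # ~0.80: detected about 80% of the time
```

In other words, an underpowered test doesn't just risk the odd mistake. It misses most real winners.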

What the Experts Actually Recommend

Real conversion optimisation experts have moved far beyond this simplistic approach.

Peep Laja from ConversionXL suggests that 300-400 conversions per variation is more realistic for most marketing teams, but even this should be calculated based on your specific test parameters.

The key insight from ConversionXL's research is clear: there's no magic number of conversions that guarantees statistical significance. Instead, you need to focus on:

  • Representative sample sizes

  • Understanding experiment power

  • Calculating minimum detectable effect

  • Considering your baseline conversion rate

The Real Cost of Bad Sample Sizing

Following the 100-conversion rule isn't just academically wrong—it's expensive. Here's what happens when you get sample sizing wrong:

False Positives (Type I Errors)

You think you've found a winner when you haven't. You implement the "winning" variation across your site, only to see performance drop. I've seen companies lose thousands in revenue because they jumped on false positives.

False Negatives (Type II Errors)

You miss real improvements because your test didn't have enough power to detect them. That 15% conversion rate improvement? It was real, but your undersized test called it inconclusive.

Wasted Resources

Running tests with improper sample sizes means you're either running them too long (opportunity cost) or stopping too early (unreliable results). Both waste time and money.

How to Actually Determine Sample Size

Instead of relying on arbitrary rules, here's how to properly calculate sample size:

1. Define Your Parameters

  • Baseline conversion rate: Your current performance

  • Minimum detectable effect: The smallest improvement you care about

  • Statistical power: Usually 80%

  • Significance level: Usually 5% (equivalent to a 95% confidence level)

2. Use Proper Sample Size Calculators

Tools like Optimizely's sample size calculator or Evan Miller's calculator actually consider these parameters. Don't just guess—calculate.
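
As an illustration of what those calculators do under the hood, here's a minimal sketch using statsmodels. The baseline and MDE values are hypothetical, and different tools use slightly different approximations, so their outputs may not match this exactly.

```python
# pip install statsmodels
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.05                # current conversion rate (illustrative)
relative_mde = 0.10            # smallest lift worth detecting: 5.0% -> 5.5%
target = baseline * (1 + relative_mde)

# Cohen's h effect size for the two proportions
effect_size = abs(proportion_effectsize(target, baseline))

n_per_variation = NormalIndPower().solve_power(
    effect_size=effect_size,
    power=0.80,                # 80% statistical power
    alpha=0.05,                # 5% significance level (95% confidence), two-sided
    ratio=1.0,                 # equal traffic split between variations
)
print(round(n_per_variation))  # roughly 31,000 visitors and 1,500+ conversions per variation
```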

3. Plan Before You Test

Determine your required sample size before launching the test. This prevents the temptation to stop early when you see "encouraging" results.

A Better Framework for A/B Testing

Here's a more robust approach to A/B testing sample sizes:

  • Step 1: Calculate your baseline conversion rate (the foundation for every sample size calculation)

  • Step 2: Define your minimum detectable effect (this determines the sensitivity of your test)

  • Step 3: Set statistical power at 80% and significance at the 5% level, i.e. 95% confidence (standard practice for reliable results)

  • Step 4: Calculate the required sample size (this tells you exactly how much data you need)

  • Step 5: Estimate the test duration (helps with planning and resource allocation)
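
Step 5 is simple arithmetic once you know the required sample size. Here's a short sketch with illustrative numbers, assuming the ~31,000 visitors per variation from the earlier example and made-up daily traffic.

```python
import math

required_per_variation = 31_000   # output of your sample size calculation (illustrative)
num_variations = 2                # control plus one challenger
daily_visitors = 4_000            # traffic entering the experiment each day (illustrative)

days_needed = math.ceil(required_per_variation * num_variations / daily_visitors)
print(days_needed)   # 16 days; round up to full weeks (here, 3) to cover weekday/weekend cycles
```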

Common Scenarios and Real Sample Size Requirements

Let's look at some realistic examples:

E-commerce Checkout Optimisation

  • Baseline conversion rate: 3%

  • Minimum detectable effect: 20% relative improvement

  • Required sample size: ~9,000 visitors per variation

  • Expected conversions: ~270 per variation

Landing Page CTA Test

  • Baseline conversion rate: 8%

  • Minimum detectable effect: 15% relative improvement

  • Required sample size: ~6,200 visitors per variation

  • Expected conversions: ~496 per variation

Notice how different these are from the arbitrary 100-conversion rule?

When You Can't Reach Ideal Sample Sizes

Sometimes you simply don't have enough traffic to reach statistically robust sample sizes. Here's what to do:

  • Test bigger changes: Larger effect sizes require smaller samples

  • Use micro-conversions: Test email signups instead of purchases

  • Run tests longer: Accept that robust testing takes time

  • Consider Bayesian testing: A different statistical approach that can work with smaller samples (see the sketch below this list)
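
To illustrate the Bayesian option above, here's a minimal sketch using a simple Beta-Binomial model with made-up results. Instead of a significant/not-significant verdict, it reports the probability that the variant beats the control, which some teams find easier to act on when traffic is limited.

```python
import numpy as np

rng = np.random.default_rng(7)

# Results from a small, traffic-constrained test (made-up numbers)
visitors_a, conversions_a = 1_500, 48   # control:  3.2%
visitors_b, conversions_b = 1_500, 60   # variant:  4.0%

# A Beta(1, 1) prior updated with the observed data gives the posterior
# distribution of each variation's true conversion rate
posterior_a = rng.beta(1 + conversions_a, 1 + visitors_a - conversions_a, 100_000)
posterior_b = rng.beta(1 + conversions_b, 1 + visitors_b - conversions_b, 100_000)

prob_b_beats_a = (posterior_b > posterior_a).mean()
print(f"P(variant beats control): {prob_b_beats_a:.0%}")   # roughly 88% with these numbers
```

A result like that is evidence rather than proof, but a probability of being better is a more honest summary of a small test than a hard significance cut-off.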

What you shouldn't do is pretend that 100 conversions will give you reliable results when the maths says otherwise.

Moving Beyond Oversimplified Rules

The marketing industry loves simple rules because they're easy to remember and implement. But A/B testing isn't simple—it's a sophisticated statistical process that deserves proper treatment.

The 100-conversion rule persists because it feels actionable. It gives teams a target to hit. But hitting the wrong target is worse than having no target at all.

Instead of memorising arbitrary numbers, invest time in understanding the statistical principles behind sample size determination. Your tests will be more reliable, your insights more actionable, and your optimisation programme more successful.

The "100 conversions per variation" rule needs to be retired. It's time for the marketing industry to embrace statistical rigour over convenient shortcuts. Your conversion rates—and your bottom line—will thank you for it.

Growth Method is the only AI-native project management tool built specifically for marketing and growth teams. Book a call to speak with Stuart, our founder, at https://cal.com/stuartb/30min.
