
The 5 stages of experimentation maturity (and how to advance through them)

Stuart Brameld

Founder

Most marketing teams cannot tell you, honestly, where their experimentation programme sits today or what it would take to move it forward. A maturity model closes that gap.

A useful maturity model gives you two things: a way to benchmark where you are, and clear stepping stones to where you want to be. The framework below draws on the Conversion Maturity Model developed by Stephen Le Prevost and the team at Conversion, an experimentation agency that has spent years working with in-house programmes. It maps the journey from ad hoc testing to a fully embedded experimentation function across five stages and four assessment areas.

Why a maturity model matters

It is easy to assume your programme is further along than it is. Running a few A/B tests a quarter feels like progress, until you compare the output to a team that ships hundreds of tests a year and uses every result to compound learning.

According to Stephen Le Prevost, the model exists to answer a specific question: “How do you take an immature experimentation organization, and turn it into a Booking.com?” The honest answer is that you do not get there by accident. You get there by knowing which stage you are in, picking the next move, and building the muscle deliberately.

“I needed a map with clear milestones that I could use to benchmark my clients’ maturity.”

Stephen Le Prevost, Conversion

The four areas to assess

Before the stages make sense, it helps to understand the four dimensions a maturity model evaluates. According to Conversion, most teams do not mature evenly across all four. A team can be highly sophisticated on data and tools while still firefighting on goals and process.

1. Experiment goals

Are your experiments random and goalless, or do they ladder up to strategic business objectives? Mature programmes tie every test to a hypothesis that links back to a metric that matters to the business. Less mature programmes test whatever is in front of them this week.

2. Delivery and process

This covers experiment velocity, prioritisation, documentation, and quality control. Mature programmes have rigorous, well-documented processes that everyone follows. See our guides on testing velocity and prioritisation frameworks for more on getting this right.

3. Strategy and culture

Often the hardest area to change and the most impactful. It encompasses leadership buy-in, psychological safety to ship losing tests, and whether the broader business treats experimentation as core or optional. We cover this in depth in our guide to building a culture of experimentation.

4. Data and tools

Research infrastructure, analytics, testing platforms, and how well they integrate. Mature programmes have constant research loops, integrated tech stacks, and the ability to act on results quickly. At the top end, that means custom-built tooling and AI in the workflow.

The five stages of experimentation maturity

Stage 1: Reactive

At the Reactive stage, testing is sporadic and individual. Someone runs a test because a stakeholder asked for one, or because a tool was bought and needs to justify itself. There is no formal framework, no documentation, no shared backlog, and no leadership mandate. Wins are anecdotal and rarely repeated.

If this is you, skip the framework hunt. Produce repeatable evidence that experimentation works in your business. Pick one funnel, one hypothesis, one metric, and ship.

Stage 2: Emerging

At Emerging, experimentation has a foothold in one or two teams. People are demonstrating wins, KPIs are being assessed, and a backlog is starting to form. Prioritisation methods are still informal, knowledge sharing is patchy, and the work is dependent on a small number of believers.

The risk here is plateauing. You have proof points but no system. The move forward is to make the process repeatable: a documented growth experiment template, a shared backlog, a simple prioritisation framework, and a regular cadence that does not depend on heroics.

Stage 3: Strategic

Strategic is where experimentation becomes a recognised function. There is clear buy-in from senior leadership, dedicated headcount, defined success metrics, and research conducted regularly. Experiments ladder up to strategic objectives and the team has shared standards for what good looks like.

The unlock at this stage is breadth. Experimentation is real, but still concentrated in one team. The move is to start exporting the practice into adjacent functions (brand, lifecycle, product, paid) and to make the results visible across the business.

Stage 4: Integrated

At Integrated, experimentation is embedded across the entire company. Cross-functional teams contribute to a single backlog. The vision for testing is owned at the executive level, not just by a CRO lead. Backlog quality rises because more people across the business know what a good hypothesis looks like.

This is where compounding starts to bite. Each test makes the next test better, because the team has a shared library of learnings to draw on. The move forward is depth: better tooling, better data infrastructure, and the ability to test things that less mature programmes simply cannot.

Stage 5: Optimized

Optimized is the Amazon, Netflix, Booking.com tier. Experimentation is fundamental to the business model and integrated into everything they do, across every channel. Custom tools, AI-driven processes, and thousands of tests per year are the norm rather than the exception.

According to Conversion, most companies need three to five years to move from Reactive to Optimized, with year one focused on fundamentals.

How to use the model

The model is most useful when you score each of the four areas separately. A team can be Strategic on data and tools while still being Reactive on goals. Treating maturity as a single number hides the imbalance and sends you optimising the wrong thing.

A simple scoring exercise:

  1. Rate each of the four areas (goals, process, culture, data and tools) on the five-stage scale
  2. Identify the lowest scoring area
  3. Pick one move that would lift that area to the next stage
  4. Commit to it for a quarter, then re-score
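The scoring exercise above can be sketched in a few lines. This is a minimal illustration, not part of the Conversion model itself; the area scores below are hypothetical examples of an uneven self-assessment.

```python
# Minimal sketch of the four-area scoring exercise.
# Stage names follow the five stages described in the article;
# the example scores are hypothetical.

STAGES = {1: "Reactive", 2: "Emerging", 3: "Strategic",
          4: "Integrated", 5: "Optimized"}

def weakest_area(scores: dict[str, int]) -> tuple[str, int]:
    """Return the lowest-scoring area and its stage (1-5)."""
    area = min(scores, key=scores.get)
    return area, scores[area]

# Hypothetical self-assessment: strong on tooling, weak on goals.
scores = {
    "experiment goals": 2,
    "delivery and process": 3,
    "strategy and culture": 2,
    "data and tools": 4,
}

area, stage = weakest_area(scores)
print(f"Focus area: {area} ({STAGES[stage]}); "
      f"aim for {STAGES[stage + 1]} next quarter.")
```

Scoring each area separately, rather than averaging into one number, is the point: the output flags the imbalance (here, goals and culture lagging two stages behind tooling) that a single maturity score would hide.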

“The key is starting where you are and taking consistent steps forward.”

Stephen Le Prevost, Conversion

Common mistakes

Chasing the Optimized stage. Booking.com runs over 25,000 experiments a year on infrastructure most teams will never have. Trying to copy their tooling before you have the goals and culture in place is expensive and will not work.

Optimising one area in isolation. Investing heavily in tools without leadership buy-in produces a beautiful platform that nobody uses. The four areas advance together, even when they do not advance evenly.

Confusing activity with maturity. Test count is a useful input metric, not a maturity score. A team running 50 tests with no hypothesis is not more mature than a team running 10 tests tied to clear strategic outcomes.

Treating it as a one-time exercise. Maturity drifts. Teams that reach Strategic and stop investing tend to slide back. The model is a benchmark to revisit, not a certificate to hang on the wall.

Where to start

If you have never scored your programme before, do it this week. An hour with the right people in the room and an honest read of the four areas will tell you more about your next move than any vendor pitch deck.

Then pick the smallest possible step forward in your weakest area.

Maturity compounds. The teams that get to Integrated and Optimized are not smarter, they are more consistent. They picked a stage, did the work, and picked the next one.

Growth Method is the work management platform built for growth teams, combining ideation, experimentation, and analytics in one place. Book a call to learn more.

