
Marketing experimentation best practices [+ our best performing experiments]

Article originally published in November 2022 by Stuart Brameld. Most recent update in March 2023.


How the Wright Brothers used agile & experimentation

In the late 1800s, governments around the world had invested over a billion dollars in aviation projects led by prominent scientific minds who had tried (and failed) to achieve powered, manned flight. Ultimately, they were beaten by two bicycle enthusiasts known as the Wright brothers.

But why did the Wright brothers succeed?

Instead of building an entire aircraft and trying to get it to fly, as many of their predecessors had done, the Wright brothers completed over 700 flights in gliders first. They started small, gathered data, and continuously iterated their way to success – building, measuring and learning along the way. The timeline below summarises the story of their success:

  • 1899 – 1.5m wingspan kite
  • 1900 – 5.3m wingspan tethered glider
  • 1901 – 6.7m wingspan untethered glider
  • 1902 – 9.8m untethered glider with rudder
  • 1903 – 12.3m powered airplane, short straight-line flight
  • 1904 – 12.3m powered airplane, short circular flight
  • 1905 – 12.3m powered airplane, 30 minute flight duration

Experts believe it was this continuous learning and experimentation, what we would now call an agile approach, that led to their success.

The problem with campaigns & waterfall project delivery

Marketing projects typically follow the classic waterfall project delivery approach, and tend to look something like the following:

  1. Come up with a theme for your marketing project or campaign
  2. Seek approval from your manager and stakeholders
  3. Build out the plan based on the agreed requirements, with support from in-house teams, agencies, designers and developers
  4. Bring together all campaign assets and test the user journey
  5. Launch

Unfortunately, with this big bang approach, you take one huge swing and if you miss – if your target audience is wrong, the messaging doesn’t resonate, people don’t engage or convert – it’s over.

If the initial plan was wrong (and let’s face it, unless you’ve run exactly the same campaign before under very similar conditions, it’s likely more of a guess than a plan), you have just spent considerable time and money building a campaign that nobody pays attention to and that achieves nothing. Big bang = big risk.

The problem is that the waterfall project methodology was developed during the mass production era (car manufacturing, steel production, tobacco production etc) where problems were well-defined and the solutions clearly understood.

Using waterfall project management you can launch the “perfect” marketing campaign – one that is on time, on budget and beautifully executed – but that delivers absolutely nothing for the business. Eric Ries refers to this successful execution of a bad plan as “achieving failure”.

Ries also uses the term “success theatre” to describe charismatic individuals, often in larger organisations, who are able to rally people and gather buy-in for projects that fail to deliver business value. Anyone can say “I have a vision for something big”; it is far better to say “I have a vision for something big, and I’ve already run tests and proven that there is demand for it”.

“There is surely nothing quite so useless as doing with great efficiency what should not be done at all.”

Peter Drucker

When the world is evolving, customers and their expectations are changing, marketing channels are changing and your company strategy is changing, a 100-year-old project management approach no longer works. Enter marketing experimentation.

The benefits of marketing experimentation

What is needed is a new approach to marketing project management, one appropriate for today’s marketers and marketing teams, who operate under conditions of uncertainty.

As a result of work done on agile methodology in IT and software development over the years, we know a few things to be true about the big bang, waterfall approach to projects:

  1. Longer projects tend to get longer: they suffer more scope creep and are more likely to be interrupted
  2. Cost and time increase with complexity
  3. There is often zero customer value delivered until right at the end

These problems are not unique to the IT world. The same applies when writing a book or essay, when writing code and in many other areas of life. Every creative human endeavour requires an enormous amount of trial-and-error.

Big bang equals big risk, and if we can decrease the time between pivots and changes of direction, we can increase the odds of project success. We need to shift from resource optimisation to time-to-market optimisation. A marketing experimentation approach enables this shift, fixing time rather than scope.

The less often we release something into the world, the more expensive and risky each release is. Conversely, the more often we release things, the cheaper and safer those releases become.

This is where marketing experimentation, and the minimum viable test, comes in.

Marketing experimentation & the minimum viable test

“Humans will do marvellous things to avoid getting into the arena, where the possibility of failure is present. That’s why planning is so seductive. Planning can reduce uncertainty, and uncertainty is scary. But uncertainty can never be reduced to zero. So in most cases, it’s best to get momentum and solve the biggest problems as they come.”

Unless you’ve released the same thing, to the same audience, at the same time before, you’re not really planning, you’re guessing. And if you’re going to be wrong, you’re better off spending $100 and losing a few days’ work than spending $100,000 and losing three months of work.

This is why modern marketers run marketing experiments. Marketing experimentation is all about making specific, concrete predictions ahead of time in order to increase learning and reduce uncertainty over the long term.

The goal of marketing experimentation is to move away from a big bet culture towards a more agile scientific approach to marketing with the ultimate goal of reducing waste.

Anyone can put compounds in a beaker and heat them up, in the same way that any marketing team can produce a piece of content – neither is science. The science comes from having a hypothesis: a set of predictions about what is likely to happen. The goal is for marketing to be effective, not merely efficient. Any marketing team can produce things efficiently, but only some are effective.

Why use marketing experimentation?

There are many reasons to use marketing experimentation, including:

  1. Marketing experiments can help improve user experience, grow engagement and increase sign-ups from prospects and customers
  2. Marketing experiments help to isolate specific variables (such as conversion rate) to ensure the right decisions are being made
  3. Experiments allow us to quickly and easily prove that an opportunity is worthy of additional time and investment before it is too late.

You and your team will learn more and achieve more by implementing 3 ideas that have been properly considered and scoped, than by implementing 15 new ideas on a whim.

Being explicit with your experiment documentation also ensures that:

  1. You have clearly thought through your plan and how you’re going to measure the results so that, once the experiment is complete, you can clearly prove or disprove the original hypothesis
  2. Team members are kept up to date with what is being done and continually learn from each other’s experiments, without the need for meetings
  3. New team members can go back and understand and/or repeat previous experiments (without having to talk to the person who ran it last time)
  4. Where changes are made, and experiments are unsuccessful, there is a clear log of configuration changes that should be “undone”

Lastly, building a culture of experimentation in larger organisations, especially where long-established (often poor) practices have taken root, can be challenging. Spreading awareness and evangelising your agile marketing programme by sharing results and data transparently is the best way to gather support from colleagues across the business.

Marketing experimentation examples

If you’ve used the Internet today, the chances are you’ve participated in a number of online experiments (officially known as randomised controlled trials) without knowing it. Here are a couple of famous marketing experiment examples from Google and Microsoft.

Google’s ‘50 Shades of Blue’ experiment

In the late 2000s, shortly after Google launched ad links in Gmail, they decided to test which shade of blue would result in the most clicks on the ad link.

Google ran a series of 1% A/B tests in which 1% of users were shown one shade of blue, another 1% a different shade, and so on. In total, over 40 different shades of blue were tested. The results were surprising and demonstrated the potential value of data-driven decision making at scale.

“We saw which shades of blue people liked the most, demonstrated by how much they clicked on them. As a result we learned that a slightly purpler shade of blue was more conducive to clicking than a slightly greener shade of blue …. the implications of that for us, given the scale of our business, was that we made an extra $200m a year in ad revenue.”

Dan Cobley, Managing Director, Google UK
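
Tests like this depend on deterministic bucketing: hashing a user ID so that each user consistently lands in the same 1% slice for the lifetime of the experiment. The Python sketch below shows one common way to do it; the function names and hash scheme are illustrative assumptions, not Google’s actual implementation.

```python
import hashlib

def bucket(user_id: str, experiment: str, num_buckets: int = 100) -> int:
    """Deterministically map a user to one of num_buckets equal slices.

    Hashing the user ID together with the experiment name means the same
    user always lands in the same slice, and different experiments are
    bucketed independently of each other.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % num_buckets

def assign_shade(user_id: str, shades: list[str]) -> str | None:
    """Give each shade its own 1% slice; everyone else sees the control."""
    b = bucket(user_id, "link-colour-test")
    return shades[b] if b < len(shades) else None  # None = control colour

# 40 hypothetical shades of blue, one per 1% slice
shades = [f"#0000{160 + i:02x}" for i in range(40)]
print(assign_shade("user-123", shades))
```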

Microsoft Bing search engine experiment

In 2012 an employee at Microsoft had an idea about changing the way search engine ad headlines were displayed: move the first line of ad text up into the title line to make the headline longer.

The idea sat in the backlog for months because it wasn’t seen as particularly valuable, until an engineer decided it was simple enough to try and launched an A/B test. The change increased Bing’s revenue by 12% (over $120M at the time) without hurting user experience metrics. It was so impactful that it triggered internal ad revenue monitoring alerts.

Marketing experimentation cycle time

“Success is a direct result of the number of experiments you perform.”

Anthony Moore

The best marketing teams are built on a system of compounding loops. The more experiments you run, the more you learn (about your users, your product and your marketing channels), and the more you are able to apply those learnings over time to increase your ratio of successful experiments and grow.

Rapid iteration beats the competition and builds team morale, so the higher the velocity of testing, the faster your team will learn how to accelerate growth. Relatively few tests produce dramatic gains, which is why finding wins, both big and small, is a numbers game.

“You could beat any grandmaster at chess if you could move twice every time he moved once.”

James Currier

Your team’s goal should always be to maximise learning; as a result, most agile marketing teams run experiments in 4-week or 6-week cycles.

Marketing experiment owners

Every experiment or project is assigned an owner. The owner is completely in charge of the experiment and gets to choose how it is executed. They are responsible for getting it done, from the initial documentation to the process, the delegation, the goals, the deadlines and logging the results.

This ownership allows everyone to work in a way that they feel comfortable but within defined boundaries. It gives the whole team trust, creative freedom and the chance to prove themselves and share new ways of doing things with the rest of the team.

Marketing experiments & the minimum viable test

The minimum viable test is a core principle of lean startup and design thinking that can equally be applied to marketing opportunities.

“Our goal in discovery is to validate our ideas the fastest, cheapest way possible. Discovery is about the need for speed. This lets us try out many ideas, and for the promising ideas, try out multiple approaches. There are many different types of ideas, many different types of products, and a variety of different risks that we need to address (value risk, usability risk, feasibility risk, and business risk). So, we have a wide range of techniques, each suitable to different situations.”

Marty Cagan, Inspired

Experimentation doesn’t have to be a big project that requires developers, designers, expensive software or complicated data analysis. It just needs to be a way for you and your team to test your hypotheses and learn from the data. Do the minimum amount of work to get the insight you’re looking for. Wes Kao calls this the minimum effective dose.

Remember that when introducing a new idea, the goal isn’t perfection. The goal is initial feedback and learning.

Test one thing at a time – like the title or imagery – so that you’re able to learn from each experiment which variable is causing a difference in results.

The more variables you have in one experiment, the less meaningful your results will be. If you test one landing page design against a completely different landing page (without any experiments in between), your results won’t tell you much about what actually worked in that experiment.
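
Once you have isolated a single variable, a standard two-proportion z-test is one common way to check whether the difference between two variants is bigger than chance alone would explain. Below is a minimal Python sketch with made-up numbers; in practice you would also fix your sample size in advance (see the baseline section later in this article).

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-score for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical headline test: 5,000 visitors per variant
z = two_proportion_z(conv_a=400, n_a=5000, conv_b=460, n_b=5000)
print(f"z = {z:.2f}")  # |z| > 1.96 is significant at the 95% level
```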

Also remember that with modern tools you don’t necessarily have to roll out a change that affects every website visitor or app user.

There are no magical ideas or silver bullets in marketing, so you must learn to avoid becoming overly invested in the outcome of any one idea.

The age of the waterfall project is over. Digital disruptors such as Airbnb, Uber, Netflix and Amazon stay ahead of their competitors in tiny sprints.

Your goal should be to find the least resource intensive way to test the hypothesis such that it still delivers a meaningful experiment outcome (i.e. that still proves or disproves the hypothesis). Your aim is to test your riskiest assumptions at the same time as collecting feedback to guide a future iteration of the idea. This is known as the Minimum Viable Test.

The bigger the test, the more resources required, the more likely your work will affect other teams, and the higher the stakes. Fighting feature creep and scope creep and maintaining a good testing cadence is key to the growth marketing process.

Experiments do not, and should not, have to be perfect before they see the light of day. The trap of perfection is that whilst you’re working on perfection, your competitor is chatting up your customer.

Aim to minimise these potential downsides by keeping your test as small as possible. If the results are good, then you double down.

What does good marketing experimentation look like?

1. Test Methodology

One of the most important aspects of experimentation is the research and planning phase. This planning forces us to think through the “why” of an experiment, including what we expect to happen and whether it’s worth doing in the first place.

Following on from the growth hypothesis, the experiment research and methodology may include:

Experiment Aim / Goal – Without clearly articulating the aim, it’s easy to lose direction and not know whether the experiment was a success.

A Detailed Test Description – How will the test be run? Where? When? What are the key action items? Include any relevant audience segmentation, i.e. if targeting a specific set of website pages or group of users. What’s the maximum percentage of users you feel comfortable testing this with? (Aim for the highest possible, i.e. a 50/50 split.) “If this works we should …” – use this to document your brilliant-at-the-time scope creep ideas. This is a mental hack: simply promise yourself that if it works you will do it next time.

Any Dependencies – What resources will be needed? Budget? Staff? How much of the work would affect other teams? How many teams do we need to inform?

Research & Links – Add links to any past experiments, customer interviews, reference articles, charts, benchmark data or best practices. Consider links to reports in tools such as Amplitude and Google Analytics.
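
One lightweight way to keep this documentation consistent across experiments is a structured template. The sketch below is a hypothetical Python dataclass mirroring the items above; the field names are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class ExperimentDoc:
    """Minimal experiment write-up mirroring the methodology items above."""
    aim: str                # what success looks like, stated up front
    description: str        # how, where, when, and key action items
    audience: str           # segmentation, e.g. pages or user groups
    traffic_split: float    # max share of users exposed (aim high, e.g. 0.5)
    if_this_works: str      # parked scope-creep ideas for next time
    dependencies: list[str] = field(default_factory=list)
    research_links: list[str] = field(default_factory=list)

doc = ExperimentDoc(
    aim="Lift demo requests from the pricing page by 10%",
    description="Swap the hero headline; run for one 4-week cycle",
    audience="All visitors to /pricing",
    traffic_split=0.5,
    if_this_works="Test the same headline on the homepage next cycle",
    dependencies=["Design review", "Analytics event for CTA clicks"],
    research_links=["https://example.com/past-headline-test"],
)
```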

2. Success Metric

Record your primary success metric (goal) and any success criteria at the same time as refining your hypothesis. You should understand how you are going to measure success before you get started in order to avoid making the data fit your own preconceived ideas or hopes for the outcome.

You may need to measure more than one metric, including metrics downstream of the experiment that may be impacted. However, keep in mind that more data means more time, and more risk of muddying the focus and inundating individuals with too much information.

Additionally, however rational and objective you think you are, we all want to win and bring in results. Whether intentional or not, it is easy to create a “successful” experiment by cherry-picking the data once it’s complete.

Here’s a real-world example from Buffer, A/B testing the text in a tweet linking to one of their articles. Can you tell which one was the winner?

Buffer Tweet 1 (Source: How Buffer A/B Tests)
Buffer Tweet 2 (Source: How Buffer A/B Tests)

Without knowing the metric they were looking to influence, it’s impossible to know which was the winner. If clicks were the goal, tweet 1 won; if retweets were the goal, tweet 2 won.

You should consider:

  1. What data can be collected to prove or disprove the hypothesis? What is the clear measure of success? Collect this data and no more.
  2. Is the above data currently being recorded? e.g. required Google Analytics events
  3. What metric(s) are you trying to improve?
  4. What determines success versus failure?

3. Baseline & Predicted Impact

Before starting your test you should establish a baseline value – a reference point for the current state – so that you can measure progress or change. If baseline data is unavailable, use benchmark data or another reference point, such as average industry conversion stats.

“Experiments need a baseline, so you can measure results, otherwise you’re just spinning your wheels”

Tim Wu, Director of Growth, Framed.io

Your baseline may be based on:

  1. Quantitative Primary Data – previous experiments, surrounding data, funnel data
  2. Qualitative Primary Data – surveys, support emails, user testing recordings
  3. Secondary Data – blogs, competitor observation, case studies, things you have read or heard from others
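
A baseline also lets you estimate, before launch, how much traffic a test needs in order to produce a meaningful result. The sketch below uses the standard normal-approximation sample-size formula, with 95% confidence and 80% power hardcoded; the numbers in the example are hypothetical.

```python
import math

def sample_size_per_arm(baseline: float, relative_mde: float) -> int:
    """Approximate visitors needed per variant to detect a relative lift
    of `relative_mde` over a `baseline` conversion rate.

    Normal-approximation formula with z-scores hardcoded for 95%
    confidence (1.96) and 80% power (0.84), two-sided test.
    """
    p1 = baseline
    p2 = baseline * (1 + relative_mde)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((1.96 + 0.84) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical: 3% baseline conversion, hoping for a 20% relative lift
print(sample_size_per_arm(baseline=0.03, relative_mde=0.20))  # ~14,000 per arm
```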

4. Experiment Notifications

Once your experiment is ready, a notification should be sent to your team that it is being launched, so that everyone is aware of live changes and there are no surprises. Another notification should be sent with the results once the experiment is complete.
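
These notifications are easy to automate. As a minimal sketch, assuming you have set up a Slack incoming webhook (the URL below is a placeholder for your own), launch and results messages could be posted like this:

```python
import json
import urllib.request

def notify(webhook_url: str, text: str) -> None:
    """Post a short message to a Slack incoming-webhook URL."""
    request = urllib.request.Request(
        webhook_url,
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)

# Placeholder URL: create your own incoming webhook in Slack
WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"

notify(WEBHOOK_URL, "Launching experiment: pricing-page headline, 50/50 split")
notify(WEBHOOK_URL, "Experiment complete: results logged in the experiment doc")
```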

5. Constant Experimentation

Experimentation shouldn’t be a one-off piece of work; it is incredibly unlikely that a marketing project cannot be improved. Aim to continually revise and improve your work, as good growth is built on a culture of constant experimentation and compounding results.

How to implement marketing experimentation today

Looking for growth marketing experiment templates for Pipefy, Trello, Airtable or Excel? See our article on growth experiment templates here.

Looking to get started with a growth marketing project management tool? We’d love to show you Growth Method.
