The Minimum Viable Test (MVT): for marketing & growth

Article originally published in April 2019 by Stuart Brameld. Most recent update in April 2024.

You may have come across the term minimum viable test within the context of lean and agile methodologies.

A minimum viable test is the antithesis of the traditional, everything-at-once, waterfall approach to marketing project delivery. This more agile approach is the fastest way to get feedback from customers and prospects, and it should be the foundation of most modern marketing teams.

How waterfall project delivery became the norm

The ideas behind waterfall project management (and task-based systems) date back to Frederick Winslow Taylor’s work in the early 1900s. This was the era of mass production and the second industrial revolution – car manufacturing, steel and tobacco production. These industries involved processes that were repeated many thousands of times, well-defined problems, and clearly understood solutions.

Taylor’s theory, known as Scientific Management, and what we now consider waterfall methodology, centred around the idea that dividing work into standardised discrete tasks was the key to increasing process and project efficiency.

Waterfall methodology argues the best thing to do with projects such as these is to decompose the project into a series of individual tasks, assign each task to a functional specialist and work through a series of steps towards the end deliverable.

Waterfall project delivery follows a series of steps, executed in a linear fashion one after the other. The steps are typically Planning, Design, Implementation, Testing and Deployment.

For a marketing campaign this may look like:

  • Planning – Discuss the campaign theme and objectives at a high level
  • Design – Identify the individual assets required (content, graphics, landing page and ad creation, tools, nurture sequences, analytics etc.)
  • Implementation – Create the above assets, including design and development work
  • Testing – Ensure everything works as planned: test the user journey, device types, translations etc.
  • Launch – Push live and launch to prospects and customers

Taylor’s ideas still form the bedrock of the modern capitalist economy and remain very much alive in global workplace culture. His thinking permeates virtually all management theory to this day.

The problem with waterfall project delivery

Taylor is responsible for the way most people see work, teams, and leadership today. At the same time his theories stand in the way of many corporate innovation and transformation efforts.

The problem with task-based systems is that they work best for predictable, frequently recurring projects. In other words, waterfall planning only really works where there is certainty around the problem and the solution.

Unfortunately today’s marketers and marketing teams, and increasingly other business functions as well, operate under conditions of extreme uncertainty and constant change. Customers and audiences are changing, experiences and expectations are changing, acquisition channels are changing, and company strategy and messaging are changing. In this kind of environment your marketing plan very quickly becomes less of a “plan” and more a case of “build it and see what happens”.

The most common outcome when using waterfall project planning in conditions of uncertainty is the successful execution of a bad plan. Eric Ries, renowned author and coach of lean methodology, refers to this as “achieving failure”.

In marketing this typically takes the form of a campaign that is on time, on budget and beautifully executed. All campaign planning and tasks are completed perfectly. Regular updates show everything to be on-track, and yet the end result is no increase in lead generation or revenue.

Marketers need to reduce the risk of spending time and money on content that people don’t read, and on campaigns people don’t engage with. If we create something nobody wants, does it matter if we do it on time and on budget?

In my experience marketers have a tendency to over-build. Big bang equals big risk. You build the thing until it’s 100% done and deliver it to the user at the very end.

What is needed is an approach to reduce the uncertainty by getting early market feedback from prospects and customers.

Why marketers need the minimum viable test (MVT)

When the outcome is unpredictable, the solution is a more agile approach: the minimum viable test. Software developers call it Agile. The manufacturing industry calls it Lean. Designers call it Design Thinking. Entrepreneurs call it the Minimum Viable Product (MVP). Marketers call it Experimentation or Agile Marketing. You may also hear terms such as lean sprints, growth sprints and agile sprints.

Regardless, all of these methodologies essentially describe the exact same thing – an incremental and iterative approach to project delivery where you test your assumptions as quickly as possible.

  • Iterative – don’t try to get it all right from the beginning
  • Incremental – don’t build it all at once

With traditional waterfall projects all of the value delivery to users comes right at the end of the project. The goal with the minimum viable test is to bring forward user value delivery as early as possible, and to iterate from there. Think of it like trial-and-error marketing, where the team that finds the errors the fastest wins.

To summarise, try to simplify your marketing ideas. Rather than jumping in with a big complex idea or campaign, start with a small and simple test. Once the test is launched, then gather feedback and iterate your way to a more successful, sustainable, and scalable marketing strategy.

How to launch a minimum viable test

The illustration below from Henrik Kniberg, an agile and lean coach, is commonly used as a metaphor for iterative and incremental product development (note the faces depicting customer satisfaction throughout delivery).

As marketers, the minimum viable test enables us to get real user feedback by focusing on the actual customer need, delivering the smallest thing possible, and getting early user feedback. Remember the main goal at this point is not to build the perfect solution but is instead to find the cheapest and fastest way to learn.

Characteristics of the minimum viable test

As mentioned, the main characteristics of a minimum viable test are small, fast & cheap – let’s explore these further below.

1 Small

TL;DR – don’t aim for perfect

Avoid spending time building out a big complex idea and then releasing it to customers. Unless the outcome is highly predictable your aim shouldn’t be to deliver the perfect solution to the problem but instead to decrease time-to-market in order to get early user feedback.

Consider the minimum viable test as an early version of an idea that has limited or imperfect functionality. You can then use this early version to decide if and how to move forward. Your early release may lead you to discover that users don’t want or need all of the bells and whistles that were planned.

2 Fast

TL;DR – launch things quickly

Time is a valuable resource and wasting it on work which provides no benefit to your business or team makes little sense. As a marketer, your goal is to learn and get user feedback as fast as possible, which means getting things out in public fast and adjusting your plan as needed.

There is a wealth of research on the advantages of smaller, more frequent delivery of work, which shows that:

  • Planning is easier
  • Estimates are more accurate
  • There is less scope creep
  • Teams get more user feedback
  • There are fewer interruptions

Move from a fixed-scope to a fixed-time mindset. Instead of starting a project by asking “how long will it take to get xyz done?”, start out with a mindset of “what can I get shipped in the next two weeks?”.

This may mean reducing your initial scope, narrowing down features or functionality, or trialling an imperfect solution.

What’s the smallest thing you could deliver in order to get real customer feedback and validate your plan?

3 Cheap

TL;DR – launch things cheaply

Avoid spending lots of money until the outcome of your work is predictable. Once it’s proven that the content ranks, the ads work or the landing page converts, then it’s time to consider ramping-up spend.

Until then, keep it cheap by:

  • Reducing scope
  • Doing the work in-house rather than hiring an agency
  • Using off-the-shelf software rather than building custom code
  • Running a small in-house test before engaging with stakeholders

The goal is to detect potential risks and areas of improvement before increasing complexity and spend. You have two options:

  1. Spend 10% of your overall budget, course correct and have 90% remaining
  2. Sink 100% of your budget into a big splash campaign that fails to deliver

Buffer’s minimum viable test

Buffer, the social media platform, today serves over 140,000 customers around the world; however, the company started life as just two landing pages. Before investing in building anything, Joel (the CEO) wanted to check demand for the product. He tweeted out a link to the two pages below as a test:

The first page described the benefits of Buffer and how it works. Clicking on ‘plans and pricing’ took you to the second page, which displayed the message “Hello! You caught us before we’re ready” and invited visitors to leave an email address to be notified at launch. If people entered their email address, it validated demand for the product.

The next step was to discover what they would pay for it.

Joel added a pricing page between the two pages above to test whether people were willing to pay. The extra step both recorded which plan was clicked and further tested demand by requiring an additional click before the email capture.

These two quick and simple tests gave Joel the confidence to move forward with building the first version of Buffer.
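The mechanics of a “fake door” test like Buffer’s can be sketched in a few lines. Everything here – the plan names, visitor counts and the 5% demand threshold – is a hypothetical illustration for this article, not data from Buffer’s actual experiment:

```python
# Hypothetical sketch: summarising the results of a Buffer-style fake-door test.
# The 5% demand threshold is an illustrative assumption, not an industry standard.

def summarise_fake_door_test(visitors, signups_by_plan, demand_threshold=0.05):
    """Return per-plan signup rates and whether overall demand clears the bar."""
    total_signups = sum(signups_by_plan.values())
    overall_rate = total_signups / visitors if visitors else 0.0
    rates = {plan: n / visitors for plan, n in signups_by_plan.items()}
    return {
        "overall_rate": overall_rate,
        "rates_by_plan": rates,
        "validated": overall_rate >= demand_threshold,
    }

result = summarise_fake_door_test(
    visitors=400,
    signups_by_plan={"free": 12, "standard": 9, "premium": 3},
)
print(result["overall_rate"])  # 0.06
print(result["validated"])     # True
```

The point of a sketch like this is not the arithmetic but the discipline: decide up front what level of demand would justify building the real thing, then let the test answer that question.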

Colt’s minimum viable test

Colt provides world-class network and voice connectivity to businesses in Europe, Asia and the US. After analysing live chat conversations with sales, the digital team discovered that pricing estimates were by far the most common request.

The team wanted to test if they could increase conversion rates on the site by making this process easier, and self-serve. They did this by adding a custom ‘get a quote’ action on the website.

Building a real-time quoting solution in a company the size of Colt would have involved a number of different teams:

  • Web design and development teams to build the front-end
  • Backend web development interfacing with internal data teams to pull data from pricing APIs
  • Co-ordinating with product management for testing and pricing accuracy
  • Co-ordinating with legal around the release of real-time pricing information

Definitely not small, fast and cheap.

Instead, the team focused on the core customer need (receiving a price estimate) and not on building a real-time pricing tool. This simple change reduced technical complexity, scope and cost by over 10X. The team were able to build the entire solution themselves, in-house, using an off-the-shelf piece of software. The team worked with the internal sales teams to ensure there was a process to handle incoming requests and the entire solution was up and running within 3 weeks.

The minimum viable test checklist

The checklist below consolidates much of the thinking in this article into a simple reference to keep on hand for your next marketing test or experiment.

  • Reduced scope – Steer clear of anything too complicated. Focus on the must-haves, not the nice-to-haves.
  • Single team – Wherever possible avoid involving other internal teams or departments. Keep stakeholders to a minimum.
  • No agencies – Avoid engaging outside agencies wherever possible. If unavoidable, steer them towards off-the-shelf rather than custom solutions.
  • Customer focused – Focus on the real customer need, not what you think they need or what you would like to create.
  • Restricted audience – Reduce the size of the audience for your initial test to reduce risk and stakeholder involvement: single language, limited audience, limited time etc.
  • Limited budget – Reduce budget. There is relatively little that can’t be tested for £1,000.
  • Fixed time – Adopt a fixed-time rather than fixed-scope mindset, i.e. “what can I get shipped in the next two weeks?”
  • Soft launch – No grandiose launching, shouting or celebrating yet.

Conclusion

To summarise, whether it’s called agile, lean, experimentation or something else, this iterative and incremental approach to work is here to stay. In today’s world big wins do not come from big swings, big wins come from lots of swings.

Rarely, if ever, do successful marketing projects go from idea to mature & stable in a straight line.

Success involves trial-and-error. This is not unique to marketing; it is equally true of product development, writing code, or writing a book, an essay or music. There are early tests, many code rewrites and refactors, many drafts. Every creative human endeavour requires an enormous amount of trial-and-error.

Therefore the single best way to increase growth is to increase the rate and the quality of the tests you and your team run.

With that in mind, aim to fail small and fast. After all, it could be the next experiment you run that changes the growth trajectory for your business.

It is not the strongest species that survive, nor the most intelligent, but the ones most responsive to change.

Charles Darwin

Additional reading

If you found this content useful you may like some of the articles below for additional reading.

  • Minimum Viable Test: The Framework We Use To Kill It With Our A/B Tests by Dennis van der Heijden – https://www.innertrends.com/blog/minimum-viable-test
  • Sense and Respond by Jeff Gothelf – https://www.amazon.com/Sense-Respond-Successful-Organizations-Continuously/dp/1633691888
  • Making sense of MVP (Minimum Viable Product) – and why I prefer Earliest Testable/Usable/Lovable by Henrik Kniberg – https://blog.crisp.se/2016/01/25/henrikkniberg/making-sense-of-mvp
  • A Minimum Viable Product Is Not a Product, It’s a Process by Jim Brikman – https://www.ycombinator.com/library/4Q-a-minimum-viable-product-is-not-a-product-it-s-a-process
  • Avoid ‘Big Bang’ Delivery by Simon O’Regan – https://www.simonoregan.com/short-thoughts/z4sln4epd0svtk8binaxzhktaxkfbj