Republished here with permission.
Our digital team review
In our quarterly digital team review I looked around at the Colt digital marketing team. Our website manager and digital lead had just finished the mammoth task of migrating more than 100 product pages into a new template (across six languages), our marketing automation manager had just updated 10,000+ contacts to support our ABM and segmentation programmes, our cloud campaign manager had recently launched the second global campaign of 2022, and our digital exec was in the middle of five different campaigns, including a Gartner Magic Quadrant campaign that was performing better than anything we’d run before. But Stuart Brameld, our growth marketing agency director, wasn’t happy.
From an outside perspective our numbers were better than ever. We were smashing our search rankings, our website sales form-fills were at an all-time high, and we’d met our stretch quarterly goal for website quote requests… but we weren’t meeting our goal of 15 experiments a quarter. New team members had recently replaced long-serving ones, and they hadn’t quite got to grips with the growth method.
Our failure rate was too low
The growth method is how our digital team has been working for the last three years. We have projects, but we run continuous optimisation via experiments, with each team member set an objective of running 4-5 experiments a quarter. They create a measurable hypothesis with a numerical outcome and share it with the team in our weekly digital team meeting. The team gives input and the experiment owner adapts. The experiment owner then builds out the experiment themselves, or works with our development agency if development work is required. They have six weeks to see if it’s worked and give it a pass, inconclusive, or fail rating. Based on this, they extend the experiment, tweak it and run it again, or move on to a new experiment. Our failure rate was at 50%, and it was too low (a recent HBR article put Booking.com’s experiment failure rate at circa 90%).
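The lifecycle above (measurable hypothesis, a six-week window, a pass/inconclusive/fail rating, and a next step for each rating) can be sketched as a small state model. This is purely my own illustration of the process described, not a Colt tool; the class and function names are made up.

```python
from dataclasses import dataclass

# Ratings an experiment owner can assign after the six-week window.
OUTCOMES = ("pass", "inconclusive", "fail")

@dataclass
class Experiment:
    hypothesis: str          # measurable hypothesis with a numerical outcome
    weeks_allowed: int = 6   # time allowed to see if it's worked
    outcome: str = ""        # empty while the experiment is still running

    def next_action(self) -> str:
        """Map the rating to the next step described in the article."""
        if self.outcome == "pass":
            return "scale the win"
        if self.outcome == "inconclusive":
            return "extend or tweak the experiment and run again"
        if self.outcome == "fail":
            return "record the learning and move on to a new experiment"
        return "still running"

def failure_rate(experiments: list) -> float:
    """Share of finished experiments that failed (the team's was ~50%)."""
    finished = [e for e in experiments if e.outcome in OUTCOMES]
    return sum(e.outcome == "fail" for e in finished) / len(finished)
```

A team running at Booking.com's reported rate would see `failure_rate` return roughly 0.9; the article's point is that a low value here signals experiments that are too safe, not a team that is doing well.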
In our quarterly review the team shared their latest experiment ideas, and it was soon clear what the issue was: over-complication and a bit of fear. The new starters thought the experiments had to be big, lofty, and sure to pass. They were worried about their experiments failing and being seen as less competent as a result. We discussed the challenges, and I told them I needed them to fail more.
Removing the fear of failure allows us to make decisions at pace, try new things easily, experiment more widely, build incrementally and move on quickly. Elon Musk (whatever your personal view of him!) is one of the world’s best-known innovators and is quoted as saying, “If things are not failing, you are not innovating enough.”
It’s part of the culture of the team, so there’s no blame when things don’t work, and everything is considered a learning experience. We then take our wins and we scale (the product page template update is an example of implementing results from a series of experiments).
Using the design method
In the team we use design thinking, which McKinsey (2016) describes as ‘using systemic reasoning and intuition to address complex problems and explore ideal future states’. Having experiment data means that all of our decisions become data-driven. When asked to do things or make changes, we use our collective knowledge of experiment outcomes to make decisions quickly (or explain why we’re not going to do what we’re being asked!). And the act of running the experiments gives the team a deep knowledge of the user journey and Colt’s capability. It’s the reason our team has continued to see year-on-year improvements across our digital stats, and why we’re brought in to help on wider Colt projects.
It was a breakthrough for the team, particularly the new starters. We needed that reminder to fail more. We’re now on track with our experiments for Q2, and I’m excited to see how much my team have succeeded, and how much they’ve failed.