SEO Experiments: DIY, 3rd party service or in-house?

Article written by
Stuart Brameld
Why Run SEO Experiments
Search Engine Optimisation (SEO) is the process of improving your website to ensure that when a potential customer searches for what you have to offer, you are one of the top results on their favourite search engine.
When it comes to SEO, companies tend to implement a set of SEO best practices, often supported by some kind of auditing tool such as Moz or Ahrefs.

The problem with this approach is that you can spend months implementing these best practices (speeding up response times, earning new backlinks, adding structured data, updating alt tags and adding new content to pages) and see absolutely no improvement in your search traffic.
"Traditionally, SEO tactics include trying out different known strategies and hoping for the best. You might have a good traffic day or a bad traffic day and not really know what triggered it, which often makes people think of SEO as magic rather than engineering."
Julie Ahn, Pinterest Growth Engineer
If your organic search traffic in Google Analytics looks like the chart below, your SEO efforts aren't working.

Just because something is a best practice doesn't mean doing it will bring you more search traffic, and just because a change worked on somebody else's website doesn't mean it will work on yours. In short, cookie-cutter "best practices" and big SEO audits simply don't work.
What is required is a data-driven approach that ensures, regardless of your team size, you are spending time and money on SEO activities that actually make a difference.
AB Testing
Most marketers are familiar with the concept of split testing (or A/B testing), where different versions of a webpage are tested to see which one performs better. This typically falls under the practice of Conversion Rate Optimisation, or CRO.
To run one of these tests you split your website visitors into two groups using a tool such as Optimizely, VWO or Google Optimize, showing half of your visitors the original page and the other half a variant served by the tool:
Control - the original page, with no changes
Variant - the page with a change applied, e.g. an additional email form
If, over a period of time, the variant performs better based on some predetermined metric, such as more quote requests or form submissions, the variant becomes the new and improved regular site page.
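Split-testing tools handle the visitor assignment for you, but the underlying mechanics are simple. The sketch below is a minimal illustration rather than a reference implementation: the experiment name, visitor IDs and conversion numbers are invented, and a real test would also include a statistical significance check before declaring a winner.

```python
import hashlib

def assign_group(visitor_id: str, experiment: str = "homepage-email-form") -> str:
    """Deterministically bucket a visitor into control or variant.

    Hashing the visitor ID together with the experiment name means the
    same visitor always sees the same version of the page.
    """
    digest = hashlib.md5(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return "variant" if int(digest, 16) % 2 else "control"

def conversion_rate(conversions: int, visitors: int) -> float:
    return conversions / visitors if visitors else 0.0

# Illustrative numbers only: form submissions / visitors per group.
results = {"control": (120, 5000), "variant": (161, 4980)}
for group, (conversions, visitors) in results.items():
    print(group, f"{conversion_rate(conversions, visitors):.2%}")
```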
The Challenge with SEO
When running paid search ads it is easy to A/B test your ads and landing page content in this way, and to attribute an increase or decrease in results to specific changes. Measurement is straightforward because there are relatively few variables, so these tests provide a clear basis for decisions and further investment.
Unfortunately, measuring the effectiveness of Search Engine Optimisation changes is not so straightforward, which is why it is so often done poorly.
Duplicate Content
We can't simply create two versions of one page and send half of Google's traffic to one version and half to the other to see which one ranks better in the search engines, as this would result in duplicate (or near-duplicate) content.
Duplicate content is frowned upon by all search engines and, particularly when done at scale, can result in your site being demoted or removed from search results.
In the past some marketers have tried to work around this by showing one set of content to humans and a different set to search engine crawlers (such as Googlebot). This is known as cloaking, which is now firmly against the Google Webmaster Guidelines and can lead to penalties against your site in search rankings.
The Dynamic Search Environment
In addition to the duplicate content challenges above, Google is estimated to take over 200 different signals and variables into account when determining search results, which introduces a huge amount of variability into any test you decide to run.

Many of these variables are completely outside of your control, in particular Google algorithm changes and the activity of your competitors, which hampers the ability to set up a controlled test.
As marketers we do not have control over many of the factors that determine how our website pages appear in search engine results. Variables in the daily search environment that affect SEO include:
Lag times - between when a page is crawled, and when it is processed
Search engine algorithm changes - on average two per day
Search result variations - based on location, time and login status
Competitor activity - such as significant ranking gains or drops
Backlink changes - a sudden change in backlinks for an individual page, for example, through a significant news event
Internal linking changes - which can cascade through a site in unpredictable ways
Running Good SEO Experiments
Despite the above, with a good methodology it is possible to reduce the effects of these variables and run high-quality, valid SEO experiments. Companies such as Pinterest, Etsy and Thumbtack all perform regular SEO experiments that have led to huge increases in search traffic.
"There are hundreds of different ways to do SEO, including sitemaps, link-building, search-engine-friendly site design and so on. The best strategy for successful SEO can differ by product, by page and even by season. Identifying what works best for each case helps us move fast with limited resources. By running a large number of experiments, we found some well-known strategies for SEO didn't work for us, while certain tactics we weren't confident about worked like a charm."
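What these companies have in common is that their SEO experiments split pages rather than visitors: a group of similar pages (category pages, product pages, guides) is divided into a control group and a test group, the change is applied only to the test group, and organic traffic for the two groups is compared over the same period. Because site-wide factors such as algorithm updates affect both groups equally, comparing the groups against each other is far more robust than watching a single page's traffic. The sketch below illustrates the idea only; the URLs and session numbers are hypothetical, and a real framework would also account for seasonality and statistical significance.

```python
import hashlib
from statistics import mean

def bucket(url: str, experiment: str = "add-faq-schema") -> str:
    """Assign each page to control or test by hashing its URL."""
    digest = hashlib.md5(f"{experiment}:{url}".encode()).hexdigest()
    return "test" if int(digest, 16) % 2 else "control"

# Hypothetical weekly organic sessions per page, before and after the change
# (the change is only ever applied to pages in the test group).
pages = {
    "/guides/seo-basics":      {"before": 410, "after": 430},
    "/guides/link-building":   {"before": 380, "after": 455},
    "/guides/site-speed":      {"before": 500, "after": 505},
    "/guides/structured-data": {"before": 290, "after": 360},
}

groups = {"control": [], "test": []}
for url, sessions in pages.items():
    lift = (sessions["after"] - sessions["before"]) / sessions["before"]
    groups[bucket(url)].append(lift)

# Did test pages grow faster than control pages over the same period?
for name, lifts in groups.items():
    avg = mean(lifts) if lifts else 0.0
    print(f"{name} lift: {avg:+.1%}")
```

Because only the test pages receive the change, there is no duplicate content and nothing resembling cloaking, which is what makes this approach safe to run at scale.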
The best introduction to running SEO experiments is this short video from Rand Fishkin, founder and former CEO of Moz.