SEO split testing is a relatively new concept, but it's becoming an essential tool for any SEO who wants to call themselves data-driven. A/B testing has long been familiar in the context of Conversion Rate Optimisation (CRO), and applying those concepts to SEO is a logical next step if you want to be confident that the work you're spending time on will actually lead to more traffic.
At Distilled[1], we've been in the fortunate position of working with our own SEO A/B testing tool[2], which we've been using to test SEO recommendations for the last three years. Over that time, we've honed our approach to setting up and measuring SEO split tests.
In this post, I’ll outline five mistakes that we’ve fallen victim to over the course of three years of running SEO split tests, and that we commonly see others making.
What is SEO split testing?
Before diving into how it’s done wrong (and right), it’s worth stopping for a minute to explain what SEO split testing actually is.
CRO testing is the obvious point of comparison. In a CRO test, you're generally comparing a control and a variant version of a page (or group of pages) to see which converts better. You do this by assigning users to different buckets and showing each bucket a different version of the website.
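As a rough sketch of that bucketing step (the function and experiment names here are invented for illustration, not taken from any particular CRO tool), users are typically assigned deterministically, for example by hashing a cookie or user ID, so that the same visitor always sees the same version:

```python
import hashlib

def assign_user_bucket(user_id: str, experiment: str = "homepage-cta-test") -> str:
    """Deterministically assign a visitor to the control or variant bucket.

    Hashing the user ID together with an experiment name means the same
    visitor gets the same version on every visit, and different experiments
    split users independently of each other.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "variant" if int(digest, 16) % 2 == 1 else "control"

print(assign_user_bucket("visitor-42"))  # 'control' or 'variant'
```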
In SEO split testing, we're trying to ascertain which version of a page will perform better in terms of organic search traffic. If we were to take a CRO-like approach of bucketing users, we would not be able to test the effect, as there's only one "user" whose behaviour we're trying to influence: Google's crawler.
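Instead, SEO split tests bucket pages rather than users: a set of similar pages is split into a control group and a variant group, the change is applied only to the variant pages, and organic traffic to the two groups is compared over time. The sketch below is a minimal illustration of that idea; the page list, session numbers, and function names are made up, and this is not necessarily how Distilled's tool implements it:

```python
from statistics import mean

def assign_page_bucket(urls: list[str]) -> dict[str, str]:
    """Split a list of similar pages alternately into control and variant groups."""
    return {url: ("variant" if i % 2 else "control") for i, url in enumerate(sorted(urls))}

# Hypothetical daily organic sessions per page; in a real test these would
# come from analytics or Search Console data, not hard-coded values.
daily_sessions = {
    "/widgets/red": [120, 115, 130],
    "/widgets/blue": [90, 95, 88],
    "/widgets/green": [140, 150, 145],
    "/widgets/yellow": [100, 110, 105],
}

buckets = assign_page_bucket(list(daily_sessions))

def average_sessions(bucket: str) -> float:
    """Average organic sessions per page per day for one bucket."""
    return mean(mean(days) for url, days in daily_sessions.items() if buckets[url] == bucket)

print("control:", round(average_sessions("control"), 1))
print("variant:", round(average_sessions("variant"), 1))
```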