This post breaks down SEO A/B testing from start to finish - starting with definitions and ending with communicating the results to clients. Let’s take it from the top!
SEOs use A/B testing to learn how a variation of a web page would affect user behavior if implemented permanently. A/B tests are run in an effort to improve a client’s KPI, and work by presenting each user with one of two versions of the same page: version A, the control, and version B, the tweaked page. (Adding versions C and D is also common.)
The page elements chosen for testing will, of course, differ based on the KPI. However, commonly tested elements include:
At the end of the experiment, the SEO has data on which page version was most successful in getting users to perform the desired action (e.g., button clicks, time on site, purchases) and to what degree of certainty that success can be attributed to the change made. The SEO can then offer the client a stronger recommendation on whether to implement the change, along with a numerical forecast of how it would influence the KPI.
Wpromote ran an A/B test on its What We Do page with three different CTAs. The KPI was button clicks; each button opened a lead generation form.
At the end of the experiment, we found that Version B performed best, with a 96.7% probability of outperforming the control CTA. In other words, the data suggested that updating the CTA to Version B would increase conversions, with 96.7% statistical certainty. See a snippet of our findings, below:
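A probability like the 96.7% above typically comes from a Bayesian comparison of the two variants’ conversion rates. Here’s a minimal sketch of that kind of calculation, using made-up click and pageview counts rather than Wpromote’s actual data:

```python
import random

def prob_b_beats_a(clicks_a, views_a, clicks_b, views_b, draws=100_000, seed=42):
    """Monte Carlo estimate of P(rate_B > rate_A), using Beta(1, 1) priors
    on each variant's conversion rate."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        # Posterior for each conversion rate: Beta(1 + conversions, 1 + non-conversions)
        rate_a = rng.betavariate(1 + clicks_a, 1 + views_a - clicks_a)
        rate_b = rng.betavariate(1 + clicks_b, 1 + views_b - clicks_b)
        if rate_b > rate_a:
            wins += 1
    return wins / draws

# Hypothetical data: 120/4,000 clicks on the control vs. 160/4,000 on Version B
probability = prob_b_beats_a(120, 4000, 160, 4000)
```

With numbers like these, the probability that Version B is genuinely better lands well above 95% — the same kind of figure your testing software reports.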
So why aren’t more SEOs running these tests, if they are so great? The short answer is that it’s hard, even with testing software. And it all begins with a problem-centric (!), testable (!!) hypothesis.
A great hypothesis should begin with a problem, not a solution. Examine the pain points of your client’s website (such as ignored buttons, pages with lower-than-expected page views, or abandoned shopping carts) and then brainstorm a solution. Next, define your goal (which should align with the KPI), tactics, and prediction in a formal hypothesis. Stumped? Try this template.
Because of [PROBLEM], we anticipate that [CHANGE] will cause [IMPACT]. We will measure this using [DATA METRIC(S)], and expect to see results in [X WEEKS].
Got your hypothesis ready? Great. Now let’s finish up prep.
A common error new A/B testers make is confusing “statistically significant” data with a “significant result.” Avoid false positives or negatives by measuring out your A/B testing ingredients carefully before getting started, and allowing the test enough time to fully bake. Doing this ensures that the data you serve to your clients at the end are savory and representative. More on these ingredients:
To decide how much traffic to allocate to the test page (version B), balance your client’s risk tolerance against how much traffic the page receives and how much time you have to run the test.
Avoid making any changes to a page variant mid-experiment, as this confounds the data and may render your test results inconclusive. If a change is necessary mid-test, scrap the first experiment and start a new one.
Need help calculating your test duration and sample size? Check out this cool tool.
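Calculators like the one linked above do the math for you, but the standard two-proportion sample-size formula behind them is straightforward. A rough sketch, with hypothetical numbers (and z-values hard-coded for the common 95% confidence / 80% power settings):

```python
import math

def sample_size_per_variant(baseline_rate, min_relative_lift):
    """Visitors needed per variant to detect a given relative lift in
    conversion rate, at 95% confidence (two-sided) and 80% power.
    The z-values below are only valid for those settings."""
    z_alpha, z_beta = 1.96, 0.84
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2 * variance) / (p2 - p1) ** 2
    return math.ceil(n)

# Hypothetical example: 3% baseline conversion rate, hoping to detect a 20% relative lift
n = sample_size_per_variant(0.03, 0.20)
```

Divide the total visitors needed (both variants combined) by the page’s weekly traffic to get your minimum test duration — this is also where your client’s traffic allocation and risk tolerance come back into play.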
Once a statistically significant number of conversions has taken place, you can determine which variation was most successful and with what degree of confidence you can declare a winner. A common target set by A/B testers is a 95% significance level.
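In frequentist terms, that 95% target corresponds to a p-value below 0.05 in a two-proportion z-test. A minimal sketch, again with hypothetical counts:

```python
import math

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates.
    Returns (z, p_value); p_value < 0.05 clears the 95% significance bar."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical data: 120/4,000 conversions on the control vs. 160/4,000 on the variant
z, p = z_test_two_proportions(120, 4000, 160, 4000)
```

Note that your testing platform will run this (or a Bayesian equivalent) for you; the point of the sketch is just to show what “95% significance” is actually measuring.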
The better a marketer is at interpreting data, the better the recommendations they can provide. In other words, look critically at the data in front of you. Remember: the purpose of an A/B experiment is not to prove your hypothesis correct but to improve the user experience and reach the client’s goals.
There are two common pieces of feedback that I hear from clients regarding A/B testing, both of which are founded on myth. Let’s address them:
Myth 1: A/B Testing will negatively affect my SEO efforts by splitting the link juice and keyword rankings between two versions of the same page.
Myth 2: Google will penalize me for keyword cloaking or duplicate content.
The Reality: Today’s search engines have come to expect highly dynamic AJAX-driven sites, meaning that swapping content dynamically is no longer considered cloaking.
Regarding duplicate content, Google only penalizes websites for having the same content as another, external domain. In other words, Google doesn’t mind what you do with your site content, as long as you aren’t stealing it from someone else. In the case of a split URL test, a rel="canonical" tag that points to the original version is a good solution. What is more, search engines are running their own A/B tests daily! It wouldn’t make sense for them to penalize websites for using the same improvement strategy.
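For example, the variant page in a split URL test might carry a tag like this in its head (the URLs here are illustrative, not Wpromote’s actual test URLs):

```html
<!-- On the variant page (e.g. /what-we-do-b), point search engines back to the original -->
<link rel="canonical" href="https://www.wpromote.com/what-we-do" />
```

This tells Google which URL is the “real” one, so the variant is never treated as competing duplicate content.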
Like this post? Check out more A/B testing tips on our blog!