
Beginner’s Guide to A/B Testing

This post breaks down SEO A/B testing from start to finish, from definitions all the way to communicating the results to clients. Let’s take it from the top!

What Is A/B Testing & Why Do It?

SEOs use A/B testing to learn how different variations of a web page would affect user experience if implemented permanently. A/B testing is conducted in an effort to improve a client’s KPI, and it works by presenting users with one of two versions of the same page: version A, the control page, and version B, the tweaked page (adding versions C and D is also common).

The page elements chosen for testing will, of course, differ based on the KPI. However, commonly tested elements include:

  • Calls to action
  • Page content
  • Navigation bars
  • Funnels
  • Visual media

At the end of the experiment, the SEO has data on which page version was most successful in getting users to perform the desired action (e.g., increased button clicks, time on site, or purchases) and on the degree of certainty with which this success can be attributed to the change made. The SEO can then offer the client a stronger recommendation on whether a change should be made to a page, along with a numerical forecast of how said change would influence the KPI.

A Quick Example:

Wpromote ran an A/B test on its What We Do page with three different CTAs. The KPI was button clicks, each of which opened a lead generation form.

Version A: Control Page with “Get Started” CTA.
Version B: Tweaked Page with “Convert” CTA.
Version C: Tweaked Page with “Request Consultation” CTA.

At the end of the experiment, we found that Version B performed best, with a 96.7% probability of outperforming the control CTA. In other words, the data from the experiment suggested that updating the CTA to Version B would result in increased conversions, with 96.7% statistical certainty. See a snippet of our findings, below:

[Findings snapshot: the probability of each variation outperforming the control]
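
Curious how a number like 96.7% is produced? Below is a minimal sketch of one common approach: model each variation’s conversion rate as a Beta distribution and estimate the probability that the variant beats the control. The visitor and conversion counts here are made-up placeholders, not our actual test data.

  // Draw from Gamma(k, 1) for an integer shape k by summing k Exponential(1) samples.
  function sampleGammaInt(k) {
    let sum = 0;
    for (let i = 0; i < k; i++) {
      sum += -Math.log(Math.random());
    }
    return sum;
  }

  // Draw from Beta(a, b) as the ratio of two Gamma samples.
  function sampleBeta(a, b) {
    const x = sampleGammaInt(a);
    const y = sampleGammaInt(b);
    return x / (x + y);
  }

  // Hypothetical results for each variation.
  const control = { visitors: 1000, conversions: 40 }; // version A
  const variant = { visitors: 1000, conversions: 58 }; // version B

  // With a uniform prior, the posterior for a conversion rate is
  // Beta(conversions + 1, non-conversions + 1). Estimate P(B beats A) by Monte Carlo.
  let wins = 0;
  const draws = 20000;
  for (let i = 0; i < draws; i++) {
    const pA = sampleBeta(control.conversions + 1, control.visitors - control.conversions + 1);
    const pB = sampleBeta(variant.conversions + 1, variant.visitors - variant.conversions + 1);
    if (pB > pA) wins++;
  }
  // With these placeholder counts, this prints roughly 96-97%.
  console.log('Probability that B outperforms A: ' + ((100 * wins) / draws).toFixed(1) + '%');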

So why aren’t more SEOs running these tests, if they are so great? The short answer is that it’s hard, even with testing software. And it all begins with a problem-centric (!), testable (!!) hypothesis.

What Is A Problem-Centric, Testable Hypothesis?

A great hypothesis should begin with a problem, not a solution. Examine the pain points of your client’s website (such as ignored buttons, pages with lower-than-expected page views, or abandoned shopping carts) and then brainstorm a solution. Next, define your goal (which should align with the KPI), tactics, and prediction in a formal hypothesis. Stumped? Try this template.

Because of [PROBLEM], we anticipate that [CHANGE] will cause [IMPACT]. We will measure this using [DATA METRIC(S)], and expect to see results in [X WEEKS].
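
For example, a hypothetical hypothesis for the CTA test above might read: “Because of low click-through on the ‘Get Started’ button, we anticipate that changing the CTA copy will cause more users to open the lead generation form. We will measure this using button clicks and form submissions, and expect to see results in six weeks.”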

Got your hypothesis ready? Great. Now let’s finish up prep.

Pre-Calculate Your Test Duration, Sample Size, & Traffic Allocation.

A common mistake made by new A/B testers is treating “statistically significant” data as a “significant result.” Avoid false positives or negatives by measuring out your A/B testing ingredients carefully before getting started, and by allowing the test enough time to fully bake. Doing this ensures that the data you serve to your clients at the end is savory and representative. More on these ingredients:

Test Duration:

Make sure your A/B test has enough time to spit out truly representative data. It can be tempting to peek in on your test (the results are visible in real time) after a day or two and try to draw conclusions. However, the only thing the data can provide at that point is a false negative or false positive. One day, two days, or even one week is typically not long enough to glean meaningful insight.

Sample Size:

Define your sample size of users before you launch the A/B test. If an experimenter ends a test before acquiring a representative sample, the data, and therefore the recommendations, become less useful. If the test runs too long, you’re wasting time on data that could have been acted on earlier. An average of at least 10,000 visitors per month is a good rule of thumb for making sure your findings are significant (although websites with less monthly traffic can still benefit from A/B testing if the test runs for a longer duration).
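
As a rough illustration, here is a back-of-the-envelope sample size calculation for a conversion rate test using the standard two-proportion formula. The baseline rate, the lift you hope to detect, and the 95% confidence / 80% power settings are all assumptions you would adjust for your own test; a calculator tool will do this for you.

  // Approximate visitors needed per variation to detect a lift in conversion rate,
  // at 95% confidence (two-sided) and 80% power.
  function sampleSizePerVariation(baselineRate, expectedRate) {
    const zAlpha = 1.96; // two-sided 95% confidence
    const zBeta = 0.84;  // 80% power
    const variance = baselineRate * (1 - baselineRate) + expectedRate * (1 - expectedRate);
    const effect = expectedRate - baselineRate;
    return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (effect ** 2));
  }

  // Example: baseline conversion rate of 4%, hoping to detect a lift to 5%.
  const perVariation = sampleSizePerVariation(0.04, 0.05);
  console.log(perVariation + ' visitors per variation, ' + 2 * perVariation + ' in total');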

Allocation:

Decide how much traffic you will allocate to version A versus version B. (For example, you may allocate 50% of traffic to version A and 50% of traffic to version B.)

To decide how much traffic to allocate to the test page (version B), balance your client’s risk tolerance against how much traffic the page receives and how much time you have to run the test.
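
Putting sample size and allocation together, here is a quick sketch of how long a test might need to run. It assumes the per-variation sample size from the snippet above, a hypothetical level of daily traffic, and a 50/50 split; swap in your own numbers.

  // Rough duration estimate: days until each variation has seen enough visitors,
  // given daily traffic to the page and the share of traffic sent to version B.
  function estimatedTestDays(visitorsPerVariation, dailyVisitors, shareToVariantB) {
    const dailyA = dailyVisitors * (1 - shareToVariantB);
    const dailyB = dailyVisitors * shareToVariantB;
    // The variation that fills up more slowly determines the overall duration.
    return Math.ceil(Math.max(visitorsPerVariation / dailyA, visitorsPerVariation / dailyB));
  }

  // Example: ~6,700 visitors needed per variation, 450 visitors per day, 50/50 split.
  console.log(estimatedTestDays(6700, 450, 0.5) + ' days'); // roughly 30 days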

Avoid making any changes to a page variant mid-experiment, as this confuses the data and may render your test results inconclusive. If a change is necessary mid-test, scrap the first experiment and replace it with a new one.

Need help calculating your test duration and sample size? Check out this cool tool.

Now, Let’s Run The Experiment

The first step is to choose the platform that you’ll use to run the experiment. Both Google Experiments and Optimizely are great options; each offers a robust set of features and is easy to use.

Watch walk-through videos of the Google Experiments and Optimizely platforms here.

Google Experiments, Optimizely, and most other A/B testing software will provide you with a piece of JavaScript code that needs to be copied and pasted into the head tag of the page that represents a successful test. This page differs depending on the goal of the experiment, but it is typically a variation of a “thank you” page only accessible to users who have performed the desired action (e.g., submit, purchase, and white paper confirmation pages). As soon as the conversion event occurs on your website, the software records which page variation (i.e., version A or version B) the visitor saw.
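
The exact snippet varies by platform, so the markup below is purely a placeholder to show where this kind of code lives; it is not the actual code that Google Experiments or Optimizely provides, and the script URL and function name are invented for illustration.

  <!-- In the head of the "thank you" page that confirms the desired action -->
  <head>
    <!-- Placeholder only: your testing platform supplies its own snippet -->
    <script src="https://testing-platform.example.com/experiment.js"></script>
    <script>
      // Hypothetical call: report that a conversion happened so the platform can
      // credit it to whichever page variation this visitor was shown.
      recordConversion('cta-test');
    </script>
  </head>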

After a statistically significant number of conversions has taken place, you can determine which variation was most successful and with what degree of confidence you can declare a winning variation. A common target set by A/B testers is to reach a 95% significance level.
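
For a frequentist read on that 95% target, a minimal two-proportion z-test sketch looks like this (again with made-up counts; in practice your testing platform runs this math for you):

  // Pooled two-proportion z-test: is the difference in conversion rates
  // statistically significant at the 95% level (|z| >= 1.96)?
  function isSignificantAt95(conversionsA, visitorsA, conversionsB, visitorsB) {
    const rateA = conversionsA / visitorsA;
    const rateB = conversionsB / visitorsB;
    const pooled = (conversionsA + conversionsB) / (visitorsA + visitorsB);
    const standardError = Math.sqrt(pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB));
    const z = (rateB - rateA) / standardError;
    return Math.abs(z) >= 1.96;
  }

  console.log(isSignificantAt95(40, 1000, 62, 1000)); // true with these example counts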

Communicating The Results

The better a marketer is at interpreting data, the better recommendations he or she can provide. In other words, look critically at the data in front of you. Remember: the purpose of an A/B experiment is not to prove your hypothesis correct, but rather to improve the user experience and reach the client’s goals.

Last Thought: Addressing A Common Myth About SEO A/B Testing

There are two common concerns that I hear from clients regarding A/B testing, both of which are founded on myths. Let’s address them:

Myth 1: A/B Testing will negatively affect my SEO efforts by splitting the link juice and keyword rankings between two versions of the same page.

The Reality: While Google Experiments does use different URLs for page versions, setting a rel=”canonical” tag that points to the original version negates this concern. With Optimizely, content on the page is swapped using JavaScript, so while users see different variations of a page, Google’s bots still see only the original page.
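
For reference, that canonical tag is a single line in the head of the duplicate (version B) URL, pointing back to the original page; the URL below is just a placeholder.

  <link rel="canonical" href="https://www.example.com/original-page/" />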

Myth 2: Google will penalize me for keyword cloaking or duplicate content.

The Reality: Today’s search engines have come to expect highly dynamic AJAX-driven sites, meaning that swapping content dynamically is no longer considered cloaking.

Regarding duplicate content, Google only penalizes websites for having the same content as another, external domain. In other words, Google doesn’t mind what you do with your own site content, as long as you aren’t stealing it from someone else. In the case of a split URL test, a rel=”canonical” tag that points to the original version is a good solution. What’s more, search engines run their own A/B tests daily! It wouldn’t make sense for them to penalize websites for using the same improvement strategy.

Like this post? Check out more A/B testing tips on our blog!

Good luck!

Written by: Elizabeth Lefelstein
