Defining problems is a key step in discovering value for online businesses, and it is sadly easier said than done. The point most businesses have a hard time grasping is that your problems are not your competition’s problems. Every business approaches its objectives in its own way, and while many e-commerce sites share the same goal – send visitors to clean, easy-to-understand landing pages and have them convert – they don’t always get the results they expected. This happens because websites are highly contextual and your problems are unique to your business: you’ll never have general issues, only specific ones.
There is no magic button that makes your page perform better; it’s a labor of love. (Read: a process.)
Well, let’s just increase our paid search spend – won’t that increase our conversions by default? (More visitors = more conversions.)
Wrong, you’re simply throwing money at problems you don’t understand.
Well, then let’s just redesign the page and test it against the old one – that’s always been shown to increase our numbers!
You’re missing the point, because you haven’t applied what you’ve learned to fixing the problem!
Let’s get down to business and start with your highest-traffic pages. We choose these pages first because, with an average of around 25,000–40,000 monthly unique visitors, you can finish these tests more quickly; lower-traffic pages must be tested over a longer period of time. Testing the most-viewed pages first delivers positive results sooner, and ultimately speeds up your optimization process.
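To make the traffic math concrete, here is a minimal sketch of the standard two-proportion sample-size formula that determines how long a test must run. The 3% baseline conversion rate and 20% relative lift are illustrative assumptions, not figures from this article:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(base_rate, rel_lift, alpha=0.05, power=0.80):
    """Visitors needed in EACH variant to detect a relative lift in
    conversion rate, via the standard two-proportion formula."""
    p1 = base_rate
    p2 = base_rate * (1 + rel_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Detecting a 20% relative lift on a 3% baseline takes roughly
# 14,000 visitors per variant -- about a month for a page with
# ~30,000 monthly uniques split across two versions.
n = sample_size_per_variant(0.03, 0.20)
```

A page with only a few thousand monthly visitors would need several months to collect the same sample, which is why the high-traffic pages come first.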
Next, evaluate your existing information: what do we already know is or isn’t working on the specific page in question? We can see where users abandon forms, where they stop scrolling, and where to place value propositions at key points along the user’s journey. Narrow down the most relevant keywords bringing these users to your page, and focus on user intent versus what your page is trying to accomplish.
Once we’ve completed the research portion of this process, we can collaborate as a team on an informed hypothesis to test against. Using this data-driven process ensures you’re testing the right things based on your conversion research.
A hypothesis sounds like an educated guess, how is this relevant?
A hypothesis defines why you think a problem is occurring. If your problem is high abandonment among mobile visitors, your hypothesis for why that’s occurring might be: “People question our validity after seeing a visually unappealing site.” Now we need to come up with variations for our split test – variations that will give us meaningful results.
It’s also important to state your hypothesis’s goal: don’t just create tests that pit problems against one another. A hypothesis must be measurable, must teach you something valuable about your customers, and must be tested against the control. If you’re not tracking the right indicators, you could very well see a drop in revenue while testing. This is why key metrics from historical data should guide the hypothesis through each specific testing phase.
“Never stop testing, and your advertising will never stop improving.” – David Ogilvy
Learning from the results and starting over is one of the most overlooked steps in the A/B process. If the goal is to filter or distill a page down to its highest converting state, do you think that is going to happen the first time around? It can take several tests to whittle down a bloated and confusing page to one that’s sleek and gives the user everything they need and nothing they don’t.
Google hates duplicate content – can we be penalized for A/B testing?
Google actually encourages A/B testing, as it is a method for providing searchers with an accurate reflection of what they are looking for. This, of course, comes with some precautions:
Use a 302 redirect instead of a 301 – a 302 is temporary, so Google keeps the original URL in its index.
Use rel=”canonical” – You just need search engines to understand that your variations are, quite simply, variations of an original URL. Using noindex may create problems later.
Don’t drag out an experiment past its prime – once you have a clear winner and are sure of the test’s statistical significance, implement the changes and make the winning version the permanent page.
Make sure you are getting actionable data from Google Analytics – to make use of data, you need to have goals first. Define your target metrics.
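Put together, the redirect and canonical rules above might look like the following. The URLs and the nginx syntax are illustrative assumptions, and in practice most testing tools handle the traffic split for you:

```nginx
# Temporary (302) redirect from the original URL to a test
# variation, so Google keeps /landing in its index.
location = /landing {
    return 302 /landing-b;
}
```

Each variation page then points back to the original in its head:

```html
<!-- On /landing-b: tell search engines this page is simply a
     variation of the original URL, not duplicate content -->
<link rel="canonical" href="https://www.example.com/landing" />
```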
- Surveys – Ask your users directly where and why things fail on your pages.
- Segment – Slicing your data by audience lets you quickly pick out the most actionable insights.
- Prioritize – High traffic = high potential, test these first and see results quicker.
- Hypothesize – You need to define your problems before you can act on an idea.
- Statistical Significance – Don’t stop until you’ve won; in other words, hit at least 95% statistical significance.
- Test, Rinse, Repeat – No one has a perfect page, and one time through this process will not yield the best results. With your newfound knowledge, test it again!
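As a sketch of what “hit at least 95% significance” means in practice, here is a minimal two-proportion z-test; the visitor and conversion counts are made-up illustrations:

```python
import math

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion
    rates; returns (z, p_value). Significant at 95% if p < 0.05."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Control: 420/14,000 (3.0%); variation: 500/14,000 (~3.6%)
z, p = ab_significance(420, 14_000, 500, 14_000)
# p < 0.05 here, so this lift clears the 95% threshold
```

If p comes back above 0.05, the honest move is to keep the test running or call it inconclusive, not to ship the “winner.”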
In the end, A/B testing is a wonderful approach if used correctly, but without proper research it can produce false positives. Research must be the precursor to testing; otherwise you’re simply deciding on an approach arbitrarily. Improving the outcome of your tests takes time and extensive feedback from your User Experience and Digital Strategy teams – both of which we happen to have.