When discussing conversion rate optimisation (CRO), it is nearly impossible not to include A/B testing. With the help of a myriad of analytics tools, businesses can track how their target audience responds to certain changes and the effect those changes have on key performance metrics. CRO specialists and digital marketers with an understanding of A/B testing can gather both qualitative and quantitative user insights and make data-informed decisions that produce positive results for their business. If you’re here to learn the ins and outs of A/B testing and how it helps conversion rates, read on, as we start from the beginning.
Here’s what we’re going to cover:
- What is A/B testing?
- How Does A/B Testing Work?
- Why should you utilise A/B testing?
- How to conduct A/B testing?
- How long should A/B testing run?
- What Can You A/B Test to Improve Conversion Rates?
What is A/B testing?
A/B testing, also known as split testing, is a method of comparing two versions of a variable, such as a landing page, call-to-action button, ad or email, to determine which version delivers the best result for a chosen outcome, such as leads or click-through rate (CTR). In A/B testing, ‘A’ refers to the control or original variable and ‘B’ refers to the variation or new version.
Image from Optimizely
The variants are shown to differing users randomly and the performance is then analysed based on key performance metrics. A/B testing is a fundamental component of conversion rate optimisation. Split testing provides data and clarity on potential changes to a variable; it collects data that signals the impact of any change/s.
How Does A/B Testing Work?
A/B testing can be as simple or as complex as required. For example, you may want to test a single headline, call-to-action button, or conduct a complete redesign. Generally, half of the users will be shown the original version, and the other half will be shown the modified version. The version that delivers the most favourable results will be your guide to optimisation.
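In practice, your testing tool handles this split automatically. As a minimal sketch of the idea, a deterministic hash of a user ID (the `user_id` values and experiment name here are purely illustrative) can divide traffic roughly 50/50 while guaranteeing that a returning visitor always sees the same version:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "homepage-headline") -> str:
    """Assign a user to 'A' (control) or 'B' (variation).

    Hashing the user ID together with the experiment name keeps the split
    roughly 50/50 and ensures the same visitor always gets the same version.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"
```

Keying the hash on the experiment name means the same visitor can land in different groups across different experiments, which keeps tests independent of one another.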
As users are delivered either the control or challenger, their engagement with each experience is measured. Analysing these results will enable you to determine whether changing the experience had a positive, negative or neutral effect on user behaviour, conversions and/or clicks.
As you optimise your webpages, you might discover several variables that can and should be A/B tested. To evaluate how effective a change is, it is recommended that you isolate one independent variable and measure its performance. E.g. change the colour of a CTA button first, then move on to changing the form. Don’t do both at once.
Split testing several different variables at once may lead to unclear information about which variable is responsible for the changes in performance. Therefore, focusing on one change at a time will help pinpoint which edits influenced visitor behaviour, and which ones did not. Over time, you can combine the effect of multiple positive changes and significantly boost your conversion metrics. To be clear, you can test more than one variable for a single webpage, however testing them one at a time will ensure you gain the most accurate and reliable results.
Why should you utilise A/B testing?
A/B testing helps to identify problems that may be affecting your core conversion metrics, such as leaks in the conversion funnel or drop-offs at payment. Not only does split testing help to answer quick questions, it continually improves user experience. Here are some ways that A/B testing can help your conversion rate:
- Achieve better ROI from existing traffic
It comes as no surprise that the cost of acquiring quality traffic on your website can be significant. To get the most out of your existing traffic, A/B testing allows you to increase conversions without increasing your advertising budget to acquire new website visitors. A higher ROI is likely to result from existing traffic when you A/B test as you improve the user experience and optimise your content for your users.
- Improve Bounce Rates
We have previously explored the importance of bounce rate in analysing the performance of your website. A visitor who bounces from your website will not convert, and several causes may result in this action, such as low-quality content, unclear calls-to-action and confusing navigation. There is no optimal bounce rate for any one site. Although businesses and industries can set expectations based on averages, your website is unique to your business, and your bounce rate is a starting point that can be optimised. If it is currently 40%, and that’s great for your industry average, that doesn’t mean it couldn’t still be improved with CRO tweaks. A/B testing is the best way to determine which variation of your content has the most positive impact on your users.
- Make low risk modifications
Testing and then rolling out minor, incremental changes to your webpage can reduce the risk of jeopardising your current conversion rate and save you a lot of money. A/B testing optimises resources for maximum output with minimal modifications. For example, you may be looking to introduce a new feature to your website, but it is unclear whether this change will positively affect your visitors. By conducting a split test of the new feature, you can understand the impact of the change before it is rolled out across your website. Creating certainty around the outcome of the change will pay off in the long run.
How to conduct A/B testing?
1. Determine Which Conversion to Improve
Analytical insights will help with optimising. Elements of your webpage, app, or ad that have low conversion rates or high bounce rates are often a good place to start. And if you’re starting with poor KPIs, you have nowhere to go but up!
2. Identify What Results You Want
Optimal KPIs are typically data-driven, like higher click-through-rates, a higher conversion rate on form submission, or a longer time spent on page. You need to decide what KPI you want to measure before you begin split testing.
Although you may measure several metrics during the test, we recommend choosing a primary metric to analyse. This acts as your dependent variable, which will change based on how you manipulate the independent variable. Your dependent variable will help you to determine whether the variation is more successful than the original.
3. Hypothesise the change
Once you have identified your primary goal, you can generate a clear and official hypothesis. For example:
“If I change the CTA button from green to blue, I expect the CTR to increase.”
You may also like to include a statement about why you think this will happen. A strong hypothesis gives you an objective statement to refer to after you complete your test. The data will determine whether your hypothesis was correct and you achieved your desired outcome.
4. Create the variations (The A & B)
Now that you have your dependent and independent variables and have hypothesised the desired outcome, you can set up the original or control version of what will be tested – the ‘A’ version. This could be the call-to-action button you currently use, or the landing page used in your marketing.
Next, build the ‘B’: the modified or altered version of your call-to-action button or landing page. Remember, the change should only focus on one element. So, if you want to optimise your landing page by testing a change to the heading, then only change the heading on your ‘B’ version.
5. Run the experiment
There are various A/B testing tools you can use to run split testing experiments; Google Analytics is commonly used to measure results when performing an A/B test. Typically, visitors are randomly assigned to either the control or the variation in the experiment. For tests where you have more control over the audience, it is best practice to test with two similar audiences for conclusive results. All user experiences and interactions are measured, counted, and compared to determine how each performs.
6. Measure and analyse results
After you complete the A/B test, refer to your primary goal metric in your analysis. Although you may measure and inspect multiple metrics, your goal metric will determine whether you achieved the desired outcome. For example, if you tested two variations of ad creative and chose leads as your primary metric, don’t overanalyse click-through rate. A new ad variation may have generated a higher click-through rate but a lower conversion rate. Based on your primary goal, you should keep your existing ad content, as that generated more conversions despite a lower click-through rate.
7. Act based on your results
From analysing the performance of your primary goal, you can determine a winning variation. The losing variation should be disabled in your A/B testing tool to complete the test. From here you can then implement the winning variation.
If neither variation is statistically better, the results of your test will be inconclusive. This means the variable you tested did not meaningfully affect user behaviour. In that case you can implement the new content without fear of a negative impact on users, stick with the original variation, or plan another test.
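To decide whether one variation is statistically better, many testing tools run a two-proportion z-test under the hood. As a minimal sketch of that calculation (the conversion counts below are made-up numbers for illustration, not real results):

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Compare the conversion rates of A and B with a two-proportion z-test.

    Returns (z, p_value) for a two-sided test; a p-value below 0.05 is a
    common threshold for calling the difference statistically significant.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)            # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative numbers: 200 conversions from 4,000 visitors on A,
# 260 conversions from 4,000 visitors on B.
z, p = two_proportion_z_test(200, 4000, 260, 4000)
```

With these example counts the p-value comes out well below 0.05, so B would be declared the winner; with a smaller gap or fewer visitors, the same test would come back inconclusive.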
How long should A/B testing run?
It is important to allow your test to run long enough to obtain a substantial sample size, which is required to confidently determine whether there was a statistically significant difference between the two variations tested. Obtaining statistically significant results could take hours, days or weeks, depending on your business and how you ran the A/B test. Ideally, you want the test to run long enough to make sure there is no evidence of results convergence. This occurs when there appears to be an initial significant difference between the two variations, but the margin decreases over time. If this occurs during a split test, you can conclude that the variation did not have a significant impact on your primary goal metric.
A useful guide for how long it may take to get results is how much traffic you generate. If your business does not get a lot of traffic, it will take longer for you to run an A/B test to develop a substantial sample size. Conversely, a business that receives high traffic volumes may generate a functional sample size quickly.
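As a rough guide to how large that sample needs to be, a common rule of thumb for 95% significance and 80% power is n ≈ 16·p(1−p)/δ² visitors per variant, where p is your baseline conversion rate and δ is the smallest absolute lift you care about detecting. A sketch (the 5% baseline and 1% lift are illustrative assumptions):

```python
from math import ceil

def sample_size_per_variant(baseline_rate: float, min_detectable_lift: float) -> int:
    """Approximate visitors needed per variant for 95% significance / 80% power.

    Uses the rule-of-thumb approximation n = 16 * p * (1 - p) / delta**2,
    where p is the baseline conversion rate and delta is the absolute lift.
    """
    p = baseline_rate
    delta = min_detectable_lift
    return ceil(16 * p * (1 - p) / delta ** 2)

# To detect a move from a 5% to a 6% conversion rate (a 1-point absolute lift):
n = sample_size_per_variant(0.05, 0.01)  # 7,600 visitors per variant
```

Notice how quickly the requirement grows as the detectable lift shrinks: halving δ quadruples the traffic you need, which is why low-traffic sites must run tests for much longer.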
What Can You A/B Test to Improve Conversion Rates?
Split testing can help you analyse the performance of all the elements on your website, emails, and marketing campaigns. Determining where to start with A/B testing should always be backed by analytics to identify your conversion rate pain points. Testing the elements that are likely to have the biggest impact on your conversions will help to ensure you are utilising A/B testing effectively. Here are 5 examples of areas for conversion rate optimisation that may apply to you.
1. Headlines
The headline is typically the first thing visitors notice. If it does not grab attention, speak to a user’s intent, or compel a response, you will not get the conversions you want. Experiment with focused headlines that are catchy and to the point, and observe the changes in user behaviour.
2. Landing Pages
The purpose of your landing page is to encourage users to convert. If your landing page is not converting your users, then you’re losing potential customers. A/B testing your landing pages should be carefully thought-out.
A good place to start is with a heat map to determine where visitors are viewing and clicking on your landing pages. A heat map collects data that can help you decide which elements are the most important to test for optimisation as it shows where users are spending their time.
Image from CrazyEgg
3. Call-to-Action Buttons
A call-to-action button can be tested in various ways, including copy, colour, shape, size, and placement. Most conversions take place after people click a call-to-action; therefore, it should stand out immediately and capture attention so that it cannot be missed. Consider testing the placement of your call-to-action by analysing a heat map, or experiment with more compelling content.
4. Content Depth
Depending on your consumers, some may prefer content that provides a basic overview of a topic, while others may want to dive deeper into more long-form information. You can find out what your target consumers prefer by testing content depth. Create two pieces of content, one significantly longer than the other, providing more information on your product or service.
5. Forms
A form is extremely important for lead generation and encouraging conversions, but asking for too much detail can frustrate customers. A/B testing your forms, along with research into which fields your potential customers are most comfortable providing, can give you valuable insight into which fields to require in a form and which to leave out for now.
A/B testing allows you to gain deep insights into establishing what content your target audience wants to see and which marketing tactics resonate with users. After reading this comprehensive introduction to A/B testing, you should be fully equipped to begin planning your next steps for conversion rate optimisation. Dedicate your time to creating the most effective changes that will positively impact your business and watch it pay off in the long run.
Some split testing tools we’d recommend are:
What is A/B testing?
A/B testing, also known as split testing, is a method of comparing two versions of a variable to determine which version delivers the best result for a chosen outcome, such as leads or click-through rate (CTR).
How Does A/B Testing Work?
Generally, half of the users will be shown the original version, and the other half will be shown the modified version. The version that delivers the most favourable results will be your guide to optimisation.
How long should A/B testing run?
Obtaining statistically significant results could happen in a matter of hours, days or weeks, depending on your business and how you ran the A/B test.