A/B Testing: 9 Best Practices
A/B testing allows you to find the best combination of elements on your webpage, email, or other promotional materials to drive conversions and generate leads. You can test countless variables including your title or subject line, layouts, calls to action, copy, images, forms, fonts, and colours to determine which ones best appeal to your potential customers. By doing so, you improve the effectiveness of your marketing efforts and increase your marketing ROI. In order to gain actionable insights, however, you need to run your A/B tests properly. Here are nine best practices to ensure your A/B tests deliver results you can use:
Start with a Hypothesis
Begin your test with a hypothesis that identifies the test variable and the expected result. This way, when you create your test versions, it’s clear exactly what you are changing and you avoid additional variables creeping into the test.
It’s also a good idea to get feedback from multiple stakeholders when you formulate your hypothesis so that you are beginning with the best educated guess, rather than testing one person’s hunch.
Define Your Metrics
Determine what your success criteria look like. What are you trying to achieve with the changes, and how will you measure results?
For example, if you’re changing the shopping cart sidebar on your ecommerce site, your metric might be increased sales or decreased shopping cart abandonment. If you’re testing email subject lines, your metric might be the percentage of email opens.
By determining what success looks like in advance, you’ll be able to tell if you achieved it when you get your results.
Test Only One Variable at a Time
If you change more than one thing, how do you know which one gave you results? The only way to tell if your change was effective is to test it in isolation. This isn’t to say that you can only test one new version of an element, but rather that you should only test one element at a time. So, for example, you could test three different versions of your click-to-call button, but don’t change anything else on the page.
It’s also important that you test a control version of your original click-to-call button along with the new ones to account for any unforeseen variables outside your influence.
Test All Your Versions at Once
Test all the versions of your variable, including your control version, at the same time. This avoids the influence of any extraneous factors such as day of the week or time of year, other promotions you may be running, or the actions of your competitors or others in your marketplace.
Ensure Your Sample and Results Are Statistically Significant
You need to run your test on a large enough sample that the results can be extrapolated to the larger population. Similarly, you’ll need to confirm that your results are statistically significant to know whether your changes actually had any impact on the outcome.
Determine these amounts before you begin your test so you can make sure your sample groups are big enough and so that you don’t come to premature conclusions based on early results.
Avinash Kaushik has a great post on determining statistical significance on his blog.
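To make these ideas concrete, here is a minimal sketch in Python using the standard two-proportion z-test. The function names and the sample numbers (5,000 visitors and 200 vs. 250 conversions per version) are hypothetical illustrations, not figures from any real test:

```python
import math

def required_sample_size(base_rate, min_lift, ):
    """Approximate visitors needed per version to detect a relative
    lift of min_lift over base_rate (two-sided 95% confidence,
    80% power)."""
    p1 = base_rate
    p2 = base_rate * (1 + min_lift)
    z_alpha, z_beta = 1.96, 0.84            # standard normal quantiles
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * var / (p2 - p1) ** 2)

def significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is version B's conversion rate
    significantly different from A's? Returns (z, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Before the test: how many visitors per version to detect a 25%
# relative lift on a 4% baseline conversion rate?
print(required_sample_size(0.04, 0.25))

# After the test: 200/5000 conversions for A, 250/5000 for B
z, p = significance(200, 5000, 250, 5000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p below 0.05 means significant at 95%
```

Running the sample-size check first keeps you from stopping early: if the calculator says you need several thousand visitors per version, a "winner" after a few hundred is noise, not a result.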
Create Randomized Test Groups
Don’t just divide your mailing list down the middle and send one version to each group. Take a minute to consider whether the way that your list is organized may impact the validity of your results. For example, if your list is organized by add date from oldest to newest, the people at the top of your list are long-time subscribers who are familiar with your company, while those at the bottom are brand new. This could have a significant impact on how many people from each group open your email.
You can separate your contacts randomly for emails using Salesforce or Excel spreadsheets. For website testing, Google Analytics’ Content Experiments allows you to set the testing parameters.
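If you export your list, randomizing it yourself takes only a few lines. A minimal sketch in Python (the contact addresses are hypothetical placeholders; any export from your email tool works the same way):

```python
import random

def split_test_groups(contacts, n_groups=2, seed=42):
    """Shuffle the contact list, then deal it into equal-sized groups
    so each email version reaches a random cross-section of the list,
    not just the oldest or newest subscribers."""
    shuffled = contacts[:]                 # copy: don't mutate the original
    random.Random(seed).shuffle(shuffled)  # fixed seed = reproducible split
    return [shuffled[i::n_groups] for i in range(n_groups)]

# Hypothetical list ordered by add date, oldest subscribers first
contacts = [f"subscriber_{i}@example.com" for i in range(1000)]
group_a, group_b = split_test_groups(contacts)
print(len(group_a), len(group_b))  # 500 500
```

Because the shuffle happens before the split, long-time subscribers and brand-new ones end up evenly spread across both groups.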
Track and Document Your Results
This may seem obvious, but it’s important to track the results of each test and then document them for future reference. This way, you have a record of what elements on your site or campaign have been tested and why you chose the final iterations you did. You can also reference your results when developing future pages or campaigns so that you don’t have to run similar tests over again.
Keep Testing
Your marketing efforts can always be improved. Once you’ve found the best wording for your subject line, play around with the placement of your call to action. When your layout is perfect, try different images. There are lots of things that impact the success of your website, emails, and other promotions, so keep working to find what appeals best to your customers.
It’s also important to test again if your first test didn’t yield statistically significant results. Just because the versions you tried didn’t create a difference doesn’t mean that a different change won’t.
But Don’t Just Test for Testing’s Sake
That said, it’s important to only test things that are relevant to your marketing efforts. Don’t just create tests for the sake of running tests, and don’t test variations that you would never be willing to push live if they turned out to be the top performer. The purpose of running A/B tests is to find the best version of your site or campaign and put it into action, so any tests that don’t serve this purpose are a waste of your effort and time.
A/B testing helps you maximize the efficiency of your website and increase your conversions and lead generation. It’s important to run your tests according to these best practices to generate results you can put to good use. Set out your parameters in advance, run tests scientifically, and track your results to ensure you get actionable value from every test. And when you’ve finished that, find something else worth testing.
Have questions about A/B testing or other design issues? Contact us or leave them in the comments below.