A/B testing is a common technique used to judge the effectiveness of new options against a control sample in order to improve success as measured by a particular metric. While the technique has traditionally been used by larger firms, more small and medium-sized companies are seeing its value and using it to improve their marketing materials.

What is A/B testing?

A/B testing is a form of market research, allowing companies to compare a new piece of marketing collateral (contact forms, flyers, emails) against a control piece. The company then offers both options to consumers and judges the performance of both on a chosen metric. In marketing this metric is often the response rate. The results of A/B testing can be used by organisations to improve their marketing materials.

Trenton Moss, MD and Founder of Webcredible, says A/B testing is a "good way to get high volume and real user statistics in a relatively cheap and quick way, to be used when making the final tweaks on a website. It is most commonly used by many companies at the end of the design process, when making specific decisions such as the colour of a certain button on a website."

Randomisation

Randomisation is the key to A/B testing. It minimises the chance that external conditions will skew the results, so the two samples (A and B) can be compared purely on their own merits. Systems must be put in place so that both samples are served randomly and in equal measure to the intended audience. For some materials, such as website contact forms, this is straightforward, as computer programs can be written that alternate the options automatically. For other materials, such as flyers, steps must be taken to ensure both versions are handed out at the same time, in the same place and on the same day.
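To illustrate the website case, here is a minimal Python sketch of how incoming visitors might be split between the two variants. The hashing approach is a common convention assumed here (not something prescribed above): it keeps each visitor's assignment stable across repeat visits while still dividing traffic evenly.

```python
import hashlib

def assign_variant(visitor_id: str) -> str:
    """Assign a visitor to variant 'A' or 'B' with equal probability.

    Hashing the visitor ID (instead of picking at random on every
    page view) keeps the assignment stable for repeat visits, so a
    returning visitor always sees the same version of the form.
    """
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"
```

Because the split is deterministic per visitor, the same function can be called from any page without extra storage, yet across many visitors roughly half will land in each group.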

What things should I test?

Anything where ‘success’ can be quantified is a candidate for A/B testing. All customer-facing materials can generally be used in an A/B testing environment. A/B testing as a process leads to constant improvement as you weed out the less-successful aspects of your materials and introduce more successful parts.

Here are some possible candidates for A/B testing:

  • Quote forms on your website
  • Email marketing materials
  • Direct marketing materials e.g. flyers and leaflets
  • Website checkout process

How long should I run the test for?

This will depend on a variety of factors, one of the most important being the budget available to you. Budget matters more for direct marketing A/B tests than for, say, website form tests. Aside from financial considerations, running your test for too long can cost you revenue if either A or B performs significantly worse, while running it for too short a time means you may never gather statistically significant data. Talk to a professional marketing company, or use tool-assisted methods, to help find the optimum duration for your test.
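One tool-assisted approach is a rough sample-size estimate: given a baseline response rate and the smallest improvement worth detecting, a standard normal-approximation formula (a textbook method, assumed here rather than named above) tells you how many responses each variant needs, and therefore roughly how long the test must run.

```python
import math

def required_sample_size(baseline_rate: float, min_detectable_lift: float) -> int:
    """Rough per-variant sample size for comparing two response rates.

    Fixed at the conventional 5% two-sided significance level
    (z = 1.96) and 80% power (z = 0.84).
    baseline_rate: control response rate, e.g. 0.05 for 5%
    min_detectable_lift: absolute improvement worth detecting, e.g. 0.01
    """
    z_alpha, z_beta = 1.96, 0.84
    p1 = baseline_rate
    p2 = baseline_rate + min_detectable_lift
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)
```

Dividing the result by your average daily visitor count gives a rough minimum test duration in days; note that smaller lifts require substantially larger samples.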

Ensuring statistical significance

Although you can take steps to make the results as random as possible, it's impossible to eliminate every outside factor. Because of this you need to run the test for long enough that any statistical anomalies occurring at random are accounted for and contextualised by the remainder of the experiment.

For example:

  • You serve two versions of a website contact form (A and B)
  • The next day’s results suggest version B is more successful
  • You check your website analytics package and find the majority of visitors on that day came from a particular country, due to a high-profile television advert in that country on that day for a competitor
  • You realise the colours on version B closely match the country’s flag
  • You continue running the test and find that over the next week, version A becomes far more popular among users of all countries

Eliminating variables (nationality of visitors, time of day, etc.) is important for statistical significance throughout, but the single best safeguard is to make sure you don't end the test too quickly.
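One way to avoid ending the test too quickly is to check significance formally. The sketch below uses a standard two-proportion z-test (a common choice of test, assumed here; the text above does not name one) to compare the response rates of the two variants.

```python
import math

def ab_significance(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z-test returning the two-sided p-value.

    conv_a / conv_b: number of responses for each variant
    n_a / n_b: number of visitors shown each variant
    A p-value below ~0.05 is conventionally read as a statistically
    significant difference between the two response rates.
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # pooled response rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF (via erf)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
```

In the contact-form example above, a small early difference would produce a large p-value, signalling that the test should keep running rather than crowning a winner after one anomalous day.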

A/B testing tips

  • Both options must run simultaneously so that outside factors can’t influence the results, such as public holidays or a surge in media coverage
  • You must show the variants to unbiased visitors only – if, for example, your control is an existing website design, your test results for returning visitors may be skewed because they already hold opinions on that design
  • Conduct multiple A/B tests on the same materials. The more you conduct, the more you can trust the significance of the statistics produced. This also helps offset the chance of random occurrences influencing your results.

Limitations

According to Trenton Moss, MD and Founder of Webcredible, A/B testing is useful but "it will not address all usability issues and so will not provide a full user experience strategy when redesigning a website. My advice to small businesses is to undertake upfront user research before the design process occurs, as well as more qualitative user testing during the design process. Conducting this type of user research upfront will reduce re-design costs in the long run, which is an important factor for small businesses."