Shopify Ecosystem

Tips and Tricks for Running a Successful Split Test


With any website, optimization is key to improving the experience each page gives the user. Optimization reveals which components of each page lead to higher conversion rates, and split tests are an excellent method of learning which changes will better the user experience while simultaneously increasing conversions.

Optimization can be done in a number of ways; however, there are some key elements that can be implemented to get the most out of any split test, which in turn will lead to a more successful optimization campaign.

Prior to beginning an optimization campaign, there are a number of applications that will be necessary in order to start in the right direction. These include:

  • Google Analytics (or another analytics application)
  • Testing software (Optimizely, VWO, etc.)
  • Qualitative applications (CrazyEgg, UsabilityHub, etc.)
  • Test duration and test significance calculator

Each of these tools plays an integral part in A/B split testing, as they collect and show the necessary data to get an idea of how the website performs, as well as what can be fixed. While some of these tools could be considered optional, it is an absolute must that all optimization campaigns utilize Google Analytics (or another analytics tool) in order to obtain basic performance data about the site.

Analytics tools form the foundation of any successful optimization campaign; however, the other tools listed are necessary to get the most out of split testing. Tools such as Visual Website Optimizer (VWO) allow users to track conversions on their website and test results for statistical significance, making them highly useful to optimization campaigns. In addition, a qualitative tool such as CrazyEgg can be used to track clicks and create a heatmap of the site, which can provide very useful optimization data.

Once the proper tools have been collected, split testing requires that certain research be done before any tests begin. Research is collected and evaluated to create a valuable test that will provide useful information, whatever the outcome may be.

Collecting research for an A/B split test involves a number of steps that, if followed, will result in a hypothesis that can be used as the foundation for the test itself:

  1. Identify pages which are in need of testing.
  2. Incorporate the use of heatmaps and other qualitative data to get a basic understanding of what works and what doesn’t.
  3. Decide which key performance indicators (KPIs) should be tested.
  4. Determine the timeline of the test period.
  5. Create a hypothesis that will be used to create the test itself.

Once this data has been collected, work on the test itself can begin. After variants have been determined and goals have been set, it’s time to begin developing a pre-launch strategy for the optimization campaign. 

It’s important to identify and define the goals of the test in Google Analytics prior to launch. By setting goals in Analytics, it becomes easier to identify campaign metrics, as well as have realistic expectations for the outcome of the test. 

Prior to the launch of an optimization campaign, it is also important to exclude mobile traffic (unless it is a mobile test). While a site or page may be optimized for desktop users, the same is not always true for mobile users, which makes mobile traffic an unnecessary variable in most tests. In addition, all pages must be verified to render properly in every major browser before a split test begins, as broken variants can ruin a test from the start. This includes removing the “flicker” from any pages that have trouble loading properly, as it can be detrimental to the results of the split test.

Finally, all other variants should be verified to work properly: there should be no broken links, and no links leading to the wrong page. In addition, all links should open in a new tab, as this is typically preferred by users and will lead to cleaner test results.
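Part of that link check can be automated. The sketch below, using only Python's standard library, collects every link in a variant's HTML and flags any that would not open in a new tab; the class name and the sample markup are illustrative:

```python
from html.parser import HTMLParser

class LinkAudit(HTMLParser):
    """Collect <a> tags, flagging those that open in the same tab."""
    def __init__(self):
        super().__init__()
        self.links = []     # every href found
        self.same_tab = []  # hrefs missing target="_blank"

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        href = attrs.get("href", "")
        self.links.append(href)
        if attrs.get("target") != "_blank":
            self.same_tab.append(href)

audit = LinkAudit()
audit.feed('<a href="/pricing" target="_blank">Pricing</a>'
           '<a href="/contact">Contact</a>')
print(audit.same_tab)  # links that would open in the same tab
```

A script like this will not catch links that resolve to the wrong page, so a manual click-through of each variant is still worthwhile before launch.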

After the pre-launch data has been collected, it’s time to define the test itself. Tests should follow a pre-determined timeline and should stick to that timeline under all circumstances. In addition, tests should start and end on the same day of the week in order to collect a fair sample of data about the site.

Finally, it’s time to run the test and analyze the results. First, all findings from the test should be recorded on a test report sheet covering every aspect of the test: the name of the test, the hypothesis, all variants, results, analysis, and observations. In addition, all key segments in Google Analytics should be analyzed, and the better-performing variation should be implemented on the site itself.
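The significance side of that analysis can be sketched as a two-proportion z-test, one common approach (not necessarily the one a given testing tool uses) to deciding whether a variation truly performed better; the function name and the sample conversion counts below are illustrative:

```python
from math import sqrt
from statistics import NormalDist

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test.
    Returns (relative lift of B over A, p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return (p_b - p_a) / p_a, p_value

# Example: control 300/10,000 vs. variant 360/10,000 conversions
lift, p = ab_significance(300, 10_000, 360, 10_000)
print(f"lift {lift:.1%}, p-value {p:.3f}")
```

Only when the p-value falls below the chosen threshold (commonly 0.05) should the winning variation be declared and rolled out; otherwise the observed lift may just be noise.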

By following the steps above, anyone can run a successful split test campaign and learn what can and should be changed about their site in order to achieve the highest conversion rate possible. 

