Guide to UX A/B Testing

UX A/B testing, also known as split testing, is a polarizing idea in the UX community. Many sites that discuss A/B testing illustrate it with an image of an apple being compared to an orange, symbolizing what critics see as the inherently contradictory nature of the concept.

Many swear by UX A/B testing, arguing that it works like evolution: multiple variations on a theme are tested in parallel in the same environment, and the most effective, capable variation wins out. Survival of the fittest, or natural selection, is the key to the idea.

Now, this kind of parallel selection among multiple variants isn't new to developers. Most designers have at least heard of the genetic algorithm, which applies the same idea procedurally. But is it really that complicated and convoluted?

Only if you let it be. The trick with split testing is having a good sense of how many variations to run, and how clearly defined and distinct a variation must be to merit including it at all. Ask yourself how many you can run at once while still giving each one enough traffic, time, and attention to be tested properly. A rough sample-size estimate, sketched below, helps answer that question.
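
As a rough illustration, here is a minimal Python sketch of the standard two-proportion sample-size estimate. The baseline and target conversion rates are hypothetical, and the z-values assume 95% confidence and 80% power; swap in your own numbers.

```python
import math

def sample_size_per_variant(p_base, p_target, z_alpha=1.96, z_power=0.84):
    """Approximate visitors needed per variant to reliably detect a lift
    from p_base to p_target (defaults: 95% confidence, 80% power)."""
    effect = abs(p_target - p_base)
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    return math.ceil((z_alpha + z_power) ** 2 * variance / effect ** 2)

# Hypothetical numbers: 5% baseline conversion, hoping to detect a lift to 6%.
print(sample_size_per_variant(0.05, 0.06))  # -> 8146 visitors per variant
```

The takeaway: the smaller the difference you want to detect, and the more variants you run, the more total traffic you need before any winner is trustworthy.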

Along with this, you need to know, going in, what defines success or failure within your system (called the fitness function in genetic algorithms): which goals every variation is measured against, and within what tolerances, for one variation to be considered more or less successful than another.
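
In practice, a fitness function might be a weighted score over the metrics you care about. This is a minimal sketch under assumed metric names and weights (conversion_rate, bounce_rate, and avg_time_on_task are illustrative, not prescribed):

```python
def fitness(variant):
    """Score a variant's measured metrics; higher is fitter.
    Weights are hypothetical and should reflect your actual goals."""
    return (
        0.6 * variant["conversion_rate"]                  # primary goal
        + 0.3 * (1 - variant["bounce_rate"])              # retention proxy
        + 0.1 * (1 / (1 + variant["avg_time_on_task"]))   # faster is better
    )

variant_a = {"conversion_rate": 0.051, "bounce_rate": 0.40, "avg_time_on_task": 32.0}
variant_b = {"conversion_rate": 0.058, "bounce_rate": 0.44, "avg_time_on_task": 29.5}
print(fitness(variant_a), fitness(variant_b))
```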

The next thing to consider is how many cycles you want to run this natural mechanic for. Do you want a single-pass test, simply picking the variation that works best? Or do you want to be a little more advanced: take the few highest-ranking variations, work out which attributes improve their fitness, combine those attributes sensibly, and then generate variations of that combined model to run through the process a few more times, as in the sketch below.
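
In genetic-algorithm terms, that iterated process is selection plus crossover. Here is a toy sketch, assuming each variant is a dict of design attributes; the attribute names and the stand-in measurement are hypothetical, and in a real test each generation's score would come from live traffic, not a function call:

```python
import random

def evolve(variants, measure, generations=3, keep=2):
    """Toy selection/crossover loop over design variants.

    variants: dicts mapping attribute -> value (all share the same keys).
    measure:  returns a fitness score for a variant; a real A/B test
              would measure each generation on live traffic instead.
    """
    for _ in range(generations):
        # Selection: keep the top-scoring variants.
        survivors = sorted(variants, key=measure, reverse=True)[:keep]
        # Crossover: children inherit each attribute from one of two parents.
        children = [
            {k: random.choice((a[k], b[k])) for k in a}
            for a in survivors for b in survivors if a is not b
        ]
        variants = survivors + children
    return max(variants, key=measure)

# Hypothetical attributes and a stand-in "measurement" for demonstration.
seed_variants = [
    {"cta_color": "green", "headline_len": 6, "form_fields": 3},
    {"cta_color": "blue",  "headline_len": 9, "form_fields": 5},
    {"cta_color": "red",   "headline_len": 4, "form_fields": 4},
]
fake_measure = lambda v: -v["form_fields"] - 0.1 * v["headline_len"]
print(evolve(seed_variants, fake_measure))
```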

Doing that can take considerable time, and you run the risk of the design drifting away from its original intent over successive generations, but when it works out it can yield some remarkably well-tuned UX parameters.

Either way, the fear that A/B testing is unfair or inaccurate because the variations differ too much to be compared on equal footing is unfounded. It works perfectly well if it's handled carefully, with a proper significance test backing up any comparison.
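
One standard way to keep comparisons honest is a two-proportion z-test on conversion counts. This sketch uses hypothetical numbers and only the Python standard library:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: did variants A and B convert at different rates?
    Returns the z statistic and the p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: A converted 520/10,000 visitors, B 585/10,000.
z, p = two_proportion_z_test(520, 10_000, 585, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05 suggests a real difference
```

If the p-value is large, the honest conclusion is that the variants are statistically indistinguishable at the traffic you've collected, not that one "won".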

Again, the biggest thing is to plan ahead: how far you want to take it, how long you want the tests to run, and how many variations you're willing to juggle. Those decisions shape how you approach the whole exercise and how sensibly you apply it.

So, don't hesitate to use UX A/B testing; just don't let it get away from you. Choose the strongest candidates to test, rather than throwing every slightly distinct prospect you've come up with into the fray. Used in moderation, and alongside other, more standard testing processes before and after, it will give you a solid grip on how effective your design truly is.


Megan Wilson
Megan Wilson is a user experience specialist and editor of UX Motel. She is also the Quality Assurance and UX Specialist at WalkMe. Megan.w(at)walkme.com