A/B Testing or Usability Testing: Which One Is Better?

First things first: A/B tests and usability tests are not one and the same thing.

A/B tests are carried out to measure users’ preferences, whereas usability tests are carried out to measure users’ behavior.

A/B testing (also known as split testing) is a conversion optimization technique in which we compare two or more versions of a web page against each other in order to determine which version performs better at driving leads, sales or other conversions.

Usability testing is a conversion optimization technique used to evaluate website usability, i.e. how easy it is for your customers to use your website or complete a particular task.

Having said that, A/B tests and usability tests are not so different either. They are not as different as apples and oranges. The ultimate aim of both tests is to improve the website user experience and increase the conversion rate.

The comparison between A/B and usability tests is really a comparison between testing users’ preferences and testing users’ behavior. The objective of this article is not to prove that one testing method is superior to the other (though you are likely to get that impression).

The objective is to think about where your focus should be: testing users’ preferences or testing users’ behavior?

A/B tests are much more difficult to design and execute than usability tests

Good knowledge of statistics is a prerequisite for designing and running an A/B test. Many optimizers lack such knowledge. Consequently, their A/B tests are considerably prone to statistical errors and are destined to fail from the very start.
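
To make that prerequisite concrete, here is a minimal sketch of the kind of calculation a sound A/B test rests on: a two-proportion z-test on conversion counts. The sketch is in Python, and all traffic and conversion figures are hypothetical.

```python
# Minimal sketch: two-proportion z-test for an A/B test.
# All traffic and conversion figures below are hypothetical.
from math import sqrt
from statistics import NormalDist

conv_a, n_a = 120, 5000   # version A: 120 conversions from 5,000 visitors (2.4%)
conv_b, n_b = 150, 5000   # version B: 150 conversions from 5,000 visitors (3.0%)

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)                # pooled rate under H0
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))            # two-sided p-value

print(f"z = {z:.2f}, p-value = {p_value:.3f}")
# Here p-value is roughly 0.064: B "looks" 25% better, yet the result
# is not significant at the 0.05 level. Declaring B the winner anyway,
# or stopping the test the moment p dips below 0.05, are exactly the
# kinds of statistical errors described above.
```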

This is not the case with usability tests.

While usability tests, especially on-site tests, do require a lot more preparation than A/B tests, the preparation involved (hiring test participants, creating tasks, creating scenarios, delivering instructions, etc.) does not require any knowledge of statistics.

For this reason, usability tests are not prone to statistical errors, are easier to design and execute, and are much less likely to fail. The advent of remote usability testing services like usertesting.com has also made usability testing far more hassle-free and affordable.

A/B tests are more likely than usability tests to fail at producing a real lift in conversions

If you have run more than five A/B tests, you know that most of these tests fail to produce any real lift in conversions. They either produce no lift at all, or the lift they produce is imaginary or short-lived.

Many testers are cool with that. But I am not, and you should not be either.

Most A/B tests fail to produce any real lift in conversions for two main reasons:

#1 Not all confounding variables were identified, controlled, measured or eliminated while conducting the test

Confounding variables are variables that the tester failed to identify, control, measure or eliminate while conducting the test and interpreting the test results. The presence of such variables leads to false positive results.

In an ideal world, we would be able to control all of the confounding variables, and A/B testing would be the most powerful testing method ever known to optimizers. But in reality, that is never the case.

If you run a high-traffic website and do multi-channel marketing, it is virtually impossible to control all of these variables. You can’t ask the marketing department to stop marketing, or to wait until your test is over. You can’t stop IT from making website changes until your test is over.

You can’t expect the whole business to ‘stand still’ while you run your A/B tests. It does not work that way.

The best you can do, and only if you are a seasoned tester, is minimize the adverse effects of these variables on your tests. But that is not good enough to produce a real lift in conversions in a timely manner.

#2 In A/B testing, you are basically testing your own assumptions

Is version ‘B’ better than version ‘A’?

You are not testing how good version ‘B’ is across a range of contexts.

You are not testing users’ behavior.

Maybe users would have preferred version ‘C’, version ‘D’ or version ‘Z’, had they got the chance to look at them.

Even if your hypothesis is based on quantitative and qualitative data, at the end of the day it is your hypothesis, your assumption. It is what you think may solve your customers’ problem if tested.

So it is not always the tester who is the problem here. The logic behind A/B testing is innately flawed, which makes it much more difficult to produce any real lift in conversions.

A/B tests do not conform to Agile Analytics methodologies

In Agile Analytics, ‘agile’ means the ability to move quickly and cost-efficiently in response to changes in market conditions.

In order to respond to the ever-changing needs of our customers, capitalize on new marketing opportunities and make timely decisions, we need to move fast.

Therefore, in Agile Analytics, the focus is on rapidly deploying solutions that solve your customers’ problems, either wholly or in part.

My biggest pet peeve with A/B tests is that they are so damn slow to show results (if any). A/B tests usually take a month to cook and show statistically significant results.

But even after waiting for a month and getting a statistically significant result with the right sample size, there is no guarantee that the winning variation will bring any real lift in conversions.
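
To see where that month comes from, here is a back-of-the-envelope duration estimate, sketched in Python using the common rule-of-thumb sample size of roughly 16·p(1−p)/d² visitors per variant (about 80% power at a two-sided α of 0.05). The baseline rate, detectable lift and traffic figures are hypothetical.

```python
# Back-of-the-envelope: how long a two-variant A/B test takes.
# Uses the rule-of-thumb sample size n ~ 16 * p * (1 - p) / d^2
# per variant (~80% power, alpha = 0.05). Figures are hypothetical.
baseline = 0.03            # 3% baseline conversion rate
relative_lift = 0.20       # smallest lift worth detecting: 20%
d = baseline * relative_lift            # absolute difference = 0.006

n_per_variant = 16 * baseline * (1 - baseline) / d ** 2
daily_visitors = 1000      # visitors entering the test per day (total)
days = 2 * n_per_variant / daily_visitors

print(f"~{n_per_variant:,.0f} visitors per variant, ~{days:.0f} days to run")
# ~12,933 visitors per variant -> ~26 days at 1,000 visitors/day.
# Halve the detectable lift and the duration roughly quadruples,
# which is why modest sites routinely wait a month or more for a verdict.
```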

The chance of getting a real lift is about as good as flipping a coin and expecting heads. The worst part is that it takes a month to flip the coin.

Since in an A/B test you are basically testing your own assumptions, you need to make and test a lot of such assumptions before you can get a real lift in conversions. So in the long run, A/B tests can turn out to be more expensive than usability tests. Think of the opportunity cost.

Now, instead of making and testing your own assumptions, focus on measuring users’ behavior and determine what they want, not what you want. Usability testing can be carried out in a day or two. Thus, within one month, you can conduct dozens of usability tests and optimize the user experience much faster.

You really don’t learn anything from A/B test results

A/B test results do not help you move ahead with other design decisions.

Let me give you a few examples.

Say, after conducting an A/B test, you concluded that ‘red button’ performed much better than ‘blue button’ in terms of generating leads.

So does that mean that if you make all the buttons on your website red, the buttons’ CTR will increase across your website? The answer is, “we don’t know”. We need to conduct more A/B tests to find out.

Another example:

Say, after conducting an A/B test, you concluded that a bigger CTA (call to action) outperforms a smaller CTA in generating sales.

So does that mean you will generate even more sales with an even bigger CTA?

Again, the answer is, “we don’t know”. We need to conduct more A/B tests for that.

Thus, conducting one A/B test does not help you move ahead with other design decisions. This happens because you are not testing user behavior but your own assumptions about what works or does not work on your website.

This is not the case with usability testing, where, because you measure users’ behavior, you can use the test results to move ahead with other design decisions.

Usability testing is open-ended and customer-centric

Your users’ feedback will almost always highlight the most important issues that need fixing first.

Without such feedback, you will often end up spending a significant amount of time and resources finding and fixing a problem that does not really matter to your customers (like changing the color of a button to red).

And if something does not matter to your customers, then it won’t help you improve the business bottom line. It is as simple as that.

When you carry out a usability test, you don’t give your users limited options (choose between ‘A’ and ‘B’), and you don’t test your own assumptions about what really matters to your customers.

You don’t carry out a test under the assumption that a single desired user action (like generating a lead or a sale) is the only thing that counts in determining whether one web page version is better than the other. You take into account the overall user journey and experience.

There are a lot of factors other than page design that affect conversions: brand visibility, perception, credibility, market value, etc. All of these factors are overlooked in A/B tests.

Testing users’ preferences vs testing users’ behavior

Allow me to remind you one more time: the objective of this article is not to prove that one testing method is superior to the other. The objective is to think about where your focus should be: testing users’ preferences or testing users’ behavior.

If you desire big and quick gains, focus on testing users’ behavior. For fine-tuning, focus on testing users’ preferences.

Consequently, if your website has never been optimized for conversions or you are just starting out, you will benefit the most from testing users’ behavior (usability testing).

Once you have reached the point of diminishing returns on your usability testing efforts and you want to squeeze every last drop of conversion out of your landing pages, go for A/B testing.
