The little-known details about hypotheses in conversion optimization

Everyone is talking about testing, right? But not many talk about the cost of testing.

This could be because the majority of optimizers do not pay the cost of failed tests from their own pocket. So for them, if a test fails, it should not be a big deal. After all, it is just a test. A test can succeed or it can fail.

But it is not just a test for your client.

They lose money every second while you run badly designed tests, and even more money when your ‘winning’ variation fails to produce any real lift in sales or actually decreases the conversion rate.

It is equivalent to setting a pile of money on fire every single day in the name of testing.

Testing is not free

It has never been free and it will never be free. Whenever you choose to conduct a test, you risk losing money.

The amount of money you risk depends upon the sample size, duration of the test, and the nature and size of your business.

Imagine you are conducting an A/B test on an airline website. How much money could they lose every single day because of a badly designed test? Potentially millions of dollars.

With every bad test you lose money, maybe tons of money. This becomes obvious once you take the cost of testing into account and calculate the ROI of your testing program.
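
To make the cost of testing concrete, here is a rough back-of-the-envelope sketch in Python. Every number in it (daily visitors, conversion rate, order value, the size of the drop, the test duration) is a hypothetical placeholder, so treat it as a template to plug your own figures into rather than a benchmark.

# A rough sketch of the money at risk during a badly designed test.
# All figures below are hypothetical placeholders; replace them with your own.

daily_visitors = 20_000        # visitors entering the experiment per day
baseline_cr = 0.03             # baseline conversion rate (3%)
avg_order_value = 120.0        # average order value in dollars
variation_share = 0.5          # 50% of traffic sees the variation
true_lift = -0.10              # the "winning" idea actually drops conversions by 10%
test_duration_days = 21        # how long the test runs

# Daily revenue lost because half the traffic converts 10% less often
daily_loss = (daily_visitors * variation_share
              * baseline_cr * abs(true_lift) * avg_order_value)
total_loss = daily_loss * test_duration_days

print(f"Estimated loss per day: ${daily_loss:,.0f}")
print(f"Estimated loss over {test_duration_days} days: ${total_loss:,.0f}")

Even with these modest placeholder numbers, the variation quietly burns through tens of thousands of dollars before the test is even called.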

You can’t afford to run mindless tests.


‘Always be testing’ is bad advice because it encourages mindless testing

Every bad test kills sales and damages your credibility as a tester.

If losing tests continue to outnumber the winning tests, you won’t be able to run your optimization program for long.

Any business can tolerate losing money on testing only to a certain extent. Once they lose confidence in your testing abilities, they will either discontinue the testing program or say goodbye to you and find someone else. So it is not cool to create and run bad tests.

Our job as optimizers is to make more money for our clients, not to lose tons of money on testing. So we need to be very selective about what we choose to test.

Creating and running one well-designed test is better than running ten stupid tests

We all know that the entire test revolves around the hypothesis, and an underpowered hypothesis marks your test for failure from the very start. Yet the concept of a hypothesis is the part of testing that almost no one truly understands.

It’s not the hypothesis itself that helps you get a real lift in sales and conversions; it’s understanding the little details of creating a powerful hypothesis. And you can do that even if you are not a full-blown statistician, as long as you are asking the right questions.

A hypothesis in its most basic form is just an assumption

This assumption can be based on personal opinion and not on research data. This assumption can be based on flawed research data and analysis. Any weak, vague, or strong assumption can be a hypothesis.

Your hypothesis does not need to be correct either. But it needs to be an assumption, a proposed explanation, or a guess, not a well-known fact.

For example, “the sun rises in the east and sets in the west” is a well-established fact. So it can’t be used as a hypothesis. Similarly, “there are 24 hours in a day” is a well-established fact and hence can’t be used as a hypothesis.

Not every statement can be a hypothesis

However, when it comes to statistical testing, a more stringent definition of a hypothesis is expected from a researcher.

A researcher is expected to refine his hypothesis before testing it, to the point where it is testable, measurable, and meaningful in solving a business problem that really matters to his customers.

Let’s call this refined hypothesis a ‘scientific hypothesis’.

So now we have two categories of hypotheses:

  1. General hypothesis
  2. Scientific hypothesis

In statistics, we transform a ‘general hypothesis’ into a ‘scientific hypothesis’

In statistics, your assumption should be testable and measurable. So whatever you are proposing as a hypothesis should be testable and measurable.

Pay attention to the words ‘should be’. There is no written rule in statistics which says that your hypothesis must be testable and measurable.

The hypothesis which is not easily testable and measurable is called an underpowered hypothesis.

You, as an experimenter, are allowed to create an underpowered hypothesis. Many in fact create them (usually unknowingly), test them, and fail spectacularly month after month until they run out of testing budget or lose all confidence in testing.

Characteristics of an underpowered hypothesis

Your hypothesis is considered to be underpowered:

  1. When it is just based on your personal opinion and not on research data.
  2. When it cannot be easily tested and measured.
  3. When it is based on inadequate/flawed research data or analysis.
  4. When it does not include dependent and independent variables.
  5. Most importantly, when it tries to solve a problem which does not really matter to your customers. And when something does not matter to your customers (like the color of a button), it won’t improve the business bottom line. It is as simple as that.

Just because your hypothesis is scientific does not automatically mean it is powerful

The power level of your hypothesis is directly proportional to the confidence you have that what you are testing is something that really matters to your customers.

You gain this confidence by developing a great understanding of your business and target market.

Characteristics of a powerful hypothesis

The following are the characteristics of a powerful hypothesis:

  1. It tries to solve a problem that really matters to your customers and has the potential to considerably improve the business bottom line.
  2. It can be easily tested (it guides the design of the experiment) and measured (the variables involved can be easily measured).
  3. It is based on considerable research data or analysis. Ideally, you should run out of answers from known data before you even think of forming a hypothesis.
  4. It includes dependent and independent variables that can be easily measured and tested.
  5. It clearly outlines the predicted effect (what you think will happen as a result of your experiment).

When to form a hypothesis

The purpose of forming a hypothesis is to find an answer to the question which cannot be reasonably answered/explained through available research data (whether qualitative or quantitative).

So before you form a hypothesis, you should make sure that you have done your homework, your research, and you have concluded that the available research data cannot be used to satisfactorily answer your business question.

If your data is clearly highlighting the problem that needs fixing then go ahead and fix it. Don’t run a test just to confirm that the problem actually exists.

For example, if your data is clearly telling you that the conversion rate is 25% lower than the site average when users browse your website via Internet Explorer 9, then fix the cross-browser compatibility issue with Internet Explorer 9.

You don’t need to run a test just to confirm that the conversion rate is actually lower for the IE9 browser. In other words, do not waste your time forming hypotheses on known data. If you do that, you will create and run unnecessary tests and lose money.

So spend a good amount of time conducting data analysis and market research and develop a very good understanding of your business and target market before you even think about running a test.

Before you form a hypothesis, you need to have a clearly defined question/problem

It is important to remember that a hypothesis itself is not a question. It is a written statement which is proved or rejected by conducting a test and which is used to find the answer to a specific question.

How to form a hypothesis

The following is the basic syntax for creating a hypothesis:

If {I do this to the independent variable}, then {this will happen to the dependent variable}.

For example:

If I add social proof to the landing page, it will lead to more purchases.

If I remove the address field from the form, it will increase signups.
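
One simple way to keep yourself honest about these ingredients is to write the hypothesis down as a small structured record instead of a loose sentence. The Python sketch below is only an illustration; the field values (the testimonials change, the exit-survey finding) are made up, and the class itself is just one possible way to capture the independent variable, the dependent variable, the predicted effect, and the research the assumption is based on.

from dataclasses import dataclass

@dataclass
class Hypothesis:
    """Minimal structure mirroring the 'If {change} then {effect}' syntax."""
    independent_variable: str   # the thing you change
    dependent_variable: str     # the thing you measure
    predicted_effect: str       # what you expect to happen, and in which direction
    supporting_research: str    # the data/analysis the assumption is based on

# Illustrative example only; the research finding below is hypothetical.
social_proof_test = Hypothesis(
    independent_variable="add customer testimonials to the landing page",
    dependent_variable="purchase conversion rate",
    predicted_effect="purchase conversion rate increases",
    supporting_research="exit surveys suggest visitors doubt product credibility",
)

print(f"If I {social_proof_test.independent_variable}, "
      f"then {social_proof_test.predicted_effect}.")

If you cannot fill in the supporting_research field, that is usually a sign the hypothesis is still underpowered.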

While your hypothesis doesn’t have to be correct, in statistics it is assumed to be wrong until your test suggests otherwise.


The only way to prove your hypothesis is to create and reject the null hypothesis

According to the null hypothesis, any difference you see in a data set is due to chance and not due to a real relationship.

So if your hypothesis is ‘If I add social proof to the landing page, it will lead to more purchases’, then your null hypothesis would be ‘adding social proof to the landing page will not lead to more purchases’.

It is important to remember that in statistics, you do not test your hypothesis. The hypothesis that you test is the null hypothesis.

You run a test with the intention of rejecting this null hypothesis. When the null hypothesis is rejected, the result is said to be statistically significant.

A statistical test can:

  • reject a null hypothesis or
  • fail to reject a null hypothesis or
  • incorrectly reject a true null hypothesis (false positive error) or
  • fail to reject a false null hypothesis (false negative error)

A statistical test can never prove or establish a null hypothesis.
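
To see what rejecting the null hypothesis looks like in practice, here is a minimal sketch of a two-proportion z-test for the social proof example. The visitor and conversion counts are made up, and this is just one of several valid ways to run the comparison, but the mechanics of the decision are the same.

# Minimal sketch of null hypothesis testing for an A/B test.
# Null hypothesis: adding social proof does NOT change the conversion rate
# (any observed difference is due to chance). All counts are hypothetical.
from math import sqrt
from statistics import NormalDist

visitors_a, conversions_a = 10_000, 300   # control
visitors_b, conversions_b = 10_000, 360   # variation with social proof

p_a = conversions_a / visitors_a
p_b = conversions_b / visitors_b
p_pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)

# Two-proportion z-test
se = sqrt(p_pooled * (1 - p_pooled) * (1 / visitors_a + 1 / visitors_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided p-value

alpha = 0.05   # significance level
if p_value < alpha:
    print(f"p = {p_value:.4f}: reject the null hypothesis (statistically significant).")
else:
    print(f"p = {p_value:.4f}: fail to reject the null hypothesis.")

Even when the test rejects the null hypothesis, the result could still be a false positive, which is why statistical significance is evidence in favour of your hypothesis rather than absolute proof.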

It is also important to remember that your hypothesis is temporary and a starting point for further research. It lasts just until a better one comes along.

Hypothesis testing is not just limited to traditional statistical tests

You can also test a hypothesis using attribution models.

For example, consider the following hypothesis:

“If a user completes a transaction on my website within 12 hours after viewing (but not clicking) one of my display ads, then the display ad impression should get three times more conversion credit than the other interactions in the conversion path.”

This is the kind of hypothesis which you cannot prove or reject via traditional statistical testing methods like A/B tests.

You would need to create a custom attribution model in Google Analytics or use some other similar tool which provides attribution modelling capabilities.
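
To illustrate the idea (and only the idea, not the actual custom model builder inside Google Analytics), here is a simplified sketch of the credit-weighting rule that the hypothesis describes. The 12-hour window and the 3x multiplier come from the hypothesis itself; the conversion path below is made up for illustration.

# Simplified sketch of the weighting rule described by the hypothesis.
# The conversion path is hypothetical; a real implementation would read
# paths from your analytics/attribution tool.
HOURS_WINDOW = 12
DISPLAY_VIEW_MULTIPLIER = 3.0

# (channel, interaction type, hours before the transaction)
conversion_path = [
    ("display", "impression", 8),       # viewed but not clicked, within 12 hours
    ("email", "click", 30),
    ("organic search", "click", 2),
]

def credit_shares(path):
    """Give qualifying display impressions 3x weight, then normalise to shares."""
    weights = []
    for channel, interaction, hours_before in path:
        weight = 1.0
        if (channel == "display" and interaction == "impression"
                and hours_before <= HOURS_WINDOW):
            weight = DISPLAY_VIEW_MULTIPLIER
        weights.append(weight)
    total = sum(weights)
    return [(step[0], w / total) for step, w in zip(path, weights)]

for channel, share in credit_shares(conversion_path):
    print(f"{channel}: {share:.0%} of the conversion credit")

The point of such a model is simply to reallocate conversion credit according to the rule in the hypothesis, so you can judge whether the reallocation changes how you value the display channel.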

