# The little-known details about hypotheses in conversion optimization

**Table of Contents**

- Everyone is talking about testing, right? But not many talk about the cost of testing
- Testing is not free
- ‘Always be testing’ is bad advice because it encourages mindless testing
- Creating and running one well designed test is better than running ten stupid tests
- A hypothesis in its most basic form is just an assumption
- Not every statement can be a hypothesis
- In statistics, we transform a ‘general hypothesis’ into a ‘scientific hypothesis’
- Characteristics of an underpowered hypothesis
- Just because your hypothesis is Scientific does not automatically mean it is powerful
- Characteristics of a powerful hypothesis
- When to form a hypothesis
- Before you form a hypothesis, you need a clearly defined question/problem
- How to form a hypothesis
- The only way to prove your hypothesis is to create and reject the Null hypothesis
- Hypothesis testing is not just limited to traditional statistical tests

## Everyone is talking about testing, right? But not many talk about the cost of testing

This could be because the majority of optimizers do not pay the cost of failed tests out of their own pockets.

So for them, if a test fails, it is not a big deal.

After all, it is just a test. A test can succeed or it can fail.

But it is not just a test for your client.

They lose money every second while you run badly designed tests, and even more money when your winning variation fails to produce any real lift in sales or actually decreases the conversion rate.

It is equivalent to setting a pile of money on fire every single day in the name of testing.

**Testing is not free**

It has never been free and it will never be free.

Whenever you choose to conduct a test, you risk losing money.

The amount of money you risk depends upon the sample size, duration of the test and the nature and size of your business.

Imagine you are conducting an A/B test on an airline website.

How much money could they lose every single day because of a badly designed test? Potentially millions of dollars.

With every bad test you lose money, maybe tons of money, once you take the cost of testing into account and/or calculate the **ROI of testing**.
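To make that cost concrete, here is a rough back-of-the-envelope sketch. Every number below is hypothetical (not from this article); the point is only to show how quickly a losing variation burns money while a test runs:

```python
# Illustrative only: all figures below are made-up assumptions.
# Rough cost of serving a losing variation to 50% of traffic.

daily_visitors = 20_000    # hypothetical site traffic
baseline_cr = 0.03         # baseline conversion rate (3%)
avg_order_value = 120.0    # average order value in dollars
variation_lift = -0.10     # the "winning idea" actually drops CR by 10%
test_days = 21             # typical test duration

# Half of the traffic sees the losing variation.
variation_visitors_per_day = daily_visitors / 2
variation_cr = baseline_cr * (1 + variation_lift)

lost_conversions = variation_visitors_per_day * (baseline_cr - variation_cr) * test_days
lost_revenue = lost_conversions * avg_order_value

print(f"Conversions lost during the test: {lost_conversions:.0f}")
print(f"Revenue lost during the test: ${lost_revenue:,.0f}")
```

With these made-up numbers, three weeks of a 10% drop on half the traffic costs roughly $75,000 in lost revenue, before you even count tool and labor costs.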

You can’t afford to run mindless tests.

**‘Always be testing’ is bad advice because it encourages mindless testing**

Every bad test kills sales and damages your credibility as a tester.

If losing tests continue to outnumber the winning tests, you won’t be able to run your optimization program for long.

Any business can tolerate losing money on testing only to a certain extent.

Once they lose confidence in your testing abilities, they will either discontinue the testing program or say goodbye to you and find someone else.

So it is not cool to create and run bad tests.

Our job as optimizers is to make more money for our clients, not to lose tons of money on testing.

So we need to be very selective about what we choose to test.

**Creating and running one well designed test is better than running ten stupid tests**

The entire test revolves around the hypothesis, and an underpowered hypothesis marks your test for failure from the very start.

Yet the concept of a **hypothesis** is exactly what almost no one takes the time to understand.

It is not the hypothesis by itself that gets you a real lift in sales and conversions; it is understanding the little details of creating a powerful hypothesis.

And you can do that even if you are not a full-blown statistician, as long as you are asking the right questions.

**A hypothesis in its most basic form is just an assumption**

This assumption can be based on personal opinion and not on research data.

This assumption can be based on flawed research data and analysis.

Any weak, vague or strong assumption can be a hypothesis.

Your hypothesis does not need to be correct either.

But it needs to be an assumption, proposed explanation or a guess and not a well known fact.

For example, “the sun rises in the east and sets in the west” is a well-established fact.

So it can’t be used as a hypothesis.

Similarly, “there are 24 hours in a day” is a well-established fact and hence can’t be used as a hypothesis.

In other words,

**Not every statement can be a hypothesis**

However, when it comes to statistical testing, a more stringent definition of a hypothesis is expected from a researcher.

A researcher is expected to **refine his hypothesis** before testing it, to the point where it is testable, measurable and meaningful in solving a business problem that really matters to his customers.

Let’s call this refined hypothesis a ‘**Scientific Hypothesis**‘.

So now we have two categories of hypothesis:

#1 General Hypothesis

#2 Scientific Hypothesis

## In statistics, we transform a ‘general hypothesis’ into a ‘scientific hypothesis’

In statistics, your assumption should be testable and measurable.

So whatever you are proposing as a hypothesis should be testable and measurable.

Pay attention to the words ‘should be’.

There is no written rule in statistics which says that your hypothesis **must be** testable and measurable.

Hypotheses that are not easily testable and measurable are called **underpowered hypotheses.**

As an experimenter, you are allowed to create underpowered hypotheses, and many in fact do (usually unknowingly), then test and fail spectacularly month after month until they run out of testing budget or lose all confidence in testing.

## Characteristics of an underpowered hypothesis

Your hypothesis is considered to be underpowered:

#1 When it is just based on your personal opinion and not on research data.

#2 When it cannot be easily tested and measured.

#3 When it is based on inadequate/flawed research data or analysis.

#4 When it does not include dependent and independent variables.

#5 And most importantly, when it tries to solve a problem which does not really matter to your customers. And when something does not matter to your customers (like the color of a button), it won’t improve the business bottom line. It is as simple as that.

## Just because your hypothesis is Scientific does not automatically mean it is powerful

The power level of your hypothesis is directly proportional to the confidence you have that what you are testing really matters to your customers.

You gain this confidence by developing a deep understanding of your business and target market.

## Characteristics of a powerful hypothesis

Following are the characteristics of a powerful hypothesis:

#1 It tries to solve a problem which really matters to your customers and which has the potential to considerably improve the business bottom line.

#2 It can be easily tested (it guides the design of the experiment) and measured (the variables involved can be easily measured).

#3 It is based on considerable research data or analysis. **Ideally you should run out of answers from known data before you even think of forming a hypothesis.**

#4 It includes dependent and independent variables which can be easily measured and tested.

#5 It clearly outlines the predicted effect (what you think will happen as a result of your experiment).

## When to form a hypothesis

The purpose of forming a hypothesis is to find an answer to a question which cannot be reasonably answered/explained through available research data (whether qualitative or quantitative).

So before you form a hypothesis, you should make sure that you have done your homework and your research, and that you have concluded that the available research data cannot satisfactorily answer your business question.

If your data is clearly highlighting the problem that needs fixing, then go ahead and fix it.

Don’t run a test just to confirm that the problem actually exists.

For example, if your data clearly tells you that the conversion rate is 25% lower than the site average when users browse your website via Internet Explorer 9, then fix the cross-browser compatibility issue with Internet Explorer 9.

You don’t need to run a test just to confirm that the conversion rate is actually lower for the IE 9 browser.
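A gap like that should surface during analysis, before any test. Here is a minimal sketch (the browser data below is hypothetical, not from this article) of how you might compare a segment’s conversion rate against the site average from a raw analytics export:

```python
# Hypothetical analytics export: sessions and conversions per browser.
# All numbers are made up for illustration.
data = {
    "Chrome":              {"sessions": 50_000, "conversions": 2_000},
    "Safari":              {"sessions": 20_000, "conversions": 780},
    "Internet Explorer 9": {"sessions": 5_000,  "conversions": 110},
}

total_sessions = sum(d["sessions"] for d in data.values())
total_conversions = sum(d["conversions"] for d in data.values())
site_avg_cr = total_conversions / total_sessions

for browser, d in data.items():
    cr = d["conversions"] / d["sessions"]
    vs_avg = (cr - site_avg_cr) / site_avg_cr
    print(f"{browser}: CR {cr:.2%} ({vs_avg:+.0%} vs site average)")
```

A segment sitting far below the site average in data you already have is a fix to make, not a hypothesis to test.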

In other words, do not waste your time forming hypotheses on known data.

If you do that, then you will create and run unnecessary tests and lose money.

So spend a good amount of time conducting data analysis and market research, and develop a very good understanding of your business and target market, before you even think about running a test.

**Before you form a hypothesis, you need a clearly defined question/problem**

It is important to remember that a hypothesis itself is not a question.

It is a written statement which is supported or rejected by conducting a test, and which is used to find the answer to a specific question.

## How to form a hypothesis

Following is the basic syntax for creating a hypothesis:

*If {I do this to the independent variable} then {this will happen to the dependent variable}.*

For example,

If I add social proof to the landing page, it will lead to more purchases.

If I remove the address field from the form, it will increase signups.

While your hypothesis doesn’t have to be correct, in statistics you always start by assuming your hypothesis is wrong.

**The only way to prove your hypothesis is to create and reject the Null hypothesis**

According to the null hypothesis, any difference you see in a data set is due to chance and not due to a particular relationship.

So if your hypothesis is ‘If I add social proof to the landing page, it will lead to more purchases’, then your null hypothesis would be ‘adding social proof to the landing page will not lead to more purchases’.

It is important to remember that in statistics, you do not test your hypothesis.

The hypothesis that you test is the null hypothesis.

You run a test with the intention of rejecting this null hypothesis.

When the null hypothesis is rejected, the result is said to be statistically significant.

A statistical test can:

- reject a null hypothesis, or
- fail to reject a null hypothesis, or
- incorrectly reject a true null hypothesis (a **false positive error**), or
- fail to reject a false null hypothesis (a **false negative error**)

A statistical test can never prove or establish a null hypothesis.
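Here is a minimal sketch of how “rejecting the null hypothesis” plays out in practice for an A/B test. It uses a standard two-proportion z-test; the conversion numbers (a control page vs. the social-proof variation from the earlier example) are hypothetical:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B test.

    Returns the z statistic and two-tailed p-value for the null
    hypothesis 'both variations convert at the same rate'."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical test: control vs. landing page with social proof.
z, p = two_proportion_z_test(conv_a=300, n_a=10_000, conv_b=360, n_b=10_000)
alpha = 0.05
print(f"z = {z:.2f}, p = {p:.4f}")
if p < alpha:
    print("Reject the null hypothesis: the difference is statistically significant.")
else:
    print("Fail to reject the null hypothesis: the difference may be due to chance.")
```

If the p-value falls below your chosen significance level (0.05 here), you reject the null hypothesis; otherwise you have merely failed to reject it, which is not the same as proving there is no difference.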

It is also important to remember that your hypothesis is temporary: a starting point for further research.

It lasts just until a better one comes along.

**Hypothesis testing is not just limited to traditional statistical tests**

You can also test hypotheses using attribution models.

For example, consider the following hypothesis:

*“If a user completes a transaction on my website within 12 hours after viewing (but not clicking) one of my display ads, then the display ad impression should get three times more conversion credit than the other interactions in the conversion path.”*

This is the kind of hypothesis which you cannot prove or reject via traditional statistical testing methods like A/B tests.

You would need to create a custom attribution model in Google Analytics or use some other similar tool which provides attribution modelling capabilities.
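Custom attribution models are normally configured inside the analytics tool itself (for example, in Google Analytics), not written by hand. Still, the weighting rule the hypothesis describes can be sketched in code; everything below, including the example conversion path, is hypothetical:

```python
def credit_shares(path):
    """Split conversion credit across a conversion path, giving a
    display-ad impression seen within 12 hours of the conversion
    3x the credit of any other interaction (the rule from the
    hypothesis above)."""
    weights = []
    for interaction in path:
        is_recent_display_view = (
            interaction["channel"] == "display"
            and interaction["type"] == "impression"
            and interaction["hours_before_conversion"] <= 12
        )
        weights.append(3.0 if is_recent_display_view else 1.0)
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical conversion path: organic click, display view, email click.
path = [
    {"channel": "organic", "type": "click",      "hours_before_conversion": 48},
    {"channel": "display", "type": "impression", "hours_before_conversion": 6},
    {"channel": "email",   "type": "click",      "hours_before_conversion": 1},
]
shares = credit_shares(path)
print(shares)  # the display view gets 3/5 of the credit: [0.2, 0.6, 0.2]
```

Testing the hypothesis then means comparing the business decisions this weighting produces against those from your baseline model, rather than running a traditional significance test.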

## Other articles on Maths and Stats in Web Analytics

- Beginners Guide to Maths and Stats behind Web Analytics
- How to Analyze and Report above AVERAGE
- What Matters more: Conversion Volume or Conversion Rate – Case Study
- Is your conversion Rate Statistically Significant?
- Calculated Metrics in Google Analytics – Complete Guide
- Here is Why Conversion Volume Optimization is better than CRO
- Bare Minimum Statistics for Web Analytics
- Understanding A/B Testing Statistics to get REAL Lift in Conversions
- 10 Techniques to Migrate from Data Driven to Data Smart Marketing
- Data Driven or Data blind and why I prefer being Data Smart
- The Guaranteed way to Sell Conversion Optimization to your Client
- SEO ROI Analysis – How to do ROI calculations for SEO


## How to get a lot more useful information?

I share a lot more useful information on Web Analytics and Google Analytics on LinkedIn than I can via any other medium. So there is a real incentive for you to follow me there.