Bare Minimum Statistics for Web Analytics
The role of statistics in the world of web analytics is not clear to many marketers.
Unfortunately, by and large, the analytics industry is still dominated by data collection methodologies and tools.
We all are obsessed with collecting more data. Lots of data. But rarely do we focus more on analysing and interpreting the data we already have.
Someone will learn a new hack about collecting a particular type of data and then they blog about it in the name of analytics. Then there are ‘Excel hacks’ for web analytics. But neither Excel hacks nor data collection tips and tricks will improve your business bottom line.
What will really improve your business bottom line is the accurate interpretation of the data and the actions you take on the basis of that interpretation.
Only by leveraging the knowledge of statistics and understanding the context can you accurately interpret data and take actions that improve your business bottom line.
I spent an awful lot of time reading books and articles on stats and data science, in the hope that I would find something that might help me in my digital analytics career. And I must admit that the majority of topics I read on stats initially didn’t seem to have anything directly to do with my job. This could be one reason why statistics is not taken seriously in the internet marketing industry.
But overall, stats knowledge has improved my interpretation of data. I am constantly looking for new ways to implement statistics in web analytics.
This article talks about the bare minimum statistics, which I think every internet marketer should get familiar with, in order to get optimum results from their analysis and campaigns.
I will explain some of the most useful stats terms/concepts one by one and will also show you their practical use in web analytics so that you can take advantage of them straight away.
What is statistical inference?
Statistical inference is the process of drawing conclusions from data which is subject to random variation.
Observational error is one source of the random variation that statistical inference has to deal with.
Practical use in web analytics
For example, consider the performance of three campaigns A, B and C in the last one month.
Here campaign ‘B’ seems to have the highest conversion rate. Does that mean campaign B is performing better than campaign A and campaign C? The answer is: we don’t know for sure.
This is because here we are assuming that campaign B has the highest conversion rate only on the basis of our observation. So if there is an observational error, our assumption could be wrong.
Observational error is the difference between the collected data and the actual data.
In order to minimize observational error, we need to segment the ecommerce conversion rate into visits and transactions:
Now we can see that campaign B’s high conversion rate is not reliable, as its sample size is too small.
More about sample size later.
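The campaign tables themselves are not reproduced here, but the segmentation can be sketched in Python. Campaign B’s figures (4 transactions out of 20 visits) appear later in this article; the figures for A and C are made up for illustration.

```python
# Segmenting ecommerce conversion rate into its components: visits and
# transactions. Campaign B's figures (4 transactions / 20 visits) come
# from this article; the figures for A and C are hypothetical.
campaigns = {
    "A": {"visits": 1000, "transactions": 50},
    "B": {"visits": 20, "transactions": 4},
    "C": {"visits": 5000, "transactions": 190},
}

for name, c in campaigns.items():
    rate = c["transactions"] / c["visits"] * 100
    print(f"Campaign {name}: {c['visits']} visits, "
          f"{c['transactions']} transactions, {rate:.1f}% conversion rate")
```

Campaign B tops the table on conversion rate (20%), but on a tiny number of visits, which is exactly why the raw rate alone is misleading.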
What is a population?
A population is a set of entities from which statistical inference is drawn.
It is also known as a statistical population.
What is a sub-population?
A sub-population is a subset of a population.
Practical use in web analytics
If you consider campaign C above as a PPC campaign, then its ad groups can be considered sub-populations.
In order to understand the properties of a statistical population, statisticians first try to understand the properties of individual sub-populations.
This is done for the same reason analysts recommend segmenting data.
So if you want to understand the performance of campaign C, then you should first try to understand the performance of its individual ad groups.
Similarly, if you want to understand the performance of individual ad groups, you should first try to understand the performance of the individual keywords and ad copies in each ad group.
What is a sample?
A sample is a subset of a population that represents the entire population.
Analysing the sample should produce similar results as analysing all of the population.
Sampling is carried out to analyse large data sets in a reasonable amount of time and in a cost-efficient manner.
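A quick sketch of why sampling works, using made-up order values: the mean of a random sample sits close to the mean of the whole population, which is what lets you analyse the sample instead of everything.

```python
import random

random.seed(7)

# A hypothetical "population": 100,000 order values (made-up data).
population = [random.gauss(50, 12) for _ in range(100_000)]

# A random sample should behave like the whole population: its mean
# should sit close to the population mean.
sample = random.sample(population, k=1_000)

pop_mean = sum(population) / len(population)
sample_mean = sum(sample) / len(sample)
print(f"population mean: {pop_mean:.2f}, sample mean: {sample_mean:.2f}")
```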
What is a bad sample?
A bad sample is a subset of a population which does not represent the entire population well.
So analysing a bad sample will not produce similar results to analysing all of the population.
What is sample size?
Sample size is the number of observations in a sample.
The larger the sample size, the more reliable the analysis.
Practical use in web analytics
Consider the following three campaigns:
Here campaign B’s high conversion rate is not reliable because its sample size is too small: just 4 transactions out of 20 visits.
If campaign B had got 1 transaction out of 1 visit, its conversion rate would be 100%. Would that make its performance even better? No.
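One standard way to quantify this unreliability (not something the article itself prescribes) is a confidence interval around the observed rate, for example the Wilson score interval. Campaign B’s 4/20 comes from above; the larger campaign with the same observed rate is hypothetical.

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Campaign B: 4 transactions out of 20 visits (from the article).
low, high = wilson_interval(4, 20)
print(f"Campaign B: 20.0% observed, 95% CI [{low:.1%}, {high:.1%}]")

# A hypothetical larger campaign with the same observed rate:
low2, high2 = wilson_interval(400, 2000)
print(f"Large campaign: 20.0% observed, 95% CI [{low2:.1%}, {high2:.1%}]")
```

The 20-visit campaign’s true conversion rate could plausibly be anywhere from roughly 8% to 42%, whereas the 2,000-visit campaign pins it down to a narrow band.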
Google Analytics is notorious for its data sampling issues.
When you have data sampling issues, the reported data/metrics can be anywhere from 10% to 80% off the mark, as the sample selected by GA for its analysis may be a bad sample (one which doesn’t represent the entire population/traffic on your site).
So you need to avoid data sampling issues as much as possible before you interpret your data.
What is statistical significance?
Statistical significance means that a result is statistically meaningful.
Statistically significant result – a result which is unlikely to have occurred by chance.
Statistically insignificant result – a result which is likely to have occurred by chance.
Practical use in web analytics
The term statistical significance is used a lot in conversion optimization and especially A/B testing.
If the result from your A/B test is not statistically significant, then any uplift you see in your A/B test results will not translate into increased sales.
Another example:
Consider the following campaigns:
Here, statistical significance refers to the statistical significance of the difference in conversion rates between the two campaigns ‘A’ and ‘C’, and it is calculated by conducting a statistical test like the ‘T’ test or the ‘Z’ test.
You can use this bookmarklet (based on the ‘Z’ test) or this Chrome extension from LunaMetrics (based on the ‘T’ test) to calculate statistical significance in Google Analytics.
In this case, statistical significance turned out to be 98%.
What that means is that we are 98% confident that the difference in the conversion rates of the two campaigns, A and C, is not down to chance.
That means the conversion rate of campaign ‘A’ is actually higher than the conversion rate of campaign C and is not just an observational error.
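A minimal sketch of such a two-proportion ‘Z’ test in Python. The visit and transaction counts below are made up, since the article’s campaign table is not reproduced here.

```python
import math

def z_test_two_proportions(successes_a, n_a, successes_b, n_b):
    """Two-proportion Z test for the difference in conversion rates.

    Returns the z statistic and the two-sided p-value
    (normal approximation).
    """
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical figures: campaign A converts 5% of 10,000 visits,
# campaign C converts 4% of 10,000 visits.
z, p = z_test_two_proportions(500, 10_000, 400, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
if p < 0.05:
    print("The difference in conversion rates is statistically significant.")
```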
What is an effect?
An effect in statistics is the result of something.
What is effect size?
Effect size (or signal) is the magnitude of the result.
Examples of effect size – sales, orders, leads, profit, etc.
What is noise?
Noise is the amount of unexplained variation/randomness in a sample.
What is confidence?
Confidence (or statistical confidence) is the confidence that the result has not occurred by chance.
Practical use in web analytics
Just because a result is statistically significant, it does not automatically mean that it is practically meaningful.
Statistical significance only tells you which one is better or what works. It does not tell you how well it works, and it can’t tell you what caused the difference between the control and variation groups.
For example, in the case of an A/B test, statistical significance can tell you whether or not version A is better than version B. However, it can’t tell you why one version is better than the other, or how well one version performs across a range of contexts.
That means that if your A/B test reports an uplift of 10% in conversion rate, it doesn’t automatically translate into an actual uplift of 10% in conversion rate.
If increasing conversion rate was so easy, every website owner running A/B tests would be a millionaire by now.
So you need to calculate the effect size.
Consider the following campaigns:
From the table above, you can conclude that the effect size (revenue) of campaign C is much higher than the effect size of campaign A.
So even when we are now statistically confident that campaign A has a higher conversion rate than campaign C, we should still be investing more in campaign C because it has a much larger effect size.
In the real world, what really matters is the effect size, i.e. sales, orders, leads, profits… and not the conversion rate alone.
It is the effect size that puts food on the table.
It is the effect size that generates salary for the employees.
It is the effect size that runs business operations.
Whatever you do under conversion optimization must have a considerable impact on the effect size. The impact on conversion rate is secondary.
So if you are running A/B tests, they must considerably improve sales and gross profit over time. A double- or triple-digit increase in conversion rate is meaningless otherwise.
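The point can be sketched with made-up figures: campaign A has the higher conversion rate, but campaign C has the larger effect size (revenue) because of its much larger volume.

```python
# Hypothetical figures: campaign A has the higher conversion rate, but
# campaign C drives far more revenue because of its much larger volume.
campaigns = {
    "A": {"visits": 10_000, "conv_rate": 0.05, "avg_order_value": 60.0},
    "C": {"visits": 200_000, "conv_rate": 0.04, "avg_order_value": 60.0},
}

revenue = {}
for name, c in campaigns.items():
    revenue[name] = c["visits"] * c["conv_rate"] * c["avg_order_value"]
    print(f"Campaign {name}: {c['conv_rate']:.0%} conversion rate, "
          f"revenue = £{revenue[name]:,.0f}")
```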
What is a null hypothesis?
According to a null hypothesis, any kind of difference you see in a data set is due to chance and not due to a particular relationship.
A null hypothesis can never be proven.
A statistical test can only reject a null hypothesis or fail to reject a null hypothesis. It cannot prove a null hypothesis.
What is an alternative hypothesis?
An alternative hypothesis is the opposite of the null hypothesis.
According to an alternative hypothesis, any kind of difference you see in a data set is due to a particular relationship and not due to chance.
In statistics, the only way to prove your hypothesis is to reject the null hypothesis. You don’t prove the alternative hypothesis to support your hypothesis.
Remember, your hypothesis needs to be based on qualitative data and not on personal opinion.
Practical use in web analytics
Before you conduct any test (A/B, multivariate or statistical test like ‘t’ or ‘z’ test), you need to form a hypothesis.
This hypothesis is based on your understanding of the client’s business and qualitative data.
For example:
A null hypothesis can be something like: changing the colour of the ‘order now’ button to red will not improve the conversion rate.
An alternative hypothesis can be something like: changing the colour of the ‘order now’ button to red will improve the conversion rate.
Once you have formed your hypothesis, you conduct a test with the aim to reject your null hypothesis.
What is a false-positive?
A false-positive is a positive test result that is in fact false – the test reports an effect that does not really exist.
For example, an A/B test that shows that one variation is better than the other when it is not really the case.
What is a false-negative?
A false-negative is a negative test result that is in fact false – the test reports no effect when an effect really exists.
For example, an A/B test that shows that there is no statistical difference between the two variations when there actually is.
What is a type I error?
A type I error is the incorrect rejection of a true null hypothesis.
It represents a false positive error.
What is a type II error?
A type II error is the failure to reject a false null hypothesis.
It represents a false negative error.
All statistical tests have a probability of making type I and type II errors.
What is false-positive rate?
The probability that a test makes a type I error is known as the false-positive rate or significance level and is denoted by the Greek letter alpha (α).
A significance level of 0.05 means that there is a 5% chance of a false positive.
What is false-negative rate?
The probability that a test makes a type II error is known as the false-negative rate and is denoted by the Greek letter beta (β).
A false-negative rate of 0.05 means that there is a 5% chance of a false negative.
What is statistical power?
Statistical power is the probability that a statistical test correctly detects an effect (i.e. correctly rejects the null hypothesis) when the effect actually exists.
It is expressed as a percentage.
Statistical power (or power of a statistical test) = 1 − false-negative rate
So if the statistical power of a test is 95%, it means there is a 95% probability that the statistical test will correctly detect an effect and a 5% probability that it won’t.
This 5% probability that the statistical test can’t correctly detect an effect is the false-negative rate.
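A standard sample-size calculation for a two-proportion test ties alpha and power together. The baseline conversion rate and uplift below are hypothetical, and the normal quantiles are hard-coded (1.96 for a two-sided alpha of 0.05, 0.84 for roughly 80% power).

```python
import math

def sample_size_per_variation(p1: float, p2: float,
                              z_alpha: float = 1.96,  # alpha = 0.05, two-sided
                              z_beta: float = 0.84) -> int:  # ~80% power
    """Visits needed per variation to detect a change from p1 to p2."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Hypothetical scenario: detect an uplift from a 4% to a 5% conversion rate.
n = sample_size_per_variation(0.04, 0.05)
print(f"Visits needed per variation: {n}")
```

Note how many visits even a one-percentage-point uplift demands; this is why small sites struggle to run conclusive A/B tests.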
Practical use in web analytics
A lot of A/B test gurus and A/B testing software will tell you to stop your test once you have reached a statistical significance of 95% or more.
The problem with this approach is that you keep testing until you get a statistically significant result, choosing your sample size as you go.
The consequence is that your probability of getting a statistically significant result by coincidence rises well above 5%.
That means you increase your chance of making a type I error in your test.
That means your test will inflate the rate of false positives.
The fundamental problem with statistics is that, if you want to reach the conclusion you really want (maybe deep down inside on a subconscious level), you can always find some way to do it.
To reduce the rate of false positives, decide your test sample size in advance and then just stick to it.
Don’t use statistical significance alone to decide whether your test should continue or stop.
Statistical significance of 95% or higher doesn’t mean anything if there is little to no impact on effect size (conversion volume).
Don’t believe in any uplift you see in your A/B test until the test is over.
Focus on the effect size per variation while the test is running.
Any uplift you see in your A/B test results will not translate into actual sales, even after conducting several A/B tests and getting statistically significant results each time, if:
- There is little to no impact on effect size (conversion volume).
- You declare success and failure on the basis of statistical significance alone.
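The inflation of false positives from peeking can be demonstrated with a small simulation of A/A tests: both variations share the same true conversion rate, so every “significant” result is a false positive. All numbers below are made up.

```python
import math
import random

random.seed(42)

def z_stat(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Z statistic for the difference of two proportions."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    if p_pool in (0.0, 1.0):
        return 0.0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (conv_a / n_a - conv_b / n_b) / se

def run_aa_test(p: float = 0.10, max_n: int = 2000, peek_every: int = 100):
    """One simulated A/A test. Both arms share the same true conversion
    rate p, so any 'significant' result is a false positive."""
    conv_a = conv_b = 0
    peeked_significant = False
    for n in range(1, max_n + 1):
        conv_a += random.random() < p
        conv_b += random.random() < p
        # Peek at the running result every `peek_every` visitors.
        if n % peek_every == 0 and abs(z_stat(conv_a, n, conv_b, n)) > 1.96:
            peeked_significant = True
    final_significant = abs(z_stat(conv_a, max_n, conv_b, max_n)) > 1.96
    return peeked_significant, final_significant

results = [run_aa_test() for _ in range(500)]
peeking_fpr = sum(r[0] for r in results) / len(results)
fixed_fpr = sum(r[1] for r in results) / len(results)
print(f"false-positive rate with peeking: {peeking_fpr:.1%}")
print(f"false-positive rate with a fixed sample size: {fixed_fpr:.1%}")
```

Stopping at the first “significant” peek produces far more false positives than checking once at the pre-committed sample size, which is exactly why you should decide the sample size in advance.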
What is correlation?
Correlation is a statistical measurement of the relationship between two variables.
Let us suppose ‘A’ and ‘B’ are two variables.
If as ‘A’ goes up, ‘B’ goes up then ‘A’ and ‘B’ are positively correlated.
However if as ‘A’ goes up, ‘B’ goes down then ‘A’ and ‘B’ are negatively correlated.
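The strength and direction of such a relationship is measured by the Pearson correlation coefficient, which can be computed by hand. The weekly traffic and transaction figures below are made up for illustration.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)

# Made-up weekly figures: as traffic goes up, transactions go up.
traffic = [1200, 1500, 1100, 1800, 2100, 1700]
transactions = [48, 61, 42, 70, 85, 66]

r = pearson_r(traffic, transactions)
print(f"r = {r:.3f}")  # close to +1, i.e. a strong positive correlation
```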
What is causation?
Causation means that one event is the result of another.
For example, a fall in temperature increased the sale of hot drinks.
Practical use in web analytics
The most important correlations that I have found so far are:
- Negative Correlation between Conversion Rate and Average Order Value
- Negative Correlation between Conversion Rate and Transactions
- Positive Correlation between Conversion Rate and Acquisition Cost
These three correlations have completely changed the way I think about conversion optimization for good.
You can get more details about these correlations from the post: Case Study: Why you should Stop Optimizing for Conversion Rate
The whole conversion optimization process is based on correlation analysis.
Correlation-based observations help you in coming up with a hypothesis. This is the hypothesis without which you can’t conduct any statistical tests and thus improve conversions.
Correlation is also widely used in predictive analytics and predictive marketing.
Before you can predict the value of a dependent variable from an independent variable, you first need to show that the correlation between the two variables is not weak or zero.
Otherwise, such a relationship is not good to predict anything.
Finally, correlation does not imply causation. That means the mere presence of a relationship between two variables/events doesn’t imply that one causes the other.
For example, we cannot automatically assume that an increase in social shares has resulted in an improvement in search engine rankings.
Before we can claim that social shares affect rankings, we first need to show that a linear relationship exists between social shares and rankings, i.e. that any increase or decrease in social shares corresponds to an increase or decrease in search engine rankings.
Without first proving the linear relationship, you could end up forming and testing the wrong hypothesis.
Once you have established the correlation between social shares and rankings, you determine the correlation coefficient to measure the strength and direction of this linear relationship.
If the linear relationship is strong then you go ahead and conduct regression analysis to predict the value of one variable from another.
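A minimal regression sketch: fitting an ordinary least-squares line and using it to predict one variable from another. The weekly traffic and transaction figures are made up for illustration.

```python
def least_squares(xs, ys):
    """Slope and intercept of the ordinary least-squares line y = a*x + b."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Made-up weekly figures: predict transactions from traffic.
traffic = [1200, 1500, 1100, 1800, 2100, 1700]
transactions = [48, 61, 42, 70, 85, 66]

slope, intercept = least_squares(traffic, transactions)
predicted = slope * 2500 + intercept
print(f"Predicted transactions for 2,500 visits: {predicted:.0f}")
```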
Needless to say, correlation and regression are two strong pillars of conversion optimization and are very important for you as a digital marketer.
Other Articles on Maths and Stats in Web Analytics
- Beginners Guide to Maths and Stats behind Web Analytics
- How to Analyze and Report above AVERAGE
- What Matters more: Conversion Volume or Conversion Rate – Case Study
- The little known details about hypothesis in conversion optimization
- Is your conversion Rate Statistically Significant?
- Calculated Metrics in Google Analytics – Complete Guide
- Here is Why Conversion Volume Optimization is better than CRO
- Understanding A/B Testing Statistics to get REAL Lift in Conversions
- 10 Techniques to Migrate from Data Driven to Data Smart Marketing
- Data Driven or Data blind and why I prefer being Data Smart
- The Guaranteed way to Sell Conversion Optimization to your Client
- SEO ROI Analysis – How to do ROI calculations for SEO
The role of statistics in the world of web analytics is not clear to many marketers.
Unfortunately, by and large, the analytics industry is still dominated by data collection methodologies and tools.
We all are obsessed with collecting more data. Lots of data. But rarely do we focus more on analysing and interpreting the data we already have.
Someone will learn a new hack about collecting a particular type of data and then they blog about it in the name of analytics. Then there are ‘Excel hacks’ for web analytics. But neither Excel hacks nor data collection tips and tricks will improve your business bottom line.
What that will really improve your business bottom line is the accurate interpretation of the data and the actions you take on the basis of that interpretation.
Only by leveraging the knowledge of statistics and understanding the context, you can accurately interpret data and take actions which can improve your business bottom line.
I spent an awful lot of time reading books and articles on stats and data science, in the hope that I would find something which might help me in my digital analytics career. And I must admit that majority of topics I read on stats, initially don’t seem to have anything directly to do with my job. This could be one reason why statistics is not taken seriously in the internet marketing industry.
But overall, stats knowledge has improved my interpretation of data. I am constantly looking for new ways to implement statistics in web analytics.
This article talks about the bare minimum statistics, which I think every internet marketer should get familiar with, in order to get optimum results from their analysis and campaigns.
I will explain some of the most useful stats terms/concepts one by one and will also show you their practical use in web analytics so that you can take advantage of them straight away.
What is statistical inference?
Statistical inference is the process of drawing conclusions from data which is subject to random variation.
Observational error is an example of statistical inference.
Practical use in web analytics
For e.g. consider the performance of three campaigns A, B, and C in the last one month.
Here campaign ‘B’ seems to have the highest conversion rate. Does that mean, campaign B is performing better than campaign A and campaign C? The answer is we don’t know for sure.
This is because here we are assuming that campaign B has the highest conversion rate only on the basis of our observation. So if there is an observational error, our assumption could be wrong.
Observational error is the difference between the collected data and the actual data.
In order to minimize observational error, we need to segment the ecommerce conversion rate into visits and transactions:
Now we know that campaign B doesn’t have the highest conversion rate as its sample size is too small.
More about sample size later.
What is a population’?
A population is a set of entities from which statistical inference is drawn.
It is also known as a statistical population.
What is a sub-population’?
A sub-population is a subset of a population.
Practical use in web analytics
If you consider campaign C above as a PPC campaign then its ad groups can be considered as sub-population.
In order to understand the properties of a statistical population, statisticians first try to understand the properties of individual sub-populations.
This is done for the same reason, analysts recommend segmenting data.
So if you want to understand the performance of campaign C, then you should first try to understand the performance of its individual ad groups.
Similarly, if you want to understand the performance of individual ad groups, you should first try to understand the performance of the individual keywords and ad copies in each ad group.
What is a sample?
A sample is a subset of a population that represents the entire population.
Analysing the sample should produce similar results as analysing all of the population.
Sampling is carried out to analyse large data sets in a reasonable amount of time and in a cost-efficient manner.
What is a bad sample?
A bad sample is that subset of population which is not a good representative of the entire population.
So analysing the bad sample will not produce similar results as analysing all of the population.
What is sample size?
Sample size is the size of the sample.
The larger the sample size, the more reliable is the analysis.
Practical use in web analytics
Consider the following three campaigns:
Here campaign B doesn’t have the highest conversion rate because its sample size is too small. Just 4 transactions out of 20 visits.
If campaign B had got 1 transaction out of 1 visit, its conversion rate would be 100%. Will that make its performance even better? No.
Google Analytics is notorious for its data sampling issues.
When you have got data sampling issues, the reported data/metrics can be anywhere from 10% to 80% off the mark as the sample selected by GA for its analysis would be a bad sample (the one which doesn’t represent the entire population/traffic on your site).
So you need to avoid data sampling issues as much as possible before you interpret your data.
What is statistical significance?
Statistical significance means statistically meaningful.
Statistical significant result – a result which is unlikely to have occurred by chance.
Statistically insignificant result – a result which is likely to have occurred by chance.
Practical use in web analytics
The term statistical significance is used a lot in conversion optimization and especially A/B testing.
If the result from your A/B test is not statistically significant than any uplift you see in you A/B test results will not translate into increased sales.
Another example:
Consider the following campaigns:
Here statistical significance is the statistical significance of the difference in conversion rates of the two campaigns: ‘A’ and ‘C’ and is calculated by conducting a statistical test like ‘T’ test or ‘Z’ test.
You can use this bookmarklet (based on ‘Z’ test) or this chrome extension from Lunametrics (based on ‘T’ test) to calculate the statistical significance in Google Analytics.
In this case, statistical significance turned out to be 98%.
What that means is that we are 98% confident that the difference in conversion rates of the two campaigns, A and B, is not by chance.
That means the conversion rate of campaign ‘A’ is actually higher than the conversion rate of campaign C and is not just an observational error.
What is an effect?
An effect in statistics is the result of something.
What is effect size?
Effect size (or signal) – it is the magnitude of the result and is calculated as:
Examples of effect size – sales, orders, leads, profit, etc.
What is noise?
Noise is the amount of unexplained variation/randomness in a sample.
Confidence (or statistical confidence) is the confidence that the result has not occurred by a chance.
Practical use in web analytics
Just because a result is statistically significant, it does not automatically means, that it is practically meaningful.
Statistical significance only tells you which one is better or what works. It does not tell you how well it works. It also can’t tell you, what caused the difference between control and variation groups.
For example, in the case of an A/B test, statistical significance can tell you whether or not version A is better than version B. However, it can’t tell you why one version is better than the other and how good one version is, in a range of context.
That means, if your A/B test reports an uplift of 10% in conversion rate, it doesn’t automatically result in actual uplift of 10% in conversion rate.
If increasing conversion rate was so easy, every website owner running A/B tests would be a millionaire by now.
So you need to calculate the effect size.
Consider the following campaigns:
From the table above, you can conclude that the effect size (revenue) of campaign C is much higher than the effect size of campaign A.
So even when we are now statistically confident that campaign A has a higher conversion rate than campaign C, we should still be investing more in campaign C because it has a much larger effect size.
In the real world, what that really matters is the effect size i.e. sales, orders, leads, profits… and not the lame conversion rate.
It is the effect size that brings food on the table.
It is the effect size that generates salary for the employees.
It is the effect size that runs business operations.
Whatever you do under conversion optimization must have a considerable impact on the effect size. The impact on conversion rate is secondary.
So if you are running A/B tests then it must considerably improve sales and gross profit over time. Double or triple digits increase in conversion rate is meaningless otherwise.
What is a null hypothesis?
According to a null hypothesis, any kind of difference you see in a data set is due to chance and not due to a particular relationship.
A null hypothesis can never be proven.
A statistical test can only reject a null hypothesis or fail to reject a null hypothesis. It cannot prove a null hypothesis.
What is an alternative hypothesis?
An alternative hypothesis is the opposite of the null hypothesis.
According to an alternative hypothesis, any kind of difference you see in a data set is due to a particular relationship and not due to chance.
In statistics, the only way to prove your hypothesis is to reject the null hypothesis. You don’t prove the alternative hypothesis to support your hypothesis.
Remember your hypothesis needs to based on qualitative data and not on personal opinion.
Practical use in web analytics
Before you conduct any test (A/B, multivariate or statistical test like ‘t’ or ‘z’ test), you need to form a hypothesis.
This hypothesis is based on your understanding of the client’s business and qualitative data.
For example:
A null hypothesis can be something like: changing the colour of the ‘order now’ button to red will not improve the conversion rate.
An alternative hypothesis can be something like changing the colour of the ‘order now’ button to red will improve the conversion rate.
Once you have formed your hypothesis, you conduct a test with the aim to reject your null hypothesis.
What is a false-positive?
A false-positive is a positive test result that is more likely to be false than true.
For example, an A/B test that shows that one variation is better than the other when it is not really the case.
What is a false-negative?
A false-negative is a negative test result that is more likely to be true than false.
For example, an A/B test that shows that there is no statistical difference between the two variations when there actually is.
What is a type I error?
A type I error is the incorrect rejection of a true null hypothesis.
It represents a false positive error.
What is a type II error?
A type II error is the failure to reject a false null hypothesis.
It represents a false negative error.
All statistical tests have a probability of making type I and type II errors.
What is false-positive rate?
The probability of a test to make type I error is known as the false positive rate or significance level and is denoted by the Greek letter alpha.
A significance level of 0.05 means that there is a 5% chance of a false positive.
What is false-negative rate?
The probability of a test to make type II error is known as the false-negative rate and is denoted by Greek letter beta.
A false-negative rate of 0.05 means that there is a 5% chance of a false negative.
What is statistical power?
Statistical power is the probability of a statistical test to accurately detect an effect (or accurately reject the null hypothesis) if the effect actually exists.
It is expressed as a percentage.
Statistical power (or power of statistical test) = 1- false negative rate
So if the statistical power of a test is 95% then it means there is a 95% probability that the statistical test can correctly detect an effect and 5% probability that it can’t.
This 5% probability that the statistical test can’t correctly detect an effect is the false-negative rate.
Practical use in web analytics
A lot of A/B test gurus and A/B testing software will tell you to stop your test once you reached a statistical significance of 95% or more.
Now the problem with this approach is that you will continue testing until you get a statistically significant result while choosing the sample size as you go with your test.
The consequence of this approach is that your probability of getting a statistically significant result by coincidence will go much higher than 5%.
That means you will increase your chance of getting type I error in your test.
That means your test will increase the rate of false positives.
The fundamental problem with statistics is that, if you want to reach the conclusion you really want (maybe deep down inside on a subconscious level), you can always find some way to do it.
To reduce the rate of false positives, decide your test sample size in advance and then just stick to it.
Don’t use statistical significance alone to decide whether your test should continue or stop.
Statistical significance of 95% or higher doesn’t mean anything if there is little to no impact on effect size (conversion volume).
Don’t believe in any uplift you see in your A/B test until the test is over.
Focus on the effect size per variation while the test is running.
Any uplift you see in you A/B test results will not translate into actual sales even after conducting several A/B tests and getting statistically significant results each time, if:
- There is little to no impact on effect size (conversion volume).
- You declare success and failure on the basis of statistical significance alone.
What is correlation?
Correlation is a statistical measurement of the relationship between two variables.
Let us suppose ‘A’ and ‘B’ are two variables.
If as ‘A’ goes up, ‘B’ goes up then ‘A’ and ‘B’ are positively correlated.
However if as ‘A’ goes up, ‘B’ goes down then ‘A’ and ‘B’ are negatively correlated.
What is causation?
Causation is the theory that something happened as a result.
For example, a fall in temperature increased the sale of hot drinks.
Practical use in web analytics
The most important correlations that I have found so far are:
- Negative Correlation between Conversion Rate and Average Order Value
- Negative Correlation between Conversion Rate and Transactions
- Positive Correlation between Conversion Rate and Acquisition Cost
These three correlations have completely changed the way I think about conversion optimization for good.
You can get more details about these correlations from the post: Case Study: Why you should Stop Optimizing for Conversion Rate
The whole conversion optimization process is based on correlation analysis.
Correlation-based observations help you come up with a hypothesis, without which you can't conduct any statistical tests and thus improve conversions.
Correlation is also widely used in predictive analytics and predictive marketing.
Before you can predict the value of a dependent variable from an independent variable, you first need to show that the correlation between the two variables is neither weak nor zero; otherwise, the relationship is of no use for prediction.
Finally, correlation does not imply causation. That means the mere presence of a relationship between two variables/events doesn’t imply that one causes the other.
For example, we cannot automatically assume that increase in social shares has resulted in improvement in search engine rankings.
Before we can claim a correlation between social shares and rankings, we first need to establish that a linear relationship exists between them, i.e. any increase or decrease in social shares corresponds to an increase or decrease in search engine rankings.
Without first proving the linear relationship, you could end up forming and testing the wrong hypothesis.
Once you have established that social shares and rankings are correlated, you compute the correlation coefficient to measure the strength and direction of this linear relationship.
If the linear relationship is strong, you can go ahead and conduct regression analysis to predict the value of one variable from the other.
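Once the correlation is confirmed to be strong, a simple least-squares regression can be used for that prediction. A minimal sketch with hypothetical traffic data (the variable names and figures are assumptions for illustration):

```python
def fit_line(xs, ys):
    """Least-squares slope and intercept for the line y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

# Hypothetical data: predict transactions (dependent variable)
# from sessions (independent variable)
sessions = [1000, 1500, 2000, 2500, 3000]
transactions = [22, 31, 44, 52, 63]
slope, intercept = fit_line(sessions, transactions)
print(slope * 4000 + intercept)  # predicted transactions at 4,000 sessions
```

Extrapolating far beyond the observed range is risky, so a prediction like the one above should only be trusted near the range of the data that produced the fit.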
Needless to say, correlation and regression are two strong pillars of conversion optimization and are very important for you as a digital marketer.
Other Articles on Maths and Stats in Web Analytics
- Beginners Guide to Maths and Stats behind Web Analytics
- How to Analyze and Report above AVERAGE
- What Matters more: Conversion Volume or Conversion Rate – Case Study
- The little known details about hypothesis in conversion optimization
- Is your conversion Rate Statistically Significant?
- Calculated Metrics in Google Analytics – Complete Guide
- Here is Why Conversion Volume Optimization is better than CRO
- Understanding A/B Testing Statistics to get REAL Lift in Conversions
- 10 Techniques to Migrate from Data Driven to Data Smart Marketing
- Data Driven or Data blind and why I prefer being Data Smart
- The Guaranteed way to Sell Conversion Optimization to your Client
- SEO ROI Analysis – How to do ROI calculations for SEO
My best selling books on Digital Analytics and Conversion Optimization
Maths and Stats for Web Analytics and Conversion Optimization
This expert guide will teach you how to leverage the knowledge of maths and statistics in order to accurately interpret data and take actions, which can quickly improve the bottom-line of your online business.
Master the Essentials of Email Marketing Analytics
This book focuses solely on the ‘analytics’ that power your email marketing optimization program and will help you dramatically reduce your cost per acquisition and increase marketing ROI by tracking the performance of the various KPIs and metrics used for email marketing.
Attribution Modelling in Google Analytics and Beyond (Second Edition)
Attribution modelling is the process of determining the most effective marketing channels for investment. This book has been written to help you implement attribution modelling. It will teach you how to leverage the knowledge of attribution modelling in order to allocate marketing budget and understand buying behaviour.
Attribution Modelling in Google Ads and Facebook
This book has been written to help you implement attribution modelling in Google Ads (Google AdWords) and Facebook. It will teach you how to leverage the knowledge of attribution modelling in order to understand the customer purchasing journey and determine the most effective marketing channels for investment.