Top 20 reasons why people misinterpret data and reports
Following are the top reasons why most people misinterpret analytics data and reports:
- Lack of context
- Not understanding the intent
- Attribution Bias
- Not understanding statistical significance
- Simpson’s paradox
- Causal reductionism (causal oversimplification)
- Dunning Kruger effect
- Streetlight effect
- Confirmation bias
- Selection bias
- Observational bias
- Cumulative error
- Relying on anecdotal evidence
- Not understanding the Pareto principle (80/20 rule)
- Woozle effect
- Correlation Causation Fallacy
- Over-reliance on Analytics reports
- Not being aware of activities/events that can significantly affect your data
- Not maintaining a database of important changes that significantly affected your data
- Not understanding the maths and stats behind web analytics and conversion optimization
#1 Lack of context
Different people analyze and interpret the same data/chart differently. It all depends upon the context in which they analyze and interpret the data.
If you have a better understanding of the context, you will interpret the data more accurately.
Your probability of accurately interpreting the analytics data is far higher if you know the context beforehand.
Only by gaining a great understanding of your business can you really know the context in which you should analyse and interpret the analytics data.
To get the highest possible return on your analytics, you need to know the context beforehand. Otherwise, you may end up wasting a lot of your time doing meaningless analysis.
Optimisers who do not understand their business well do not understand the context in which to analyse and interpret the data. Hence they suffer from what I call directional issues.
Directional issues are the inability to move in the right direction, at the right time. It is the inability to determine:
- What data needs to be collected and analysed, and when.
- What data to look at.
- What data to overlook.
- Where to look in any analytics report.
- How to translate business objectives into measurable goals.
Just because you have access to data does not automatically mean that you should go ahead and analyse it. The cornerstone of every successful analysis is ‘moving in the right direction’.
The direction in which your analysis will move will determine the direction in which your marketing campaigns and, eventually, your company will move in order to get the highest possible return on investment.
In order to find the right direction, you need to know the context in which you should analyse and interpret the analytics data.
To know the context, you need a great understanding of your business, industry, target market, competition, and business objectives.
If you do not have that great understanding or if you do not know the context before you start analysing and interpreting analytics reports, you are already moving in the wrong direction.
Moving in this direction will almost always produce sub-optimal results for your company.
#2 Not understanding the intent
You will not be able to recognize cognitive biases (like selection bias, confirmation bias, attribution bias etc.) in data interpretation and reporting if you do not understand the intent behind a piece of data, information, or visualization.
For example, why is an agency using a 28-day click attribution window and not a 7-day click attribution window?
Or why do view-through conversions dominate the marketing reports? What is the intent behind that?
#3 Attribution Bias
Attribution bias occurs when you make a snap assumption, judgement or decision based on limited information and an incomplete understanding of the context and intent.
Jumping to conclusions, quickly providing recommendations, judging a book by its cover, and fighting for a cause you know little about are all examples of attribution bias.
The way to fight attribution bias is to ask questions and acquire deep knowledge and understanding.
#4 Not understanding statistical significance
Statistical significance means statistically meaningful.
A statistically significant result is a result that is unlikely to have occurred by chance.
A statistically insignificant result, on the other hand, is a result that is likely to have occurred by chance.
The term ‘statistical significance’ is used extensively in conversion optimization, especially A/B testing.
If the result from your A/B test is not statistically significant, then any uplift you see in your test results is unlikely to translate into increased sales.
When you do not understand statistical significance, you often end up drawing conclusions from a statistically insignificant sample size.
Most marketers who fail in paid advertising are the ones who do not understand the importance of statistical significance in marketing.
Here is how most of the conversations go in the marketing world:
“We have been running this campaign for the last 3 days, resulting in 30 clicks, 3000 impressions, and $90 in ad spend so far, but no sales. It is not working. We cannot afford a $90 CPA. We need to pause this campaign.”
Here the marketer is making the following mistakes:
#1 A 3-day time frame is not long enough to measure the performance of a marketing campaign and draw conclusions. You need at least 7 full days of data.
#2 The marketer decided to pause the campaign, which is most likely still in the learning phase. A campaign in the learning phase is unlikely to produce optimum results. It is like throwing half-cooked food in the bin because it is not tasty. You need to cook the food properly first before you even taste it.
#3 The 30 clicks out of 3000 impressions mean a CTR of 1%, which is not considered bad in the marketing world. You need to know your industry’s benchmarking data before measuring the campaign performance.
#4 In order to get 1 sale out of 30 clicks, you would need an e-commerce conversion rate of 3.33%. Is that even possible for your website, niche or industry? Have you achieved a similar conversion rate through paid ads before?
#5 Is $90 CPA too high? First of all, when an ad set is in the learning phase, it tends to have a high CPA. Secondly, are you expecting your advertising to be profitable from the very start, even before your campaign has left the learning phase?
If you run your campaign long enough, your CPA is likely to go down as your advertising pixel accumulates more data to find the audience most likely to convert.
But until that happens, you are likely to see negative ROI.
30 ad clicks, 3000 ad impressions, and 3 days of timeframe are all examples of statistically insignificant data in marketing.
“Campaign A generated 10 conversions but campaign B generated only 3 conversions; therefore, campaign A is performing better.” We do not draw such conclusions from such a small sample size.
And yet most marketers remain busy tweaking campaigns based on 50 ad clicks, 2 conversions, 1000 impressions etc.
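To see why such small numbers prove nothing, you can run a quick significance check yourself. Below is a minimal sketch using statsmodels’ two-proportion z-test; the click and conversion counts are hypothetical figures chosen to mirror the campaign A vs campaign B comparison above.

```python
# Check whether two campaigns' conversion rates differ significantly
# using a two-proportion z-test.
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: campaign A got 10 conversions from 500 clicks,
# campaign B got 3 conversions from 400 clicks.
conversions = [10, 3]
clicks = [500, 400]

z_stat, p_value = proportions_ztest(count=conversions, nobs=clicks)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")  # p is well above 0.05

# A p-value above the conventional 0.05 threshold means the difference
# between the two conversion rates could easily be due to chance alone,
# so "campaign A is performing better" is not a safe conclusion.
```

Until the p-value drops below your chosen threshold, the honest answer is “we do not know yet, keep collecting data”.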
#5 Simpson’s paradox
Simpson’s paradox occurs when a trend that appears in several different groups of data disappears or reverses when the groups are combined.
It happens when you cannot see the forest for the trees, when you cannot see the big picture.
Simpson’s paradox also occurs when you lack a multi-disciplinary approach to solving a business problem.
As a result, you cannot create unexpected connections between different disciplines (like marketing and analytics), and all of your recommendations seem biased, siloed, and not very useful.
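Here is a minimal sketch of the paradox with made-up campaign numbers: campaign A wins within every device segment, yet loses overall once the segments are combined, because campaign B sends far more of its traffic to the high-converting desktop segment.

```python
# Hypothetical (conversions, clicks) per campaign and device segment.
data = {
    "A": {"mobile": (30, 1000), "desktop": (60, 400)},
    "B": {"mobile": (20, 800),  "desktop": (90, 700)},
}

for campaign, segments in data.items():
    for segment, (conv, clicks) in segments.items():
        print(f"{campaign} {segment}: {conv / clicks:.2%}")
    total_conv = sum(c for c, _ in segments.values())
    total_clicks = sum(n for _, n in segments.values())
    print(f"{campaign} combined: {total_conv / total_clicks:.2%}")

# A wins on mobile (3.00% vs 2.50%) and on desktop (15.00% vs 12.86%),
# yet B wins combined (7.33% vs 6.43%) because of the traffic mix.
```

This is why you should always check whether a ‘winning’ aggregate number still wins once you segment the data.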
#6 Causal reductionism (causal oversimplification)
Causal reductionism means reducing an outcome to a single, simple cause.
For example, X resulted in Y. Therefore, X is the only cause of Y.
Other examples:
“Our sales never increased despite posting on LinkedIn for months.”
But what if your offer wasn’t good, you never pitched on LinkedIn, or your posts never aligned with your business goals?
“Our sales increased by 20% ever since we started advertising on Facebook.”
But what if the 20% increase in sales is mainly because of the 10% increase in organic search traffic and a 2% decline in traffic from paid search?
“Campaign A generated 100 orders. Therefore Campaign A was the only cause of the 100 orders.”
But what if Campaign A got assistance in sales from Campaign C and Campaign D?
Whenever you assume that there is a single, simple cause of an outcome, you are falling prey to ‘causal reductionism’ (the fallacy of the single cause).
This fallacy makes it difficult for you to understand the true customer purchase journey and attribute conversions correctly.
Analytics reports are not ‘what you see is what you get’. They are ‘what you interpret is what you get’. There are almost always multiple causes of an outcome.
Failing to recognize causal reductionism is the root cause of many of the problems around attribution.
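To make the ‘Campaign A generated 100 orders’ example concrete, here is a small sketch comparing last-click attribution, which hands campaign A all the credit, with a simple linear model, which shares credit with the assisting campaigns. The conversion paths are invented for illustration, not taken from any real account.

```python
# Compare last-click and linear attribution over hypothetical
# conversion paths (ordered campaign touchpoints per order).
from collections import defaultdict

paths = [["C", "D", "A"]] * 60 + [["D", "A"]] * 25 + [["A"]] * 15

last_click = defaultdict(float)
linear = defaultdict(float)
for path in paths:
    last_click[path[-1]] += 1           # all credit to the final touch
    for touch in path:
        linear[touch] += 1 / len(path)  # credit split equally across touches

print("last-click:", dict(last_click))  # {'A': 100.0}
print("linear:", dict(linear))          # A: 47.5, C: 20.0, D: 32.5
```

Neither model is ‘the truth’, but the gap between them shows how much of campaign A’s apparent performance depends on the single-cause assumption.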
#7 Dunning Kruger effect
It is a cognitive bias in which people believe they are smarter and more capable than they really are.
This results in not asking questions (or not asking enough of them), making a lot more assumptions, jumping to conclusions, talking in absolutes, taking on the entire burden of proof, etc.
The Dunning Kruger effect is a major problem for a lot of people who have accumulated considerable work experience over the years and who now see themselves as subject matter experts.
They fear that asking too many questions will undermine their professional standing and make them look clueless. So they either don’t ask questions or don’t ask enough of them.
But here is the thing: you can never be that sure about someone else’s business. You have no idea what is going on inside their company.
So you should not be 100% sure that what you recommend will work.
#8 Streetlight effect (or the drunkard’s search principle)
One of the top reasons why many marketers and top-level executives misinterpret analytics data is that they rely only on dashboards or a set of reports to gain insight and make decisions.
Businesses create and rely on dashboards because it is easier to get their information that way.
But the side effect of relying on dashboards or the same set of reports is the ‘streetlight effect’.
What is the ‘Streetlight effect’?
The streetlight effect (also known as the drunkard’s search principle) is an observational bias in which you get your information from wherever it is easiest to look.
Why is the ‘streetlight effect’ called the ‘drunkard’s search principle’?
It is because of the following joke:
“A policeman sees a drunk man searching for something under a streetlight and asks what the drunk has lost. He says he lost his keys, and they look under the streetlight together. After a few minutes, the policeman asks if he is sure he lost them here, and the drunk replies no, and that he lost them in the park. The policeman asks why he is searching here, and the drunk replies, ‘this is where the light is’.” – Source: Wikipedia
Following are some examples of the streetlight effect in the context of data analysis:
>> Looking only at the top 10 results of your analytics reports.
>> Always looking only at a certain set of dashboards, metrics and KPIs.
>> Relying only on the standard reports available to you via Data Studio, etc.
The streetlight effect can make you overlook details.
So over time, you develop only a superficial knowledge of your website and campaign performance.
You will overlook any detail or anomaly that isn’t captured by your dashboard or set of standard reports.
It is even worse if you are a CEO/CMO/COO/Marketing Head who regularly gets ready-made reports from data analysts/tools.
You are drifting further and further away from your data and customers.
You are most likely missing the context and not getting the complete picture.
Suppose someone is doing the data analysis on your behalf and sending you their conclusions/recommendations/insights every week or month.
In that case, you do not understand the context in which the data was analyzed and reported.
And context is very important to understand.
Change the context, and it could totally change data interpretation.
To make matters worse, you would never know if the ‘done for you’ analysis was flawed.
And then, when you make business and marketing decisions based on flawed data analysis, it can result in financial disaster.
If you never do the data analysis yourself, if you never look into the original data source (the raw data), you will not be able to detect flawed analysis or anomalies in data collection.
You would have a hard time questioning a report.
So, spend at least 30 minutes daily analyzing the analytics data yourself, regardless of the size of your analytics team and/or your role.
Your future self will thank you for that.
If you have become used to looking at the same set of reports/dashboards, it’s time to develop a new set of reports/dashboards.
Look at different sets of metrics. Segment the data in a new way.
#9 Confirmation bias
Confirmation bias occurs when you actively seek information and patterns in data that confirm your existing theories or beliefs.
As a result, you may start seeing a pattern in the data that does not really exist, you may test a hypothesis under the assumption that it is true, or you may overemphasise the data that confirms your existing beliefs.
#10 Selection bias
Selection bias occurs when you select a sample that is not a good representation of all of the data. It often results in drawing the wrong conclusions.
Selection bias often skews A/B test results and creates imaginary lifts in sales.
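Here is a toy simulation of selection bias: if weekend visitors convert differently from weekday visitors and you estimate your conversion rate from weekday traffic only, the estimate will be wrong. All traffic shares and conversion rates below are assumptions made for illustration.

```python
# Simulate visitors, then estimate conversion rate from a biased sample.
import random

random.seed(42)

def visitor():
    # Assume 70% of visits happen on weekdays (converting at 4%)
    # and 30% on weekends (converting at 1%).
    if random.random() < 0.7:
        return "weekday", random.random() < 0.04
    return "weekend", random.random() < 0.01

visits = [visitor() for _ in range(100_000)]

overall = sum(conv for _, conv in visits) / len(visits)
weekday = [conv for day, conv in visits if day == "weekday"]
biased = sum(weekday) / len(weekday)

print(f"true conversion rate:           {overall:.2%}")  # ~3.1%
print(f"weekday-only (biased) estimate: {biased:.2%}")   # ~4.0%
```

The same mechanism skews A/B tests whenever the test sample (a single weekday, one traffic source, one geography) does not represent the traffic the winning variation will actually receive.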
#11 Observational bias
Observational bias occurs when you take things at face value. It often results in observational errors.
For example, suppose campaign A has a 5% conversion rate and campaign B has a 10% conversion rate.
You could assume that campaign B’s conversion rate is higher purely on the basis of your observation. But there is a good possibility that campaign B’s conversion rate is not statistically significant.
#12 Cumulative error
A cumulative error is an error that grows in magnitude over time as more conclusions and decisions are built on top of it.
So one wrong conclusion can lead to several wrong conclusions which could eventually make your entire analysis flawed.
Following is a real-life example of a cumulative error:
- You submitted a sales performance report to your manager. But the report contained a wrong sales figure.
- Your manager presented your report to the board of directors under the impression that the report is accurate.
- The board made a marketing decision based on the report and then everyone in your company was asked to act on that decision.
- Since the decision was made based on a faulty report, any person in your company who acts on that decision would increase the magnitude of the error you made in your report.
Over time, one small mistake could result in a company-wide catastrophic failure.
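The sketch below uses hypothetical figures to show the mechanism: a single inflated sales number silently contaminates every metric and decision derived from it.

```python
# One wrong base figure propagates into every derived metric and decision.
true_sales = 100_000.0      # what actually happened
reported_sales = 110_000.0  # the figure in the report (10% too high)
ad_spend = 20_000.0
orders = 1_000

reported_roas = reported_sales / ad_spend  # 5.5 instead of the true 5.0
reported_aov = reported_sales / orders     # $110 instead of the true $100

# A budget rule built on the inflated ROAS compounds the mistake,
# e.g. "scale spend in proportion to ROAS above a target of 4.0":
new_budget = ad_spend * (reported_roas / 4.0)  # $27,500 instead of $25,000
print(reported_roas, reported_aov, new_budget)
```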
#13 Relying on anecdotal evidence
When you rely on anecdotal evidence to make future decisions or further your research, it could result in a flawed test design, misinterpretation of data, misallocation of resources and even great monetary loss.
For example,
Let us suppose you doubled the conversion rates of two of your clients just by changing the colour of their CTA buttons to red.
Based on this anecdotal evidence, you could conclude that changing the CTA colour to red is a best practice that is most likely to result in an increase in conversion rate.
You need to remember that one or even a few (possibly isolated) examples cannot stand alone as definitive proof of a greater premise.
#14 Not understanding the Pareto principle (80/20 rule)
According to the Pareto principle, 80% of the outcomes come from 20% of the inputs. What that means is that only about 20% of the data available to you is important enough to produce 80% of the outcome you want. The rest is mostly junk.
Optimizers often focus on analyzing the 80% of the data that produces only 20% of the results. This happens because they are unable to conduct a highly focused and meaningful analysis from the very start.
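A quick, hypothetical sketch of how you might test the 80/20 rule on your own data: sort products by revenue and count how many it takes to reach 80% of the total. The revenue figures are invented.

```python
# Count how many top products account for ~80% of total revenue.
revenues = sorted([50_000, 22_000, 9_000, 5_000, 4_000, 3_000,
                   2_500, 2_000, 1_500, 1_000], reverse=True)

total = sum(revenues)
running, products = 0.0, 0
for r in revenues:
    running += r
    products += 1
    if running / total >= 0.8:
        break

print(f"{products} of {len(revenues)} products "
      f"({products / len(revenues):.0%}) drive 80% of revenue")
# With these figures: 3 of 10 products (30%) drive 80% of revenue.
```

Whatever the exact split turns out to be on your data, the point is to find the small slice worth analysing deeply before drowning in the rest.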
#15 Woozle effect
The Woozle effect is evidence by citation. It occurs when a claim lacking evidence comes to be accepted as fact over time because it is cited frequently.
For example,
A blog (‘blog A’) claimed that a very high bounce rate is bad for your website, without citing any real evidence.
Over time, 10 other blogs made the same claim by citing blog A. Eventually, the claim that a very high bounce rate is bad came to be treated as a fact in your industry.
So when you look at your own reports and see a very high bounce rate, your default conclusion could be that it is bad.
Be wary of data presented as fact. Do your own research and draw your own conclusions.
Do not rely on case studies for data interpretation or for making business and marketing decisions.
#16 Correlation Causation fallacy
Just because two things appear to be correlated does not necessarily mean that one thing irrefutably caused the other.
For example,
Your client: “Our website traffic went down by 50% last week. We also switched to enhanced ecommerce the same week.”
Now if you draw the following conclusion then you are likely falling prey to the correlation causation fallacy:
“Their website traffic went down by 50% last week because of switching to enhanced ecommerce tracking.”
You can always find some relationship between two variables/events if you really want to. However, the mere presence of a relationship between two variables doesn’t imply that one causes the other.
In other words, correlation doesn’t imply causation.
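A small illustration: two made-up series that merely trend upward over time will show a near-perfect correlation even though neither has anything to do with the other.

```python
# Two unrelated series that both trend upward correlate strongly.
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(24)

ice_cream_sales = 100 + 5 * months + rng.normal(0, 10, 24)
website_traffic = 2000 + 80 * months + rng.normal(0, 150, 24)

r = np.corrcoef(ice_cream_sales, website_traffic)[0, 1]
print(f"correlation: {r:.2f}")  # close to 1.0, yet no causal link
```

Before treating any correlation as causal, look for a plausible mechanism and, ideally, a controlled test.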
#17 Over-reliance on Analytics reports
Analytics reports are the last thing you should be looking at, not the first.
For example, I often ask my clients ‘where do the majority of your customers live?’
I can easily get an answer to this question from Google Analytics.
But I still ask this question because I am not sure whether the GA report I am looking at is giving me an accurate insight. Maybe there is a data collection or data sampling issue which is skewing the analytics data.
I want to match the understanding of my client with the insight I am getting from my analytics report. This way I can quickly detect anomalies in data.
For example, if my client is telling me that their top-selling product is ‘X’ and my ecommerce report is telling me that the top-selling product is ‘Y’ then either my client is wrong or my analytics data is wrong.
In either case, I now need to do some detective work.
One of the biggest drawbacks of trying to figure out everything on our own is that we tend to make a lot more assumptions about the problems our customers are facing.
We then create hypotheses around such assumptions and often test and fail spectacularly.
The most effective way to develop a great understanding of a business is to ask tons of questions of the people who actually run it.
You need to understand that business questions can never be answered accurately by anyone other than the people who actually run the business.
You need to understand that no amount of data analysis or A/B testing can replace the understanding your client has developed over the years from successfully running a profitable business.
What is the point of spending hours and days digging for information and insight which is already known to someone in your organisation?
Your time would be best spent finding answers to questions which no one can answer.
But, for that to happen you need to know the questions which have already been answered.
#18 Not being aware of activities/events that can significantly affect your data
You must be aware of every activity, news item, event, and change that can significantly affect your data on a daily or weekly basis.
These changes include (but are not limited to):
- Major site redesign.
- Launch of a new product or a promotional campaign.
- Discontinuation of a product, process or campaign.
- A significant change in management, company policies, marketing budgets or processes.
- Any change to your digital analytics account (like adding or removing filters, adding or deleting views, etc.).
- Any change to the way data is presently collected, integrated and analyzed.
- A considerable change in the consumers’ behaviour.
- A considerable change in the competitive landscape (like the entry of a big and powerful competitor).
- Any positive or negative news about your company and competitors.
- Changes in the economy, market conditions, etc., all of which can affect your data.
You can be aware of such changes by being in the loop.
Being ‘in the loop’ means being aware of what is going on around you, in your organization or in your client’s organization.
Being ‘in the loop’ means being present everywhere you should be, in your organization or in your client’s organization.
What that means is that you need to be present in every board meeting and every marketing team meeting, and you should be CC’d or BCC’d on every important email in which major decisions are being made about your:
- Company
- Website(s)
- Promotional campaign(s)
- Business processes
- Marketing budgets
- The way data is presently collected and integrated.
#19 Not maintaining a database of important changes that significantly affected your data
You should maintain a database of all the changes that significantly affect your data, every single day.
For example:
- Note down the day and time when you first installed the Facebook pixel code on your website.
- If the website was put into maintenance mode, note down the day, time, and duration of the downtime.
- Note down the day and time when you first started or stopped a marketing campaign.
- Document all the changes made to your website.
- Note down the changes in the marketing budget or changes in the way data is currently being collected or integrated.
- Note down all the changes that may significantly affect your data either today or in the immediate future.
#20 Not understanding the maths and stats behind web analytics and conversion optimization
Analysing data without a basic understanding of maths and statistics will most likely result in drawing the wrong conclusions and losing money.
One of the best ways to interpret data accurately is by correctly using maths and statistics.
Learning maths and statistics is an excellent way to develop your logical and critical thinking. It makes you a better marketer and, of course, a better analyst.
The knowledge of maths and statistics will help you accurately interpret the data and quickly find anomalies in it.
For example, if you see that a reported conversion rate is based on a very small data sample, you instantly know that it is unlikely to be statistically significant (i.e. statistically meaningful).
Google Analytics reports are full of averages, and if you do not know how averages work, you can easily misinterpret them.
You can then get a below-average, or even poor, insight from your GA reports.
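Here is a tiny example of why: session durations are heavily skewed, so a few very long sessions can drag the mean far above what a typical visitor experiences. The durations below are hypothetical.

```python
# Mean vs median on skewed data: the mean hides the typical visit.
from statistics import mean, median

durations = [20, 25, 30, 35, 40, 45, 60, 90, 1800, 3600]  # seconds

print(f"mean:   {mean(durations):.1f}s")    # 574.5s, inflated by outliers
print(f"median: {median(durations):.1f}s")  # 42.5s, closer to a typical visit
```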
Another example: one of the most misunderstood ratio metrics is the conversion rate.
Because of poor statistics skills, many optimisers have no idea that the conversion rate can also correlate negatively with sales and profit.
They think that the conversion rate always correlates positively with conversions (i.e. as the conversion rate increases, sales always increase and so does profit). However, this is not always true.
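Here is a hypothetical before/after scenario showing exactly that: losing low-intent traffic lifts the conversion rate while total orders (and therefore sales) fall.

```python
# Conversion rate can rise while orders fall.
before = {"visits": 100_000, "orders": 2_000}  # conversion rate 2.0%
after = {"visits": 40_000, "orders": 1_200}    # conversion rate 3.0%

for label, d in (("before", before), ("after", after)):
    rate = d["orders"] / d["visits"]
    print(f"{label}: CR = {rate:.1%}, orders = {d['orders']:,}")

# The conversion rate 'improved' from 2.0% to 3.0%,
# yet orders dropped from 2,000 to 1,200.
```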
No matter how bad your analytics data is, not using the correct maths and statistics will only make it worse.
Here are a few questions for your consideration:
Q1. When your website conversion rate jumps from 10% to 12%, is this a 2% rise in conversion rate or a 20% rise in conversion rate?
Q2. Can you double your sales by simply doubling your marketing budget?
Q3. If the average time on your website is ten minutes, does that mean website visitors actually spend ten minutes on average?
Q4. If the campaign A conversion rate is 1% and the campaign B conversion rate is 5%, does that mean campaign B is performing better than campaign A?
The corporate world is not very forgiving of mistakes made by optimizers. If we report that the jump in conversion rate from 10% to 12% is a 2% rise, our entire analysis becomes questionable.
We instantly cast a shadow over the rest of our analysis. The thought that will pop up in the mind of the recipient of our report is: “What else have they done wrong?”
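For the record, here is the arithmetic behind Q1: the move from 10% to 12% is a 2 percentage-point (absolute) increase but a 20% (relative) increase, and the two must never be conflated in a report.

```python
# Percentage-point change vs percent (relative) change.
old_cr, new_cr = 0.10, 0.12

absolute_change = (new_cr - old_cr) * 100           # in percentage points
relative_change = (new_cr - old_cr) / old_cr * 100  # in percent

print(f"{absolute_change:.0f} percentage-point rise")  # 2 percentage points
print(f"{relative_change:.0f}% relative rise")         # 20%
```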
The role of maths and statistics in the world of web analytics is not clear to many optimisers, and not many people talk or write about their usage in conversion optimisation.
That is why I have written an entire book called ‘Maths and Stats for Web Analytics and Conversion Optimization’ to fill this knowledge gap.
This expert guide will teach you how to leverage the knowledge of maths and statistics to accurately interpret analytics data and reports.