10 Techniques to Migrate from Data-Driven to Data-Smart Marketing
In the next few minutes, I will show you ten techniques that will help you migrate from being data-driven to being data-smart. I will also show you how being ‘data-smart’ gives you an edge over your competition.
Don’t get me wrong. There is nothing wrong with being data-driven. It is still better than making all business decisions purely on faith or on whatever your boss/client has to say. But being data-driven is just not good enough. You have to be ‘data-smart’.
Data-driven marketing is all the rage these days. But just like conversion rate optimization, it badly needs an upgrade.
In the case of conversion rate optimization, the upgrade was ‘conversion optimization’ (yes, CRO without the conversion rate).
The upgrade for data-driven marketing is “data-smart marketing” (or smart data marketing, whatever you prefer).
When we practise data-smart marketing, our actions and decisions are not purely data-driven. We don’t blindly follow whatever a metric (like conversion rate), chart, or report has to say. We look beyond data and make business decisions based on:
- Context (an extremely important factor, often overlooked in data-driven marketing)
- The collective know-how of the organization and industry
- Business and marketing activities outside the digital realm
- Best practices of data analysis, interpretation, and statistics
The fundamental difference between data-driven and data-smart marketers
Data-driven marketers tend not to look beyond data. They often disregard any claim which can’t be backed up with data.
They often work with the belief that the data and tools available to them somehow provide complete insight, and that if something can’t be collected and measured, it shouldn’t be taken into account when making important business decisions and calculating the business bottom line.
When data is used improperly, we make poor business decisions, and we make them with a lot of confidence.
Related Post: Data Driven or Data blind and why I prefer being Data Smart
Data-smart marketers on the other hand use ‘smart data’ to make business and marketing decisions.
Smart data is simply data that is used intelligently. There is nothing special about the data itself; what matters is how it is used.
When we use smart data, we take context and data collection issues into account, and we follow the best practices of data analysis, data interpretation, and statistics.
Data-smart marketers know what their analytics tools and KPIs cannot do as well as what they can, and where they should make trade-offs. They know when and how to make faith-based decisions.
Ok, now let us explore the 10 techniques.
Technique #1: Understand how the ‘average’ metric can be abused
There are many types of averages in statistics, but the most common are the mean, the median, and the mode.
- The mean (also known as the arithmetic mean) is the sum of the numbers divided by how many there are.
- The median is the middle number in a sorted list of numbers.
- The mode is the number that occurs more often than any other in the list.
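To see how these three averages can diverge on the same data, here is a minimal Python sketch with made-up post-reach figures (they are not from any real study): a couple of viral posts pull the mean far above the typical post.

```python
from statistics import mean, median, mode

# Hypothetical reach figures for eight Facebook posts:
# two viral posts inflate the mean well above the typical post.
post_reach = [120, 150, 150, 180, 200, 150, 9000, 12000]

print(f"Mean:   {mean(post_reach):.0f}")  # ~2744, pulled up by the two outliers
print(f"Median: {median(post_reach)}")    # 165.0, the 'typical' post
print(f"Mode:   {mode(post_reach)}")      # 150, the most frequent value
```

Whoever picks the ‘average’ to report picks the story: 2744, 165, or 150 people reached per post, all from the same numbers.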
Now let us suppose someone conducted a study of 1500 posts from 5 different Facebook fan pages for 3 months and came up with the following results:
If you use the mean as the average, Facebook posts look very valuable compared with reaching the same number of people through paid ads in other marketing channels.
If you use the mode as the average, Facebook posts look worth much less.
So if you are selling advertising, which type of average is more profitable for you to report? Obviously the mean.
Now let us suppose someone conducted a second study of 1500 posts from 5 different Facebook fan pages for 3 months and came up with the following results:
Here the median is higher than the mean, so why not use the median this time to inflate the advertising value of the posts?
This is just a small example.
You will often read studies and reports in which the researcher gives no explanation of the choice of average used.
Is he using whichever average helps him reach his desired conclusion? Maybe.
Takeaway:
It is very human to twist the data (either knowingly or unknowingly) to reach the conclusion one wants.
What is the solution?
The solution: first measure the spread of the values in the data set, and then decide whether or not you can trust the reported average.
You can measure the spread either by looking at the distribution of values in the data set or by calculating the interquartile range (IQR), variance, or standard deviation.
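As a quick illustration, here is a sketch (same hypothetical reach figures as above) that measures spread with the interquartile range and the standard deviation before deciding whether to trust the mean:

```python
import statistics

# Same hypothetical reach figures as in the sketch above.
post_reach = [120, 150, 150, 180, 200, 150, 9000, 12000]

q1, _, q3 = statistics.quantiles(post_reach, n=4)  # quartile cut points
iqr = q3 - q1
stdev = statistics.stdev(post_reach)

print(f"Q1={q1}, Q3={q3}, IQR={iqr}")
print(f"Standard deviation: {stdev:.0f}")
# A standard deviation (~4850) many times larger than the median (165)
# flags a highly skewed data set: don't trust the mean on its own.
```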
Related Post: How to Analyze and Report above AVERAGE
Technique #2: Avoid making marketing decisions based on Conversion Rate
Somehow conversion rate has become an even more important metric than ROI in recent years for some digital marketers who sell ‘conversion rate optimization’ as a service to their clients.
Every second CRO agency boasts of improving its clients’ conversion rates by at least a double-digit percentage.
Triple-digit improvements in conversion rate are not uncommon either:
“80% improvement in conversion rate”
“300% improvement in conversion rate”.
Sound familiar?
The problem with such claims is that many of these agencies remain silent about the impact of the increase in conversion rate on sales, cost, and gross profit.
You will rarely see a claim like “we improved our client’s sales by 300%”. This is because increasing the conversion rate is much easier than actually increasing sales volume and gross profit.
Example-1:
Website A Conversion Volume = 100
Website A Traffic = 10000 visits
So, Website A conversion rate = 100/10000 = 1%
Now decrease the website traffic from 10k to 5k (say, by pausing some paid campaigns that are not performing well).
Now, Website A conversion rate = 100/5000 = 2%
So now I can make the claim that I increased the conversion rate of website A by 100%.
But does this improvement in conversion rate impact the business bottom line? Does it improve sales? The answer is NO.
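Here is Example-1 as a minimal Python sketch (same hypothetical numbers): halving traffic ‘doubles’ the conversion rate without generating a single extra sale.

```python
def conversion_rate(conversions: int, visits: int) -> float:
    """Conversion rate as a percentage: conversions / visits * 100."""
    return conversions / visits * 100

conversions = 100
before = conversion_rate(conversions, 10_000)  # 1.0% at 10k visits
after = conversion_rate(conversions, 5_000)    # 2.0% after halving traffic

lift = (after - before) / before * 100
print(f"Conversion rate 'lift': {lift:.0f}%")  # 100% lift...
print("Extra conversions generated: 0")        # ...and zero extra sales
```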
Example-2:
Website A conversion rate: 1%
Website A cost per acquisition: £20
After conversion rate improvement
Website A conversion rate: 2%
Website A cost per acquisition: £30
You may argue that an increase in conversion rate should decrease the acquisition cost. That is not always the case.
In fact, there is no direct correlation between conversion rate and cost. Your acquisition cost can easily go up if you are acquiring more average/low-value customers than best customers.
Remember, conversion rate = conversion volume/traffic. Its calculation doesn’t take “cost” into account. So any increase or decrease in cost will not directly impact the conversion rate. That also means any increase or decrease in conversion rate will not directly impact the cost.
Example-3:
Website A conversion rate: 1%
Website A Sales: £200k
After conversion rate improvement
Website A conversion rate: 2%
Website A Sales: £150k
You may argue that an increase in conversion rate should increase sales. That is not always the case either.
In fact, there is only a weak positive correlation between conversion rate and sales, because conversion rate doesn’t take ‘average order value’ into account, and average order value is a key part of increasing sales.
Remember, conversion rate = conversion volumes/traffic.
Its calculation doesn’t take “average order value” into account. So any increase or decrease in average order value will not directly impact the conversion rate.
That also means any increase or decrease in conversion rate will not directly impact average order value.
Your sales can go down even after an improvement in conversion rate if conversion rate is negatively correlated with average order value, or negatively correlated with transactions.
I have explained all of these correlations in great detail in this post: Case Study: Why you should Stop Optimizing for Conversion Rate
When someone is just promoting the importance of an increase in conversion rate, we have no idea:
1. How did the improvement in conversion rate actually impact the business bottom line?
Maybe there was only a marginal improvement in sales.
Maybe there is no improvement in sales or maybe the sales actually declined.
2. How was the conversion rate metric calculated?
- Was the conversion rate increased by improving conversions that don’t really impact the business bottom line?
- Was the conversion rate increased by decreasing the traffic?
- Was the conversion rate increased by counting visitors instead of visits?
- Was the conversion rate increased through some sneaky data segmentation?
- Was the increase in conversion rate a result of a small data sample being used in testing?
3. Is the conversion rate being promoted a goal conversion rate or an e-commerce conversion rate?
It is one thing to improve a goal conversion rate by 5%; it is a totally different, and much more difficult, ball game to improve an e-commerce conversion rate by 5%.
4. Is the reported conversion rate aggregated or segmented?
If you have set up 5 goals and the conversion rate of each goal is, say, 20%, then the aggregated website conversion rate would be reported as 100%.
But does that mean your website is now converting every visitor into a customer? No.
5. When was the conversion rate metric calculated?
If it was calculated during peak season, you are bound to see a high conversion rate.
So you see, there are many factors to take into account when working with the conversion rate metric. You can’t just blindly rely on conversion rate to improve the business bottom line.
Takeaway:
It is very human to twist the data (either knowingly or unknowingly) to reach the conclusion one wants.
What is the solution?
The solution is to monitor conversion volume and especially acquisition cost during conversion optimization.
There should be a considerable increase in conversion volume and a considerable decrease in acquisition cost if conversion optimization has actually been carried out.
Don’t get blinded by a double/triple-digit increase in conversion rate. It doesn’t mean anything if there is little to no increase in conversion volume and gross profit.
Technique #3: Take effect size into consideration
Consider the performance of three campaigns A, B, and C in the last month:
One look at the table above and many marketers will declare campaign B as the winner because it has the highest e-commerce conversion rate. But this is not the case.
Analysing data without a good knowledge of research design and statistics can lead to serious misinterpretation of data.
Data is not what you see is what you get. Data is what you interpret is what you get.
Here the sample size of campaign B (4 transactions out of 20 visits) is too small for its result to be statistically significant. Had campaign B received 1 transaction out of 1 visit, its conversion rate would have been 100%. Would that make its performance even better? No. So we can filter out campaign B here.
- A statistically significant result is the result which is unlikely to have occurred by chance.
- A statistically insignificant result is likely to have occurred by chance.
Now, campaign A has a higher conversion rate than campaign C, so clearly campaign A is the winner? No.
At this point we can’t say with confidence whether the difference between the conversion rates of the two campaigns is statistically significant.
We need to conduct a statistical test, such as a two-proportion Z test, to calculate the statistical significance of the difference in conversion rates of the two campaigns.
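For reference, here is a minimal sketch of such a Z test in Python. The campaign figures are hypothetical placeholders (the original table is not reproduced here), so the result will not match the 98% quoted below:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, visits_a, conv_b, visits_b):
    """Two-tailed two-proportion z-test on the difference in conversion rates."""
    p_a, p_b = conv_a / visits_a, conv_b / visits_b
    p_pool = (conv_a + conv_b) / (visits_a + visits_b)      # pooled proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / visits_a + 1 / visits_b))
    z = (p_a - p_b) / se
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))                 # standard normal CDF
    p_value = 2 * (1 - phi)                                 # two-tailed p-value
    return z, p_value

# Hypothetical campaign figures:
z, p = two_proportion_z_test(conv_a=120, visits_a=4000, conv_b=90, visits_b=4000)
print(f"z = {z:.2f}, two-tailed p = {p:.4f}")  # significance = (1 - p) * 100%
```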
Let us suppose that after conducting a Z test, the statistical significance of the difference in conversion rates of the two campaigns turned out to be 98%.
Since statistical significance is more than 95%, many data-driven marketers will declare campaign ‘A’ as a winner and would recommend investing more in the campaign.
Here, data-smart marketers outsmart data-driven marketers because they tend to look beyond data. Since they know ‘data is not what you see is what you get’, they are less prone to making observational errors. They go one step further and calculate the effect size (or size of the effect).
In statistics, an effect size is a measure of the strength of a phenomenon. One common formulation (Cohen’s d) divides the difference between two group means by the standard deviation: effect size = (mean of group one − mean of group two) / standard deviation. In the context of marketing campaigns, think of effect size as the magnitude of the business impact: conversion volume, revenue, or gross profit.
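To make this concrete, here is a sketch with hypothetical figures in which Campaign A ‘wins’ on conversion rate while Campaign C wins on effect size (revenue):

```python
# Hypothetical figures: A wins on conversion rate, C wins on business impact.
campaigns = {
    "A": {"visits": 4_000, "conversions": 120, "avg_order_value": 25},
    "C": {"visits": 50_000, "conversions": 1_100, "avg_order_value": 40},
}

for name, c in campaigns.items():
    rate = c["conversions"] / c["visits"] * 100
    revenue = c["conversions"] * c["avg_order_value"]
    print(f"Campaign {name}: conversion rate {rate:.2f}%, revenue £{revenue:,}")

# Campaign A: conversion rate 3.00%, revenue £3,000
# Campaign C: conversion rate 2.20%, revenue £44,000
# A 'wins' on rate, but C's effect on the bottom line is ~15x larger.
```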
So even if the difference in the conversion rates of the two campaigns turns out to be statistically significant, and we are now statistically confident that Campaign A has the higher conversion rate, we should still invest more in Campaign C, because the effect size (here, revenue) of Campaign C is much larger than that of Campaign A.
That is why you do not end your A/B test just because it is now statistically significant.
Statistical significance of 95% or higher doesn’t mean anything if there is little to no impact on effect size (conversion volume).
That is why you should optimize for conversion volume and not conversion rate.
- CRO is for data-driven marketers who just follow whatever data has to say.
- CO (CRO without conversion rate) is for data-smart marketers who look beyond data and follow the best practices of statistics.
The rule of thumb is that each variation you test should get at least 30 conversions in 30 days. The higher the conversion volume (i.e. effect size) per variation the better.
If you declare success or failure on the basis of statistical significance alone, then even after conducting several A/B tests and getting statistically significant results each time, there is a high probability that you will still not see any considerable increase in your revenue.
So if you are making marketing decisions based on statistical significance alone you are not going to get optimal results.
You may even in some cases lose a significant amount of money.
Takeaway
Optimize for effect size i.e. conversion volume, acquisition cost, and gross profit.
Technique #4: Stop chasing KPIs; solve customers’ problems instead
Since data-driven marketers tend not to look beyond data, they remain busy chasing KPIs like conversion rate:
“We have to improve the conversion rate by X”, “we have to improve sales by Y”.
On the other hand, data-smart marketers look beyond data and they don’t go around chasing KPIs.
Related Article: Beginner’s guide to Key Performance Indicators (KPIs) with Examples
They focus on solving their customers’ problems, one at a time. Because of that they primarily focus on surveys and not A/B testing.
A/B testing is for data-driven marketers. Surveys are for data-smart marketers.
However, it would be much easier and less time-consuming for a data-driven marketer to directly ask customers about their problems through a simple survey in the first place, instead of uncovering those problems through round after round of A/B tests.
That is how data-smart marketers outsmart data-driven marketers. They find and deploy solutions much faster because they look for solutions beyond data. They don’t focus on improving KPIs. They focus on solving customers’ problems.
Many blogs I read on conversion optimization focus mainly on A/B testing, as if all you can do under conversion optimization is run A/B tests.
Now the problem is, while it is cool to run an A/B test, it is a crime against humanity ;) to run such tests without a solid hypothesis.
If your hypothesis is not based on qualitative data then it is not a hypothesis, it is your personal opinion.
A solid hypothesis is not based on what you think your customers want to see; it is based on what your customers have said they want to see.
Related Article: The little known details about Hypothesis in Conversion Optimization
So for example, if the majority of your customers are complaining about your shopping cart page, then you go ahead and test the page. You don’t test the page just because that is what you are supposed to do as a conversion expert.
This is the big difference between how a data-driven marketer and a data-smart marketer solve a conversion problem.
Both may solve the same problem, but the latter solves it much faster.
Takeaways
- Solve for your customers and not for KPIs.
- Run more surveys and usability tests than A/B tests.
- Run surveys 24 hours a day, 7 days a week.
- Continuously collect customer feedback and act on it in a timely manner.
Technique #5: Take data sampling issues into consideration
If you see a yellow data-sampling notification in Google Analytics (it doesn’t matter whether it is GA Standard or GA Premium), you should immediately stop assuming that you are getting accurate data from your report.
There is a high probability that reported metrics, from ‘conversion rate’ and ‘revenue’ to ‘visits’, could be anywhere from 10% to 80% off the mark.
You can’t make business and marketing decisions from a report which is based on just 2.64% of the website’s total visits. For example Google Analytics may report your last month’s revenue to be £2 million when in fact it is only £900k.
Such inaccuracies in data occur because of bad data sampling.
To learn more about fixing data sampling issues in Google Analytics, check out this post: Google Analytics Data Sampling – Complete Guide
Data sampling issues are not limited to just Google Analytics. They can be found everywhere.
Most statistics are based on data samples, and if you are not sure whether the selected sample is representative of all of the data, you could be looking at biased or inaccurate reports and analysis.
For example, say your client sells analytics software called ‘XYZ’, and he runs a survey in which he asks his customers to select the best analytics software among all the software available in the market.
The majority of his customers are likely to rate ‘XYZ’ as the best analytics software, as they are already paying for it.
The problem with this scenario is the selected sample: it is not random, and it does not represent the average user of analytics software. It is like asking your employees “who is the best boss?”.
A good data sample would be random and would contain people of different ages and from all walks of life.
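Here is a small simulation (entirely made-up population figures) of how a biased sample flips the survey result:

```python
import random

random.seed(42)

# Hypothetical market of 100,000 analytics users: only 20% prefer 'XYZ'.
population = ["XYZ"] * 20_000 + ["Other"] * 80_000
# XYZ's paying customers are, unsurprisingly, heavily biased towards XYZ.
xyz_customers = ["XYZ"] * 9_000 + ["Other"] * 1_000

def pct_preferring_xyz(sample):
    return sample.count("XYZ") / len(sample) * 100

biased_sample = random.sample(xyz_customers, 500)  # survey only your customers
random_sample = random.sample(population, 500)     # survey the whole market

print(f"Biased sample: {pct_preferring_xyz(biased_sample):.0f}% prefer XYZ")  # ~90%
print(f"Random sample: {pct_preferring_xyz(random_sample):.0f}% prefer XYZ")  # ~20%
```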
You will often see companies misleading consumers with advertising like:
“99.99% customer satisfaction rate”, “we are market leaders in ….”
None of these claims can be verified without looking at the sample size and sample quality, which companies rarely publish for scrutiny. So it is always good practice to check the sample size, and whether the sample is random and representative, before you draw any conclusions.
You won’t get any worthwhile conclusions from bad samples, no matter how sophisticated your analysis is.
Takeaways
- Always look at the sample size before drawing any conclusion.
- Select a random sample from a representative population for conducting surveys.
Technique #6: Understand the manipulation of the y-axis and data points
This is a pretty common data visualization trick I often see in action.
Check out this chart from the FanPageKarma tool, which shows the Facebook fan growth of my website:
One look at this chart and it looks like SEOTakeaways’ Facebook fan growth has skyrocketed in the last month.
But if you look closely, you can see that the y-axis doesn’t start at zero. It starts at 2500.
Actually, in the last month, SEOTakeaways’ Facebook fan base increased from 2514 to 2596. That is a 3.26% increase. But by truncating the y-axis and starting it at 2500, the chart makes it look as if the fan base has increased by several thousand percent.
Now if I draw the same chart with y-axis starting at 0, then you will see a completely different picture:
That doesn’t look very impressive, does it? Let me amplify the change by starting the y-axis at 2514, ending it at 2596, and plotting just two data points (the very first value, 2514, and the very last, 2596):
Now it looks like a truly phenomenal growth chart, doesn’t it?
Note how, by plotting just two data points, I have removed every fluctuation (peak and valley) from the data trend. The chart now suggests steady, sharp growth in the Facebook fan base.
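If you want to see the effect for yourself, here is a small matplotlib sketch (with made-up daily fan counts) that draws the same series twice, once with a truncated y-axis and once with a zero-based one:

```python
import matplotlib.pyplot as plt

# Hypothetical daily fan counts growing from 2514 to 2596 over a month.
days = list(range(30))
fans = [2514 + round(82 * d / 29) for d in days]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

ax1.plot(days, fans)
ax1.set_ylim(2500, 2600)   # truncated axis: growth looks explosive
ax1.set_title("Truncated y-axis")

ax2.plot(days, fans)
ax2.set_ylim(0, 2700)      # zero-based axis: growth is a modest ~3%
ax2.set_title("y-axis starting at 0")

for ax in (ax1, ax2):
    ax.set_xlabel("Day")
    ax.set_ylabel("Fans")

plt.tight_layout()
plt.show()
```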
The same data visualization trick can be applied to column charts to amplify changes:
Here the conversion volume from the SEO campaign has increased by only 4% in the last 3 months. But on the chart, the change looks much bigger.
Here is how this change actually looks:
What else can be done to amplify changes without being caught? Just hide the scale on the y-axis.
Without any scale on the y-axis, there is no way of knowing where the y-axis starts.
Takeaways
- Always check a chart for a truncated y-axis.
- Always check a chart for hidden scales.
- Do not trust charts with just a few data points.
- Statistics can be misleading, depending on how they are presented.
Technique #7: Look at the accuracy and credibility of the data source
It is not very hard to fabricate meaningless numbers that tell a story you want people to believe. The government does that all the time.
For example, Obamacare Will Increase Health Spending By $7,450 For A Typical Family of Four
Where does this number, $7,450, come from? How do you define a ‘typical’ family? What are the criteria? How reliable is this number?
According to this article on the BBC, How bad are US debt levels?, the US has a total debt of almost $17 trillion, which is expected to rise to almost $23 trillion in the next five years. Where does the figure of $17 trillion come from? What is the data source, and how reliable is it?
Dig deep and you will find it is largely an assumption.
There is even a US debt clock to scare people with big numbers.
Though the debt clock mentions its data sources, it doesn’t tell you exactly where these big numbers are pulled from or how reliable they are. Can you really believe all these numbers?
The majority of news stories that talk in numbers have little to no credibility because:
a) They don’t mention their data source.
b) They don’t mention their data collection methodology.
c) Their data source has little to no credibility.
d) Their data source is outdated and no longer applicable.
The media talk in numbers because numbers generate credibility; people are less suspicious of a statistical claim than they would be of a descriptive argument.
For example:
“75% of undergraduates are unemployed.”
“Majority of undergraduates are unemployed”
Now which statement seems more believable? Obviously the one with numbers.
Throw numbers here and there, and your story looks more scientific and well researched. After all, who is going to bother checking the data source or the data collection methodology?
Here is what a well-defined data collection methodology looks like:
Takeaways
- Beware of meaningless fabricated numbers. They are everywhere.
- Always look for the data source
- Always check the credibility of the data source.
- Determine how the data has been collected.
- Determine how current the data source is.
- Look at a lot of different data sources. Do not rely on just one data source.
Technique #8: Always present data with context
If I say to you that my website conversion rate is 15%, does it tell you anything meaningful about the site performance? No.
- You don’t know whether 15% is a good or a bad conversion rate.
- You don’t know whether the conversion rate has increased or decreased in comparison to last month.
- You don’t know whether this conversion rate is a goal conversion rate or ecommerce conversion rate.
- You have no idea whether the reported conversion rate is in aggregated form or segmented.
In other words you are not aware of the context. Without context, data is meaningless.
Comparison adds context to data and makes it more meaningful. So if you want to measure the performance of your marketing campaign, then you need to compare its performance with last month’s performance.
Without such a comparison, you will never know whether or not you are making progress.
Consequently, the following report is not very useful:
You can make this report more useful by comparing it with last month’s performance.
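As a quick sketch (with hypothetical numbers), this is what adding a month-over-month comparison does to an otherwise context-free report:

```python
# Hypothetical metrics for the same campaign in two consecutive months.
last_month = {"visits": 52_000, "conversions": 520, "revenue": 26_000}
this_month = {"visits": 48_000, "conversions": 576, "revenue": 31_700}

for metric in last_month:
    change = (this_month[metric] - last_month[metric]) / last_month[metric] * 100
    print(f"{metric:>11}: {last_month[metric]:>7,} -> {this_month[metric]:>7,} ({change:+.1f}%)")

# visits fell 7.7%, yet conversions rose 10.8% and revenue rose 21.9%:
# a story the standalone 'this month' numbers could never tell.
```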
Takeaways
- Beware of the data which has been presented without context. It is always open to misinterpretation.
- Comparison adds context to data and makes it more meaningful.
- A standalone metric doesn’t tell you anything meaningful.
Technique #9: Use common sense while looking at a chart
Sometimes just using common sense does the trick. For example:
According to a Fox News chart, 129% of Americans believe that scientists falsify global warming data. 129%, really? How reliable can that analysis be if the numbers don’t add up to 100%?
Here is another chart:
What is wrong with this chart? Well, you can’t compare conversion rate and conversion volume like that, because the two metrics have different units of measurement.
Takeaways
- Do not blindly believe whatever a chart has to say.
- Look at the charts closely. Look for a truncated y-axis, missing scales, the number of data points plotted, and variable types.
- Do the basic maths and question the data if something doesn’t seem right.
Technique #10: Make faith-based decisions
Data-driven marketers do not make faith-based decisions.
The problem is that, while they avoid making such decisions, their clients/employers, a.k.a. the entrepreneurs, make such decisions all the time, and you can’t really stop them.
Why? Because they know that they will fail in business if they stop making faith-based decisions.
Let me give you one example. I left my well-paid job to start a tech startup.
Knowing that 90% of all tech startups fail, I should not have even considered such a move if I had acted on data alone.
What if my business failed? What if I couldn’t pay the bills? What if I never got a job again?
But I had to overcome all of these fears and take a leap of faith. So I did what I had to. Nothing really bad happened. I have been a happy independent consultant for years now.
Had I been data-driven, that 90% failure rate would have stopped me dead at the start.
I would have never become independent and I would still be working somewhere 9 to 5 and commuting 5 hours a day.
Likewise, you often hear stories like “How to Quit Your Job, Move to Paradise and Get Paid to Change the World” about people who quit their jobs, sell everything, move to a foreign country, and live their dream lives.
How are they able to do all that? Because they take a leap of faith.
My friend Danny Dover (a well-known SEO and author of the book SEO Secrets) quit his six-figure job to complete his bucket list, and is now living a happy and fulfilling life. He travels all over the world throughout the year. For him the word “holiday” actually means coming back home.
How is he able to do all that? Because he took a leap of faith.
A leap of faith, in its most commonly used meaning, is the act of believing in or accepting something intangible or unprovable, or without empirical evidence.
These people don’t pursue their dreams on the basis of the likelihood of success or failure. They don’t go around and look for facts or research for market stats to make sure that they are making the right decision. They just go ahead and do it. They do what they believe in and what makes them happy, no matter how crazy it may sound to others.
Faith-based decisions are an important part of our lives. All major business decisions are largely faith-based, from hiring an employee and entering into a business partnership to acquiring a business.
All major life decisions are faith-based whether it is friendship, marriage, or having kids.
You can never venture into the unknown and be innovative and think outside the box if you can’t make decisions without data/facts.
Why am I telling you all this? To show you the other side of the decision-making process.
If you are not an entrepreneur then you need to start thinking like one. Understand their thought process. Understand why sometimes they reject your recommendations even when they are backed up with data. Understand why sometimes they reject your whole analysis (no matter how accurate it may seem) and prefer making faith-based business decisions and following their gut instinct.
Takeaways
- Do not automatically dismiss any claim just because it can’t be backed up with data.
- Understand that the data and tools available to you do not provide complete insight. They are there to help you, not to blind you to reality.
- Understand business exists outside the digital realm and your data collection tools.
- Know what your analytics tools and KPIs cannot do as well as what they can, and learn where and when you should make trade-offs.
- Understand that faith-based decisions are necessary for the survival of a business.
- Think like entrepreneurs and look at things from their perspectives.
Other articles on Maths and Stats in Web Analytics
- Beginner’s Guide to Maths and Stats behind Web Analytics
- How to Analyze and Report above AVERAGE
- What Matters More: Conversion Volume or Conversion Rate – Case Study
- The Little Known Details about Hypothesis in Conversion Optimization
- Is Your Conversion Rate Statistically Significant?
- Calculated Metrics in Google Analytics – Complete Guide
- Here is Why Conversion Volume Optimization is Better than CRO
- Bare Minimum Statistics for Web Analytics
- Understanding A/B Testing Statistics to Get REAL Lift in Conversions
- Data Driven or Data Blind and Why I Prefer Being Data Smart
- The Guaranteed Way to Sell Conversion Optimization to Your Client
- SEO ROI Analysis – How to Do ROI Calculations for SEO