10 Analysis Techniques that Fool Data Driven Marketers

 

In the next few minutes I will show you 10 techniques that can be used, intentionally or unintentionally, to mislead data driven marketers with the very data they adore so much.

I will also show you how being ‘data smart’ gives you an edge over your competition.

In fact, the main purpose of this article is to help you avoid being fooled by data driven marketing itself.

Don’t get me wrong. There is nothing wrong with being data driven. It is still better than making all business decisions purely on faith or whatever your boss/client has to say.

But being data driven is just not good enough. You have to be “data smart”. 

Data driven marketing is all the rage these days. But just like conversion rate optimization, it badly needs an upgrade. In the case of conversion rate optimization, the upgrade was ‘conversion optimization’ (yes, CRO without the conversion rate).

The upgrade for data driven marketing is “data smart marketing” (or smart data marketing, whatever you prefer).

 

When we say or do data smart marketing, our actions and decisions are not purely data driven. We don’t just blindly follow whatever a metric (like conversion rate) has to say or whatever a chart or report has to say.

We look beyond data and make business decisions based on:

  1. Context (an extremely important factor, often overlooked in data driven marketing)
  2. The collective know-how of the organization and industry
  3. All business and marketing activities outside the digital realm
  4. Best practices of data analysis, interpretation and statistics

[Image: dataDriven-dataSmart]

Data driven marketers tend not to look beyond data. They often disregard any claim which can’t be backed up with data.

They often work with the belief that the data and tools available to them somehow provide complete insight, and that if something can’t be collected and measured then it shouldn’t be taken into account when making important business decisions and calculating the business bottom line.

When data is used improperly, we tend to make poor business decisions, and with a lot of confidence at that.

Related Post: Data Driven or Data blind and why I prefer being Data Smart

Data smart marketers on the other hand use ‘smart data’ to make business and marketing decisions.

Smart data is simply data which is used in a smart and intelligent manner.

There is nothing special about this data in itself, except that it is used intelligently.

 

When we use smart data we take context into account, we take data collection issues into account and we follow the best practices of data analysis, data interpretation and statistics.

Data smart marketers know what their analytics tools and KPIs cannot do as well as what they can, and where they should make trade-offs. They know when and how to make faith based decisions.

[Image: einstein-quotes]

OK, now let us explore the 10 analysis techniques that fool data driven marketers.

 

Analysis Technique #1: Abusing Averages

There are many types of averages in statistics, but the most common are: mean, median and mode.

The mean (also known as the arithmetic mean) is the sum of the numbers divided by their count.

The median is the middle number in a sorted list of numbers.

The mode is the number that occurs more often than any other in a list of numbers.
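To see how the three averages can tell three very different stories about the same data, here is a quick sketch using Python’s built-in statistics module. The post values are made up for illustration: one viral outlier and a long tail of near-worthless posts.

```python
import statistics

# Hypothetical "advertising value" (in £) of 11 Facebook posts:
# one viral outlier and a long tail of low-value posts
post_values = [5, 5, 5, 10, 10, 20, 40, 80, 150, 300, 2000]

mean = statistics.mean(post_values)      # dragged up by the outlier
median = statistics.median(post_values)  # middle value of the sorted list
mode = statistics.mode(post_values)      # most frequently occurring value

print(mean, median, mode)  # ~238.64 vs 20 vs 5: three very different stories
```

All three are legitimate “averages” of the same data, which is exactly why a report that just says “average” without naming the type deserves scrutiny.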

Now let us suppose someone conducted a study of 1,500 posts from 5 different Facebook fan pages over 3 months and came up with the following results:

[Image: advvalue-posts]

If you use the mean as the average, Facebook posts look very valuable in terms of what it would cost to reach as many people with paid ads in other marketing channels. If you use the mode as the average, Facebook posts look worth far less.

So if you are selling advertising, which type of average is more profitable for you to report? Obviously ‘mean’.

Now let us suppose someone conducted a second study of 1,500 posts from 5 different Facebook fan pages over 3 months and came up with the following results:

[Image: advvalue-post2]

Now here the median is higher than the mean, so why not use the median this time to inflate the advertising value of posts?

This is just a small example. You will often read studies and reports in which the researcher gives no explanation for the choice of average being used. Is he using the average that helps him reach his desired conclusion? Maybe.

Takeaway

It is very human to twist the data (either knowingly or unknowingly) to reach the conclusion one wants.

What is the solution?

The solution is to first measure the spread of the values in the data set, and then decide whether or not you can trust the reported average.

You can measure the spread either by looking at the distribution of values in a data set or by calculating spread through IQR, variance or standard deviation.
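Here is a minimal sketch of that check, again with made-up values, using the statistics module (`statistics.quantiles` requires Python 3.8+):

```python
import statistics

data = [5, 5, 5, 10, 10, 20, 40, 80, 150, 300, 2000]

stdev = statistics.stdev(data)                # sample standard deviation
q1, q2, q3 = statistics.quantiles(data, n=4)  # quartiles (Python 3.8+)
iqr = q3 - q1                                 # interquartile range

# A standard deviation far larger than the mean is a red flag:
# the distribution is heavily skewed, so don't trust the mean alone.
print(round(stdev), iqr)
```

When the spread dwarfs the average, report the median and the IQR alongside (or instead of) the mean.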

Related Post: How to Analyze and Report above AVERAGE

 

Analysis Technique #2: Making marketing decisions based on Conversion Rate

Somehow, in recent years, conversion rate has become an even more important metric than ROI for some digital marketers who sell ‘conversion rate optimization’ as a service to their clients.

Every second CRO agency boasts of improving their clients’ conversion rates by no less than two digits. Three-digit improvements are not uncommon either:

“80% improvement in conversion rate”
“300% improvement in conversion rate”
Sound familiar?

Now the problem with this type of claim is that many of these agencies remain silent about the impact of the increase in conversion rate on sales, costs and gross profit.

You will rarely see a claim like this: “we improved our client’s sales by 300%”.

This is because

increasing the conversion rate is much easier than actually increasing the sales volume and gross profit.

 

Example-1:
Website A Conversion Volume = 100
Website A Traffic = 10000 visits
So, Website A conversion rate = 100/10000 = 1%
Now decrease the website traffic from 10k to 5k (pause some of the paid campaigns, they are not performing well)
Now, Website A conversion rate = 100/5000 = 2%

So now I can claim that I increased the conversion rate of website A by 100%. But does this improvement in conversion rate impact the business bottom line? Does it improve sales? The answer is NO.
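Example 1 can be reproduced in a few lines:

```python
def conversion_rate(conversions, visits):
    """Conversion rate as a percentage of visits."""
    return conversions / visits * 100

before = conversion_rate(100, 10_000)  # 1.0% at full traffic
# Pause half the paid traffic; conversions stay exactly the same
after = conversion_rate(100, 5_000)    # 2.0%

improvement = (after - before) / before * 100
print(f"{improvement:.0f}% improvement in conversion rate")  # 100% -- and zero extra sales
```

A headline-grabbing “100% improvement” produced without a single extra conversion.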

 

Example-2:
Website A conversion rate: 1%
Website A cost per acquisition: £20
After conversion rate improvement
Website A conversion rate: 2%
Website A cost per acquisition: £30

You may argue that an increase in conversion rate should decrease the acquisition cost. Well, this is not always the case. In fact there is no direct correlation between conversion rate and cost. Your acquisition cost can easily go up if you are acquiring more average/low-value customers than your best customers.

Remember conversion rate = conversion volume/traffic.

Its calculation doesn’t take “cost” into account. So any increase or decrease in cost will not directly impact the conversion rate. That also means any increase or decrease in conversion rate will not directly impact cost.

 

Example-3:
Website A conversion rate: 1%
Website A Sales: £200k
After conversion rate improvement
Website A conversion rate: 2%
Website A Sales: £150k

You may argue that an increase in conversion rate should increase sales. Well, this is not always the case. In fact there is only a weak positive correlation between conversion rate and sales, because conversion rate doesn’t take ‘average order value’ into account, an important part of increasing sales.

Remember conversion rate = conversion volumes/traffic.

Its calculation doesn’t take “average order value” into account. So any increase or decrease in average order value will not directly impact the conversion rate. That also means any increase or decrease in conversion rate will not directly impact average order value.

Your sales can go down even after an improvement in conversion rate if there is a negative correlation between conversion rate and average order value, or between conversion rate and transactions.
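Example 3 can be reproduced with one extra assumption: sales equal traffic times conversion rate times average order value. The visit count and order values below are purely illustrative:

```python
def revenue(visits, conv_rate, avg_order_value):
    # Sales = traffic x conversion rate x average order value
    return visits * conv_rate * avg_order_value

# Before: 1% conversion rate, £200 average order value (hypothetical)
before = revenue(100_000, 0.01, 200)  # £200,000

# After "optimization": conversion rate doubles, but the extra
# conversions are low-value buyers, pulling the average order to £75
after = revenue(100_000, 0.02, 75)    # £150,000
```

The conversion rate doubled, yet revenue fell by a quarter, exactly the £200k to £150k scenario above.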

I have explained all of these correlations in great detail in this post: Case Study: Why you should Stop Optimizing for Conversion Rate

 

When someone is just promoting an increase in conversion rate, we have no idea:

1. How the improvement in conversion rate actually impacted the business bottom line?
Maybe there was only a marginal improvement in sales. Maybe there was no improvement, or maybe sales actually declined.

 

2. How the conversion rate metric was calculated?

  1. Was the conversion rate increased by improving conversions which don’t really impact the business bottom line?
  2. Was the conversion rate increased by decreasing the traffic?
  3. Was the conversion rate increased by counting visitors instead of visits?
  4. Was the conversion rate increased through some sneaky data segmentation?
  5. Was the increase in conversion rate the result of a small data sample being used in testing?

3. Whether the conversion rate being promoted is a goal conversion rate or an e-commerce conversion rate?

It is one thing to improve a goal conversion rate by 5%, but a totally different, and much more difficult, ball game to improve an e-commerce conversion rate by 5%.

4. Whether the reported conversion rate is in aggregate form or segmented?
If you have set up 5 goals and the conversion rate of each goal is, say, 20%, then you would have a 100% website conversion rate. But does that mean your website is now converting every visitor into a customer? No.

5. When the conversion rate metric was calculated?
If it was during peak season, then you are bound to have a high conversion rate.

So you see, there are many factors you need to take into account when working with the conversion rate metric. You can’t just blindly rely on conversion rate to improve the business bottom line.

Takeaway 

It is very human to twist the data (either knowingly or unknowingly) to reach the conclusion one wants.

What is the solution?

[Image: monitor-conversionVolume]

The solution is to monitor conversion volume, and especially acquisition cost, during conversion optimization.

If conversion optimization has actually been carried out, there should be a considerable increase in conversion volume and a considerable decrease in acquisition cost.

Don’t get blinded by double/triple digit increase in conversion rate. It doesn’t mean anything if there is little to no increase in conversion volume and gross profit.

 

Analysis Technique #3: Overlooking Effect Size

Consider the performance of three campaigns A, B and C in the last 1 month:

[Image: campaign-performance]

One look at the table above and many marketers will declare Campaign B the winner because it has the highest e-commerce conversion rate. But that is not the case.

Analysing data without good knowledge of research design and statistics can lead to serious misinterpretation of data.

Data is not what you see is what you get. Data is what you interpret is what you get.

Here the sample size of campaign B (4 transactions out of 20 visits) is far too small to produce a statistically significant result.

Had campaign B got 1 transaction out of 1 visit, its conversion rate would be 100%. Would that make its performance even better? No.

So we can filter out campaign B performance here.

A statistically significant result is one which is unlikely to have occurred by chance.

A statistically insignificant result is one which is likely to have occurred by chance.

Now campaign A has a higher conversion rate than campaign C, so clearly Campaign ‘A’ is the winner? No.

At this point we can’t say with confidence whether the difference between the conversion rates of the two campaigns is statistically significant.

 

We need to conduct a statistical test, such as a Z-test, to calculate the statistical significance of the difference in conversion rates between the two campaigns:

[Image: campaign-performance2]

Let us suppose that after conducting the Z-test, the statistical significance of the difference in conversion rates of the two campaigns turned out to be 98%. Since that is more than 95%, many data driven marketers will declare Campaign ‘A’ the winner and recommend investing more in that campaign.
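For the curious, a two-proportion Z-test can be sketched with nothing but the Python standard library. The campaign figures below are hypothetical, not the ones from the table above:

```python
import math

def z_test_two_proportions(conv_a, visits_a, conv_b, visits_b):
    """Two-proportion Z-test on the difference between conversion rates."""
    p_a, p_b = conv_a / visits_a, conv_b / visits_b
    p_pool = (conv_a + conv_b) / (visits_a + visits_b)  # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visits_a + 1 / visits_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-tailed, normal approximation
    return z, p_value

# Hypothetical figures: Campaign A (60/1000) vs Campaign C (180/4000)
z, p_value = z_test_two_proportions(60, 1_000, 180, 4_000)
confidence = (1 - p_value) * 100  # roughly 95% for these numbers
```

The test tells you whether the difference is likely real, but nothing about whether it is worth acting on, which is exactly where effect size comes in.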

Here data smart marketers outsmart data driven marketers as they tend to look beyond data. Since they know ‘data is not what you see is what you get’, they are less prone to making observational errors.

They will go one step further and calculate the effect size (or size of the effect).

In statistics, an effect size is a measure of the strength of a phenomenon and is calculated as:

[Image: effect-size]

So even when the difference in the conversion rates of the two campaigns turns out to be statistically significant, and we are statistically confident that Campaign A has the higher conversion rate, we should still invest more in Campaign C, because the effect size (here, revenue) of Campaign C is much larger than that of Campaign A.
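A sketch of that comparison, with hypothetical campaign figures of my own:

```python
# Hypothetical monthly figures for the two campaigns
campaigns = {
    "A": {"visits": 1_000, "transactions": 60, "revenue": 6_000},
    "C": {"visits": 40_000, "transactions": 1_800, "revenue": 180_000},
}

for name, c in campaigns.items():
    rate = c["transactions"] / c["visits"] * 100
    print(f"Campaign {name}: {rate:.1f}% conversion rate, £{c['revenue']:,} revenue")

# A wins on conversion rate (6.0% vs 4.5%), but C's effect size
# (transactions and revenue) is 30x larger -- C deserves the budget.
```

Conversion rate picks A; effect size picks C. Only one of those choices grows the business.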

 

[Image: statistical-significance]

That is why you do not end your A/B test just because it is now statistically significant.

  • Statistical significance of 95% or higher doesn’t mean anything if there is little to no impact on effect size (conversion volume).

  • That is why you should optimize for conversion volume and not conversion rate.

  • CRO is for data driven marketers who just follow whatever data has to say.

  • CO (CRO without conversion rate) is for Data smart marketers who look beyond data and follow the best practices of statistics.

A rule of thumb is that each variation you test should get at least 30 conversions in 30 days. The higher the conversion volume (i.e. effect size) per variation, the better.

If you declare success or failure on the basis of statistical significance alone, then even after conducting several A/B tests and getting statistically significant results each time, there is a high probability that you will still not see any considerable increase in your revenue.

So if you are making marketing decisions based on statistical significance alone, you are not going to get optimal results. In some cases you may even lose a significant amount of money.

Takeaway

[Image: statistical-significance2]

So optimize for effect size i.e. conversion volume, acquisition cost and gross profit.

 

Analysis Technique #4: Chasing KPIs instead of solving customers’ problems

Since data driven marketers tend not to look beyond data, they remain busy chasing KPIs like conversion rate: “We have to improve conversion rate by X”, “we have to improve sales by Y”.

On the other hand data smart marketers look beyond data and they don’t go around chasing KPIs.

They focus on solving their customers’ problems, one at a time. Because of that, they rely primarily on surveys rather than A/B testing.

A/B testing is for data driven marketers.

Surveys are for data smart marketers.

[Image: chasing-kpis1]

[Image: chasing-kpis2]

However, it would be much easier and far less time-consuming to simply ask the customers about their problems through a survey in the first place.

[Image: feedback]

That is how data smart marketers outsmart data driven marketers.  They find and deploy solutions much faster because they look for solutions beyond data. They don’t focus on improving KPIs. They focus on solving customers’ problems.

Many blogs I read on conversion optimization focus mainly on A/B testing, as if A/B testing is all you can do under conversion optimization.

Now the problem is, while it is cool to conduct an A/B test, it is a crime against humanity ;) to conduct such tests without a solid hypothesis.

If your hypothesis is not based on qualitative data then it is not a hypothesis, it is your personal opinion.

A solid hypothesis is not based on what you think your customers want to see, but on what your customers have said they want to see.

So, for example, if the majority of your customers are complaining about your shopping cart page, then you go ahead and test that page. You don’t test the page just because that is what you are supposed to do as a conversion expert.

This is the big difference between how a data driven marketer and a data smart marketer solve a conversion problem. They both solve the same problem, but the latter solves it much faster.

Takeaway

  • Solve for your customers and not for KPIs.

  • Run more surveys and usability tests than A/B tests.

  • Run surveys 24 hours a day, 7 days a week.

  • Continuously collect customer feedback and act on it in a timely manner.

 

Analysis Technique #5: Overlooking data sampling issues

[Image: small-sample]

If you see a yellow notification like this in Google Analytics (it doesn’t matter whether it is GA Standard or GA Premium), you should immediately stop assuming that you are going to get accurate data from your report.

There is a high probability that the reported metrics, from ‘conversion rate’ and ‘revenue’ to ‘visits’, could be anywhere from 10% to 80% off the mark.

You can’t make business and marketing decisions from a report which is based on just 2.64% of the website’s total visits.

So, for example, Google Analytics may report your last month’s revenue to be £2 million when in fact it is only £900k.

Such inaccuracies in data occur because of bad data sampling.

[Image: bad-data-sampling]

To learn more about fixing data sampling issues in Google Analytics, check out this post: Google Analytics Data Sampling – Complete Guide

Data sampling issues are not limited to just Google Analytics. They can be found everywhere.

Most statistics are based on data samples, and if you are not sure whether the selected sample is a good representative of all the data, then you could be looking at biased or inaccurate reports and analyses.

For example, say your client sells analytics software called ‘XYZ’ and he runs a survey in which he asks his own clients to pick the best analytics software among all the software available on the market.

The majority of his clients are likely to rate ‘XYZ’ as the best analytics software, as they are already paying for it. The problem with this scenario is the selected sample: it is not random, and it does not represent the average user of analytics software.

It is like asking your employees who the best boss is.

 

A good data sample would be random and would contain people of different ages and from all walks of life.

You will often see companies misleading consumers with advertising claims like “99.99% customer satisfaction rate” or “we are market leaders in ….”

All of these claims could easily be validated by looking at the sample size and sample quality, which companies often do not publish for scrutiny.

So it is always good practice to select a representative, random sample and to look at the sample size before you draw any conclusions.

You won’t get any worthwhile conclusions from a bad sample, no matter how sophisticated your analysis is.
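The difference between a biased and a random sample is easy to simulate. The population and the “survey only paying customers” rule below are made up purely for illustration:

```python
import random

random.seed(42)  # reproducible sketch

# Hypothetical population: satisfaction scores (1-10) of everyone
# who has ever tried the software, most of whom are not customers
population = [random.randint(1, 10) for _ in range(10_000)]

# Biased sample: survey only paying customers (the already-happy users)
paying_customers = [s for s in population if s >= 8]

# Good sample: drawn at random from the whole population
random_sample = random.sample(population, 500)

biased_mean = sum(paying_customers) / len(paying_customers)
random_mean = sum(random_sample) / len(random_sample)
# The biased sample reports far higher satisfaction than the random one
```

Same population, two wildly different “customer satisfaction” headlines, depending entirely on who got asked.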

Takeaways

  1. Always look at the sample size before drawing any conclusion.

  2. Select a random sample from a representative population when conducting surveys.

 

Analysis Technique #6: Amplifying changes by manipulating the Y-axis and data points

This is a pretty common data visualization trick I often see in action.

Check out this chart from the FanPageKarma tool, which measures the Facebook fan growth of my website:

[Image: fan-base-growth]

One look at this chart and it seems that SEOTakeaways’ Facebook fan growth has skyrocketed in the last month. But if you look closely, you can see that the Y-axis doesn’t start at zero. It starts at 2500.

Actually, in the last month SEOTakeaways’ Facebook fan base increased from 2514 to 2596. That is a 3.26% increase. But by truncating the Y-axis and starting it at 2500, it looks as if the fan base has exploded.

Now if I draw the same chart with the Y-axis starting at 0, you will see a completely different picture:

[Image: fan-base-growth2]

That doesn’t look very impressive, does it? Let me amplify the change by starting the Y-axis at 2514, ending it at 2596, and at the same time plotting just two data points (the very first: 2514, and the very last: 2596):

[Image: fan-base-growth3]

Now it looks like a truly phenomenal growth chart, doesn’t it?

Note how, by plotting just two data points, I have removed every fluctuation (peak, valley) in the data trend. From this chart it now looks like there has been steady, sharp growth in the Facebook fan base.
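A quick calculation confirms how modest the underlying change really is:

```python
start_fans, end_fans = 2514, 2596  # figures from the chart above

growth = (end_fans - start_fans) / start_fans * 100
print(f"{growth:.2f}% growth")  # 3.26% -- modest, whatever the chart implies
```

Whenever a chart looks dramatic, recompute the percentage change from the raw numbers before believing your eyes.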

The same data visualization trick can be applied to column charts to amplify changes:

[Image: seo-conversion-volume]

Here the conversion volume from the SEO campaign has increased by only 4% in the last 3 months. But when you look at the chart, the change in conversion volume looks much bigger.

Here is how this change actually looks:

[Image: seo-conversion-volume2]

What else can be done to amplify changes without being caught? Just hide the scale on the Y-axis.

[Image: seo-conversion-volume3]

Without a scale on the Y-axis, there is no way of knowing where the axis starts.

Takeaways

  1. Always check your chart for a truncated Y-axis.

  2. Always check your chart for hidden scales.

  3. Do not trust charts with just a few data points.

  4. Statistics can be misleading depending on how they are presented.

 

Analysis Technique #7: Overlooking the accuracy and credibility of the data source

It is not very hard to fabricate meaningless numbers that tell a story you want people to believe.

Governments do that all the time. For example: “Obamacare Will Increase Health Spending By $7,450 For A Typical Family of Four”.

Where does this number, $7,450, come from? How do you define a typical family? What are the criteria? How reliable is this number?

According to this article on the BBC, “How bad are US debt levels?”, the US has a total debt of almost $17 trillion, which is expected to rise to almost $23 trillion in the next five years. Where does this figure of $17 trillion come from? What is the data source, and how reliable is it?

Dig deep and you will find it is largely an assumption.

There is even a US debt clock to scare people with big numbers.

[Image: clock]

 

Though this debt clock mentions a data source, it doesn’t exactly tell you where these big numbers are being pulled from or how reliable they are. Can you really believe all these numbers?

The majority of news stories that talk in numbers have little to no credibility because:

a) They don’t mention their data source

b) They don’t mention their data collection methodology

c) Their data source has little to no credibility.

d) Their data source is outdated and no longer applicable.

 

The media talk in numbers because numbers generate credibility, and people are less suspicious of a statistical claim than they would be of a descriptive argument. For example:

“75% of undergraduates are unemployed.”

“Majority of undergraduates are unemployed.”

Now which statement seems more believable? Obviously the one with numbers.

Throw numbers around and your story looks more scientific and well researched. After all, who is going to bother checking the data source or the data collection methodology?

 

Here is what a well-defined data collection methodology looks like:

[Image: survey-methods]

Source: http://www.gallup.com/poll/150353/self-reported-gun-ownership-highest-1993.aspx

 

Takeaways

  1. Beware of meaningless fabricated numbers. They are everywhere.

  2. Always look for the data source

  3. Always check the credibility of the data source.

  4. Determine how the data has been collected.

  5. Determine how current the data source is.

  6. Look at a lot of different data sources. Do not rely on just one.

 

Analysis Technique #8: Presenting data without context

If I tell you that my website conversion rate is 15%, does that tell you anything meaningful about the site’s performance? No.

You don’t know whether 15% is a good or bad conversion rate. You don’t know whether the conversion rate has increased or decreased in comparison to last month. You don’t know whether it is a goal conversion rate or an e-commerce conversion rate.

You have no idea whether the reported conversion rate is in aggregate form or segmented.

In other words you are not aware of the context.

Without context, data is meaningless.

Comparison adds context to data and makes it more meaningful.

So if you want to measure the performance of your marketing campaign, then you need to compare it with last month’s performance. Without such a comparison, you will never know whether or not you are making progress.

Consequently, the following report is not very useful:

[Image: comparison]

You can make this report more useful by comparing it with last month’s performance:

[Image: comparison2]
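The same idea in code, with hypothetical KPI figures of my own: adding last month’s column turns standalone numbers into a meaningful comparison.

```python
# Hypothetical monthly KPI figures for one website
this_month = {"visits": 52_000, "transactions": 780, "revenue": 39_000}
last_month = {"visits": 50_000, "transactions": 800, "revenue": 36_000}

for kpi, value in this_month.items():
    change = (value - last_month[kpi]) / last_month[kpi] * 100
    print(f"{kpi}: {value:,} ({change:+.1f}% vs last month)")
```

Notice how the comparison instantly surfaces a story the standalone numbers hide: visits and revenue are up, yet transactions are down.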

Takeaways

  1. Beware of data presented without context. It is always open to misinterpretation.

  2. Comparison adds context to data and makes it more meaningful.

  3. A standalone metric doesn’t tell you anything meaningful.

 

Analysis Technique #9: Blindly following the charts and not using Common sense

Sometimes just using common sense does the trick. For example:

[Image: common-sense]

Source: http://mediamatters.org/blog/2009/12/08/fox-news-fiddles-with-climate-change-polling/157839

According to this Fox News chart, 129% of Americans believe that scientists falsify global warming data. 129%, really? How reliable can this analysis be if the numbers don’t add up to 100%?
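A chart like this fails the most basic sanity check. The response shares below are hypothetical, chosen only to mirror the chart’s problem:

```python
# Hypothetical response shares for a poll whose answers are meant
# to be mutually exclusive -- so they should sum to at most 100%
poll = {"very likely": 59, "somewhat likely": 35, "not very likely": 35}

total = sum(poll.values())
if total > 100:
    print(f"Responses add up to {total}% -- question the data")  # 129%
```

Two seconds of arithmetic is often all it takes to catch a chart that should never have been published.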

Here is another chart:

[Image: cv-cr]

What is wrong with this chart? Well, you can’t compare conversion rate and conversion volume like that, because they have different units of measurement.

Takeaways

  1. Do not blindly believe whatever a chart has to say.

  2. Look at charts closely. Look for a truncated Y-axis, missing scales, the number of data points plotted and the variable types.

  3. Do the basic maths, and question the data if something doesn’t seem right.

 

Analysis Technique #10: Not making faith based decisions

Data driven marketers do not make faith based decisions. Now the problem is, while they avoid making such decisions, their clients/employers (a.k.a. the entrepreneurs) make such decisions all the time, and you can’t really stop them.

Why? Because they know that they will fail in business if they stop making faith based decisions.

Let me give you one example.

I left my well-paid job to start a tech startup. Knowing that 90% of all Tech Startups Fail, I should not even have considered such a move. What if my business failed? What if I couldn’t pay the bills? What if I never got a job again?

But I had to overcome all of these fears and take the leap of faith. So I did what I had to.

Nothing really bad happened. I have been a happy independent consultant for years now.

Had I been purely data driven, that 90% failure rate would have stopped me dead before I took any action. I would never have become independent, and I would still be working somewhere 9 to 5 and commuting 5 hours a day.

 

Likewise, you often hear stories like “How to Quit Your Job, Move to Paradise and Get Paid to Change the World”, about people who quit their jobs, sell everything, move to a foreign country and live their dream lives.

How are they able to do all that? Because they take a leap of faith.

My friend Danny Dover (a well-known SEO and author of the book SEO Secrets) quit his six-figure-salary job to complete his bucket list and is now living a happy and fulfilling life.

He travels all over the world throughout the year. For him, the word “holiday” actually means coming back home.

How is he able to do all that? Because he took a leap of faith.

A leap of faith, in its most commonly used meaning, is the act of believing in or accepting something intangible or unprovable, or without empirical evidence.

Source: http://en.wikipedia.org/wiki/Leap_of_faith

These people don’t pursue their dreams on the basis of the likelihood of success or failure. They don’t go around looking for facts or researching market stats to make sure they are making the right decision. They just go ahead and do it. They do what they believe in and what makes them happy, no matter how crazy it may sound to others.

Faith based decisions are an important part of our lives. All major business decisions are largely faith based, from hiring an employee and entering into a business partnership to acquiring a business. All major life decisions are faith based too, whether it is friendship, marriage or having kids.

You can never venture into the unknown, be innovative and think outside the box if you can’t make decisions without data/facts.

 

Why am I telling you all this? Because I want to show you the other side of the decision-making process. If you are not an entrepreneur, you need to start thinking like one. Understand their thought process.

Understand why they sometimes reject your recommendations even when those are backed up with data.

Understand why they sometimes reject your whole analysis (no matter how accurate it may seem) and prefer making faith based business decisions and following their gut instinct.

Takeaways

  1. Do not automatically dismiss any claim just because it can’t be backed up with data.

  2. Understand that the data and tools available to you do not provide complete insight. They are there to help you, not to divorce you from reality.

  3. Understand that businesses exist outside the digital realm and beyond your data collection tools.

  4. Know what your analytics tools and KPIs cannot do as well as what they can, and learn where and when you should make trade-offs.

  5. Understand that sometimes faith based decisions are necessary for the survival of a business.

  6. Think like entrepreneurs and look at things from their perspective.

 

Another article you will find informative: Facebook Analytics – Super Duper Guide

 

Subscribe to my blog
Join my free newsletter and learn to avoid the analytics mistakes everyone seems to be making over and over again.

 

About the Author:

My business thrives on referrals, so I really appreciate recommendations to people who would benefit from my help. Please feel free to endorse/forward my LinkedIn profile to your clients, colleagues, friends and others you feel would benefit from SEO, PPC or Web Analytics.