Gone are the days when data analysis was left to the mathematicians.
Thanks to a meteoric rise in data-driven marketing, we’ve all become, in the words of Tom Fishburne, ‘DIY amateur data scientists’.
As marketers we’re now spending huge portions of our days measuring open rates, click-through percentages, page growth, engagement, and on and on it goes.
Your department heads LOVE it: impressive figures and hard-to-argue-with ‘science’ can be on a PowerPoint in the bigwigs’ boardroom in no time at all.
And with such a wealth of data at our fingertips, why wouldn’t we use it to guide our decision making?
But unlike pasta, in this case there is such a thing as ‘too much of a good thing’.
Because all this data means it’s never been easier to cherry pick the numbers that support whatever it is we want to prove or whomever it is we want to impress.
While we think of numbers as being free from the interpretation that comes with our more creative pursuits, the truth is far more subjective. We all know 78% of statistics are totally made up, right?
When bias creeps into the way we work with data, the data we select and how we interpret it can skew the results and lead to some less-than-stellar business calls. Remember when voice recognition algorithms struggled to recognise female voices? Yikes.
Just like the infamous voice recognition mishap, selection bias occurs when the sample you select doesn’t represent the entire target group. Want to know what percentage of the Western Australian population does their weekly grocery shop online, but only include people who live in Perth in your sample? You’ve just ignored a whole lot of people. Why?
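To see how much a Perth-only sample can distort a statewide figure, here’s a toy sketch in Python. Every number in it — the 60% and 25% online-shopping rates and the group sizes — is invented purely for illustration, not a real statistic.

```python
import random

random.seed(42)

# Hypothetical population: assume Perth residents shop online far more
# often than regional Western Australians (all rates invented).
population = (
    [{"region": "Perth", "shops_online": random.random() < 0.60}
     for _ in range(2000)]
    + [{"region": "Regional WA", "shops_online": random.random() < 0.25}
       for _ in range(1000)]
)

def online_rate(people):
    """Share of a group that does their weekly shop online."""
    return sum(p["shops_online"] for p in people) / len(people)

perth_only = [p for p in population if p["region"] == "Perth"]

print(f"Perth-only sample: {online_rate(perth_only):.0%}")
print(f"Whole of WA:       {online_rate(population):.0%}")
```

The Perth-only number lands well above the true statewide rate — survey only the city and you’ll happily report a figure the rest of the state never agreed to.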
Confirmation bias is what happens when you give more weight to data that supports what you want to find than to data that doesn’t. It’s the same reason we tend to stop reading a newspaper article as soon as we’ve realised it doesn’t align with our view of the world. We’re human, and our natural instinct is to seek out content that affirms our views, not disputes them. There’s almost always another side and another way to look at the data.
You’ve written a super creative, super fun subject line you just know is going to kill it. However, just to be sure, your boss has asked you to test it against something that simply ‘does what it says on the tin’. Two hours pass and your line is the winner! But is it? After calling it early and gloating to your teammates, the results in the coming days start to look a little (or a lot) different…
Time bias is when we stop analysing data prematurely, i.e., the moment the results have skewed in our favour. Just let time do its thing and hold off before you put all your eggs in the preliminary basket.
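A quick, purely illustrative simulation shows why calling a subject-line test early is risky. The 45% and 55% open rates and the sample sizes below are made-up numbers, not real benchmarks — the point is only how often a small early sample crowns the wrong winner.

```python
import random

random.seed(0)

# Hypothetical A/B test: variant B truly wins (55% vs 45% open rate).
# Compare declaring a winner after 30 sends per variant vs after 3,000.
RATE_A, RATE_B = 0.45, 0.55

def observed_winner(n):
    """Simulate n sends of each subject line; return the apparent winner."""
    opens_a = sum(random.random() < RATE_A for _ in range(n))
    opens_b = sum(random.random() < RATE_B for _ in range(n))
    return "A" if opens_a > opens_b else "B"

runs = 1000
early_wrong = sum(observed_winner(30) == "A" for _ in range(runs))
late_wrong = sum(observed_winner(3000) == "A" for _ in range(runs))

print(f"Wrong winner declared at n=30:   {early_wrong} of {runs} tests")
print(f"Wrong winner declared at n=3000: {late_wrong} of {runs} tests")
```

At 30 sends per variant, the genuinely worse line ‘wins’ a worrying share of the time; at 3,000 it almost never does. Same test, same subject lines — the only thing that changed was patience.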
Outliers can skew data in a really big way. For example, if you’re looking at a pricing restructure, you’ll probably want to know the average income of your customers. Gina Rinehart a regular of yours? Beware of how much this could warp your data, and interpret the results accordingly before you price out your entire customer base.
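Here’s a toy illustration of how one outlier drags the mean. All the incomes are invented, and the two-billion-dollar entry simply stands in for a Gina Rinehart-sized customer.

```python
# Hypothetical customer incomes (every figure invented for illustration).
incomes = [52_000, 61_000, 48_000, 75_000, 58_000, 67_000, 54_000]
mean_typical = sum(incomes) / len(incomes)

# Add one billionaire-sized outlier and watch the mean leave orbit.
with_outlier = incomes + [2_000_000_000]
mean_skewed = sum(with_outlier) / len(with_outlier)

# The median barely moves, which is why it's often the safer summary
# when a handful of extreme values are in the mix.
median = sorted(with_outlier)[len(with_outlier) // 2]

print(f"Mean, regular customers:  ${mean_typical:,.0f}")
print(f"Mean, with one outlier:   ${mean_skewed:,.0f}")
print(f"Median, with one outlier: ${median:,.0f}")
```

One customer turns an ‘average income’ of roughly $59,000 into one in the hundreds of millions, while the median stays put — price off the mean here and you’d price out everyone else.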
Perhaps the biggest error and the hardest to fix, modelling bias is when there’s a bias at play from the very beginning of your data analysis. Subconsciously or not, you could wind up asking the wrong questions to the wrong people at the wrong time and make a whole big mess.
So ask yourself, are you using the numbers to guide the decisions you make as a marketer, or to back up something you’ve already decided?
So there you have it! Keep an eye out for these data tricks and the next time you see a stat pulled up in a boardroom remember to ask: “Where did this data come from? Who was in the sample? How long did it take to get there? And most importantly, were these the results you were expecting?”