I just delivered a research project for a broadcaster in a relatively small market. The study contained the opinions of 600 people.
Now this market, like virtually all markets, has its radio usage measured by Nielsen – in this case, by diaries.
Do you know how long it takes Nielsen to recruit a sample in this market as large as the one in my research project?
Two years.
That’s right. The sample sizes in markets like this one – and markets like yours – are almost laughably small. In fact, we would all laugh if we didn’t have such a direct incentive to cry.
Recently Tracy Johnson posted a piece about this problem in a much larger market where the total in-tab for a meaningful demo like, say, Women 18-34, was 147 respondents.
Now if you take those 147 “voters” and divide them by age and sex and ethnicity and then spread out their behaviors over many dayparts, dozens of stations, and, potentially, dozens of online streams, you have data which is militantly opposed to accuracy and is, rather, an illusion. And not a very good one, at that.
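To put a rough number on that illusion, here’s a minimal back-of-the-envelope sketch in Python. The 147 respondents come from the example above; the 25-diary cell is a hypothetical illustration of what’s left after you slice by daypart and station, and the simple random-sample formula ignores weighting and design effects, which typically widen the error further.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample
    of size n, assuming a proportion near p (worst case p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

# The 147 Women 18-34 respondents cited above:
print(f"n = 147: +/- {margin_of_error(147):.1%}")  # roughly +/- 8 points

# A hypothetical 25-diary cell after slicing by daypart and station:
print(f"n = 25:  +/- {margin_of_error(25):.1%}")   # roughly +/- 20 points
```

In other words, even before subdividing, a share estimate built on 147 diaries can swing by several points on sampling error alone; carve that sample into station-by-daypart cells and the noise can dwarf the number being reported.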
In too many cases, you’re better off believing in the historical accuracy of Game of Thrones than in the accuracy of your ratings.
But it’s not just an illusion, it’s a dangerous one.
Dangerous because the more your clients understand about the intricacies of the ratings system, the more likely they are to be appalled – particularly in the presence of precise metrics from online radio players like Pandora and Spotify and digital natives like Google and Facebook.
When I can go through Facebook’s ad creation process and arrive at a specific number of consumers who will be impacted by my messaging, with no estimates or random guesses required, what is the long-term effect of this on attitudes about media measurement?
When broadcasters rush (in some cases) to simulcast their over-the-air station with their online stream, hoping this “Hail Mary” pass eventually puts a stream within earshot of a PPM device and registers an otherwise inexplicable leap in audience share (one that vanishes as quickly as it materialized), what message are we sending to the advertiser?
That we want to help them achieve their goals and move more product?
Or that we want to play a dangerous shell game at their expense and the expense of our brands?
What’s the answer? Not just more sample. But more emphasis on effectiveness. A stronger obligation to make the buy work for the client – and that buy can work no matter how big or small Nielsen says your ratings are.
Isn’t this why so many spoken word stations perform much better in revenue than their ratings would suggest?
Prove and argue effectiveness. Because as accountability proliferates, more and more clients will be buying what works at the right price rather than whatever the ratings magician pulls out of his magic hat.
Time to download your Nielsen numbers.
Abracadabra!