Let’s suppose you’re a Christian station in a middle-sized Western market.
And suppose you use a particular kind of online callout, the kind where your audience can, at their convenience, sign up to vote on your music as part of your listener club. All volunteers are welcome.
This is an outstanding notion, since it invites your audience to participate in the station itself, regardless of whether you use the data.
Of course, your concern has always been making sure you're getting a good representation of regular folks on these panels. We know they tend to be fans, but that's okay. We also know there's the occasional competitor (or two) in the sample, but that's okay. Research isn't perfect, after all. Anyway, this is what the research folks call a “convenience” sample.
Now let’s suppose that you discover that a lot of the registered phone numbers in your online callout are nonexistent.
And suppose further that you do a little poking around and discover that a huge majority of the IP addresses originate not from your market, but from outside of it. And not just from outside of it, but from Nashville, TN, home to – you guessed it – the labels.
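If you wanted to run that kind of sanity check on your own signup data, a minimal sketch might look like the following. This is purely illustrative and rests on assumptions: the file name, column names, phone-number rule, and the example “local” IP ranges are all placeholders you'd swap for whatever your signup system and ISP information actually give you.

```python
# Illustrative sketch: flag panel signups whose phone number looks bogus
# or whose IP address falls outside ranges assumed to be local to your market.
# The file name, column names, and networks below are hypothetical.
import csv
import ipaddress
import re

# Hypothetical CIDR blocks you believe belong to ISPs serving your market.
LOCAL_NETWORKS = [ipaddress.ip_network(n) for n in ("203.0.113.0/24", "198.51.100.0/24")]

def looks_local(ip_string):
    """Return True if the IP falls inside any of the assumed local ranges."""
    try:
        addr = ipaddress.ip_address(ip_string)
    except ValueError:
        return False  # malformed IP: treat as suspicious
    return any(addr in net for net in LOCAL_NETWORKS)

def looks_like_real_phone(phone_string):
    """Very rough check: exactly 10 digits once punctuation is stripped."""
    digits = re.sub(r"\D", "", phone_string)
    return len(digits) == 10

suspicious = []
with open("panel_signups.csv", newline="") as f:  # hypothetical export from the signup form
    for row in csv.DictReader(f):
        if not looks_like_real_phone(row["phone"]) or not looks_local(row["ip_address"]):
            suspicious.append(row)

print(f"{len(suspicious)} signups look out-of-market or have bogus phone numbers")
```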
And then let’s suppose you call your label and tell them you’re onto them and they’d better stop loading up your testing with fake responses and, while they’re at it, they’d better stop paying subcontractors to jam your request lines with fake requests.
And suppose the label says, “We’d never, ever do any of that.”
And then let’s suppose the problem vanishes overnight.
Coincidentally.
This, of course, is a true story.
While it’s against the law for the labels to bribe program directors, it’s perfectly legal for them to pay subcontractors to deceive program directors.
Even the Christian labels.
This is not to say that online research is inherently bad. Not at all. But bad research is inherently bad.
And, sometimes, labels are inherently bad.