I’ve been on vacation in a remote corner of India, so you’ll forgive me for not paying more attention to the recent Coleman study of the impact of spots on listeners and the stations they listen to.
Personally, I find these regular studies of the effects of advertising on listenership to be tiresome. Why are we studying what spots do to listeners instead of what spots do for listeners? Why are we trying to convince advertisers that spots still reach audiences rather than convincing them that spots compel audiences to action?
Basically, the study indicates that the number of folks listening through the spots is almost equal to the number listening before the spots – 93% of the original audience, to be specific.
However, Coleman and Arbitron are quick to point out that this is not the SAME audience. That is, folks are tuning out and tuning in mid-spot. Thus the average number of listeners is very nearly the same, but what about the amount of listening by those who tune in? What about the rate of tune-out among those who were listening until the break but not during it? What about the impact on a brand over the long term when listeners tune in and hear one commercial after another? What does all of this have to do with ratings, after all, if anything? And isn’t that what matters most when spots come on?
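To see how the same headline number can mask constant churn, here’s a quick back-of-the-envelope sketch in Python. The starting audience, break length, and tune-out/tune-in rates below are invented purely for illustration; they are not Coleman’s or Arbitron’s figures.

# A toy model of one stopset, minute by minute. Every number here is
# hypothetical, chosen only to show how a flat average can hide churn.
import random

random.seed(7)

original = set(range(1000))        # 1,000 listeners at the start of the break
listeners = set(original)
next_new_id = 1000

for minute in range(1, 5):         # a four-minute break
    # roughly 25% of the current audience bails out this minute...
    listeners = {p for p in listeners if random.random() > 0.25}
    # ...while about 230 channel-surfers stumble in mid-spot
    arrivals = set(range(next_new_id, next_new_id + 230))
    next_new_id += 230
    listeners |= arrivals
    holdovers = len(listeners & original)
    print(f"minute {minute}: audience {len(listeners)}, "
          f"of whom {holdovers} were there before the break")

Run it and the total audience hovers near its pre-break level the whole way through, yet by the last minute only a fraction of the people being counted were actually listening when the break began. That is precisely the distinction a single reach percentage can’t capture.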
If I flip over and hear a spot, do I stay with the station until the content portion of the hour? Or do I hop away? We can say that shorter breaks are better than longer ones for sustaining audience levels, but we can’t say that over the long run they’re better for ratings, since repeated exposure to break after break does not a happy listener make.
It doesn’t take a survey for us to know that consumers don’t listen to our stations for spots, after all. “Fewer commercials” is a big reason why listeners choose one station over another, just as it’s a big reason why Pandora and other online radio services are happy destinations for folks who want a break from typical radio spot volumes.
Even the Coleman study notes that it “does not address how the airing of commercials affects a radio station’s overall brand or why one station may perform better in the ratings than another.” But are there any questions more important than those?
Plenty of practical experience shows that fewer interruptions correlate with higher ratings. There isn’t a broadcaster in America that can’t point to evidence of this from his or her own firsthand knowledge.
So this Arbitron study shows that when you balance the folks who tune out because they don’t like what they hear against the folks who tune in hoping to hear something other than a commercial, only to be disappointed, the overall reach is nearly unchanged. Fabulous.
But does that make the listeners happy? Does that induce them to come back for more? Does that reflect why they’re at our stations in the first place? Does that fulfill their expectations? Will that keep them pinned to our stations in an era where commercial-free or -light alternatives abound?
Or is this fundamental “93%” outcome simply a statistical artifact of constant tune-in and tune-out, constant dial-scanning, constant churn of the type the PPM device is so good at measuring among the influential few listeners who are lucky enough to have one?
“What happens when the spots come on” is a lot less important than what happens when the spots go off, because that’s when you’re left with the content that’s worth tuning in for in the first place.
Let’s see THAT study, Arbitron.