The headline sure is encouraging. According to the Coleman Insights/Arbitron/Media Monitors study, What Happens When the Spots Come On, radio stations hold 93% of their lead-in audience through the average stop set.
But, before you start loading up on long stop sets, there’s something of a devil lying in the details here. When they presented the research, Coleman and Arbitron were careful to note that the 93% doesn’t refer to how many listeners stay tuned throughout the stop set but rather to the overall audience levels prior to and immediately following the break. The actual proportion of listeners who stay tuned through the average stop set is lower than 93%. The problem is we don’t know exactly how much lower.
Radio stations experience a natural audience churn that happens independently of any tune-out triggers. Listeners get in or out of their car, enter or leave the room where the radio is on, or do any number of other activities that begin or end their listening session and have nothing to do with their reaction to what they just heard.
This natural churn is also captured in the results of the Coleman/Arbitron/Media Monitors study. As a result, it effectively dilutes the measured impact of spots on the listeners who were tuned in at the beginning of the stop set.
Imagine that we have a radio station with ten listeners tuned in via PPM. After the first spot, one of the ten switches to another radio station while another ends their radio listening session altogether (let's say by leaving their car). Meanwhile, still another listener turns on their radio for the first time that day. We now have nine listeners, eight of whom were there before the first spot. (That's 90% of the number of listeners we had when we started, but just 89% of our original listeners who are still listening to radio, eight of nine, are still with us.) Once again, after the second spot, one listener starts their listening session with us, another ends their radio listening session altogether, and a third switches to another station. But this time, a listener also switches from another station to ours. (Maybe they were running spots too.) All in all, two listeners have jumped on and two have jumped off after the second spot. We still have nine listeners (90% of the overall audience level we started with), but our station now holds just six of the original listeners, or 75% of those who are still listening to radio (six of eight).
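The bookkeeping in that example is easier to follow as a quick sketch. This is purely the hypothetical ten-listener scenario above, with made-up listener IDs, not anything from the study:

```python
# Hypothetical churn accounting for the ten-listener example above.
# Sets let us separate the overall audience level from retention of
# the listeners who were there at the start of the stop set.

original = {f"P{i}" for i in range(1, 11)}  # ten listeners at the start
audience = set(original)                    # who is tuned to our station
on_radio = set(original)                    # who is listening to any radio

# After spot 1: P1 switches stations, P2 stops listening entirely,
# and a brand-new listener N1 tunes in.
audience -= {"P1", "P2"}
on_radio -= {"P2"}            # P1 is still listening to radio, just elsewhere
audience |= {"N1"}
on_radio |= {"N1"}

level_1 = len(audience) / len(original)                           # 9/10 = 90%
retained_1 = len(audience & original) / len(on_radio & original)  # 8/9 ≈ 89%

# After spot 2: N2 is new to radio, P3 ends radio listening, P4 switches
# away, and S1 switches over to us from another station.
audience -= {"P3", "P4"}
on_radio -= {"P3"}
audience |= {"N2", "S1"}
on_radio |= {"N2", "S1"}

level_2 = len(audience) / len(original)                           # 9/10 = 90%
retained_2 = len(audience & original) / len(on_radio & original)  # 6/8 = 75%
```

The overall level never budges from 90%, even though a quarter of the original, still-listening audience has left by the end of the break.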
Of course, this is just an example. The amount of natural audience churn and the number of listeners switching from one station to another during the stop set are not called out in the research. Jaye Albright, who drills into the granular detail of all things PPM for her clients in Canada using BBM's minute-by-minute data, tells me she was recently looking at 25-54 switchers to a major market Country station and found it's roughly a 4-8% "in" and 4-8% "out" in every single minute.
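Rates like that compound quickly. A back-of-the-envelope sketch (using a made-up 6% per-minute rate from the middle of Albright's range, with inflow assumed equal to outflow; none of this is from the study) shows how flat levels can mask turnover:

```python
# Hypothetical illustration of why flat audience levels can hide turnover.
# Assumption: 6% of the current audience leaves each minute, and an equal
# number arrives (an invented midpoint of the 4-8% range quoted above).

out_rate = 0.06

level = 1.0        # overall audience level, as a share of the start
originals = 1.0    # share of the starting listeners still tuned in

for minute in range(3):                 # a three-minute stop set
    outflow = level * out_rate
    inflow = outflow                    # in equals out, so levels stay flat
    level = level - outflow + inflow
    originals *= (1 - out_rate)        # original listeners only ever leave

print(f"level after 3 minutes:   {level:.0%}")      # still 100%
print(f"originals remaining:     {originals:.0%}")  # 0.94**3, about 83%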
This is not to say the Coleman/Arbitron/Media Monitors results aren't meaningful. It's good to know that, regardless of tune-out, overall audience levels stay quite high during the average stop set, even if we should also consider the net effect: how much the audience might have grown if we weren't running any spots. It's also helpful to note the differences in audience levels by demo, format, and ethnicity; News/Talk, Urban, and Hispanic stations, for example, should be taking these results to the bank.
Coleman, Arbitron, and Media Monitors also quite rightly point out that the study says nothing about the impact of high commercial loads on the brand.
The research partners promise further analyses in 2012. Hopefully, we’ll get more insight into how spots and other content affect not just audience levels but actual switching behavior.
Fred Jacobs invoked the 80:20 rule this past week to blog about the value of conducting research on loyal, committed listeners. He makes some great points. It also struck me that he might be on to something that could help close the PPM engagement gap.
PPM is rewriting the rulebook for North American radio by placing more emphasis on exposure and less on engagement. Listeners no longer have to be sufficiently engaged with a station to remember listening to it; they only need to be exposed to the signal. And so it is that low-engagement, mass-appeal music machines tend to do better in PPM than in diary.
That’s not all bad of course. Radio and its advertisers get a better measure of how many people actually hear the station and the ads it runs.
But there’s also a problem: as PPM leads radio away from engagement and towards exposure, more and more advertisers are heading in the opposite direction. They’re upping their spend on digital and social media precisely because those media specialize in engagement rather than exposure. By putting all of its eggs in the PPM basket, radio risks falling even further behind in the race for those engagement dollars. Arbitron appeared to be on the way to closing the engagement gap with their “Radio Affinity” research project, then abruptly shelved it late last year.
That leaves it to stations to reach out to their loyal audience base, and not only find out what would get them to listen longer but also what would encourage them to engage with advertisers. Doable? I think so.
In case you skipped the trades for some July sun this week, Edison Research issued a release Thursday (07/28) that generated some heat. An audience analysis by Edison Research indicates that 18-34 listening to Pandora has reached between .7 and .9 ratings points in the top US radio markets. This would effectively rank Pandora among each market’s leading 18-34 FM stations (see RAIN’s analysis).
This triggered considerable hand-wringing and denial from radio, but also some questions about the data which, at least so far, are unanswered. Mary Beth Garber of Katz Radio (who is admittedly backing her own horse in this race) questioned the market geography of the study, and whether it provides an “apples-to-apples comparison.” In the release, Edison says they converted data provided by Pandora into the kind of AQH estimates used by Arbitron. Beyond that, few details on the methodology or data source are provided, with nothing on either the Edison or the Pandora website.
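For what it's worth, the standard arithmetic behind an AQH estimate is simple; whether Edison's conversion looks anything like this is unknown, and every input below is hypothetical:

```python
# A plausible way to turn streaming totals into Arbitron-style AQH numbers.
# Edison's actual method is not published; all figures here are invented
# for illustration, not drawn from the release.

# Hypothetical inputs for one market's 18-34 listeners, Mon-Sun 6a-12mid:
total_listening_hours = 1_260_000    # person-hours streamed in the daypart
daypart_hours = 18 * 7               # 18 hours a day, seven days = 126
population_18_34 = 1_400_000         # 18-34 population in the market

# AQH Persons: the average number of people listening in any quarter-hour,
# which works out to total person-hours divided by the daypart's length.
aqh_persons = total_listening_hours / daypart_hours   # 10,000

# AQH Rating: AQH Persons as a percentage of the population.
aqh_rating = aqh_persons / population_18_34 * 100     # about 0.71
```

The catch, as Garber suggests, is in the inputs: which geography defines the market, which streams count as in-market listening, and how the population base is drawn all move the final rating.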
It’s easy to see why Pandora would commission this analysis. Placing their numbers on traditional radio yardsticks reinforces the legitimacy of the service to media buyers and shareholders—even if it’s open to debate whether Pandora constitutes “radio” or just another way of listening to music. And there’s no reason to suspect that the numbers have been goosed; Pandora’s dominance in the space is well documented.
But a little more transparency in the methodology would be that much more convincing.