I have been itching to write this post for a long time--perhaps 10 years. Stunningly, people are still making media decisions based on data that is 15 to 37 years old and that overstated performance in the first place. The topic is the murky world of TV Reach & Frequency.
To begin, let's define a few terms. Reach was the percentage of people exposed to a TV campaign in a particular time frame (often a four-week flight). Frequency was the average number of times those people who were "reached" had an opportunity to see and hear the commercial. Multiply Reach x Frequency and you got Gross Rating Points (GRPs). This became the basic media equation that ruled our lives in media planning for a couple of generations.
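The arithmetic itself is trivial, which is part of the point. Here is the identity in code; the figures are invented purely for illustration, not real schedule data:

```python
# The classic media-planning identity:
#   Gross Rating Points (GRPs) = Reach x Average Frequency
# All numbers below are illustrative.

reach = 65.0           # % of the target audience exposed at least once
avg_frequency = 4.0    # average opportunities-to-see among those reached

grps = reach * avg_frequency
print(grps)            # 260.0 GRPs

# The identity also runs in reverse: given a 260-point buy and an
# estimated average frequency, the implied reach is GRPs / frequency.
implied_reach = grps / avg_frequency
print(implied_reach)   # 65.0
```

Note that the identity only relates the three numbers to one another; it says nothing about whether any of them reflects actual viewing.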
Along the way came refinements. The erudite Herbert Krugman brought us the "three hit theory": the first time people saw a commercial they asked themselves "What is it?", the second time "What of it?", and by the third time they knew enough to make a purchasing decision. After that, the viewer (potential consumer) began to disengage from the advertising.
Others took this theory further. Alvin Achenbaum talked about the "heart of the frequency distribution" and how optimal media performance was often had by clustering 3-8 messages to as many people as possible within a brand's purchase cycle. This had us looking intently at the patterns of frequency distributions and often drove daypart mix selection.
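To make Achenbaum's idea concrete, here is a sketch with a made-up four-week frequency distribution. "Effective reach" in this framing is the share of the audience landing in the 3-8 exposure band; the distribution below is hypothetical:

```python
# Hypothetical frequency distribution: exposures -> % of target audience.
# Numbers are invented for illustration only.
distribution = {1: 12.0, 2: 10.0, 3: 9.0, 4: 8.0, 5: 6.0,
                6: 5.0, 7: 4.0, 8: 3.0, 9: 2.0, 10: 1.0}

# Total reach = everyone exposed at least once.
total_reach = sum(distribution.values())

# "Heart of the distribution": those exposed 3 to 8 times.
effective_reach = sum(pct for exposures, pct in distribution.items()
                      if 3 <= exposures <= 8)

print(total_reach)      # 60.0 (% reached at least once)
print(effective_reach)  # 35.0 (% reached 3-8 times)
```

Shifting the daypart mix changes the shape of this distribution, which is why planners stared so intently at it.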
There was also a remarkable article by Howard Kamin in the February 1978 issue of the Journal of Advertising Research in which he argued that people may need 12 exposure opportunities to deliver three authentic hits. He also raised the issue of why Reach & Frequency estimates were always so much higher than recall data.
Media researchers talked about these issues in very lively debates. They also discussed the leading formulae used to calculate the projections--Modal, short-form Modal, and Metheringham. (Media researchers tended not to be chick magnets.)
The Television Bureau of Advertising (TVB) put out a series of curves in 1971 and then updated them in 1978. They were very helpful in determining some daypart mixes and in giving clients numbers for sales meetings, where they could say this year's campaign in Chicago would reach 93% of target males with an average frequency of 12.3.
The reality is that reach was never that high. All reach & frequencies (R&Fs) ever did was provide an estimate of exposure opportunities, not actual delivery. Just as Nielsen rating points never took into account the level of viewer attentiveness, neither did the on-line R&Fs.
So far, so good. And then something happened--or rather, not much did. Many in the industry tried to get services such as IMS and Telmar to develop market-sensitive reach & frequency estimates. The theory was that a schedule in Los Angeles, with its many viewing options, would have lower reach than one in Ottumwa, Iowa, with two stations and low cable penetration. The services were cooperative, but agencies and their clients were not willing to step up to the plate and pay for the development.
As time went on, TV began to fragment. The now-ancient curves popped up at small and mid-sized agencies in notebooks and early computer simulations. Yet few made adjustments for the changes in America's viewing patterns.
This past fall, I had a conversation with an executive at a company that does billing systems for many agencies. He candidly told me that the R&Fs his systems provide are from curves built in 1992 or 1993. "A lot has changed since then," I commented. He agreed, but said that station cumes (total reach across the week) had not declined that much, so what difference did it make? I agree about the cumes, but who buys a spot every 15 minutes on a station? That is the only way to truly deliver the cume. As average ratings continue to plummet in broadcast and cable, the reach of a 200-point schedule has to decline. Separately, an executive at a well-known broadcasting giant said that his group is allowed to do several empirical reach & frequency analyses each year on the Nielsen Plus system, which actually goes into metered households and follows viewers day to day. Reach in these "empirical" analyses is at least 20 points below where it was 15 years ago, even for aggressive schedules.
A Further Insult
Some people also still show clients intermedia Reach & Frequency data. This one is really absurd. How do you combine TV reach with radio, for example? Very simple. Use a sophomoric formula known as random overlap: (A+B)-(AxB), where A is TV reach and B is radio reach. I often do them in my head in meetings, which astounds people. It is pretty simple arithmetic, really. Random overlap may be fairly close, but no one knows what reality is. Also, a basic tenet of research is that different methodologies yield different results. All media are measured differently, and the audience estimates are just that--estimates. If we use random overlap, we seem to be multiplying error, since each medium has a unique measurement methodology and no two media intersect the same way. Also, some wags blend spot TV, network cable, and local cable together as if they were three separate and distinct media types. It is still all television, folks.
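For the curious, here is the random overlap calculation spelled out. It simply assumes the two media reach people independently--the very assumption the paragraph above questions. The reach figures are made up:

```python
def random_overlap(a: float, b: float) -> float:
    """Combine two reach proportions assuming the media are
    statistically independent: combined = A + B - A*B."""
    return a + b - a * b

tv_reach = 0.70     # 70% TV reach (illustrative)
radio_reach = 0.40  # 40% radio reach (illustrative)

combined = random_overlap(tv_reach, radio_reach)
print(round(combined, 2))  # 0.82, i.e. an "82% combined reach" claim
```

The formula is head-arithmetic simple, which is exactly why it gets used; whether real TV and radio audiences overlap randomly is another matter entirely.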
Be very wary of anyone who tries to make media decisions based on R&F data. Steer clear of those who look all too closely at intermedia projections. They want media magic, not solid thinking and execution. And if someone is touting a 92% reach, look out. The curves they used to come up with the projection may be older than you are!
Only twice in my long career has anyone ever questioned the validity of what came from a computer printout. The underpinnings of 90% of broadcast R&Fs are obsolete, yet few know or care.
Longtime U.S. Senator, Secretary of State, and diplomat Henry Clay famously said, "I would rather be right than president." A better Clay quote for our purposes: "Statistics are no substitute for judgment."
As digital R&F data gets better and better, TV data remains largely stuck a generation ago. Be careful!