Mel Sokotch

7. How to pre-test advertising without getting burned

A senior vice president from one of the large research services made a presentation at the agency and anxiously opened with the comment that she knew "we considered her one of the evildoers." It's this attitude that causes most advertising research to be presented on client turf, where the audience is typically far more civil and attentive.

But the fact is, the research services almost always provide good, actionable information about how customers react to advertising. Every so often, however, their findings make no sense, or they're contradictory, or even flat wrong. That's why it's so important to remember that we're dealing here with statistics and probability, not with absolute certainty.

In 1948, some very sophisticated statisticians infamously projected Thomas Dewey, not Harry Truman, as the election winner. The statistical methodologies were duly tightened up. But in 2000, it happened again, when the TV networks, based on "reliable exit polling," initially projected Al Gore the winner.

All that said, launching a costly advertising campaign without testing it first makes little sense. But neither does putting blind faith into a one-time exposure. Some judgment is always required. To get the best results, three basic issues must be carefully addressed.

  1. The test ad must be representative of the finished ad.
  2. The research methodology must approximate the real world.
  3. The resulting metrics should never be taken at face value.

You might consider these issues standard, and you'd be right, but much too often they're ignored, with consequences that are never good. Let's examine each:
