Over the past several months, WireSpring has been evaluating various strategies for "measuring" digital signage. I'm putting this term in quotes because, like virtually everyone else, we still aren't sure what we're trying to measure, or how best to go about it. With our clients asking about the available options, we hoped to see if we could settle the issue once and for all. Unfortunately, after talking with numerous companies and evaluating many of the options available today, I feel like we're not much further along than when we first began our research. But we did uncover something interesting: Far more important than the actual method of measurement, or even the decision of what should be measured, is an understanding of the value of the data that will be collected. In other words, even if we can get accurate metrics, are they really worth anything to ad buyers and other decision makers?
Not sure what I mean by the "value of the data"? Well, I'll put it this way: If I were to randomly ask 1,000 readers of this article why measuring the reach or impact of digital signage is important, I'd probably get ten or maybe twenty answers. (You know the common ones: it makes ad space more valuable because people enjoy and act upon the ads, it helps in ROI calculations, or it contributes to the other measurement goals we discussed at last year's Digital Signage Expo.) Now, what if I were to ask the same 1,000 people how much that measurement was worth to them? Chances are I'd get close to 1,000 different answers. And that, folks, is the number one problem that must be solved before getting serious about measurement. There are lots of different things that we can measure, and lots of ways to measure them. But when push comes to shove, the value of the measurement is the critical factor in making the business decision to expend resources on the process.
Let me give you an example. A few months ago, we were working with a client who manages an advertising-driven network in about 50 retail stores. The client paid for about 60% of the upfront installation costs, and is responsible for 100% of the maintenance costs. They're also responsible for 100% of advertising sales, although the retailer helps by providing access to their vendors. (There's no co-op marketing program at this retailer, however.) For several years, they've been selling ad space at about a 30% subscription rate, which has allowed them to be profitable and grow organically (albeit at a fairly slow pace) into adjacent vertical markets. In search of ways to either sell more space on their network or make more money from existing clients, they started looking into measurement options. Right from the beginning, this proved to be more difficult than expected. For example, as I pointed out to them, how would measurement allow them to make more money on existing customers? Those customers -- most of whom had been advertising for months or even years on the network -- had already demonstrated that they valued having their ads show up in-store. Whether or not they had formally determined the effectiveness of the medium (and I'm guessing that some did and others didn't), their continued participation meant they were satisfied with its overall performance, regardless of how they "measured" it. Thus, it seemed unlikely they'd find any newfound value in getting monthly data with some nebulous metric like impressions, opportunities to see (OTS) or "engagement factor," and even more unlikely that they'd be willing to pay much for the privilege.
As for signing up new customers, our client was unsure whether they had missed out on any opportunities because of their lack of formal measurements. While coming up with a good metric and demonstrating the ability to produce results would certainly improve their sales pitch, it would only make a big difference to those advertisers who valued that particular metric as much as the network did. So, for example, if the network company did decide to measure OTS, but their advertisers had no notion of what an "opportunity to see" was worth, the new metric would give those advertisers little additional incentive to participate on the network. This isn't to say that research programs like Nielsen's PRISM and the ongoing work by POPAI don't hold significant value for our industry. Rather, the point is that network owners may have to do some legwork to help advertisers translate these newfound metrics into terms they know and understand.
Before you jump down my throat screaming "just measure sales lift already!" there are two things I'd like to point out. First, there are lots of ways to gauge sales lift, and second, believe it or not, sometimes that's not the right thing to measure. I'll start with the second point, again through an anecdote. Just the other day I was speaking with another client who had just started testing a digital signage campaign to advertise a brand new Unilever product. Unilever, which is clearly no novice when it comes to product launches, indicated that for the first month or two after the launch, the only thing they wanted our client to measure was product recall. Decades of experience had taught them that creating buzz and being memorable was more important for establishing the product as a success than early sales numbers. So in this case, the advertiser knew what they wanted to measure, and they obviously found that measurement to be valuable. But because of the nature of the measurement, our client used a series of decidedly low-tech methods to gather their data, since this particular metric doesn't lend itself well to technology-driven solutions. (For a few examples of the methods that work, see my article on retail media tracking from 2006.)
But what about those cases where we do want to measure sales lift? What's the best way to do it? There are numerous options ranging from hand counting to register receipts to path and gaze tracking -- but what works best? After months of studying the problem, I'm still convinced that if you want to look at sales trends, the only way to do it is to study the sales themselves. That means setting up controlled, A-B split tests and measuring results using real data from sales receipts. I know this answer is going to upset a lot of people who have been hoping that some new and wonderful technology will come along to make the entire process of measurement easier and more automated. Unfortunately, every other alternative we've looked at has either introduced uncertainties that spoil the data, or relies on an alternate measurement as a proxy for the real thing.
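To make the A-B split test idea concrete, here's a minimal sketch of the arithmetic involved: comparing weekly sales between stores running the signage campaign and matched control stores, computing the percentage lift, and using Welch's t statistic as a rough significance check. All of the store figures and names below are invented for illustration; they are not data from any real network.

```python
from statistics import mean, stdev

# Hypothetical weekly unit sales per store (invented numbers, for illustration only).
test_stores = [412, 389, 445, 430, 401, 418]     # stores running the signage campaign
control_stores = [371, 365, 390, 380, 368, 377]  # matched stores without signage

def sales_lift(test, control):
    """Percentage lift of the test group's mean sales over the control group."""
    return (mean(test) - mean(control)) / mean(control) * 100

def welch_t(test, control):
    """Welch's t statistic for two independent samples with unequal variances."""
    na, nb = len(test), len(control)
    va, vb = stdev(test) ** 2, stdev(control) ** 2
    return (mean(test) - mean(control)) / (va / na + vb / nb) ** 0.5

lift = sales_lift(test_stores, control_stores)
t = welch_t(test_stores, control_stores)
print(f"lift: {lift:.1f}%  t-statistic: {t:.2f}")
```

In practice you'd want more stores per group, matched for foot traffic and seasonality, and a proper p-value from a statistics package rather than an eyeballed t statistic -- but the core of the method really is this simple, which is part of why receipt-based split testing remains hard to beat.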
Still, this is one area of the industry that's changing quickly, and I'll be the first to admit that we've probably excluded a few measurement techniques. That's where you come in:
Have you had a good (or bad) experience trying to measure the effectiveness of your digital signs? What about your static POP displays, posters, and other out-of-home ads?
I'd love to hear your firsthand reports -- leave your thoughts and comments below!