The Digital Signage Insider

Testing Digital Signage Content: The Need for a New Approach

Does the color of the content on your screens matter? Should you hang them in portrait or landscape orientation? Where should you put them? How big should they be? These are the kinds of questions that we get asked every day as we work on digital signage projects. From the earliest budget planning stages to feeding mature networks with new content, day after day, month after month, and year after year, we continually make educated guesses about how to derive the most value from our networks. So a few years ago, we started searching for a more objective, data-driven approach that we could rely on to make content better and networks more valuable.

We've written extensively in the past about making digital signage content that really works -- stuff that communicates messages quickly, effectively, and in a way that's most likely to get noticed and remembered. But even our fairly simplistic best practices for pairing colors, testing contrast and timing scenes relied on expensive research techniques like exit intercepts, in-store polling and poring over mountains of data. Those are simply not practical options for most DOOH networks -- or for WireSpring, if we want to keep conducting this kind of research and publishing the results for free. So, we decided to look for a better way. Over the next three or four articles, I'll tell you about the research problems we faced, the new approach we came up with, and even some of the more surprising answers we got when we asked some of the questions above.

In search of an affordable and scalable testing method

We wanted an approach that was practical, scalable and objective, and that could yield real, actionable results. It would also have to work with the creative process, and be repeatable as content development went on. Content creation gets a lot of lip service in our industry, but as we began our research in this vein, it quickly became apparent that very little attention (and even less research effort) was actually being given to it. With little more than a vague goal of "figuring out how to make digital signage better," we framed our challenge: come up with a better, faster and cheaper way to conduct research on the effectiveness of digital signage content.

It's not that there aren't ways to do content research right now. In fact, I've written a couple of articles in the past about how content creators can study their work from a viewer's perspective and make simple tweaks to ensure that the content works on-screen. The problem is that those techniques can only go so far, and the other options available right now simply aren't right for most companies in this space. There are a few high-end research companies and services like DS-IQ that can help companies make decisions, but they require a lot of up-front integration and are simply out of reach for most agencies and clients. Real-world testing is frequently a logistical nightmare: making and testing content in a trial-and-error fashion is expensive, time-consuming, and depending on your host venues, may not be possible at all. As one customer succinctly put it: "By the time you've figured out what you really need to test, you've already blown your budget."

Ever since I had the opportunity to tour Frito-Lay's neuromarketing lab a few years ago, I've been thinking about ways to make advanced research techniques accessible to the digital signage market. The lab used computer simulations to test different product packages, slogans and shelf layouts, and even reproduced real in-store environments for test subjects to navigate and interact with. While that kind of real-world simulation is well outside of most companies' budgets, it seemed like our medium should be ideally suited to some kind of computer simulation. But it was still unclear how tests could be run, who would run them, and who the test subjects would be. To begin with, I knew that we would need:
  • A low-cost solution that would let us first test several thousand control cases, so we could establish baseline accuracy and precision measures.

  • Something that could be easily modified to run different kinds of tests.

  • An approach that could be set up and dismantled quickly, without needing to schedule anything weeks or months in advance.

When we did our research, we felt our best chance of making this project work was to base it on Amazon's crowdsourcing framework, known as Mechanical Turk.

Enter Amazon's Mechanical Turk

Amazon Mechanical Turk is a web portal where "Requesters" can set up simple jobs that tens of thousands of "Workers" around the world can choose to do. For each task they complete, Workers earn a small reward, usually a few cents. The system is ideally suited to jobs that are hard for a computer to do, but easy for a person to do -- for example, looking at a photo to see if there's a red car, a man with a tattoo... or what's on a digital sign.
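
For readers who want to experiment with this themselves, here's a rough sketch of what posting one of these micro-tasks -- a "HIT," or Human Intelligence Task -- looks like with Amazon's boto3 SDK for Python. To be clear, this is an illustration rather than our exact setup: the reward, timing values and question.xml file below are assumptions for the example.

```python
# A minimal sketch of posting a task ("HIT") to Mechanical Turk.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    # Use the sandbox endpoint while developing; remove it to go live.
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# question.xml would hold an HTMLQuestion form that shows one of the
# test images and asks the Worker what they noticed on the screen.
with open("question.xml") as f:
    question_xml = f.read()

hit = mturk.create_hit(
    Title="Look at a photo of a digital sign and answer a question",
    Description="View one image and tell us what you noticed on the screen.",
    Keywords="image, survey, digital signage",
    Reward="0.05",                    # a few cents per task, in USD
    MaxAssignments=100,               # how many distinct Workers see this image
    LifetimeInSeconds=3 * 24 * 3600,  # how long the HIT stays available
    AssignmentDurationInSeconds=300,  # time a Worker has to finish one task
    Question=question_xml,
)
print("HIT created:", hit["HIT"]["HITId"])
```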

We wanted to use the system to ask workers about digital signage images, our thought being that we could show images of just content, or content on screens, or content on screens in retail, office, hotel or restaurant environments, and then ask the Workers questions about what they saw. Because Mechanical Turk is so inexpensive compared to traditional polling techniques, we figured we could do it a couple of thousand times, check our results, and do it a couple thousand times more just to be certain.
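
Getting those thousands of answers back out is just as straightforward. The sketch below assumes the mturk client and HIT from the previous example, and simplifies the answer parsing for illustration: it fetches submitted assignments for a HIT and extracts the free-text answers from the XML documents Mechanical Turk returns.

```python
import xml.etree.ElementTree as ET

def collect_answers(mturk, hit_id):
    """Yield (worker_id, answer_text) pairs for one HIT."""
    response = mturk.list_assignments_for_hit(
        HITId=hit_id,
        AssignmentStatuses=["Submitted", "Approved"],
        MaxResults=100,  # page through NextToken for larger batches
    )
    for assignment in response["Assignments"]:
        # Each Answer field is a QuestionFormAnswers XML document;
        # pull out every FreeText value it contains.
        root = ET.fromstring(assignment["Answer"])
        for node in root.iter():
            if node.tag.endswith("FreeText") and node.text:
                yield assignment["WorkerId"], node.text.strip()
```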

Unfortunately, the Mechanical Turk environment doesn't offer a lot of the tools we needed to run the kinds of tests we wanted to do. And we knew going in that no computer simulation would ever be a perfect substitute for real-world testing. So, we came up with a list of questions that we wanted to answer before moving forward with the research:
  • How can we control the Worker's "environment"?

  • Do we need to simulate the venue as well as the content?

  • Do we need to test with video/moving images, or can we use stills?

  • How can we test whether the results are accurate?

  • How big must a sample be to be significant? (There's a back-of-the-envelope calculation after this list.)

  • What kinds of things can we definitely not test this way?

  • Are there hidden costs?
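
One of those questions -- how big a sample needs to be -- at least has a standard statistical starting point. For estimating a proportion (say, the share of Workers who recalled a headline), the usual formula is n = z² · p(1-p) / e². The sketch below uses a 95% confidence level and a ±5% margin of error as illustrative assumptions, not the values we ultimately settled on:

```python
import math

def sample_size(margin_of_error=0.05, p=0.5, z=1.96):
    """Responses needed per test cell.

    p=0.5 is the worst case (largest variance), and z=1.96
    corresponds to a 95% confidence level.
    """
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

print(sample_size())      # 385 responses for +/-5% at 95% confidence
print(sample_size(0.03))  # 1068 responses for +/-3%
```

In other words, each test needs a few hundred responses at a minimum -- which, at a few cents per response, is exactly the kind of volume Mechanical Turk makes affordable.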

What we tested

After a few weeks of kicking ideas around and talking to researchers in similar fields, we eventually settled on a list of variables, a set of control experiments, and a framework for conducting our tests inside of Mechanical Turk. For the first batch of tests, we had several variables that we were interested in looking at. We started with the low-hanging fruit: common questions in the industry, all of which are expensive to test in the real world using current techniques:
  • Color and contrast combinations

  • Number of words present on screen

  • Brightness of screen environment

  • Screen orientation (portrait/landscape)

  • Amount of visual "clutter" in environment

  • Distraction effect of people near the screen

  • Size of screen relative to active attention area

A lot of people ask us about these topics on a regular basis, and based on past research, we already know the "answers" to some of the questions. That lets us use them as controls to determine (to some degree, at least) how accurate the rest of our results are.
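
This is the classic "gold standard" trick in crowdsourced research: seed the task with questions whose answers you already know, then grade the results against them. Here's a minimal sketch of how such a check might work, assuming responses have been collected as (worker, question, answer) tuples -- the control IDs and the 80% reliability threshold are illustrative assumptions:

```python
# Known-good answers for control images (e.g., color pairings we have
# already validated with traditional research). IDs are hypothetical.
CONTROLS = {
    "control_red_on_white": "yes",
    "control_portrait_sign": "no",
}

def control_accuracy(responses):
    """responses: list of (worker_id, question_id, answer) tuples.

    Returns overall accuracy on control questions, plus a list of
    Workers whose control accuracy falls below 80%, so their other
    answers can be discounted or discarded.
    """
    total_correct, total_seen, per_worker = 0, 0, {}
    for worker, question, answer in responses:
        if question not in CONTROLS:
            continue  # only control questions have known answers
        correct = answer.strip().lower() == CONTROLS[question]
        total_correct += correct
        total_seen += 1
        stats = per_worker.setdefault(worker, [0, 0])
        stats[0] += correct
        stats[1] += 1
    accuracy = total_correct / total_seen if total_seen else 0.0
    suspect = [w for w, (ok, n) in per_worker.items() if ok / n < 0.8]
    return accuracy, suspect
```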

Analyzing the results

Whew. With such a lengthy lead-in, we'll have to save the more interesting stuff -- like setting up the tests, analyzing the results, and finally answering some of the "big" questions -- for next week. I guarantee you'll learn something you didn't know before. An example? Well, let's just say that if you were planning to deploy your next batch of screens in portrait orientation so that they'd be more eye-catching, you might want to reconsider. Why? Well, go read up on what we've previously written about the active attention zone, and then tune in to next week's blog article!
