Looking for the other articles in this series?
If you missed any, here's a handy list:
- Testing Digital Signage Content: The Need for a New Approach
- Testing Digital Signage Content: Experimental Controls and Setup
- Testing Digital Signage Content: Color, Length and Lighting
- Testing Digital Signage Content: Orientation, Clutter and Size

Find the place where art and science meet to yield the best-performing digital signage content.
So, what did we learn?
After completing all of the experiments, we learned the following about our approach:
- Despite numerous control tests, it still "feels" weird to test real-world scenarios on a computer screen.
- While we can't verify the accuracy of everything we tested without further real-world research, we do know that our results are precise and statistically significant, and that they parallel some past real-world tests we've run (see the significance-test sketch after this list).
- Whereas we originally thought testing more than one variable at once would cause problems, it actually helped us uncover significant trends (e.g., the severity of the clutter issue).
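For the curious, here's a minimal sketch of the kind of significance check this involves, using a chi-square test of independence on recall counts. The counts and condition labels below are hypothetical stand-ins for illustration, not our actual data.

```python
# Hedged sketch: chi-square test of independence on (hypothetical)
# recall counts from two test conditions. The numbers are made up
# for illustration and are NOT our actual experimental data.
from scipy.stats import chi2_contingency

# Rows: condition; columns: [recalled, did not recall]
observed = [
    [62, 38],  # hypothetical "clean background" condition
    [41, 59],  # hypothetical "cluttered background" condition
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference in recall is significant at the 5% level.")
else:
    print("No significant difference detected.")
```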
Takeaway #1: Visual clutter in your environment is BAD.
Unfortunately, our #1 finding is the one that you, as a digital signage content professional or network owner, probably have the least ability to influence. Simply put, the more complex the visual environment your signs are placed in, the harder it becomes to make sure your messages are seen and remembered. If you're a store owner with a private-labeled network, that's something to think about, since you can act on it. The rest of us are faced with tough choices, like trying to make messages "louder" or resorting to similar tactics to get noticed.
Maybe those "clean store" policies have some legs...
Takeaway #2: The higher the contrast, the better the visibility.
This is one of those "duh" concepts that people still ignore, and it echoes one of the first major conclusions we drew when we began researching ways to make digital signage content more effective years ago. Simply put, very high-contrast color and pattern schemes make text easier to read and separate important graphics from background "noise." Both effects make your message easier to see, up close and from a distance, which improves your odds of being remembered later.
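If you want to sanity-check a palette before it goes on screen, one widely used yardstick is the WCAG 2.x contrast ratio. Here's a minimal Python sketch that computes it for two sRGB colors; WCAG is a web-accessibility standard rather than something our experiments measured, so treat its thresholds (e.g., 4.5:1 for body text) as a rough guide.

```python
# Compute the WCAG 2.x contrast ratio between two sRGB colors.
# This is a general accessibility heuristic, not a metric from our tests.

def relative_luminance(rgb):
    """Relative luminance of an (R, G, B) triple with 0-255 channels."""
    def linearize(c):
        c /= 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio, from 1:1 (identical) to 21:1 (black on white)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# White text on dark blue vs. gray text on light gray
print(f"{contrast_ratio((255, 255, 255), (0, 32, 96)):.1f}:1")     # high contrast, ~15:1
print(f"{contrast_ratio((160, 160, 160), (210, 210, 210)):.1f}:1")  # low contrast, ~1.7:1
```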
Takeaway #3: Landscape, not portrait.
This result still surprises me, and as I mentioned earlier in this series, it's unclear why landscape-oriented signage was more memorable than portrait-oriented signage. My favorite theory invokes the active attention zone, a concept we discussed a few years ago. In short, since we only pay attention to a small portion of what we look at, and since our field of view is much wider than it is tall, a horizontally oriented screen may fit more naturally into our perspective, so we give it more attention.
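To make that intuition concrete, here's a back-of-the-envelope sketch of the visual angles a 16:9 screen subtends in each orientation. The panel size and viewing distance are illustrative assumptions, not measurements from our tests.

```python
# Back-of-the-envelope geometry: how wide and tall does a 16:9 screen
# appear (in degrees of visual angle) in landscape vs. portrait?
# Panel size and viewing distance below are illustrative assumptions.
import math

def subtended_angle(size_m, distance_m):
    """Visual angle (degrees) subtended by a dimension at a viewing distance."""
    return math.degrees(2 * math.atan((size_m / 2) / distance_m))

# A 50" 16:9 panel is roughly 1.11 m wide by 0.62 m tall.
width_m, height_m = 1.11, 0.62
distance_m = 3.0  # assumed viewing distance

for label, w, h in [("landscape", width_m, height_m),
                    ("portrait", height_m, width_m)]:
    print(f"{label}: {subtended_angle(w, distance_m):.1f} deg wide x "
          f"{subtended_angle(h, distance_m):.1f} deg tall")
```

Human binocular vision covers roughly 200° horizontally but only around 130° vertically, so landscape spreads the screen along the axis where our coverage is widest. That's consistent with the attention-zone theory, though it doesn't prove it.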
Takeaway #4: Shorter message = better recall.
Another surprising result was just how much of a difference removing a word or two from a call-to-action or other message can make when it comes time to remember it later. From a practical perspective, I don't know how important this really is: often, a sign's whole purpose is just to get a viewer to acknowledge that an offer exists, rather than to recognize a specific offer. That said, we saw significant differences in recall even within the confines of the "seven plus or minus two" rule of thumb (which holds that people are generally pretty good at remembering lists of five to nine items, but get much worse beyond that).
Takeaway #5: A little bigger ≠ a lot better.
Even quintupling the amount of space a message occupied had little effect when the message still took up only a small portion of the active attention zone. So if you want to save a few dollars and go with 46" screens instead of 50" ones, there probably won't be any measurable real-world impact on the performance of your signs.
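To put numbers on that, area scales with the square of the diagonal for a fixed aspect ratio, so the 46-to-50-inch jump is tiny next to the quintupling we tested. A quick sketch (assuming 16:9 panels):

```python
# Quick arithmetic: how much more screen area does a 50" 16:9 panel
# have than a 46" one? Area scales with the square of the diagonal.
import math

def panel_area_sq_in(diagonal_in, aspect=(16, 9)):
    """Area in square inches of a panel with the given diagonal and aspect ratio."""
    aw, ah = aspect
    width = diagonal_in * aw / math.hypot(aw, ah)
    height = diagonal_in * ah / math.hypot(aw, ah)
    return width * height

a46 = panel_area_sq_in(46)
a50 = panel_area_sq_in(50)
print(f'46": {a46:.0f} sq in, 50": {a50:.0f} sq in')
print(f"50\" panel has {100 * (a50 / a46 - 1):.0f}% more area")  # ~18% more
```

An 18% bump in area is nowhere near the 5x change that already showed little effect, so the savings argument holds up.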
Limitations and caveats
We're pretty pleased with this first set of results, and look forward to using Mechanical Turk to test more scenarios in the future. That said, we ran into some technical limitations during this run that we'd like to resolve. Chief among them: calibrating things like "brightness" and "clutter" against real-world units, and finding a better way to create test images or videos that isolate a single variable without being too artificial. We also need to refine the "active attention zone" concept, including a better means of measuring how much area is really inside the zone and what that means for recognition and recall.
Future tests and research
Given time and a good excuse, we'd also like to expand our test platform to make it an even better representation of real-world viewing scenarios. Some of the things worth considering include:
- Using live footage shot in different venues and digitally remapping our test content later.
- Testing for the presence of the "vampire effect" with faces inside the content, not just in the venue.
- Doing more work on size and active attention zone, including how to simulate depth on a 2D screen (or perhaps coming up with a new set of Mechanical Turk jobs explicitly for people with 3D monitors and glasses).
- Testing longer messages and multiple messages per screen to see how taxing the viewer's memory affects recognition and recall.
- Using prolonged exposures instead of the sub-1-second exposures from our first batch of tests to simulate a more "captive" audience.
What did you think of this series? Are there other research topics that you'd be interested in learning about or participating in? Leave a comment below (or send me an email) to let me know!