In fact, the very first comment -- from Wharton marketing professor Peter Fader -- was the most interesting to me. He notes that, "few (if any) industries have a better opportunity to formally test minor innovations than retailers do, yet retailers don't do a good job of taking advantage of this resource. Retailers need to 'think small,' in the spirit of Edward Filene, rather than relying on elaborate business development plans or (worse yet) inertia." When you stop and think about the implications of this seemingly obvious statement, it's staggering. Instead of (or perhaps in addition to) going for the multi-year, multi-million-dollar ERP, CRM or VM/SD project that could boost the bottom line by double digits (or fail miserably), retailers of all shapes and sizes should be conducting an ongoing series of small-scale experiments to test every aspect of the in-store experience, from shopping carts to signage. A retail chain with historical operating records and as few as 10 or 12 stores can generate statistically significant results from a well-controlled shopper marketing experiment, and in some cases multiple experiments could even be run in parallel without affecting one another's outcomes.
There are literally hundreds of micro-experiments that could influence consumer behavior. For example, adjusting the store temperature or lighting color, or canting shelves or signage by a few degrees, are all tests that can be done in little time and on a small budget. With a few months of performance data, it would be relatively straightforward to tell what impact (if any) such changes had on the bottom line. Since these aren't exactly earth-shattering modifications, retailers don't have to worry about them causing catastrophic damage. And since these mini-experiments are quite simple and require little in the way of capital investment or infrastructure changes, if they don't work for a particular retailer, they can easily be reverted or replaced with a new experimental condition.

Even better, there's the potential for a cumulative positive effect, since each new experiment can run atop the successful configuration determined by the last one. For example, say that Grocer X changes the color temperature of its lights and finds that while sales of most products remain constant, whole grain cereals get a 0.25% boost. The company could then make that change across all its stores and move on to the next mini-experiment, perhaps reorganizing the stock on the cereal aisles to put the child-oriented (read: sugar-loaded) products on the bottom shelves, where little kids can easily reach for them and throw them in the cart while mom's not looking. Doing so might result in another small net gain, which, atop the previous one, starts to add up. With the ability to aggregate positive changes and dismiss negative ones, the industrious retailer could learn a lot about its customers in a short period of time, and at very low risk.
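To see how these small wins stack up, note that sequential lifts compound multiplicatively rather than simply adding. A quick sketch (the lift figures here are hypothetical, not real retail data):

```python
def cumulative_lift(lifts):
    """Combine a series of per-experiment sales lifts multiplicatively.

    Each lift is expressed as a fraction, e.g. 0.0025 for a 0.25% gain.
    """
    total = 1.0
    for lift in lifts:
        total *= 1.0 + lift  # each experiment runs atop the last winner
    return total - 1.0

# Three hypothetical wins: 0.25% from lighting, 0.4% from shelf
# reorganization, 0.3% from new signage.
print(f"combined lift: {cumulative_lift([0.0025, 0.004, 0.003]):.4%}")
```

Three sub-1% wins like these combine to just under a 1% overall gain -- small individually, but meaningful on grocery-scale revenue.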
These kinds of mini-experiments make even more sense when digital media networks are already in place, since changes can be made remotely and monitored automatically. I've spoken with numerous retailers who are ahead of the curve in deploying digital signs, kiosks, motion analysis systems, and massive back-end customer databases, but who are hesitant to run simple split tests to measure and improve the effectiveness of their retail media and POP displays. While a split test with physical POP displays can be expensive (and logistically complicated), a test with different digital sign screen layouts or varied prize values for a sweepstakes/prize kiosk can be easily constructed, monitored and evaluated for effectiveness. Even with a proven high-traffic screen location, small changes to content can dramatically increase the ROI on a campaign. Think you're on to something halfway through your test? Why not make additional changes to a subset of your experimental group and see if you get the results you expect? Because the changes are small, it's unlikely that they'll have a significant negative impact on your store's performance. It's a low-risk endeavor, the results of which might even be applicable elsewhere in your business. For example, what if the $2,500 sweepstakes being promoted at your in-store product information kiosks proved to be just as effective at luring contestants as the $25,000 sweepstakes you were using previously? And what if it proved even more effective to give some kind of low-value prize (a pen, a t-shirt, a pair of fuzzy dice, whatever) to one in every hundred contestants? On top of the in-store traffic and data-gathering gains, you could take these results and apply them (at least as an experimental condition) in future mass mail campaigns, circulars, etc.
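Evaluating a split test like the sweepstakes example doesn't require heavy statistical machinery. Here's a sketch of a standard two-proportion z-test using only the Python standard library; the entry and session counts below are made-up numbers, not real campaign data:

```python
import math

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally.
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: sweepstakes entries out of kiosk sessions for the
# $2,500 prize (variant A) vs. the $25,000 prize (variant B).
z, p = two_proportion_z_test(480, 5000, 510, 5000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A large p-value here would mean the cheaper prize is statistically indistinguishable from the expensive one -- exactly the kind of finding that pays for the experiment many times over.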
Sure, there's extra work involved, but the data that can be amassed by running even a few simple split-test experiments can be impressive. There are some important things to keep in mind while running your shopper marketing tests, though:
- First, determine how big an experiment you'll need to run to get meaningful results. If you don't know how many stores, days, or sales it will take to reach a statistically significant result, there's a good chance that the "results" you get won't be valid.
- Second, determine what you're going to test -- or what you want to test -- before setting up the experiment. Blindly changing a bunch of things in your stores won't accomplish anything, and it won't give you meaningful results. Pick a few variables that you expect to produce some outcome, and stick with them until you've seen a result.
- And finally, if at first you don't succeed, try, try again. Micro-experiments should be quick, cheap and low-risk, so ambitious retailers can expect to reap the rewards of ongoing in-store experimentation over the long term, even if some tests don't go as planned early in the process.
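On that first point, the standard normal approximation gives a rough up-front answer to "how big an experiment?" The sketch below estimates the per-group sample size needed to detect an absolute lift in a conversion rate; the baseline rate and target lift are placeholders you'd replace with figures from your own historical data:

```python
from math import ceil, sqrt
from statistics import NormalDist

def required_sample_size(p_base, lift, alpha=0.05, power=0.80):
    """Approximate per-group sample size to detect an absolute lift in a
    conversion rate, using the standard normal approximation."""
    p_test = p_base + lift
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p_base + p_test) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p_base * (1 - p_base) + p_test * (1 - p_test)))
         / lift) ** 2
    return ceil(n)

# Hypothetical: detect a lift from a 10% to an 11% conversion rate
# with 80% power at the usual 5% significance level.
print(required_sample_size(0.10, 0.01))
```

The answer -- on the order of fifteen thousand observations per group -- is a useful reality check: tiny lifts demand large samples, which is exactly why aggregating data across stores, or letting a test run for several weeks, matters so much.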