Once upon a time it was fashionable to produce specialized POP displays that used a short-throw projector to light up a roughly human-shaped piece of 3M Vikuiti film. When a recorded image of a person was displayed, it kinda, sorta looked like a live person talking, if you squinted and cocked your head just the right way. These "virtual mannequins" never really took off, probably because they were expensive, took up a fair amount of floor space, and, once you got past the novelty, looked terrible. To wit:
Now, however, a new approach uses a ridiculously impractical (for now) array of 216 projectors to simulate a 3D object on a flat screen. By recording the initial subject with an array of cameras instead of just one, and then using a computer to crunch the resulting data down into small slices, each projector can display the light for just a small portion of the subject at a specific viewing angle. The result is a compelling and convincing 3D-looking display on a 2D surface:
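To get a feel for the slicing idea, here's a rough back-of-the-envelope sketch of how an array of projectors might divide up a viewing arc so that each one covers only a narrow band of angles. This is purely illustrative, not the USC team's actual method; the arc width and angles are made-up numbers, and only the projector count comes from the article:

```python
# Illustrative sketch only: divide a horizontal viewing arc among many
# projectors, each responsible for a narrow slice of viewing angles.
# NUM_PROJECTORS is from the article; everything else is an assumption.

NUM_PROJECTORS = 216
VIEW_RANGE_DEG = 135.0  # assumed total horizontal viewing arc

# Angular width of each projector's slice (135 / 216 = 0.625 degrees)
SLICE_DEG = VIEW_RANGE_DEG / NUM_PROJECTORS

def projector_for_angle(viewer_angle_deg: float) -> int:
    """Return the index of the projector whose slice covers a viewer
    standing at the given angle (measured from the arc's left edge)."""
    if not 0.0 <= viewer_angle_deg < VIEW_RANGE_DEG:
        raise ValueError("viewer is outside the display's viewing arc")
    return int(viewer_angle_deg // SLICE_DEG)

# A viewer's two eyes sit at slightly different angles, so each eye can
# land in a different projector's slice and see a different image of the
# subject -- which is what makes the flat screen read as 3D.
left_eye = projector_for_angle(67.0)
right_eye = projector_for_angle(67.8)
```

With slices this narrow, even the small angular difference between your two eyes is enough to put each eye in a different slice, which is why the display can deliver real binocular depth without glasses.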
Given the cost and complexity of putting together the initial footage, and the massive amount of floor space that the display device takes up, I think that practical applications for the technology are currently somewhat limited. An article over at Gizmodo suggests that museums and other educational contexts might benefit the most, especially in cases where adding a human touch would be beneficial, but it might be too expensive to keep actual humans on staff. The summary article from the researchers at USC suggests similar applications.
I sometimes think, though, that these researchers work on these projects simply because they can and they're awesome.