A new kind of three-dimensional display developed at HP Labs plays hologram-like videos without the need for any moving parts or glasses. Videos displayed on the HP system hover above the screen, and viewers can walk around them and see an image or video from as many as 200 different viewpoints, much as they would a real object.
The screen is made by modifying a conventional liquid-crystal display (LCD), the same kind of display found in most phones, laptops, tablets, and televisions. Researchers hope these 3D systems will enable new kinds of user interfaces for portable electronics, gaming, and data visualization. The work, carried out at HP Labs in Palo Alto, Calif., relies on nanoscale optics to make 3D displays that are as thin as half a millimeter.
Conventional 3D, the type found in movie theaters, provides the viewer with only one perspective. The key to making a multiview 3D display is reproducing all the light rays reflecting off an object from every angle and delivering a different image to the viewer's left and right eyes. Some systems for producing multiview 3D images require rapidly spinning mirrors; others use systems of lasers and multiple graphics processors.
The HP display uses nanopatterned grooves, which HP researcher David Fattal, who led the work, calls “directional pixels,” to send light off in different directions. The approach requires no moving parts, and the patterns are built into an existing display component, the backlight.
A conventional LCD uses a sheet of plastic or glass that’s covered in bumps that scatter white light and direct it through the display’s color filters, polarizers and shutters to the viewer. The new 3D display builds on optics research demonstrating how the path, color, and other properties of light can be manipulated by passing it through materials patterned at the nanoscale.
The HP display replaces the randomly scattering bumps in a normal LCD with deliberately patterned grooves. Each “directional pixel” has three sets of grooves that direct red, green, and blue light in one particular direction. The number of directional pixels determines the number of viewpoints the display can produce. Light from the pixels then passes through a conventional array of liquid-crystal shutters that pass or block the light to make a moving image, just as in a conventional LCD.
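To make the geometry concrete, here is a minimal sketch assuming each set of grooves acts as a simple first-order diffraction grating under normal-incidence illumination; the article does not detail the device geometry, so the 30-degree output angle, the wavelengths, and the resulting pitches are all illustrative.

```python
import math

def groove_pitch_nm(wavelength_nm: float, angle_deg: float, order: int = 1) -> float:
    """Grating equation d * sin(theta) = m * lambda, solved for the
    groove pitch d (normal incidence assumed, first order by default)."""
    return order * wavelength_nm / math.sin(math.radians(angle_deg))

# Steer red, green, and blue light toward the same 30-degree viewing direction.
for color, wavelength in [("red", 630), ("green", 532), ("blue", 465)]:
    print(f"{color}: {groove_pitch_nm(wavelength, 30):.0f} nm pitch")
```

Because the pitch that steers light to a given angle scales with wavelength, red, green, and blue need different groove spacings to leave in the same direction, which is consistent with each directional pixel carrying three sets of grooves.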
The HP researchers have so far shown that they can make static images with 200 viewpoints, or videos with 64 viewpoints at 30 frames per second. The number of viewpoints in the video system has been limited by the researchers' ability to assemble the nanopatterned backlight with the liquid-crystal shutters in the lab. Fattal says the system should ultimately be easy to manufacture, because it's a modified LCD. The work is described today in the journal Nature.
Science fiction has provided no shortage of visions of futuristic computer interfaces that allow people to manipulate data, images, and maps by waving their hands through streams of holograms. The technology for tracking gestures is pretty well developed, says Fattal—systems like Microsoft’s Kinect are available off the shelf. All that’s lacking, he says, are practical systems for producing high-quality 3D images that can be viewed from multiple positions around a screen.
There has been very little innovation in the basic physics for making 3D images since early in the 20th century, says Gordon Wetzstein, a researcher at the MIT Media Lab’s Camera Culture group. Wetzstein was not involved with the work. Most 3D televisions and other systems on the market use old optical tricks—special glasses to filter part of the image for the left or right eye, for example—to create the illusion of depth. He says the new display “is transforming a technology that’s been around for 100 years.”
Fattal acknowledges that producing content for the new display requires 200 different images. Some of this image data can be reconstructed digitally (it's not necessary to have 200 cameras), but for the foreseeable future, the most promising applications for the displays will be in showing computer-generated images. “A 3D interface for a cell phone or laptop might display different windows next to each other, or architects could use a tablet to show a 3D model to a customer, instead of building a physical model,” Fattal says. “Or you might use a smart watch to view Google Maps in 3D.”
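As one illustration of what reconstructing views digitally can involve, the sketch below applies generic depth-image-based rendering: it warps a single image toward a neighboring viewpoint using a per-pixel disparity map. This is not HP's content pipeline; the synthesize_view function, the disparity values, and the one-pixel shift are hypothetical, and the simple forward warp leaves occlusion holes unfilled.

```python
import numpy as np

def synthesize_view(image: np.ndarray, disparity: np.ndarray, shift: float) -> np.ndarray:
    """Shift each pixel of `image` (H x W x 3) horizontally by
    shift * disparity pixels to approximate a nearby viewpoint.
    Simple forward warping; holes are left black."""
    h, w, _ = image.shape
    out = np.zeros_like(image)
    xs = np.arange(w)
    for y in range(h):
        new_x = np.clip(np.round(xs + shift * disparity[y]).astype(int), 0, w - 1)
        out[y, new_x] = image[y, xs]
    return out

# Example: a tiny synthetic image with a uniform one-pixel disparity map.
img = np.random.randint(0, 255, (4, 8, 3), dtype=np.uint8)
disp = np.ones((4, 8))
left_neighbor = synthesize_view(img, disp, shift=1.0)
```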
David Krum, co-director of the University of Southern California's Mixed Reality Lab, says many computer scientists are now working on content development for 3D systems. Part of the challenge, he says, is understanding human perception well enough to know which light rays can be left out while still creating the impression of a 3D image for the viewer. Without addressing this, he says, mobile 3D will create significant bandwidth and data-storage burdens.
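To put the bandwidth concern in rough numbers, the back-of-envelope estimate below computes the uncompressed data rate of the 64-view, 30-frame-per-second video mode, assuming a hypothetical 1080p resolution per view; the article does not state the display's resolution, so the total is illustrative only.

```python
# Back-of-envelope estimate of uncompressed multiview bandwidth.
views = 64                    # viewpoints in the video demo
width, height = 1920, 1080    # assumed per-view resolution (not from the article)
fps = 30
bytes_per_pixel = 3           # 8-bit RGB

bytes_per_second = views * width * height * bytes_per_pixel * fps
print(f"{bytes_per_second / 1e9:.1f} GB/s uncompressed")  # ~11.9 GB/s
```

Neighboring views of the same scene are highly redundant, so compression and the kind of perceptual view-pruning Krum describes could cut this figure substantially.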
Source: Mashable