Volume 24, Issue 5 (May 2001)

Degrees of freedom



By Mark Hodges

A heightened sense of 3D has been available for some time to wearers of shutter glasses and head-mounted displays, but manufacturers are only starting to commercialize display technology that does not require users to wear special viewing devices. These autostereoscopic displays, as they are called, generate separate images for the left and right eyes and project them to the observer through filtering screens, creating a stereoscopic, or parallax, view. By making 3D computer imaging a more seamless part of everyday work and play, autostereoscopic systems should be attractive for a wide range of applications, including molecular modeling, scientific visualization, medical training, computer-aided design, collaborative design and manufacturing, virtual teleconferencing, gaming, and virtual portraiture.
A 3D holographic video display developed at MIT's Media Lab projects a cylindrical shape that viewers can alter with a haptics-based virtual lathe. (Photos by Webb Chappell.)




But while existing commercial 3D autostereoscopic displays can produce images with impressive depth qualities, the "sweet spots" (the areas that provide satisfactory views of the stereo 3D image) are limited, so observers must remain relatively still. Now, research labs around the world are working on prototypes that track the movements of observers and provide them with individual 3D perspectives of an object as they move about. New systems are also under development that integrate haptic and autostereoscopic technologies, making it possible for users to touch, manipulate, and alter stereo 3D images.

At New York University's Media Research Laboratory, computer scientists have built an autostereoscopic display that allows a single viewer to move freely while retaining an undistorted perspective of the simulated image. The 3D effect is visible over an angular volume of 20 degrees both horizontally and vertically and at a distance of 1 to 4 feet from the monitor. Project leader Ken Perlin, a professor of computer science at NYU, believes the system is the first autostereoscopic display to allow free movement within this range.

"It enables a graphic image to assume many of the properties of a true three-dimensional object," Perlin says. "An unencumbered observer can walk up to an object and look at it from an arbitrary distance and angle, and the object will remain in a consistent spatial position."

NYU doctoral researcher Chris Poultney adds, "The effect is as though you are looking at a solid object suspended in front of you, usually centered on the backplane of the display. As you look around the object, features come into view or become obscured, just as you would expect with a real object."

The system consists of a thin screen hanging from metal bars several inches in front of a 19-inch LCD monitor. The LCD produces left- and right-eye views on alternating vertical columns of the monitor, while the thin screen allows light from the display to pass through vertically striped filters so the views can be seen by the appropriate eye. This approach, known as the parallax barrier method, creates a stereoscopic effect in which the image from the display appears to float between the viewer and the outer screen.
A display developed at NYU's Media Research Lab allows single viewers to see 3D images without glasses while moving about freely.




Conventional parallax barriers use a fixed set of filters to transmit light beams with slightly different views of the same image to each eye. The NYU system modifies this design by creating a dynamic parallax barrier whose vertically striped filters change widths as the observer moves. It also uses a retro-reflective eye-tracking camera that sends the observer's location back to the system computer. A demonstration of this principle is available on the World Wide Web at www.mrl.nyu.edu/~perlin/demos/autoshutter-talk.html.

Because the display must be updated with each of the viewer's movements, rendering latency has been a major design challenge. Perlin's group addressed the problem by using a custom-made display with a so-called "pi-cell" liquid crystal material that significantly increases the switching speeds of the light shutter.

Another common limitation of autostereoscopic displays has been image resolution: the parallax barrier that splits the image into left- and right-eye perspectives also hides part of the panel from each eye. In the NYU system, the barrier is manipulated so that the observer sees the complete image over three rapidly alternating phases.
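To make the geometry concrete, the minimal Python sketch below shows how a dynamic parallax barrier might recompute its slit layout from a tracked eye position, and how three alternating phases can together expose every pixel column. The pixel pitch, barrier gap, and function names are illustrative assumptions; this is not NYU's code.

PIXEL_PITCH_MM = 0.3     # assumed LCD column pitch
BARRIER_GAP_MM = 6.0     # assumed gap between the panel and the barrier layer

def barrier_layout(eye_x_mm, eye_z_mm):
    """Return (slit_pitch_mm, slit_offset_mm) for a two-view barrier.

    eye_z_mm is measured from the barrier layer. By similar triangles, the
    slits must repeat slightly more often than every two pixel columns so
    that each eye stays aligned with alternating columns across the whole
    screen, and moving the head sideways shifts where the slits must sit.
    """
    panel_distance = eye_z_mm + BARRIER_GAP_MM
    slit_pitch = 2 * PIXEL_PITCH_MM * eye_z_mm / panel_distance
    slit_offset = eye_x_mm * BARRIER_GAP_MM / panel_distance
    return slit_pitch, slit_offset

def three_phase_slits(slit_pitch, slit_offset):
    """One (slit start, slit width) pair per temporal phase: each phase opens
    a different third of every barrier period, so over three rapid phases the
    full pixel grid is exposed."""
    return [(slit_offset + phase * slit_pitch / 3.0, slit_pitch / 3.0)
            for phase in range(3)]

# Example: a viewer 600 mm from the barrier, 20 mm left of center.
pitch, offset = barrier_layout(-20.0, 600.0)
print(three_phase_slits(pitch, offset))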

Perlin says that his group met its basic goals of developing an autostereoscopic system that offers low rendering latency and produces images without artifacts. With several hundred thousand dollars in funding from a "major CRT manufacturer," he says, the Media Research Lab has now begun to develop a full-color version. He estimates that commercialization will begin in two to three years, and his goal over the next five years is to develop a research prototype that enables multiple viewers in a large room to look at the same display with viewer-specific 3D perspectives.

Perlin expects that the display system's initial customer base will be the computer-aided design and molecular modeling communities. But he believes that after the display has been commercialized long enough for the price to drop to several hundred dollars per unit, it will also become attractive to the game market.

Another possible application for Perlin's technology is virtual teleconferencing, in which participants see 3D digital replicas of one another. Jaron Lanier, chief scientist for the National Tele-Immersion Initiative, doesn't want the participants of such teleconferences to have to wear special viewing devices (see "Tomorrow's Teleconferencing," pg. 36, January 2001). "To watch a movie, you don't care if you're wearing glasses, but for tele-immersion you want to see each other," Lanier says. Currently, he favors Perlin's dynamic parallax barrier approach. "There's a prototype," he adds. "I'm feeling pretty optimistic."

Some autostereoscopic display systems allow moving viewers to see different perspectives of images by creating a fan-like series of continuous sweet spots. The problem with this approach is that the observer becomes painfully aware of an abrupt shift, known as "flipping," when moving from one viewing perspective to another, even when the sweet spots are relatively close to one another.

A group of researchers (Yoshihiro Kajiki of the Telecommunications Advancement Organization, Hiroshi Yoshikawa of Nihon University, and Toshio Honda of Chiba University) has designed a prototype monochromatic autostereoscopic display that allows viewers to move from side to side while 45 different perspectives of a 3D video image shift from one viewing zone to another with reduced scene flipping. The transitions are relatively smooth because adjacent perspectives are only 0.5 degree apart, an angular difference so narrow that multiple views of the image overlap on the observer's retina at all times.
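A quick back-of-envelope calculation suggests why such closely spaced views blend rather than flip. The pupil diameter and exact viewing distance below are illustrative assumptions; only the 0.5-degree spacing and the roughly two-foot distance come from the description above.

import math

view_spacing_deg = 0.5
viewing_distance_mm = 610.0     # roughly the two-foot distance mentioned below
pupil_diameter_mm = 4.0         # typical indoor pupil size (assumption)

spacing_at_eye_mm = viewing_distance_mm * math.tan(math.radians(view_spacing_deg))
print(f"adjacent views land {spacing_at_eye_mm:.1f} mm apart at the viewer's eye")
print(f"pupil diameter is about {pupil_diameter_mm:.1f} mm")
# Because the spacing is comparable to the pupil diameter, the eye frequently
# collects light from two neighboring perspectives at once, which smooths the
# transition between viewing zones.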
The 3D holographic video display system at MIT's Media Lab allows users to touch and alter autostereoscopic 3D images with a force-feedback haptic stylus from SensAble Technologies. (Photo by Webb Chappell.)




This multiview stereoscopic display requires multiple light sources to produce the necessary 45 beams without creating image resolution problems. It would be difficult to use conventional projectors to generate perspectives with such a narrow parallax, so the researchers opted to use small semiconductor light sources with beam-shaping optics to generate narrow cones of light. The beams of this focused light array are scanned onto a single pixel-size point that is divided and projected through two lenses and a scanning mirror into the viewing zone. Observers can see images that have a maximum width of approximately 7 inches. The transition between perspectives remains smoothest for viewers positioned within two feet of the display.

Researchers are also experimenting with autostereoscopic displays that can track several observers at once and give each the correct 3D view of an image. At MIT's Media Lab, a research group led by Stephen Benton has developed a system that produces an 8-inch (diagonal) 3D image that can be seen by several observers in a viewing zone approximately 17 inches wide.

In this system, a liquid-crystal display (LCD) transmits an image with left- and right-eye views presented in alternating stripes on the screen. Beams of projected light are polarized and then transmitted by a second LCD through a large field lens in stereo pairs to each eye. With the front LCD close to the zone where the two views fuse into a 3D image, observers do not experience the eye strain that occurs when the stereo zone and the display panel are too far apart.

The system uses a black-and-white tracking camera located beneath the output LCD screen to find and track one to three viewers at a time. A computer processes the camera images with the help of face-locating software and an algorithm that predicts the position of observers based on their movements. Ultimately, the computer locates each viewer's right eye and feeds the information to the viewer-tracking LCD, which transmits light for display from narrow vertical strips of the panel corresponding to the locations of each observer's eyes.
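As a rough illustration of that last step, the sketch below maps tracked eye positions onto the narrow strips that should transmit light and adds the kind of constant-velocity prediction described above. The strip count, zone geometry, and function names are assumptions; the MIT group's actual software is not reproduced here.

STRIP_COUNT = 256          # assumed number of addressable vertical strips
ZONE_WIDTH_MM = 430.0      # roughly the 17-inch-wide viewing zone

def predict_position(prev_x_mm, last_x_mm, frames_ahead=1):
    """Naive constant-velocity prediction to compensate for tracking latency."""
    return last_x_mm + (last_x_mm - prev_x_mm) * frames_ahead

def eyes_to_strips(eye_positions_mm):
    """Return the indices of the strips that should transmit light, one per eye."""
    strips = set()
    for x in eye_positions_mm:
        x = min(max(x, 0.0), ZONE_WIDTH_MM)                  # clamp into the zone
        strips.add(round(x / ZONE_WIDTH_MM * (STRIP_COUNT - 1)))
    return sorted(strips)

# Two viewers: predict where each eye will be next frame, then pick strips.
previous = [108.0, 172.0, 258.0, 322.0]
current = [110.0, 175.0, 260.0, 325.0]
predicted = [predict_position(p, c) for p, c in zip(previous, current)]
print(eyes_to_strips(predicted))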

The system responds to fast movements as long as they are smooth, but it sometimes loses viewers when they make abrupt, jerky movements, report Benton and his colleagues from MIT and Ericsson Telecom AB (Stockholm, Sweden). The researchers also say that there is a noticeable slowdown in system performance when three viewers must be tracked. But Benton contends that work by his group and others aimed at improving tracking systems shows promise. "Many machine vision labs around the world are working on the general problem of user tracking, and we intend to piggyback on their successes."

Haptic technology, which adds the sense of touch to computing, is a natural mate to autostereoscopic displays in that observers not only can see an image in three dimensions, they also can directly manipulate it and even alter it. This technology has applications in medical training, where it would allow students to practice surgical procedures with enhanced realism, as well as in scientific visualization, education, and entertainment.

One research prototype under development by Hideki Kakeya of the Communications Research Laboratory in Tokyo creates a virtual workbench on which the viewer can manipulate 3D images. Such an environment could cause severe eye strain if the display and the point at which the stereoscopic image appears were far apart. Kakeya and his colleagues avoided this problem by projecting the image from the display through special Fresnel lenses that focus light more efficiently than standard lenses.
Researcher Hideki Kakeya of Japan's Communications Research Lab combined Fresnel lenses and haptics technology to create the 3D Workbench.




The image changes according to the position of the observer's head. As the viewer's eye movements are tracked, the system computer moves a mobile X-Y positioning table to shift the point at which parallax image beams are filtered from the display to the Fresnel lenses. This requirement for moving parts makes the current design unsuitable for commercial flat-panel monitors. But Kakeya says that in principle the positioning table can be replaced by electronically controlled liquid crystal filters.

The advantage of this system, he says, is in creating a comfortable work zone, within arm's reach, where viewers using a haptic device can easily touch and manipulate objects. The cost of production is low, and researchers are working to modularize the system and build it into a smaller, box-like display.

Holography offers another promising technique for creating images whose 3D qualities can be seen without the aid of special glasses. Unlike conventional photography, which records only patterns of relative brightness, holograms capture both the brightness and the wave patterns of light, providing enough information about an object to render it in three dimensions. A hologram is created by illuminating an object with a laser beam and recording the reflection on a photographic plate or film. The reflected light combines with light from a reference laser beam transmitted directly onto the holographic recording material to generate an interference pattern. When the crests and troughs of the waves from the two light sources match up, they reinforce each other and create areas of light. The resulting image appears three-dimensional because observers must refocus to examine both the foreground and background.
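The recording step can be illustrated numerically. The toy one-dimensional sketch below is an idealized paraxial model with assumed distances, not a production hologram calculation; it simply shows how adding a reference wave turns the object wave's phase into a fringe pattern that a plate can store.

import numpy as np

wavelength_mm = 633e-6                    # red HeNe laser line, for illustration
k = 2 * np.pi / wavelength_mm             # wavenumber
x = np.linspace(-5.0, 5.0, 2000)          # positions across the plate (mm)

# Object wave: a point source 100 mm behind the plate (paraxial phase term).
object_wave = np.exp(1j * k * x**2 / (2 * 100.0))
# Reference wave: a plane wave arriving 10 degrees off-axis.
reference_wave = np.exp(1j * k * x * np.sin(np.radians(10.0)))

# The plate records only intensity, but because the reference wave is added
# first, that intensity carries interference fringes: bright where crests of
# the two waves coincide, dark where a crest meets a trough.
recorded_intensity = np.abs(object_wave + reference_wave) ** 2
print(recorded_intensity[:5])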

A holographic stereogram is created when a series of holograms showing the object from slightly different angles are joined together, in effect storing multiple 3D perspectives in the same image. Computer scientists are now using light-field models to generate digital holographic stereograms of real and synthetic objects. One such example is a recently developed exhibit known as "HoloSpace," built by University of Texas computer scientist Emilio Camahort and his colleagues at the holographic printing company Zebra Imaging (Austin, TX). HoloSpace shows the potential of new advances in digital holography by featuring test holograms for applications in collaborative design and engineering, 3D-animated portraiture, and advertising. The holograms in the exhibit offer 3D viewing zones of 110 degrees horizontally and 98 degrees vertically.
Holographic technology makes it possible to create large 3D images, such as this automobile hologram produced by Zebra Imaging, for use in advertising or collaborative design and engineering.




Camahort says that light-field modeling allows the data for holograms to be stored, then subjected to computer processing routines that make possible quality improvements such as anti-aliasing, special illumination, different viewpoints for multiple observers, and the illusion of animation in static objects. "We are producing holograms that no one has ever made before," he says.

Camahort says that one major limitation of digital holography is the lighting used to illuminate the holograms. Holographic stereograms are easily distorted if lighting is used improperly, and unless lights are steeply angled, the hands of observers interacting with the holograms can block pieces of the projected image. One solution being pursued by Camahort and his colleagues at Zebra Imaging is to build illumination sources into the hologram.

Another research challenge is the resolution of digital holograms, he says. A stereogram may seem clear when viewed from a distance of about 15 feet or more, but it will show pixelation patterns when seen from closer range. To enhance resolution, researchers are trying to segment holographic images into smaller and smaller units, known as hogels, which function like pixels in a digital image. "As resolution increases, speed decreases by the same factor," Camahort says.

Holography is a computationally demanding technique, because a wide range of viewing perspectives must be incorporated into the same image. The most pressing long-term challenge for digital holography is computing images fast enough for interactive manipulation and transformation. "Over the years, a variety of shortcuts have evolved to speed up what would otherwise be a lengthy computation," says Steve Benton of the MIT Media Lab, adding that a typical hologram can be recomputed in "roughly four minutes" on a two-processor graphics workstation. One alternative approach has been to segment holograms into hogels, which separately render pieces of the whole image and thus reduce the time needed for computation. Despite such processing economies, the time needed to update holographic images remains too slow, he says, for "satisfying interactive use."
University of Texas and Zebra Imaging researchers are using new lighting models to make high-res holograms.




The problem is most severe for applications that require the complete recomputation of an image, such as when it is rotated by the user. In some cases, however, it will be possible to update only the hogels in a holographic image that have been changed.
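In code, that incremental strategy might look like the minimal sketch below; the function names and data layout are assumptions for illustration, not Plesniak and Pappu's implementation.

def update_hologram(hogels, scene, dirty_indices, render_hogel):
    """Recompute only the hogels whose underlying geometry has changed.

    hogels:        list of fringe patterns, one per hogel
    scene:         the current 3D model (for example, the lathed cylinder)
    dirty_indices: hogels affected by the latest edit
    render_hogel:  the expensive routine that computes one hogel's fringes
    """
    for i in dirty_indices:
        hogels[i] = render_hogel(scene, i)
    return hogels

# A full recomputation (the "roughly four minutes" case) would instead pass
# dirty_indices = range(len(hogels)).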

This hogel updating approach has been used by two other researchers at MIT's Media Lab, Wendy Plesniak and Ravikanth Pappu, who have developed a system that allows a 3D hologram to be visually inspected, touched, and modified in near-real time with SensAble Technologies' Phantom haptic interface. In this mixed-reality system, the operator's hand, a haptic stylus, and the 3D hologram are all visible in the viewing zone.

In their doctoral dissertation project, Plesniak and Pappu first designed static holograms of a hemispherical shape and a maze composed of blocks using the same 3D geometric description for both the haptics and holographic models. Illumination was provided by light-emitting diodes hung at a steep angle so that the user's hand and the haptic stylus would not block the light and keep parts of the image from being seen.

Plesniak and Pappu report several problems with the interaction between the visual and tactile representations of the hemisphere and maze. One sensory conflict was the misregistration of the holographic and haptic surfaces, which caused the user to feel an edge of the block or hemispherical object before the stylus appeared to reach it. A second problem involved "occlusion violations," in which objects in the background might appear to be in the foreground.

The researchers followed up this study by developing a holographic video (holovideo) display that presents a pre-computed cylindrical shape and changes its appearance in response to a haptics-based virtual lathe. In this dynamic system, a workspace resource manager integrates the work of the haptics and holovideo modules.

Plesniak and Pappu report that there is still a delay of approximately one-half second in updating the display. "The operator can see the stylus tip penetrating into the holographic surface before the surface is apparently subtracted away," the researchers say. The solution to this problem, they believe, is to develop higher bandwidth spatial light modulators, more efficient data compression techniques, improvements in computation speed, and higher bandwidth data pipelines.

It may be close to another decade before real-time holographic video systems are ready for commercial use. The same timeline could be applied to development of larger viewing areas, in which multiple viewers would see individualized projections of the same image. It is likely, however, that within a few years, autostereoscopic systems with wider viewing zones and observer tracking will hit the market. When these developments occur, the images we now call 3D may suddenly seem flat, and we'll move into a world where computer monitors seem to be more like windows and less like screens.

Mark Hodges is a contributing editor for Computer Graphics World. He can be reached at mark.hodges@gtri.gatech.edu.


3D Workbench Project * www.crl.go.jp/jt/jt321/kake/auto3d.html
MIT Media Lab Spatial Imaging Group * www.media.mit.edu/groups/spi/
NYU Media Research Laboratory * www.mrl.nyu.edu
Zebra Imaging * www.zebraimaging.com