Issue: Volume 24, Number 11 (November 2001)

Showcasing Augmented Reality



One of the long-standing obstacles to successful virtual and augmented reality applications is the difficulty users have in "getting into" the digital environment. Head-mounted displays (HMDs) are plagued by a trade-off between optical quality (high resolution, large field of view) and ergonomics. Currently, HMDs that provide good image quality are heavy and cumbersome, while lightweight HMDs tend to sacrifice image quality for comfort. For augmented reality applications, HMDs suffer further from focal-depth discrepancies. In other words, an image that is overlaid on a real object doesn't appear at the same depth as the real object. Depending on the optics used, the images appear at a constant depth in front of the user's eyes, so the eyes must continuously shift focus between depth planes or view either the real object or the digital imagery out of focus.

Conventional projection-based VR systems (walls, workbenches, CAVEs, domes) bypass most of these problems, but they have limitations of their own, including their inability to support individual viewing perspectives for multiple users and to integrate real and virtual objects.
The Virtual Showcase projection-based augmented reality display uses a mirror configuration on top of a projection screen to mix real and virtual objects.




In an effort to overcome these problems, researchers at the Fraunhofer Center for Research in Computer Graphics (CRCG), in cooperation with researchers at Bauhaus University in Germany and the Vienna University of Technology, are developing a new projection-based augmented reality display. Called the Virtual Showcase, the high-resolution display environment supports the shared interactive representation of virtual and real objects for individual or multiple users. The researchers' goal is to create a technology framework for "ambient intelligent landscapes," says CRCG scientist Oliver Bimber, "where the computer acts as an intelligent server in the background, and users can focus on their tasks rather than on operating computers."

The Virtual Showcase consists of two parts: a convex assembly of half-silvered mirror beam splitters and a projection screen for graphics display. To date, the researchers have built Virtual Showcases with two different mirror configurations. The first consists of four half-silvered mirrors assembled as a truncated pyramid. The second prototype uses a single mirror sheet set up to form a truncated cone. The mirror assemblies are placed on top of the projection screen. "Users can see real objects inside the showcase through the half-silvered mirrors merged with the graphics displayed on the screen," says Bimber. The contents of the showcase are lit with a controllable light source, while view-dependent stereoscopic graphics are presented to observers wearing standard shutter glasses controlled by infrared emitters. Users' head movement can be tracked using either optical or electromagnetic tracking devices. The pyramid-shaped prototype supports up to four viewers, who can view the showcase simultaneously from four different sides. The cone-shaped prototype offers a seamless surround view of the display. Currently, the display graphics are rendered using a single, standard PC with an off-the-shelf 3D graphics card. To accelerate collaborative interaction, the system could be configured with one PC per user.
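The article doesn't detail the project's software, but the setup Bimber describes (tracked viewers, a shared projection screen, and infrared-synchronized shutter glasses) implies a per-frame loop roughly like the sketch below. The Viewer structure, tracker, and drawing functions here are hypothetical placeholders, not the team's actual code.

from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Viewer:
    """One tracked observer standing at one side of the showcase."""
    name: str
    mirror_face: int                                   # pyramid face the viewer looks through
    head_position: Tuple[float, float, float] = (0.0, 0.0, 0.0)

def render_frame(viewers: List[Viewer],
                 read_tracker: Callable[[str], Tuple[float, float, float]],
                 draw_view: Callable[[Tuple[float, float, float], str, int], None]) -> None:
    """One display frame: refresh head poses, then draw a stereo pair per viewer."""
    for viewer in viewers:
        # View-dependent graphics: each image is computed for the viewer's
        # current head position, reported by the optical or magnetic tracker.
        viewer.head_position = read_tracker(viewer.name)
        for eye in ("left", "right"):
            # The infrared-synchronized shutter glasses ensure that each eye
            # sees only the image rendered for it.
            draw_view(viewer.head_position, eye, viewer.mirror_face)

# Stand-in tracker and renderer, for illustration only.
viewers = [Viewer(f"viewer{i}", mirror_face=i) for i in range(4)]
render_frame(viewers,
             read_tracker=lambda name: (0.0, 0.6, -0.5),
             draw_view=lambda pos, eye, face: print(f"render {eye} image for face {face} at {pos}"))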
Virtual representations and real objects share space in the Virtual Showcase. The digital images can react to single or multiple users.




The CRCG team is quick to point out that the Virtual Showcase is not specifically intended to supplant other display devices, but rather to serve as an application-specific alternative. In fact, in some instances, the technology would be ineffective. For example, says Bimber, the Virtual Showcase is not designed for mobile applications, nor does it support direct manipulation of the virtual display. "With HMDs, the virtual augmentations can be virtually touched. This is not possible with the Virtual Showcase, because the device itself represents a physical barrier. As long as virtual objects don't come out of the showcase, which is possible, they can only be manipulated indirectly, using remote tools or real props."

The Virtual Showcase is still in the prototype stage, with a number of technical issues yet to be resolved, including, says Bimber, finding a way to render the digital content on low-cost hardware at interactive rates. "The images displayed on the screen are optically deformed by the mirrors," he explains. "The image has to be pre-distorted to compensate for this deformation before it is displayed, so the user, who sees the reflection of the image in the mirror, doesn't see a deformed image. This is quite time-consuming."
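Bimber doesn't spell out how the pre-distortion is computed, but one standard graphics technique for a planar mirror is to reflect the tracked viewpoint across the mirror plane and render the scene from that mirrored position, so that the reflection seen in the beam splitter lines up with the real object behind it. The sketch below shows only that reflection step, with made-up plane and head-position values:

import numpy as np

def reflection_matrix(plane_point, plane_normal):
    """4x4 matrix that reflects homogeneous points across the plane
    through plane_point with the given normal."""
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)                                  # force a unit normal
    d = -np.dot(n, np.asarray(plane_point, dtype=float))    # plane: n.x + d = 0
    R = np.identity(4)
    R[:3, :3] -= 2.0 * np.outer(n, n)                       # mirror the directional part
    R[:3, 3] = -2.0 * d * n                                 # account for the plane's offset
    return R

# Made-up numbers: one face of the pyramid-shaped mirror assembly and a
# head position reported by the tracker (showcase coordinates, meters).
mirror_point  = (0.0, 0.3, 0.0)
mirror_normal = (0.0, 1.0, -1.0)                            # 45-degree face
tracked_eye   = np.array([0.1, 0.6, -0.5, 1.0])

reflected_eye = reflection_matrix(mirror_point, mirror_normal) @ tracked_eye
print(reflected_eye[:3])
# The scene is then rendered onto the projection screen from reflected_eye,
# so its reflection in the half-silvered mirror appears undistorted and
# registered with the real object inside the showcase.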

The stereo component adds to the complexity of the task. "The virtual content has to be rendered twice per frame for each viewer, so that each user perceives depth through the stereo images. In the case of four users, eight images have to be generated per frame at interactive rates." To speed the calculations, the researchers have developed algorithms that take as much advantage of current PC hardware as possible. They are also exploring software solutions, such as progressive rendering methods that adapt to the speed of the hardware, and combinations of high-quality interactive techniques such as image-based and point-based rendering.
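A back-of-the-envelope calculation suggests why that load is hard to carry on a single PC. The target frame rate below is an assumption chosen for illustration, not a figure reported by the researchers:

# Rough render budget for the four-user stereo case described above.
# The target update rate is an assumption chosen for illustration.
users = 4
target_fps = 20                          # assumed "interactive" rate per viewer
images_per_frame = users * 2             # a left and a right image for each viewer

renders_per_second = target_fps * images_per_frame
budget_ms = 1000.0 / renders_per_second  # time available to render one image

print(f"{images_per_frame} images per frame, "
      f"{renders_per_second} renders per second, "
      f"about {budget_ms:.1f} ms per image")
# -> 8 images per frame, 160 renders per second, about 6.3 ms per image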
Because it is conceptually similar to a conventional showcase, the Virtual Showcase is more intuitive than many VR and AR displays. Users instinctively know how to interact with objects in the display.




There are also a number of challenges that have yet to be addressed, says Bimber, including human-centered, intuitive interaction for the Virtual Showcase and the development of an authoring tool. Currently, the researchers are combining the Virtual Showcase with the RenderWare game engine to support 3D animations. The applications that stand to benefit most from the Virtual Showcase setup include interactive mixed (real/virtual) exhibits and entertainment, as well as scientific visualization. Also, says Bimber, "the automobile industry and the military have expressed interest in the device, but exclusively for VR/multi-user purposes." In fact, the US Army sponsors the project in this country. At the main Fraunhofer facility in Germany, the European Union sponsors ongoing Virtual Showcase research for the cultural heritage (museums) domain.

More information on the Virtual Showcase can be found at www.crcg.edu/research/projects/vs.php3.




Diana Phillips Mahoney is chief technology editor of Computer Graphics World.