Volume 23, Issue 8 (August 2000)

Unreal Virtual Reality

Diana Phillips Mahoney

Early proponents of virtual reality sold the technology on the promise that virtual worlds would eventually be indistinguishable from real ones and thus would enable users to experience everything from a journey to ancient ruins to simulated heart transplant surgery. While waiting (and waiting) for the technology to live up to its extolled potential, some VR diehards have begun to consider other ways to exploit the value of the technology in its present state. Researchers at Princeton University have done this by taking the reality out of virtual reality. The group has developed a rendering system that generates real-time nonphotorealistic (NPR) virtual environments.

While paradoxical at first glance, nonphotorealistic VR has numerous advantages over its realistic counterpart. "Certainly NPR will never replace photorealism, but it can do a lot of useful things that can't be done with photorealistic environments," says principal researcher Allison Klein. One example is its ability to convey emotional information. "Depending on how the artist shades things, or the style of rendering the artist picks (charcoal or oil paints, for example), a scene can evoke a scary or dark and gloomy feel or a cheerful, bright feeling." Additionally, a sketchy rendering can provide a work-in-progress tone. "The classic example is the architect's sketch. There's a reason architects pick that sort of soft colored pencil look. It's to convey an organic look, a design not cast in stone," says Klein.

And unlike a photorealistic scene, in which all of the objects are rendered with the same general level of detail and thus appear to possess a uniform level of importance, nonphotorealistic environments can be more subjective. "The artist can draw the user's eye toward semantically important information and away from unimportant things," says Klein. "The artist spends a lot more time and detail on the parts that are important. If a person in the middle of the room is the focus, the artist may use just a few strokes to suggest the room that the person is in."
The use of non-photorealistic filters applied to an image-based model provides more control over the aesthetics of virtual worlds than can be achieved with traditional photorealistic rendering techniques.

Nonphotorealism can also compensate for problems with attempts at realism. "Human heads and bodies are still difficult. They've come a long way from Max Headroom, but you can still tell when you're looking at a computer-generated person. And if the facial muscles don't move quite right, for example, it can be disturbing, even if it's only a subtle effect," says Klein. In contrast, viewers tend to be much more forgiving with nonphotoreal characters. "We're already using our imaginations to say, 'Hey, this is a person.' The expectations are not as high." The same holds true for other environmental effects, such as illumination incongruities. "In a photoreal model, these are obvious and distracting. With NPR, people tend not to notice."

Finally, nonphotorealism is just plain fun. "You get to play around with your images. You're able to experiment with different styles and looks. Obviously, you can't play around with reality too much," says Klein.

In order to exploit the potential of nonphotorealistic rendering in virtual environments, however, the researchers had to overcome significant challenges, the most daunting of which was creating a system that would support real-time interaction, the heart and soul of virtual reality.

"The nonphotorealistic techniques that have existed up until now have typically relied on off-line processing because of their computational intensity, and thus have not been intended for real-time," says Klein. The processing load is a consequence of the technology's reliance on stroke-based textures to achieve an artistic look. To be effective, the strokes have to be rendered in a range of sizes, and there has to be coherence among the strokes from frame to frame. Without frame-to-frame coherence, says Klein, "you either see the strokes squirming around or, in the worst case, they're actually moving so fast that they become noise or just look pixelated, losing the artistic effect entirely."

The researchers addressed these challenges using an image-based rendering (IBR) approach. By relying on photographs or pre-rendered images, IBR enables the generation of visually complex scenes at interactive frame rates while maintaining frame-to-frame coherence. The combination of IBR with nonphotorealistic rendering is particularly beneficial, since it can be used to visually mask artifacts caused by undersampling (a problem associated with image-based rendering) and to reduce the viewer's expectation of realism in general.
Nonphotorealistic rendering diminishes a user's expectation of realism in a scene, providing the developer with more creative license to mask artifacts and less-than-perfect lighting and camera angles.

To enable interactive frame rates at run time, the first two steps in the creation of a nonphotorealistic image-based representation are off-line procedures. The researchers first construct an IBR model of a scene from a series of photographs or rendered images, and then extract photorealistic textures for each of the model's surfaces. The textures are filtered to achieve a nonphotorealistic look. Critical to the filtering step is the ability to preserve control over the final stroke size. "As the user moves toward a surface, the strokes on that surface must change in order to maintain an appropriate size on the image plane," says Klein.

This effect is achieved through the use of MIP-mapping, a technique in which a texture is prefiltered and stored at a cascading series of progressively lower resolutions, typically to prevent texture aliasing. In this application, each MIP-map level is artistically filtered independently, resulting in slowly changing strokes that blend as the user moves through a scene. Because stroke sizes stay constant across MIP-map levels, the technique also keeps the strokes within a range of sizes appropriate for the image plane. The goal, says Klein, "is to achieve constant stroke size and frame-to-frame coherence."
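To make the idea concrete, the following C++ sketch, an illustration based on the description above rather than the Princeton group's own code, builds such an artistically filtered MIP chain for one surface texture. The Image structure, the downsampleByHalf helper, and the posterize stand-in filter are all assumptions made for this example; a real system would substitute a stroke-based charcoal or paint filter at each level.

    #include <algorithm>
    #include <vector>

    struct Image { int w = 0, h = 0; std::vector<unsigned char> rgb; };

    // Box-filter the image down to half resolution (one MIP step).
    static Image downsampleByHalf(const Image& src) {
        Image dst;
        dst.w = std::max(1, src.w / 2);
        dst.h = std::max(1, src.h / 2);
        dst.rgb.resize(static_cast<size_t>(dst.w) * dst.h * 3);
        for (int y = 0; y < dst.h; ++y)
            for (int x = 0; x < dst.w; ++x)
                for (int c = 0; c < 3; ++c) {
                    int sum = 0;
                    for (int dy = 0; dy < 2; ++dy)
                        for (int dx = 0; dx < 2; ++dx) {
                            int sx = std::min(src.w - 1, x * 2 + dx);
                            int sy = std::min(src.h - 1, y * 2 + dy);
                            sum += src.rgb[(static_cast<size_t>(sy) * src.w + sx) * 3 + c];
                        }
                    dst.rgb[(static_cast<size_t>(y) * dst.w + x) * 3 + c] =
                        static_cast<unsigned char>(sum / 4);
                }
        return dst;
    }

    // Placeholder "artistic" filter: posterize colors. A real system would
    // run a stroke-based filter here, with stroke size fixed in texels.
    static Image posterize(const Image& src, int levels) {
        Image dst = src;
        int step = 255 / std::max(1, levels - 1);
        for (unsigned char& v : dst.rgb)
            v = static_cast<unsigned char>((v / step) * step);
        return dst;
    }

    // Filter each MIP level independently, rather than filtering once and
    // letting MIP-mapping shrink the strokes along with the texture, so
    // strokes stay a consistent size on screen as the viewer moves.
    std::vector<Image> buildArtisticMipChain(Image level0) {
        std::vector<Image> chain;
        Image current = level0;
        for (;;) {
            chain.push_back(posterize(current, 6));
            if (current.w == 1 && current.h == 1) break;
            current = downsampleByHalf(current);
        }
        return chain;
    }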

The off-line activities are the computational workhorses, paving the way for real-time interaction during the run-time phase. For the latter, the system loads all of the filtered MIP-map levels for all surfaces into texture memory at start-up and employs an OpenGL-based texture-mapping application, which draws surfaces of the 3D model using the pre-loaded data for every novel viewpoint.
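As a rough illustration of that run-time setup, the following sketch uploads each pre-filtered level of such a chain as its own MIP level of an OpenGL texture, rather than letting the driver generate the levels automatically. It assumes a current OpenGL context and reuses the hypothetical Image type and buildArtisticMipChain function from the previous sketch; createArtisticTexture is an invented name for this example, not part of the system described here.

    #include <GL/gl.h>
    #include <vector>

    GLuint createArtisticTexture(const std::vector<Image>& chain) {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  // rows are tightly packed RGB
        // Upload every hand-filtered level explicitly; level 0 is full resolution.
        for (int level = 0; level < static_cast<int>(chain.size()); ++level) {
            const Image& img = chain[level];
            glTexImage2D(GL_TEXTURE_2D, level, GL_RGB, img.w, img.h, 0,
                         GL_RGB, GL_UNSIGNED_BYTE, img.rgb.data());
        }
        // Trilinear filtering blends between the artistic levels as the
        // viewer approaches or recedes from a surface.
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        return tex;
    }

At draw time, the model's surfaces would simply be rendered as textured polygons with these textures bound, leaving the hardware's trilinear filtering to blend between artistic levels as the viewer moves.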

The nonphotorealistic VR system was developed primarily as a proof-of-concept application, says Klein. "What we've said with it is, 'Hey, it can be done, and it can be done with hardware already on most systems.'" She readily admits, however, that the tool is artistically primitive. "There are a lot of things that can be done to make it look nicer. Artists would want more tools, more control over certain aspects."

Once refined, the technology could see application in a range of areas, including architectural walkthroughs, games, and other entertainment programs. "You can imagine some kind of children's game that takes place in a dark and spooky dungeon, but you might not want it to look too spooky, so you moderate that effect," says Klein. "Or you could think of a walk-through of the Van Gogh museum, rendered in Van Gogh style."

Nonphotorealistic VR will never replace photorealistic environments, nor is it intended to. It simply represents another branch of the technology, one that could make the wait for the real thing more tolerable.

Diana Phillips Mahoney is chief technology editor of Computer Graphics World.