Photorealism is an elusive goal in computer graphics: one that can be closely approached, but not fully attained. The technical challenges of cloning physical reality are compounded by the perceptual ones linked to human observational capabilities. However, while much attention has been focused on the technical issues, relatively little has been given to the perceptual ones. In fact, contends Fredo Durand, a researcher in the Graphics Group within the Laboratory for Computer Science at the Massachusetts Institute of Technology (MIT), it is precisely researchers' longstanding indifference to perceptual issues that has kept significant advances in photorealistic rendering at bay. "Computer graphics has long mainly ignored the fact that images are eventually viewed by people. It has been a very mechanical field in which human factors have been largely neglected."
In recent years, however, the tide has begun to turn, he says. "People are realizing that the way images are viewed also matters." Evidence of this is the increased attention being devoted to a technique called tone mapping, the success of which depends on an understanding of human visual perception.
Tone mapping is a photographic concept whereby light values of a scene are mapped onto the photographic media to create an image in which the overall lighting appears similar to that of the physical scene. In computer graphics, tone mapping refers to the process by which raw image data (floating-point values of pixel-color components) is mapped to the computer monitor. Unfortunately, the range of light intensity in a real scene dwarfs that which can be achieved with typical display media. While the former exhibits lighting contrast ranging up to 10¹², a typical monitor is limited to an intensity range of roughly 1 to 100. This discrepancy can be "hidden" in some applications, says Durand, by tweaking certain color parameters to make sure the scene "looks good" and doesn't present too much contrast. This approach falls apart, however, in applications requiring physical lighting simulations or those with scenes defined by significant luminance transitions, such as when moving from outside to inside in an architectural simulation.
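The mismatch can be sketched in a few lines. The comparison below contrasts a naive linear rescaling, which crushes all dark detail, with a simple logarithmic compression; both operators are illustrative stand-ins, not the researchers' actual method.

```python
import math

def linear_map(lum, lum_max):
    """Naively scale scene luminance into the [0, 1] display range."""
    return min(lum / lum_max, 1.0)

def log_map(lum, lum_max):
    """Logarithmic compression, so dark detail survives the mapping."""
    return math.log10(1.0 + lum) / math.log10(1.0 + lum_max)

# A scene spanning a huge contrast range (illustrative cd/m^2 values).
scene = [0.01, 1.0, 100.0, 10_000.0]
print([round(linear_map(L, 10_000.0), 6) for L in scene])  # dark values crushed toward 0
print([round(log_map(L, 10_000.0), 3) for L in scene])     # dark values keep distinct tones
```

On a display limited to about 100 discernible levels, the linearly mapped dark values all collapse into the bottom level, while the log-compressed ones remain distinguishable.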
[Figure caption: Interactive tone mapping can be used to simulate the recovery of visual sensitivity when lights are turned on in a gallery room. In dark conditions, the image is first colorless (no cone vision), then completely white because of the dazzling effect, after …]
To simulate the sensation of realism in such cases, tone-mapping techniques have to be employed to bring the perception of "real" and digital closer together. In general, the tone-mapping process involves creating numerical models of the intensity ranges of actual and synthetic scenes, then calculating and mapping luminance values to the digital scene that will best mitigate the difference.
Typically, tone mapping is a post-processing operation performed on static images. This approach, however, precludes the real-time rendering needed for a truly realistic walkthrough of a simulated environment. Durand and MIT colleague Julie Dorsey have developed a new approach that enables interactive tone mapping in real time. In addition, the system uses a computational model of the dynamics of human visual adaptation to approximate the process by which our vision adjusts to certain "unsteady" luminance states, including dazzling chromatic changes and the slow recovery of sensitivity when going from bright to dark conditions. In contrast, most approaches deal with single images in a steady state of adaptation, says Durand.
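The time course of adaptation can be modeled as an exponential relaxation of the adaptation level toward the current scene luminance, with different speeds for light and dark adaptation. The sketch below uses hypothetical time constants purely for illustration; the actual model is fit to psychophysics data.

```python
import math

def update_adaptation(adapt, target, dt, tau):
    # Exponentially relax the adaptation level toward the scene
    # luminance; called once per rendered frame.
    return target + (adapt - target) * math.exp(-dt / tau)

# Hypothetical time constants: light adaptation is fast, dark is slow.
TAU_LIGHT, TAU_DARK = 0.1, 20.0   # seconds (illustrative, not measured)

adapt = 1000.0     # viewer adapted to a bright scene (cd/m^2)
target = 0.1       # the lights go out
for frame in range(300):          # simulate 10 s at 30 fps
    tau = TAU_DARK if target < adapt else TAU_LIGHT
    adapt = update_adaptation(adapt, target, 1 / 30, tau)
print(round(adapt, 1))  # still far above the target: dark adaptation is slow
```

Because the adaptation state lags the scene, the tone-mapping operator derived from it reproduces effects such as momentary blindness when the lights go out.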
The interactive tone-mapping system consists of a four-phased process: estimating the average intensity seen by the viewer, mapping the colors on the fly, adding flares for light sources, and simulating the loss of acuity in dark conditions. The average-intensity estimation is the first pass. To accomplish this, says Durand, "we render the geometry using a logarithm of light intensity and compute the average of the image. This average luminosity is used to update the simulated state of the human visual system and the corresponding tone-mapping operator."
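Averaging in the log domain, as Durand describes, amounts to computing the geometric mean of the frame's luminances. A minimal sketch (the epsilon guard against black pixels is an assumption, as is the toy four-pixel frame):

```python
import math

def log_average_luminance(luminances, eps=1e-4):
    """Geometric-mean (log-average) luminance of a rendered frame.
    eps guards against log(0) for pure-black pixels."""
    n = len(luminances)
    return math.exp(sum(math.log(eps + L) for L in luminances) / n)

frame = [0.5, 2.0, 8.0, 32.0]
print(round(log_average_luminance(frame), 2))  # → 4.0 (geometric mean)
```

The log average weights dark and bright regions more evenly than an arithmetic mean, which a single bright light source would otherwise dominate.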
During the first pass, a lookup table is computed to relate the tone-mapping operator to the viewer's simulated adaptation state. This information is used for the on-the-fly color mapping, in which the scene is re-rendered and the color of each vertex of the scene is mapped based on its physical intensity to a displayed tone.
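The LUT idea can be sketched as follows: precompute the displayed tone for a range of log-luminance bins given the current adaptation state, then map each vertex by a table lookup. The simple L/(L + adaptation) operator and the bin range here are assumptions standing in for the paper's operator.

```python
import math

LOG_MIN, LOG_MAX = -4.0, 8.0   # assumed log10-luminance range of the table

def build_lut(adapt_lum, size=256):
    """Precompute a displayed tone per log-luminance bin, relative to
    the simulated adaptation level (illustrative operator)."""
    lut = []
    for i in range(size):
        log_l = LOG_MIN + (LOG_MAX - LOG_MIN) * i / (size - 1)
        L = 10 ** log_l
        lut.append(L / (L + adapt_lum))
    return lut

def map_vertex(lum, lut):
    """Per-vertex on-the-fly mapping: quantize log luminance, look up."""
    log_l = max(LOG_MIN, min(LOG_MAX, math.log10(lum)))
    i = round((log_l - LOG_MIN) / (LOG_MAX - LOG_MIN) * (len(lut) - 1))
    return lut[i]

lut = build_lut(adapt_lum=50.0)
print(round(map_vertex(5000.0, lut), 2))  # far above adaptation level: near white
```

Rebuilding only the small table each frame, rather than evaluating the full operator per vertex, is what keeps the second pass cheap enough for interactivity.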
Next, flares that correspond to the scattering of light inside the eye are computed and added to the scene. And, finally, a hardware-rendered blur is added to simulate darkness-induced acuity loss. The amount of blur, says Durand, is determined using psychophysics data that measures human acuity based on scene luminance.
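The acuity pass can be sketched on the CPU as a luminance-dependent Gaussian blur; the width-versus-luminance mapping below is an illustrative stand-in for fits to the psychophysics data Durand mentions, and the 1-D blur stands in for the hardware-rendered 2-D one.

```python
import math

def acuity_blur_sigma(adapt_lum):
    """Blur width grows as the scene darkens (illustrative mapping,
    not a fit to real acuity measurements)."""
    if adapt_lum >= 10.0:          # photopic vision: essentially sharp
        return 0.0
    return min(4.0, 1.0 / max(adapt_lum, 0.05))

def gaussian_blur_1d(signal, sigma):
    """Separable Gaussian blur, sketched in 1-D with edge clamping."""
    if sigma == 0.0:
        return list(signal)
    radius = int(3 * sigma)
    kernel = [math.exp(-(x * x) / (2 * sigma * sigma))
              for x in range(-radius, radius + 1)]
    norm = sum(kernel)
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - radius, 0), len(signal) - 1)
            acc += w * signal[j]
        out.append(acc / norm)
    return out

edge = [0.0] * 8 + [1.0] * 8
print(gaussian_blur_1d(edge, acuity_blur_sigma(100.0)) == edge)  # True: bright scene stays sharp
```

In a dark scene the same edge is smeared across several samples, mimicking the loss of spatial detail under scotopic vision.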
Because of the complexity of the human visual system and the lack of a comprehensive visual-perception model, the bulk of the researchers' energies have been spent reviewing and integrating the vast, fragmentary (often contradictory) literature on the topic. "Our main work has been to turn all the available theories and data into a comprehensive computational model," says Durand.
Having built the model, the researchers are turning their efforts toward expanding the technology. "The main challenge and limitation of the approach is that we simulate a single adaptation state for the whole image. The human visual system reacts more locally: All of the receptors in the retina do not have the same adaptation state," says Durand. "Since our gaze is constantly scanning the image, the interaction between local adaptation and these gaze movements is currently intractable." To address this limitation, the team plans to build a statistical model of gaze movements.
Currently, the researchers are focused on tone mapping and human perception at night and during twilight, conditions that are notoriously difficult to simulate. "Few convincing images of night scenes have been done. [Night] perception is very complex. Rods and cones interact, often nonlinearly. This is illustrated by the difficulties of night photography, the success of which is reserved for skilled photographers and only in very specific conditions."
Once refined, these techniques could benefit a number of applications, says Durand. Among them, as noted, are architectural walkthroughs. "In this case, interactive tone mapping is important not only to cope with the potentially large luminance contrast between exterior and interior but also to simulate chromatic adaptation," says Durand. "For example, when we enter a neon-illuminated room, we first notice how harsh and cold the lighting is, but we quickly discount it. Our model exhibits similar behavior."
The technology will also be of significant value in the development of simulations in which accuracy of environmental conditions is critical, including some driving and flight simulations and hazard-training applications in which dazzling light or other effects might impair operator vision. "Taking into account the dynamics of visual adaptation is crucial for such applications, since the time course of adaptation greatly affects the visibility conditions," says Durand. "And since displays do not have a high enough contrast, dazzling or slow-dark adaptation have to be simulated somehow."
Game developers may also be interested in interactive tone mapping. "Games already use some kind of dynamic visual adaptation effects, such as dazzling light when the sun is in view," says Durand. "This is usually done by tweaking the gamma-ramp lookup table, which gives reasonably good results. Our approach is more systematic and grounded, but currently slower."
[Figure caption: Understanding and reproducing the dynamics of visual adaptation from light to dark situations is critical for such applications as driving simulations in order to reproduce the effects of real-world visibility conditions.]
Durand can envision interactive tone mapping eventually being incorporated into a plug-in or, as he puts it, "a visual adaptation/time-dependent tone-mapper black box of some kind." Such a development will depend on the evolution of graphics hardware. "If more precision is given in the frame buffer, as needed for powerful real-time shading languages, then interactive tone mapping can easily be performed as a post-process using a lookup table to map the frame buffer." More information on this research is available at http://www.graphics.lcs.mit.edu. Diana Phillips Mahoney is chief technology editor of Computer Graphics World.