Volume 23, Issue 1 (January 2000)

A New Light on Rendering



The most obtrusive of image-based rendering's limitations is the difficulty of changing the geometry or lighting conditions of image-based rendered scenes. Because the lighting in photographs is fixed, scenes built from those photographs can be rendered only under the same illumination as the original photographs. Changes to the geometry cannot be accommodated either, because in the real world, moving or altering objects, which reflect light, would change the lighting throughout the scene. Generating a new rendering that accurately reflects geometric changes or varied lighting conditions requires re-computing the interaction of light with the surfaces in the scene. This is possible only if the reflectance properties (such as the diffuse color and shininess) of every surface are known before the image is re-rendered. Unfortunately, such information is not readily available from the scene geometry or from photographs.

While attempts have been made to estimate the reflectance properties of real surfaces from dense measurements of isolated surface samples, most of these have seen little success, because the reflectance properties of real scenes vary over space and time, so measurements of isolated areas rarely represent a scene as a whole.



In an effort to bypass the shortcomings of traditional image-based rendering, researchers in the Computer Science Division of the University of California at Berkeley have recently developed a promising approach whereby the reflectance properties of an entire scene can be predicted from photographic data, and the surfaces can be illuminated based on their actual properties rather than those of isolated samples. With the technique, the researchers are able to simulate the realistic appearance of a rendered scene under various lighting conditions, including those resulting from changes to the geometry. The resulting image parallels what a photograph taken under the desired conditions would capture.
A clock tower rendered using image-based techniques is viewed under various lighting conditions. Researchers simulated the lighting variations using radiance data acquired from the original photograph of the tower.




The heart of the process is an algorithm designed to achieve inverse global illumination by recovering the reflectance properties of all surfaces in a real scene. This information together with the scene's geometry serves as the basis for a lighting-independent model of the scene, which can then be rendered using traditional methods.

As the name suggests, inverse global illumination is the inverse of traditional global illumination, which uses known geometry, lighting, and reflectance properties to produce radiance maps, or rendered images. In contrast, inverse global illumination uses known geometry, lighting, and radiance maps to determine reflectance properties. The two approaches do not oppose or supplant one another, but rather are complementary, says Yizhou Yu, a lead researcher on the project. "By running global illumination on recovered reflectance information, we can obtain more realistic results than ever."
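To make the contrast concrete, here is a minimal Python sketch, assuming a toy diffuse scene of a few patches governed by the classic radiosity balance; the function names and the uniform form factors are illustrative assumptions, not the Berkeley implementation.

    # Toy diffuse scene of N patches obeying the radiosity balance
    # B = E + rho * (F @ B). Forward global illumination solves for the
    # radiances B given the reflectances rho; inverse global illumination
    # recovers rho from observed radiances. Illustrative only.
    import numpy as np

    N = 4
    F = np.full((N, N), 1.0 / N)            # assumed form factors between patches
    E = np.array([5.0, 0.0, 0.0, 0.0])      # patch 0 is the light source

    def forward_gi(rho):
        """Known reflectances -> radiance map: solve (I - diag(rho) F) B = E."""
        return np.linalg.solve(np.eye(N) - np.diag(rho) @ F, E)

    def inverse_gi(B):
        """Known radiance map -> reflectances: rho = (B - E) / incident light."""
        incident = F @ B                    # light arriving at each patch
        return (B - E) / incident

    rho_true = np.array([0.2, 0.7, 0.5, 0.3])
    B = forward_gi(rho_true)                # "photograph" the scene
    print(inverse_gi(B))                    # recovers [0.2, 0.7, 0.5, 0.3]

In the forward direction, the radiances are solved from known reflectances; in the inverse direction, the same balance equation is rearranged to read the reflectances off the observed radiances.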



The input to the inverse global illumination method is a geometric model of the scene and a set of calibrated photographs in which the direct illumination properties are known. The algorithm hierarchically partitions the scene into a polygonal mesh and constructs estimates of the radiance and irradiance of each patch from the photographic data. It then computes the expected locations of specular highlights and analyzes those areas to recover the reflectance parameters for each region, which serve as the basis for high-resolution reflectance maps for each surface.
A room initially rendered using image-based techniques has been re-rendered using the inverse global illumination system. The top image shows the room under the original lighting conditions, which provided the information needed to generate the same image under new lighting conditions.
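At a very high level, the pipeline just described might be organized as in the Python sketch below. Every function here is a hypothetical stand-in so the outline runs end to end; the real system also fits specular parameters from predicted highlight locations, which this diffuse-only toy omits.

    # Hypothetical outline of the pipeline described above; all names and
    # data structures are illustrative stand-ins, not the researchers' code.
    from dataclasses import dataclass, field

    @dataclass
    class Patch:
        pid: int                      # patch index in the hierarchical mesh
        radiance: float = 0.0         # outgoing light, estimated from photographs
        irradiance: float = 1.0       # incoming light, via image-based rendering
        params: dict = field(default_factory=dict)

    def partition_into_patches(num_patches):
        # Stand-in for hierarchical subdivision into a polygonal mesh.
        return [Patch(pid=i) for i in range(num_patches)]

    def estimate_radiance(patch, photos):
        # Stand-in: average the calibrated pixels that observe this patch.
        samples = photos[patch.pid]
        return sum(samples) / len(samples)

    def fit_reflectance(patch):
        # Stand-in: a diffuse-only fit, albedo = radiance / irradiance.
        return {"albedo": patch.radiance / patch.irradiance}

    def inverse_global_illumination(photos):
        patches = partition_into_patches(len(photos))
        for p in patches:
            p.radiance = estimate_radiance(p, photos)
            p.params = fit_reflectance(p)    # basis for per-surface maps
        return patches

    # Toy "calibrated photographs": pixel values observing each patch.
    photos = {0: [0.8, 0.9], 1: [0.3, 0.2], 2: [0.5, 0.5]}
    for p in inverse_global_illumination(photos):
        print(p.pid, p.params)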




In developing the illumination approach, one of the fundamental objectives was to create a process that would be effective using only a limited set of photographs, rather than one requiring a photograph of every point of every surface from many angles. The problem, however, is that a sparse set of photographs provides limited radiance information, because each surface point is observed from only a small number of angles. That is not enough information to determine how each surface reflects light in every direction. To deal with this, the researchers limited the system to the recovery of low-parameter reflectance models. The system allows the diffuse reflectance of objects, called the albedo, to vary over a surface, while directional reflectance properties, such as specular reflectance and roughness, remain constant over each area and are specified as part of the geometry-recovery process.
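The article does not name the specific model, but a Ward-style BRDF is one representative low-parameter choice: a per-point diffuse albedo plus a specular reflectance and roughness shared across a surface area. The sketch below, including its parameter names and sample directions, is illustrative only.

    # One representative low-parameter model: the isotropic Ward BRDF.
    # albedo may vary from point to point; rho_s (specular reflectance)
    # and alpha (roughness) are held constant over each surface area.
    import math
    import numpy as np

    def ward_brdf(albedo, rho_s, alpha, n, w_in, w_out):
        cos_i = np.dot(n, w_in)                 # light direction vs. normal
        cos_o = np.dot(n, w_out)                # view direction vs. normal
        h = w_in + w_out
        h = h / np.linalg.norm(h)               # half vector
        cos_h = np.dot(n, h)
        tan2_d = (1.0 - cos_h**2) / cos_h**2    # tan^2 of the half angle
        diffuse = albedo / math.pi
        specular = rho_s * math.exp(-tan2_d / alpha**2) / (
            4.0 * math.pi * alpha**2 * math.sqrt(cos_i * cos_o))
        return diffuse + specular

    n = np.array([0.0, 0.0, 1.0])               # surface normal
    w_in = np.array([0.0, 0.6, 0.8])            # unit light direction
    w_out = np.array([0.0, -0.6, 0.8])          # unit view direction
    print(ward_brdf(albedo=0.5, rho_s=0.05, alpha=0.15,
                    n=n, w_in=w_in, w_out=w_out))

Sharing rho_s and alpha across each area means only the albedo must be recovered densely, which is what makes a sparse set of photographs sufficient.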

The physics of mutual illumination also presented a challenge. In a real scene, surfaces receive light not only from light sources, but also indirectly from the rest of the environment. Thus, the radiance of an observed surface is a function of the light sources, the geometry of the scene, and the reflectance properties of all of the scene's surfaces. Because it's impossible to measure the radiance at every point in the scene directly, the inverse global illumination technique estimates the incident radiances of surfaces from photographic radiance data and image-based rendering. Through an iterative optimization process, the system alternately re-estimates the incident radiances and refines its predictions of the surfaces' reflectance properties.
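A minimal sketch of such an iteration, reusing the toy diffuse scene from the earlier example: the loop alternates between re-rendering with the current reflectance estimates, re-estimating the incident radiances, and refining the reflectances. The update rule and convergence test are guesses at the flavor of the optimization, not the published algorithm.

    # Toy fixed-point iteration for mutual illumination, reusing the
    # diffuse radiosity scene from the earlier sketch. Illustrative only.
    import numpy as np

    N = 4
    F = np.full((N, N), 1.0 / N)                  # assumed form factors
    E = np.array([5.0, 0.0, 0.0, 0.0])            # known light-source emission
    rho_true = np.array([0.2, 0.7, 0.5, 0.3])     # ground truth for the demo
    B_obs = np.linalg.solve(np.eye(N) - np.diag(rho_true) @ F, E)  # "photos"

    rho = np.full(N, 0.5)                         # initial reflectance guess
    for step in range(200):
        # Re-render the scene with the current reflectance estimates ...
        B_pred = np.linalg.solve(np.eye(N) - np.diag(rho) @ F, E)
        # ... to re-estimate the incident radiance at each patch ...
        incident = F @ B_pred
        # ... then refine the reflectances to explain the observed radiances.
        rho_new = np.clip((B_obs - E) / incident, 0.0, 1.0)
        if np.max(np.abs(rho_new - rho)) < 1e-9:
            break
        rho = rho_new

    print(np.round(rho, 3))   # converges toward [0.2, 0.7, 0.5, 0.3]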

The advantages of this method for recovering and applying reflectance information, says Yu, "include improved photorealism of the rendered images because the process recovers surface properties from the real world, and greater flexibility than that provided by previous image-based rendering work for allowing arbitrary modifications to structure and lighting."

Yu believes inverse global illumination will be particularly useful in the development of virtual and augmented reality applications in which the ability to realistically re-create a digital version of reality or to integrate real and virtual objects is critical to the effectiveness of the experience. The techniques should be appealing to creative professionals in a wide range of areas, including architectural design, site planning, lighting design, and visual effects for movies and games. For example, says Yu, "The director of a movie may want to shoot a scene under a specific dramatic lighting condition that might be quite difficult to achieve in reality. Our technique can be used to generate a computer-simulated model of the scene with the desired lighting."

Yu notes that in its current state, the inverse global illumination method his team has developed under the direction of professor Jitendra Malik is "quite comprehensive and sophisticated, and would take some time for others to learn to apply it."

In an effort to make the inverse global illumination method more accessible to a broad audience, Yu and his colleagues are in the process of improving and refining the underlying techniques, which they hope to release as freeware, possibly as soon as this spring. More information on the group's research efforts is available at the project Web site, www.cs.berkeley.edu/~yyz/research/igi.html.

Diana Phillips Mahoney is chief technology editor of Computer Graphics World.