DIANA PHILLIPS MAHONEY
Geometric primitives are the mainstay of traditional computer-graphics rendering. Most digital models, regardless of the technique used to create them, are eventually broken down into a mesh comprising thousands of triangles or other polygonal shapes, which is then rendered by the graphics subsystem. The efficiency of this approach suffers as models become more complex: the more complicated the image, the more polygons are needed to describe it, and the more processing power and time are needed to render it. This undercuts the advantages of advanced modeling techniques, such as NURBS, implicit surfaces, and subdivision surfaces, which enable the development of increasingly sophisticated forms.
The clever use of texture mapping lessens the computational drain, but highly organic shapes typically require a large number of textures applied in multiple passes to approximate realistic surfaces. In addition, textured polygons are not suitable for rendering such effects as smoke, clouds, and fire.
In an effort to bypass the limitations of polygonal rendering for real-time interactive applications, researchers at Mitsubishi Electric Research Laboratory (MERL) in Cambridge, Massachusetts, and at the Swiss Federal Institute of Technology in Zurich, Switzerland, have developed a new rendering technique based on what they call surface elements, or surfels. According to MERL researcher Hanspeter Pfister, a surfel is an alternative graphics primitive that can be used to create complex shapes with low rendering cost and high image quality. The surfel approach relies on a point-sample rendering algorithm, whereby surfel objects are represented as a dense set of surface-point samples rather than triangles or higher-order polygonal patches. Each point sample, or surfel, stores shape and shade attributes for the representative object surface.
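The idea of a per-point primitive carrying both shape and shade attributes can be sketched as a simple record. This is a minimal illustration, not the researchers' actual data layout; the field names are hypothetical.

```python
from dataclasses import dataclass

# Illustrative sketch of a surfel record: shape attributes (position,
# normal, depth) and shade attributes (color) stored per surface sample.
@dataclass
class Surfel:
    position: tuple   # (x, y, z) sample point on the object surface
    normal: tuple     # surface normal, used for shading
    depth: float      # distance along the sampling ray
    color: tuple      # prefiltered texture color at the sample

# A surfel object is then just a dense collection of such samples.
s = Surfel(position=(0.0, 1.0, 0.0),
           normal=(0.0, 1.0, 0.0),
           depth=2.5,
           color=(255, 128, 0))
```

Because shape and shade are stored separately at each sample, geometry and texture information remain distinct, as the sampling step requires.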
The surfel technique comprises two steps: geometry sampling and surfel rendering. In the sampling phase, geometric objects and their textures are converted to surfels. Unlike conventional point-sample methods, surfel shape and shade are sampled separately, so geometry and texture information are distinct. The sampling step can also include such texturing techniques as bump and displacement mapping.
The sampling process itself is slow, taking up to an hour per surfel object to complete, but because it is a preprocessing function, the system's subsequent rendering performance is not affected.
Once the surfel data has been sampled, raycasting is used to generate data groupings called layered depth images (LDIs), which store multiple surfels representing every ray/surface intersection point along each ray. The LDIs are arranged orthogonally in groups of three, and each of these groups, or blocks, makes up a layered depth cube (LDC); the collection of LDCs is arranged hierarchically. Next, a three-to-one reduction step reduces the LDCs to single LDIs, which are then rendered using conventional perspective-projection techniques.
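The distinguishing feature of an LDI is that each ray keeps every surface intersection it hits, not just the nearest one. The toy structure below sketches that idea under assumed, simplified terms (the class and method names are illustrative, not from the researchers' system).

```python
from collections import defaultdict

# Toy layered depth image: each pixel of an orthographic ray grid keeps
# a list of (depth, color) samples, one per ray/surface intersection.
class LayeredDepthImage:
    def __init__(self):
        self.samples = defaultdict(list)  # (x, y) -> [(depth, color), ...]

    def add_sample(self, x, y, depth, color):
        self.samples[(x, y)].append((depth, color))

ldi = LayeredDepthImage()
ldi.add_sample(3, 4, 1.2, (200, 10, 10))  # front surface hit
ldi.add_sample(3, 4, 5.7, (10, 10, 200))  # back surface hit on the same ray

# A layered depth cube (LDC) would then combine three such LDIs whose
# ray directions are mutually orthogonal.
```

Keeping all intersections per ray is what lets the later reduction and rendering steps recover occluded surfaces when the viewpoint changes.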
A number of standard data-optimization functions are implemented to accelerate the rendering process. In addition, the researchers developed a "visibility-splatting" technique, which, in conjunction with a conventional z-buffer, is employed to contend with a visibility problem common to point-sample methods. "A challenge for any point-sample rendering algorithm is the occurrence of holes in the output image because of the effects of magnification and perspective projection," says Pfister. The visibility-splatting method detects these holes, and 2D image filters reconstruct a continuous image from the visible surfels by interpolating between different levels of the surfel data, shading each visible surfel, then filling holes and antialiasing.
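The z-buffer resolution and hole-filling steps can be illustrated with a deliberately crude sketch: project surfels into a pixel grid, keep the nearest one per pixel, then patch an empty pixel by averaging filled neighbors. This is a stand-in for the actual visibility-splatting and image-reconstruction filters, which the article does not detail.

```python
import math

# Tiny 4x4 framebuffer: z-buffer of depths plus a color buffer.
W, H = 4, 4
zbuf = [[math.inf] * W for _ in range(H)]
color = [[None] * W for _ in range(H)]

# Projected surfels as (x, y, depth, grayscale color); values are made up.
surfels = [(0, 0, 1.0, 100), (2, 0, 1.0, 200), (1, 1, 2.0, 150)]
for x, y, z, c in surfels:
    if z < zbuf[y][x]:  # z-buffer test: keep only the nearest surfel
        zbuf[y][x], color[y][x] = z, c

# Pixel (1, 0) received no surfel -- a "hole" caused by magnification.
# Fill it by interpolating between its filled horizontal neighbors.
if color[0][1] is None:
    neighbors = [c for c in (color[0][0], color[0][2]) if c is not None]
    color[0][1] = sum(neighbors) // len(neighbors)
```

The real system is far more sophisticated: it detects holes via visibility splatting and interpolates between levels of the hierarchical surfel data rather than between raw pixel neighbors.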
In most situations, surfel rendering can hold its own compared to polygonal methods. Exceptions include extreme close-ups, which tend to blur the image, and applications in which the surfels themselves don't approximate the object surface well. For example, says Pfister, "after compression or in areas of high curvature, some surface holes may appear during rendering."
Additionally, surfels are not well suited to the representation of flat surfaces, such as walls or scene backgrounds. In such cases, says Pfister, "large, textured polygons provide better image quality at lower rendering cost than surfels can."
Where the surfel approach shines brightest is in the rendering of models defined by rich, organic shapes or high surface detail and in applications amenable to preprocessing, such as interactive games. "The artists only need to create one model using their modeling primitive of choice, such as polygons, NURBS, or implicit functions, and they can use any number of textures for bump or displacement mapping," says Pfister. "We then create a surfel model with built-in level of detail without the hassles of cracks or polygon-simplification algorithms."
[Image caption] Polygons be gone! An alternative graphics primitive called a surfel, or surface element, is at the heart of a new point-sample technique that enables the fast rendering of high-quality images for interactive applications.
Another application area ripe for surfel treatment is in the creation of what the researchers term "3D images" of non-synthetic objects: the extension of 2D images into three dimensions. "We're using the surfel representation to store 3D images of real-life objects, and we're implementing a system for acquiring, transmitting, and displaying these 3D images," says Pfister.
Among the advantages of using surfels to achieve this is the fact that the resulting 3D images can be compressed using existing image-compression standards, such as JPEG, and that the surfel-based images are amenable to progressive transmission and rendering, features that are particularly important to applications running on portable devices. "A high-quality 2D image can be transmitted first for quick preview, and the 3D image becomes available after that," notes Pfister.
As a nascent technology, surfel rendering is still somewhat limited in what it can achieve. Currently, the approach only supports rigid-body animations and opaque surfaces. "Deformable objects are difficult to represent with surfels and the current data structure, so we can't animate elastic or squishy objects, such as the human face," says Pfister.
Pfister stresses that surfel rendering is meant to complement, not replace, the existing graphics pipeline. "It's positioned between conventional geometry-based approaches and image-based rendering," he says.
In addition to enhancing the surfel rendering capabilities, including adding transparency, the MERL researchers are interested in developing surfel-rendering hardware, as well as integrating data-compression techniques into the system to reduce the storage demands. "There's a tradeoff between storing a surfel model and storing geometry," says Pfister. "In a sense, surfels are similar to fragments in polygon rendering, except that we produce and store them during preprocessing." The researchers are looking into the application of wavelet compression for storage models. The group is also working on a complete demo system for the acquisition, compression, and display of 3D images of real-life objects.
The researchers' ultimate goal with the surfel approach, according to Pfister, "is to tap into new markets where legacy 3D technology does not work well, such as 'portable' graphics for cell phones, hand-helds, eMedia, and eCommerce." Diana Phillips Mahoney is chief technology editor of Computer Graphics World.