Issue: Volume 23, Issue 2 (February 2000)

A New View on Volumes - 2/00

The undisputed value of being able to view and manipulate volume-rendered representations of complex 3D data is diminished somewhat, in the eyes of the researchers who could most benefit from the capability, by the computational intensity traditionally associated with the task.

Unlike conventional 3D graphics, volume rendering displays visual images directly from a sampled 3D scalar field typically acquired via one of several technologies, including magnetic resonance imaging, computed tomography, ultrasound imaging, seismic imaging, and radar systems. Traditional graphics approaches rely on an indirect representation of object surfaces and boundaries that is achieved by fitting geometric primitives to the samples.

The obvious advantage of direct volume rendering is that the whole volume of data is represented, potentially providing visual access down to the smallest detail of the internal composition of the object or phenomenon being investigated. The obvious disadvantage is that gaining and exploiting such access is computationally expensive, often prohibitively so. Because of this, researchers often turn to indirect methods for displaying volumetric data. These typically rely on the "marching cubes" algorithm, which "marches" through all of the cubic cells intersected by the isosurface, calculates the intersection points along the cell edges, and fits a triangular mesh to these vertices. Unfortunately, although such an approach enables interactive rendering, it does so only through the use of specialized graphics hardware. Moreover, it does not support cutting operations, which are critical to many of the applications for which volume datasets are used, particularly medical imaging.
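To make the indirect route concrete, the cell-classification step at the heart of marching cubes can be sketched in a few lines of C++. The names and the toy data here are ours, not from any particular implementation, and the full algorithm additionally needs a 256-entry triangle lookup table, omitted for brevity:

    #include <cstdio>
    #include <vector>

    // Linear index into a W x H x D scalar volume.
    inline float sample(const std::vector<float>& v, int W, int H,
                        int x, int y, int z) {
        return v[(z * H + y) * W + x];
    }

    // Parameter t in [0,1] of the isosurface crossing along a cell edge,
    // found by linearly interpolating the two corner samples.
    inline float edgeCrossing(float valA, float valB, float iso) {
        return (iso - valA) / (valB - valA);
    }

    int main() {
        // Toy 2x2x2 volume: a single cell whose corners straddle the isovalue.
        std::vector<float> vol = {0, 0, 0, 0, 1, 1, 1, 1};
        const int W = 2, H = 2, D = 2;
        const float iso = 0.5f;

        // Classify the cell: build an 8-bit case index, one bit per corner,
        // set whenever the corner value exceeds the isovalue.
        int caseIndex = 0;
        int corner = 0;
        for (int dz = 0; dz < 2; ++dz)
            for (int dy = 0; dy < 2; ++dy)
                for (int dx = 0; dx < 2; ++dx, ++corner)
                    if (sample(vol, W, H, dx, dy, dz) > iso)
                        caseIndex |= 1 << corner;

        // A real implementation would now look caseIndex up in the triangle
        // table and emit interpolated vertices along the flagged edges.
        std::printf("case index: %d, crossing t = %.2f\n",
                    caseIndex, edgeCrossing(0.f, 1.f, iso));
        return 0;
    }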

While direct surface-rendering techniques do exist, many are deficient in some way. For example, some require generating all of the intermediate frames between two different perspectives on the data, even when the viewing direction changes drastically, which is a very costly operation. Other methods extract boundary voxels, or volume elements (the conceptual equivalent of pixels projected into 3D space), and convert them to geometric primitives, which then must be projected by the same specialized graphics hardware required for indirect surface rendering.

Seeking to build a system that incorporates the best of both direct and indirect volume rendering and the worst of neither, researchers in the Institute of Computer Graphics at the Vienna University of Technology in Austria have developed a rendering technique for the fast display of isosurfaces of directly rendered volume data. The technique is noteworthy because it does not require the use of special hardware and it supports cutting operations. Also, it does not require the all-too-familiar trade-off between image quality and rendering speed.

Under the direction of researcher Balazs Csebfalvi, the group developed an algorithm that identifies and eliminates all of the voxels that are invisible from a specific viewing perspective. It then stores the remaining surface points in a data structure optimized for a standard volume-rendering technique called fast shear-warp projection. Shear-warp projection is a two-step operation. In the first, the shearing or compositing phase, the algorithm streams through the 3D volume data and projects the volume to form a distorted intermediate (composited) image. In the second phase, a 2D warp transforms the intermediate image into a final, undistorted one. Shear-warp factorization is generally considered the fastest volume-rendering algorithm that does not compromise image quality.
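In matrix terms (the notation here is ours, not the article's), the factorization splits the viewing transformation into a 3D shear followed by a 2D warp:

    M_view = M_warp * M_shear

where M_shear shears the volume so that the viewing rays run parallel to a principal axis of the data, which is what enables the slice-by-slice compositing of the first phase, and M_warp is the 2D transformation that removes the resulting distortion from the intermediate image.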

In the implementation of this algorithm by Csebfalvi and his colleagues, the first step involves preprocessing the input data. Initially, the data is segmented by identifying "empty" and "not-empty" voxels, and the empty voxels are eliminated. Next, the algorithm extracts those boundary voxels that are possibly visible from a certain domain of viewing angles. This step is fundamentally different from other boundary-extraction techniques, which select all of the exterior boundary voxels rather than only the possibly visible ones. That approach requires more complicated preprocessing and yields a less effective data reduction.
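A rough sketch of that preprocessing pass might look as follows. This is our simplified reading of the description, not the group's code, and the stand-in visibility test (keep a voxel when its neighbor on the viewer-facing side is empty) is surely cruder than the actual criterion:

    #include <vector>

    struct Volume {
        int W, H, D;
        std::vector<float> density;
        float at(int x, int y, int z) const {
            return density[(z * H + y) * W + x];
        }
    };

    // A voxel is "empty" when its density falls below a threshold.
    // Samples outside the grid also count as empty, so the object's
    // outer shell is classified correctly.
    bool isEmpty(const Volume& v, int x, int y, int z, float threshold) {
        if (x < 0 || y < 0 || z < 0 || x >= v.W || y >= v.H || z >= v.D)
            return true;
        return v.at(x, y, z) < threshold;
    }

    // viewDir points from the volume toward the viewer along a principal
    // axis (e.g. {0,0,1}); the voxel is possibly visible when it is not
    // empty itself but its neighbor on the viewer's side is.
    bool possiblyVisible(const Volume& v, int x, int y, int z,
                         float threshold, const int viewDir[3]) {
        if (isEmpty(v, x, y, z, threshold))
            return false;                    // empty: discarded outright
        return isEmpty(v, x + viewDir[0], y + viewDir[1], z + viewDir[2],
                       threshold);
    }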

Once the boundaries are extracted, they are stored in a data structure containing information necessary to the rendering process, including the original data value (which is needed for the grayscale rendering of the cutting planes), the color, the position vector, and the approximated gradient vector (for view-dependent shading). Because the shaded colors and gradient estimations are precalculated for only the extracted boundary voxels rather than all of the original data, preprocessing time is optimized.
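A plausible shape for that per-voxel record, with field names and types of our own invention, would be:

    #include <cstdint>

    // One extracted boundary voxel, holding the fields the article
    // describes; the names and types here are our own guesses.
    struct BoundaryVoxel {
        uint8_t  density;      // original data value, kept for grayscale
                               // rendering of cutting planes
        uint32_t color;        // precomputed shaded color (packed RGBA)
        float    pos[3];       // position vector in volume coordinates
        float    gradient[3];  // approximated gradient, i.e. the surface
                               // normal used for view-dependent shading
    };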

Next, the extracted boundary voxels are rendered using the shear-warp projection method. Each voxel is mapped to one pixel of an intermediate image in which neighboring voxels map to neighboring pixels. This is done by adding the 2D offset vector of the voxel's slice (calculated in advance) to its location. Because of this neighborhood-preserving mapping, the intermediate image avoids the holes that mapping each voxel to a single pixel might otherwise produce.
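In outline (again with our own names), the compositing step reduces to one addition and one store per voxel:

    #include <cstdint>
    #include <vector>

    struct Intermediate {
        int W, H;
        std::vector<uint32_t> pixels;  // packed RGBA
    };

    struct SliceVoxel { int x, y; uint32_t color; };

    // Project one slice: each voxel lands on exactly one pixel of the
    // intermediate image, found by adding the per-slice 2D shear offset
    // that was computed once for the current viewing direction.
    void projectSlice(const std::vector<SliceVoxel>& slice,
                      int offsetX, int offsetY,
                      Intermediate& img) {
        for (const SliceVoxel& v : slice) {
            int px = v.x + offsetX;        // one voxel -> one pixel
            int py = v.y + offsetY;
            if (px >= 0 && px < img.W && py >= 0 && py < img.H)
                img.pixels[py * img.W + px] = v.color;  // plain overwrite
        }
    }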

This system is also unique in that the boundary voxels are sorted by Z-depth. The slices are projected in descending depth order, so hidden voxels are removed automatically by overwriting the pixel values in the intermediate image. This approach is preferred over the more familiar use of a Z-buffer, in which all of the surface points are stored in a single list and each one requires the extra step of checking the depth value in the Z-buffer, decreasing computational efficiency.
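Continuing the sketch above, the depth ordering then amounts to nothing more than the direction of the slice loop:

    #include <utility>
    #include <vector>

    // Far-to-near traversal (continuing the projectSlice() sketch above):
    // because a nearer voxel simply overwrites whatever a farther one
    // wrote to the same pixel, no per-pixel depth comparison is needed.
    void composite(const std::vector<std::vector<SliceVoxel>>& slices,
                   const std::vector<std::pair<int, int>>& offsets,
                   Intermediate& img) {
        for (int z = (int)slices.size() - 1; z >= 0; --z)  // descending depth
            projectSlice(slices[z], offsets[z].first, offsets[z].second, img);
    }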

After the intermediate image is generated, it is projected onto the final image using a 2D warp operation. Because scaling factors are built directly into the 2D warp matrix, the size of the final image does not depend on that of the original volume. Each location in the final image is mapped back to a sample point in the intermediate image, whose value is computed from the four closest pixels using bilinear interpolation. The resulting image quality is slightly lower than what a trilinear interpolation method would achieve, says Csebfalvi, but not significantly so.
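A sketch of that warp, under the assumption of a 2D affine warp matrix and a single grayscale channel (both are our simplifications, for brevity):

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Image {
        int W, H;
        std::vector<float> v;  // one grayscale channel for brevity
        float at(int x, int y) const { return v[y * W + x]; }
    };

    // Value at a fractional position, bilinearly interpolated from the
    // four nearest pixels (coordinates assumed to lie inside the image).
    float bilinear(const Image& img, float x, float y) {
        int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
        int x1 = std::min(x0 + 1, img.W - 1), y1 = std::min(y0 + 1, img.H - 1);
        x0 = std::max(x0, 0); y0 = std::max(y0, 0);
        float fx = x - x0, fy = y - y0;   // fractional position in the cell
        float top = img.at(x0, y0) * (1 - fx) + img.at(x1, y0) * fx;
        float bot = img.at(x0, y1) * (1 - fx) + img.at(x1, y1) * fx;
        return top * (1 - fy) + bot * fy;
    }

    // Inverse warp: map every final-image pixel back into the
    // intermediate image; scale factors folded into the matrix decouple
    // the output resolution from the volume resolution.
    void warp(const Image& inter, const float inv[2][3], Image& out) {
        for (int y = 0; y < out.H; ++y)
            for (int x = 0; x < out.W; ++x) {
                float sx = inv[0][0] * x + inv[0][1] * y + inv[0][2];
                float sy = inv[1][0] * x + inv[1][1] * y + inv[1][2];
                out.v[y * out.W + x] = bilinear(inter, sx, sy);
            }
    }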

For each pixel of the intermediate image, the system stores the approximated surface normal of the boundary voxel visible from that pixel. This stored data enables the system to alter the shading of the image depending on the perspective from which it is being viewed.
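The reshading itself can be as simple as a diffuse term recomputed per pixel from the stored normal, sketched here with our own minimal vector type:

    #include <algorithm>

    struct Vec3 { float x, y, z; };

    inline float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Lambertian shade from the stored, normalized surface normal: as the
    // light or view direction changes, only this per-pixel computation is
    // repeated; the volume itself is never touched again.
    inline float shade(Vec3 normal, Vec3 lightDir) {
        return std::max(0.0f, dot(normal, lightDir));
    }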
Unlike models generated using indirect surface-rendering methods, those created with direct volume rendering allow the investigation of the internal structure of the object through the use of cutting planes.

Finally, the system supports cutting planes by rendering the intersected voxels using the original density values. It shades the surface points according to the estimated normal vectors. This feature is of particular significance to medical imaging applications, says Csebfalvi. "Radiologists are used to looking at 2D image slices, so even with a 3D image, they want to see cross sections," he says. This would be impossible with surface-based techniques and computationally impractical with existing direct volume-rendering techniques.
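One way such a cutting plane could be folded into the rendering loop, sketched under our own assumptions about the plane representation:

    #include <cmath>
    #include <cstdint>

    struct Plane { float nx, ny, nz, d; };  // plane: n . p + d = 0

    // Signed distance of a voxel center from the cutting plane.
    inline float signedDist(const Plane& pl, float px, float py, float pz) {
        return pl.nx * px + pl.ny * py + pl.nz * pz + pl.d;
    }

    // Value to write for a voxel: near the cut, the original density is
    // shown as grayscale (as the article describes); elsewhere the
    // precomputed shaded color is used. Voxels on the cut-away side are
    // assumed to be culled before this point.
    inline uint32_t cutawayColor(float dist, uint8_t density,
                                 uint32_t shaded, float halfThickness) {
        if (std::fabs(dist) <= halfThickness) {
            uint32_t g = density;             // grayscale from raw data
            return (g << 16) | (g << 8) | g;  // pack as RGB
        }
        return shaded;
    }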

Csebfalvi refers to his group's rendering method as a "poor man's rendering technique" because, thanks to the efficient boundary-voxel extraction process, it does not require specialized hardware to achieve interactive frame rates. Thus, the algorithm can conceivably be implemented on low-end hardware without compromising either image quality or speed. This will be a boon particularly to educational applications, he says, because it means that universities do not have to install special graphics cards in every PC.

The researchers are developing the algorithm in C++ and have been testing it on an SGI Indy workstation, but the main goal, says Csebfalvi, "is to make interactive volume rendering available to PC users." Toward this end, the group is considering building the algorithm into a commercial system.

Diana Phillips Mahoney is chief technology editor of Computer Graphics World.