Volume 23, Issue 6 (June 2000)

Speed Up the Volume



In a computational world not constrained by monstrous datasets and processing limitations, direct volume rendering would loom above all other rendering techniques as an unmatched visualization solution. Every digital object would have a complete internal and external 3D representation, any element of which users could interactively investigate in real time. None of the volumetric data would need to be sacrificed in order to enable interaction, nor would any visual perspective be limited by the processing load required to achieve it. In computer graphics circles, it would be a virtual nirvana.

Unfortunately, in the real world, volume rendering continues to taunt researchers who desperately want to exploit the full potential of the visualization technology but who remain limited by a lack of resources to do so. Despite the huge gains in processing power and speed over the past decade, displaying, interacting with, and navigating through every volume element, or voxel, in a dataset that grows as N³ remains computationally prohibitive.
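
To put that N³ growth in concrete terms, a quick back-of-the-envelope calculation (an illustration, not a figure from the article) shows why brute force fails: a modest 256-voxels-per-side dataset already holds nearly 17 million voxels, and visiting each one 30 times a second means half a billion voxel operations per second before any shading arithmetic is done.

    # Back-of-the-envelope cost of brute-force volume rendering.
    # The dataset size and frame rate here are illustrative assumptions.
    n = 256                      # voxels per axis
    voxels = n ** 3              # 16,777,216 voxels in a 256^3 dataset
    frame_rate = 30              # target interactive rate, frames per second
    print(f"{voxels:,} voxels -> {voxels * frame_rate:,} voxel visits per second")
    # -> 16,777,216 voxels -> 503,316,480 voxel visits per second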

Undeterred by this reality, researchers are feverishly exploring hardware and software options to bypass it, some of which have already come to commercial fruition.

On the hardware front, efforts have focused on the development of graphics chips, cards, and boards dedicated specifically to voxel-based graphics acceleration. The RT Viz/Mitsubishi PC-based VolumePro board, for example, can render 256³ volumes at 30Hz frame rates, a previously impossible task. While such groundbreaking technology makes volume rendering more accessible both computationally and financially, hardware acceleration does not meet the needs of all volumetric applications. One example of where it falls short is its lack of support for perspective projection, which changes the image perspective based on the user's viewpoint, as is necessary for interactive navigation through a volume. Additionally, the hardware cannot yet integrate polygons and volumes, render irregular or curvilinear volume data, or perform raytracing (the projection of multiple rays per pixel to achieve refraction, shadows, and global-illumination effects).

To render large volume datasets, such as this voxelized F15 aircraft, at interactive rates without compromising image quality, the researchers at SUNY have developed a fast raycasting algorithm that uses multi-resolution hierarchies.

Because of these and other hardware limitations, researchers are also exploring software-based volume-rendering techniques, typically focusing on the application of accelerated algorithms running on high-performance computers. A promising effort in this regard comes from the Center for Visual Computing (CVC) at the State University of New York (SUNY) at Stony Brook, where researchers have developed a high-performance, presence-accelerated raycasting technique that enables interactive rendering rates for 256³ volumetric datasets. The algorithm employs a range of optimization techniques that enable it to support both parallel and perspective projection, as well as such "costly" operations as interactive classification, whereby the user can interactively manipulate opacity levels to explore the volume data.

Raycasting technology itself is not new. It's a standard, though traditionally time-consuming, approach to volume rendering in which rays representing the user's view are cast from each pixel of the image plane through the volume. As each ray traverses the volume, color and opacity values are sampled from the data at points along the ray, and the values along each ray are composited to produce the final color of that ray's pixel. The process is slow and computationally expensive because it involves projecting a ray from every pixel in the image plane and sampling the data at each step along each ray, including all of the empty space between the image plane and the object.
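
For readers who think in code, the inner loop of such a raycaster can be sketched as a front-to-back compositing march. This is a minimal illustration of the textbook technique, not SUNY's implementation; the volume sampling function is a stand-in for real data access.

    import numpy as np

    def cast_ray(volume, origin, direction, step=1.0, max_steps=512):
        """Composite color and opacity front to back along one ray.

        volume: a callable mapping a 3D point to (rgb, alpha) at that point.
        A generic raycasting sketch, not SUNY's algorithm.
        """
        color = np.zeros(3)
        alpha = 0.0
        pos = np.asarray(origin, dtype=float)
        d = np.asarray(direction, dtype=float)
        for _ in range(max_steps):
            rgb, a = volume(pos)               # sample the data at this point
            color += (1.0 - alpha) * a * np.asarray(rgb)   # front-to-back compositing
            alpha += (1.0 - alpha) * a
            pos += step * d                    # march to the next sample
        return color, alpha

Called once per pixel, with the origin on the image plane and the direction pointing through that pixel, this loop yields the pixel's final color.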

The SUNY researchers save time and computation by using presence-acceleration techniques, through which the projected rays are programmed to start "working" only when they hit the object. This is achieved through a boundary-object approach, whereby objects inside the volume are surrounded with tightly fitting bounding boxes. The intersection of each ray with the bounding box is calculated, and the actual volume traversal along the ray is programmed to begin from the first intersection point. Where the ray intersects a cell on the object's boundary, the data is sampled and the voxel is colored and shaded accordingly.
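
The ray/box entry test at the heart of this boundary-object step can be sketched with the standard slab method. The code below is a generic illustration of that idea under stated assumptions, not the researchers' actual code.

    def ray_box_entry(origin, direction, box_min, box_max):
        """Slab-method ray/box intersection: return the parameter t at which
        the ray first enters the bounding box, or None if it misses entirely.
        A sketch of the general boundary-object idea, not SUNY's exact code."""
        t_near, t_far = -float("inf"), float("inf")
        for axis in range(3):
            if direction[axis] == 0.0:
                if not (box_min[axis] <= origin[axis] <= box_max[axis]):
                    return None               # parallel ray outside this slab
            else:
                t1 = (box_min[axis] - origin[axis]) / direction[axis]
                t2 = (box_max[axis] - origin[axis]) / direction[axis]
                t_near = max(t_near, min(t1, t2))
                t_far = min(t_far, max(t1, t2))
        if t_near > t_far or t_far < 0.0:
            return None                       # ray misses the box
        return max(t_near, 0.0)               # start sampling here

Traversal then begins at the returned t rather than at the image plane, skipping the empty voxels in front of the object, which is exactly the saving Kaufman describes below.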

The opacity of a volume-rendered lobster can be interactively manipulated to see what's under the shell, thanks to data-compression and optimization techniques built into SUNY's accelerated volume-rendering algorithm.

Using this raycasting approach, says CVC director and project advisor Arie Kaufman, "you save yourself from computing all of the empty voxels, which could be a major chunk of your volume."

To further reduce the requisite memory space and access time, the algorithm compresses the boundary dataset using run-length encoding, an operation that skips runs of non-boundary cells. It also supports multiresolution rendering, so that templates describing the volume at low, medium, and high levels of detail can be interchanged depending on the intended application. The low-level-of-detail templates, for instance, are used for perspective projection to offset the added cost of computing dynamic viewpoints.
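
Run-length encoding itself is simple to illustrate. In a sketch like the one below (schematic only; the researchers' actual encoding of boundary cells may differ), long runs of non-boundary cells collapse to a single count that a ray can jump over in one step.

    def run_length_encode(boundary_flags):
        """Compress a scanline of cell flags (True = boundary cell) into
        (flag, run_length) pairs so runs of empty cells can be skipped.
        Illustrative only; the actual encoding details may differ."""
        runs = []
        for flag in boundary_flags:
            if runs and runs[-1][0] == flag:
                runs[-1][1] += 1
            else:
                runs.append([flag, 1])
        return runs

    # e.g. [False]*100 + [True]*3 + [False]*50 compresses to
    # [[False, 100], [True, 3], [False, 50]] -- a ray can leap the False runs.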

Although it can handle all types of volume data, SUNY's accelerated raycasting algorithm renders opaque objects more efficiently than it does translucent ones. "Our method conducts dense sampling along each ray until the accumulated opacity reaches [a threshold]. The more transparent the object, the lower its opacity, and the farther we go into the object. With an opaque object, once we reach its boundary, we stop ray traversal," says principal researcher Ming Wan.
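
The early ray termination Wan describes can be sketched as a cutoff test in the compositing loop. The 0.95 threshold below is an assumed value for illustration; the article gives no figure.

    OPACITY_CUTOFF = 0.95   # assumed threshold; the article does not give a value

    def composite_with_early_termination(samples, cutoff=OPACITY_CUTOFF):
        """Stop marching once accumulated opacity is effectively full.
        For an opaque object this triggers at the first boundary samples;
        for translucent data the ray keeps sampling deeper into the volume."""
        color, alpha = [0.0, 0.0, 0.0], 0.0
        for rgb, a in samples:                # samples in front-to-back order
            weight = (1.0 - alpha) * a
            color = [c + weight * v for c, v in zip(color, rgb)]
            alpha += weight
            if alpha >= cutoff:               # early ray termination
                break
        return color, alpha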

In the hopes of minimizing such "drag," the researchers are investigating more effective data-compression techniques. "We need to find a way to get a high-speed calculation [of translucent objects] because we want to use the algorithm in very large datasets, such as the Visible Human, where the estimation of presence information will be huge," says Wan.

In order to attain interactive rates, the accelerated raycasting algorithm is run in parallel on multiprocessors. Currently, the researchers are implementing it on an SGI Power Challenge, a bus-based shared-memory MIMD (Multiple Instruction, Multiple Data) machine with 16 processors.
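
Because each ray, and therefore each scanline, is computed independently, image-order raycasting parallelizes naturally. The sketch below uses Python's multiprocessing module as a stand-in for the 16-processor shared-memory machine; the image size and per-pixel helper are hypothetical placeholders.

    from multiprocessing import Pool

    WIDTH, HEIGHT = 256, 256     # assumed image size for illustration

    def trace_pixel(x, y):
        # Stand-in for the real per-ray work (casting and compositing one ray).
        return (x ^ y) & 0xFF    # dummy value so the sketch actually runs

    def render_scanline(y):
        """Render one scanline; scanlines are independent, so they
        parallelize trivially across processors."""
        return [trace_pixel(x, y) for x in range(WIDTH)]

    if __name__ == "__main__":
        # Python processes stand in here for the 16 processors of the
        # shared-memory SGI machine described in the article.
        with Pool(processes=16) as pool:
            image = pool.map(render_scanline, range(HEIGHT))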

The reliance on high-performance computing is both an advantage and disadvantage of software-based volume rendering. On the plus side, it provides the power and speed to get the job done. Unfortunately, the expense of such technology is prohibitive for many applications. Because of this, says Kaufman, "we're taking [the algorithm] to the next level of moving it into hardware acceleration." For the SUNY researchers, that means linking it to the next generation of their CUBE architecture, from which Mitsubishi's VolumePro technology was born.

Also on the R&D agenda are plans to develop improved techniques for achieving perspective projections. While the current algorithm provides adequate support for perspective viewing, says Wan, "we want a faster, less complicated approach. We want to make the volumes more useful for virtual-reality applications, which require real-time interaction with accurate [high level-of-detail] perspective views."

Not unlike the graphics revolution of the late '70s, when discrete raster graphics supplanted continuous vector graphics, another major change is on the horizon, says Kaufman. "The discrete representation of information by voxels is beginning to replace continuous surface representation with polygons. The process will be more of an evolution than a revolution, but it is going to happen."

The impact of such a transformation, Kaufman believes, will be felt across application areas. "Medical imaging is an obvious one, but we're also seeing people in many other areas, such as oil and gas exploration, entertainment, and even airport security, express interest in the technology," he says. "Imagine that instead of taking X-rays of luggage at the airport, CT machines will scan each piece and instantly construct volumetric models of the suitcases and their contents. The operator will be able to manipulate the view and change the translucency to get a better look. The sky is the limit for what we'll be able to achieve."

Diana Phillips Mahoney is chief technology editor of Computer Graphics World.