Volume 24, Issue 7 (July 2001)
Sifting Through Volumes
The benefits of direct volume rendering in scientific and medical visualization are well known. Because the technique maps all of the volume elements in a given dataset directly into the image plane, it enables the visualization of inner structures of solid objects, as well as the accurate representation of amorphous phenomena such as clouds, fluids, and gases, none of which can easily be achieved using traditional surface-rendering tools. However, the fact that direct volume rendering takes advantage of all of the volume elements in a dataset is not only its primary advantage, but also its chief disadvantage. The huge amount of data places a significant burden on computational resources, so although "all" of the information for a given volume dataset is theoretically available, the processing requirements limit its practical use.
Because of this shortcoming, researchers are seeking ways to exploit the value of direct volume rendering while minimizing the processing overload. One promising effort comes out of the Vienna University of Technology, where researchers Jirí Hladuvka and Eduard Gröller have devised a technique for automatically identifying objects of significance in a volume dataset. The resulting subset of the original dataset can then be displayed and explored at a fraction of the computational cost.
The automated saliency-identification technique is not the first to define a subset of a volume for display and interaction. A number of methods exist, for example, that identify boundaries of structures within a volume dataset by looking at the magnitude of local changes in signal intensity (the gradient magnitude). Such information is useful for locating edges in an image, but it is expensive to compute and still yields a large dataset. Other techniques extract isosurfaces from the original dataset, but this requires that the user specify which aspect of the data he or she wants to look at; in such cases, salient information can easily be overlooked. Still other approaches sample data randomly for reduction, with no eye toward the quality of the data being sampled.
[Figure: A lobster volume-rendered using conventional gradient-magnitude techniques (left) and the new method for identifying salient structures (right). From top to bottom, the images show results using 2, 4, and 6 percent of the total dataset.]
The Vienna researchers have taken a different approach. "The problem addressed in our work is 'how' to select the voxels to determine which ones from the volume should be chosen in order to have an interpretable result after visualization," says Hladuvka. The answer they've come up with is a two-stage process in which a filtering technique identifies boundaries and narrow structures within a dataset, and a "saliency" function analyzes the data to identify the voxels that meet a predefined standard in terms of satisfactory content for visualization.
The filtering technique is conceptually similar to the gradient-magnitude methods mentioned above, but instead of representing objects by computing both the internal and external sides of their boundaries, the new algorithm represents objects using only the internal side of their boundaries, effectively halving the number of necessary voxels.
The saliency of the resulting data is gauged using a specialized matrix that prioritizes the voxels based on their intensity relative to a predefined standard. Both the boundary filter and the saliency function are computed without user interaction. During display, the only external query refers to the number of voxels to be shown. "This number depends on the bottleneck of the visualization system, or the bandwidth of the network," says Hladuvka.
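The two-stage idea described above can be illustrated in a few lines of NumPy. The researchers' actual boundary filter and saliency function are not detailed in the article, so the sketch below substitutes simple stand-ins: gradient magnitude as the boundary measure, an intensity threshold for the "internal side" of a boundary, and a top-k cut for the voxel-count query. The function name and thresholding rule are assumptions for illustration, not the authors' method.

```python
import numpy as np

def select_salient_voxels(volume, fraction=0.06):
    """Rank voxels by a saliency score and keep the top fraction.

    Stand-in pipeline: gradient magnitude marks boundaries, an
    intensity test keeps only the "internal" side, and a sort
    answers the one external query (how many voxels to show).
    """
    # Stage 1 (stand-in): gradient magnitude highlights boundaries.
    gx, gy, gz = np.gradient(volume.astype(float))
    boundary_strength = np.sqrt(gx**2 + gy**2 + gz**2)

    # Keep only the internal side of each boundary: voxels at or
    # above the global mean intensity (an assumed rule), which
    # roughly halves the candidate set.
    interior = volume >= volume.mean()
    saliency = np.where(interior, boundary_strength, 0.0)

    # Stage 2: prioritize voxels by saliency and take the top k.
    k = int(fraction * volume.size)
    flat_order = np.argsort(saliency, axis=None)[::-1][:k]
    return np.unravel_index(flat_order, volume.shape)

# Usage: keep the 6 percent most salient voxels of a toy volume.
vol = np.random.rand(32, 32, 32)
idx = select_salient_voxels(vol, fraction=0.06)
print(len(idx[0]))  # number of selected voxels
```

The key design point is that the expensive work (filtering and scoring) happens once, offline; at display time only the cheap top-k selection depends on the system or network bottleneck.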
Another benefit over the gradient method is the ability to preserve detail while using a smaller percentage of the overall dataset. By identifying the salient features and giving priority to their representation, says Hladuvka, "we're able to represent a data set using 6 percent of the volume." Including additional voxels, he says, produces few changes in the information that can be gleaned from the visualization.
One of the drawbacks of the technique is that the pre-processing step is 2.7 times slower than the gradient method. To speed up the computation, the researchers are looking into hardware acceleration, as well as more efficient saliency-assessment algorithms.
Hladuvka envisions a range of commercial opportunities for the new technology. One of the most promising areas, he says, will be in the visualization of volume data over the Internet. The new technique is particularly well suited to progressive transmission over networks, as a server could be programmed to deliver the most salient voxels early, followed by voxels of progressively less saliency. In addition, he says, as Web-based repositories of multi-dimensional scientific data continue to grow, so does the need for content-based retrieval of the data. This technique could be invaluable for enabling fast 3D previews for data in a remote archive.
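The progressive-transmission scenario Hladuvka describes maps naturally onto a server that emits voxels in decreasing order of saliency. The sketch below is a hypothetical server-side generator, not any published implementation: names, the chunk size, and the flat coordinate/value layout are all assumptions.

```python
import numpy as np

def stream_by_saliency(coords, values, saliency, chunk=1024):
    """Yield (coordinates, values) batches, most salient first.

    A client can render a coarse but interpretable preview from
    the first batch and refine the image as later batches arrive.
    """
    order = np.argsort(saliency)[::-1]  # descending saliency
    for start in range(0, len(order), chunk):
        batch = order[start:start + chunk]
        yield coords[batch], values[batch]

# Usage: 5000 voxels streamed in chunks of 1024.
n = 5000
coords = np.random.randint(0, 64, size=(n, 3))
values = np.random.rand(n)
saliency = np.random.rand(n)
batches = list(stream_by_saliency(coords, values, saliency))
print(len(batches))
```

Because the ordering is fixed ahead of time, the same sorted stream also serves the fast-preview use case: a remote archive can answer a 3D preview request by sending only the first batch.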
Research on the salient volume representation technique is ongoing. More information is available on the project Web site, at http://www.cg.tuwien.ac.at/research/vis/vismed/SalientRepresentation/.
Diana Phillips Mahoney is chief technology editor of Computer Graphics World.