Volume 23, Issue 11 (November 2000)

Volume Visualization Gets Physical



Three-dimensional volume visualization generally isn't really 3D at all. While the available tools and techniques let scientists interactively explore 3D datasets, the images they create are typically displayed in 2D, on a computer screen or in hard copy. The "3D-ness" is an illusion.

In an attempt to exploit the advantages of true three dimensionality, researchers at the San Diego Supercomputer Center have developed a system for constructing 3D physical models from volumetric data. Using solid freeform fabrication equipment, they build the models as separate interlocking pieces that express in physical form the segmentation and cutting operations common in display-based visualization.

Although existing volume-visualization techniques provide powerful capabilities for revealing the inner structure of complex sampled data (from medical scanning technologies, for example) or computed quantities from numerical simulations, real-world physical models have a number of advantages over digital media. For one, according to researcher David Nadeau, "a physical model can be held and rotated in a natural way, and doing so doesn't require graphics hardware. The models can be viewed and understood anywhere, even by people without technical training." In addition, physical models can be viewed interactively regardless of complexity. "The real world has no polygons-per-second limitations, and lighting, shadows, and collision detection are free."
A fabricated skull and brain built using 3D volume data and rapid-prototyping tools can be manipulated to gain insight into the complex structure.




Working with real-world models is not problem-free, however. Segmentation, cutting, and exploration of physical objects are destructive processes. For example, you can only dissect a frog once.

In this research application, the SDSC researchers are attempting to leverage the advantages of both display-based visualization and real-world physical models by using digital segmentation and cutting techniques to non-destructively extract data of interest from the volume dataset, then manufacturing a physical model from the segmented data.

To create the proof-of-concept model for the project, the researchers used the head portion of the Visible Human Male dataset from the National Library of Medicine. First, they created a volumetric scene comprising separate CT, MRI, and cryosection data using software developed by Nadeau that can handle multiple volume datasets. The software is based on a volume scene graph, a hierarchical organization of shapes, groups of shapes, and groups of groups that collectively define the content of a scene. Once organized, the scene is voxelized: it is sampled at each point of a 3D grid, and the samples are saved as voxels in a new, discrete volume dataset that can be rendered directly.
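The idea can be sketched in a few lines. The classes below are hypothetical, not the SDSC software: leaf nodes wrap a gridded dataset, interior nodes combine their children (here simply by taking the maximum sample), and voxelization evaluates the tree at every grid point to produce a new discrete volume.

```python
import numpy as np

class VolumeNode:
    """Leaf of the scene graph: wraps a gridded dataset."""
    def __init__(self, data):
        self.data = data  # 3D array of scalar samples
    def sample(self, i, j, k):
        # nearest-neighbor lookup; a real system would interpolate
        return self.data[i, j, k]

class GroupNode:
    """Interior node: combines children (here, by taking the max)."""
    def __init__(self, children):
        self.children = children
    def sample(self, i, j, k):
        return max(c.sample(i, j, k) for c in self.children)

def voxelize(root, shape):
    """Evaluate the scene graph at every grid point, producing a
    new discrete volume that can be rendered directly."""
    out = np.empty(shape)
    for i in range(shape[0]):
        for j in range(shape[1]):
            for k in range(shape[2]):
                out[i, j, k] = root.sample(i, j, k)
    return out

# Combine two toy datasets into one scene and voxelize it.
a = VolumeNode(np.zeros((4, 4, 4)))
b = VolumeNode(np.ones((4, 4, 4)))
vol = voxelize(GroupNode([a, b]), (4, 4, 4))
```

In practice the combining rule at each group node (union, intersection, cutting) is what lets a single scene merge CT, MRI, and cryosection volumes and carve out regions of interest.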
Various volume datasets, including CT, MRI, and cryosection images, are combined to create a scene containing the skull and brain of the Visible Man. The skull is fabricated in two interlocking pieces to allow access to the brain. The brain is sliced horizontally.




If the resulting graphics scene were being rendered for display on a computer monitor, conventional isosurfacing techniques could be used to display the respective three-dimensional shapes. For this application, however, isosurfaces are not sufficient. "Isosurface algorithms find a surface by defining a boundary between values higher and lower than a chosen value. While such a surface can be quickly rendered using 3D graphics hardware, it is an infinitely thin sheet that is not physically manufacturable," says co-researcher Mike Bailey. The fabrication needs a sense of which parts are supposed to be solid and which parts are supposed to be air, information not contained in a surface representation. Thus, what's needed is an "isovolume": a solid bounded by two non-intersecting, non-porous isosurfaces.
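The distinction is easy to see in code. A minimal sketch of the isovolume idea (my illustration, not the researchers' algorithm): instead of selecting a single iso-value, which yields an infinitely thin surface, select all voxels whose value lies between two bounding iso-values, which yields a region with actual thickness that can be manufactured.

```python
import numpy as np

def isovolume_mask(volume, lo, hi):
    """Mark voxels whose value lies between the two bounding
    iso-values; these voxels are solid, everything else is air."""
    return (volume >= lo) & (volume <= hi)

# Toy example: a radial distance field. A single iso-value would
# give a thin sphere surface; two iso-values give a spherical
# shell with wall thickness.
n = 32
ax = np.linspace(-1, 1, n)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
r = np.sqrt(x**2 + y**2 + z**2)
shell = isovolume_mask(r, 0.5, 0.8)
```

A manufacturable part then needs only a triangulation of this solid region's outer skin, rather than of a single level set.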

For the actual manufacture of the interlocking pieces, the researchers relied on the rapid prototyping capabilities of the SDSC TeleManufacturing Facility (TMF), which consists of two fabrication machines, both connected to the Internet to support remote access and monitoring. One of the machines, a Helisys Laminated Object Manufacturing device, makes 3D parts from layers of paper or plastic. The other machine, a Z Corp. Z402, fabricates physical models from layers of powder.

To prepare the digital data for solid manufacture, the researchers implement an isovoluming algorithm that produces the list of 3D triangles bounding the outer skin of the solid in the standard STL file format for rapid prototyping.
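The output format itself is simple. As a hedged sketch (this is the standard ASCII variant of STL, not the researchers' own exporter), each facet in the file is a surface normal followed by three vertices:

```python
def write_stl_ascii(path, triangles, name="isovolume"):
    """Write triangles to an ASCII STL file.

    triangles: list of (v0, v1, v2), each vertex an (x, y, z) tuple.
    Normals are written as zero here; most slicing software
    recomputes them from the vertex winding order.
    """
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for v0, v1, v2 in triangles:
            f.write("  facet normal 0 0 0\n")
            f.write("    outer loop\n")
            for x, y, z in (v0, v1, v2):
                f.write(f"      vertex {x} {y} {z}\n")
            f.write("    endloop\n")
            f.write("  endfacet\n")
        f.write(f"endsolid {name}\n")

# One triangle is enough to show the file structure.
write_stl_ascii("tri.stl", [((0, 0, 0), (1, 0, 0), (0, 1, 0))])
```

Because the format is just a flat list of triangles with no connectivity information, file size grows linearly with triangle count, which is why the polygon-count concerns discussed below matter so much for fabrication.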

In modeling the head, the skull data was manipulated in the scene graph to allow it to be fabricated in two halves, a necessary feature enabling access to and removal of the brain model. In addition to cutting the skull vertically, the scene graph was used to cut the brain horizontally into four slices.

Moving from concept to practical reality with this volume-modeling technique will mean overcoming a number of technical obstacles. On the volume-rendering side, the researchers are developing data-management techniques. "A single volume dataset can be quite large. Supercomputer simulations can generate datasets from hundreds of gigabytes to terabytes in size, and the storage for a volumetric scene containing multiple large volumes is itself very large," says Nadeau. "The challenge is determining how to flow data efficiently through the system without requiring terabytes of RAM."

In terms of manufacturability, the researchers are focusing on polygon count. "Isosurface algorithms have the reputation of producing gazillions of triangles, and isovolumes are worse," says Bailey. "Unfortunately, the rapid-prototyping machines are geared toward CAD-based operations, which do not produce as much fine detail."

Both the scene graph software and the volume renderer employed by the researchers for this project are part of a larger suite of scalable volume visualization tools being developed under the NSF-funded National Partnership for Advanced Computational Infrastructure (NPACI). In addition to offering a compelling and natural way to explore the interior of complex volume datasets, this application provides insight into the potential value of telemanufacturing over the Internet.

Diana Phillips Mahoney is chief technology editor of Computer Graphics World.