Volume 23, Issue 9 (September 2000)

Turning Up the Volume on the Visible Human

The National Library of Medicine's Visible Human Project changed the way the world looks at medical visualization by making available complete, anatomically detailed, three-dimensional representations of the normal male and female human bodies. Each dataset is a compilation of thousands of serial cross sections, including CT and MRI scans and cryosection photographs, all acquired at sub-millimeter intervals. Countless research, education, and even entertainment applications have been built using part or all of these huge anatomical datasets. At the University of Maryland Baltimore County, researchers David Ebert and Penny Rheingans are striving to make the Visible Human data, as well as other photographic data, even more useful by applying new volume-rendering techniques.

With volume rendering, sampled scalar data can be displayed directly on the computer screen without having to fit geometric primitives to it, as is necessary with polygonal rendering. This is typically achieved using some type of raycasting technique in which a ray is cast from each pixel of the image plane through the volume, sampling color and opacity at each volume element, or voxel, it intersects. The samples along each ray are then composited, resulting in a 3D representation of all of the voxels that make up the object, including surface and interior information. Users can explore the "ins and outs" of an object by rotating it, slicing it, and varying its color and opacity values to enhance boundary distinctions.
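The per-ray combination step is standard front-to-back alpha compositing. A minimal sketch in Python with NumPy (not the researchers' code), assuming the color and opacity samples along one ray have already been extracted:

```python
import numpy as np

def composite_ray(colors, opacities):
    """Front-to-back compositing of (color, opacity) samples along one ray.

    colors:    (N, 3) array of RGB values in [0, 1], nearest sample first
    opacities: (N,)   array of per-sample opacity (alpha) in [0, 1]
    Returns the accumulated RGB color for the ray's pixel.
    """
    accum_color = np.zeros(3)
    accum_alpha = 0.0
    for c, a in zip(colors, opacities):
        weight = (1.0 - accum_alpha) * a   # remaining transparency times sample opacity
        accum_color += weight * np.asarray(c, dtype=float)
        accum_alpha += weight
        if accum_alpha >= 0.999:           # early ray termination: ray is nearly opaque
            break
    return accum_color

# A fully opaque first sample hides everything behind it.
print(composite_ray(np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]),
                    np.array([1.0, 1.0])))   # → [1. 0. 0.]
```

The early-termination test in the loop is a common optimization: once a ray has accumulated near-total opacity, the voxels behind it cannot contribute to the pixel.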
When opacity-transfer functions are applied and the color spaces varied in a volume-rendered shoulder section from the Visible Man, boundary distinctions become easily identifiable.

While this approach is useful when dealing directly with scalar data, it presents challenges when the source data consists of photographs of the sampled information. Although color values can be obtained from the photographic record of each voxel, there is no information on the opacity or the reflective or light-transmission properties of the voxels. This deficiency is the focus of the Maryland researchers' efforts. They are investigating methods for deriving scalar opacity data from the vector color data in the photographic datasets. "The main challenge," says Ebert, "is determining how opaque or dense each element is when only given the color of light [RGB] that was reflected from it." The key to discerning this information lies in careful consideration of the color spaces involved because, he says, "reflectance of the light rays is often based on gradients measured in the object volume."

In this vein, the researchers have implemented several different opacity-transfer functions using both the original RGB color space and a separate color space called CIE L*u*v*. As a "device-derived" color space, RGB is not tuned to human visual perception. In contrast, the CIE L*u*v* color space describes color with a lightness component (L*) and two chromaticity components (u* and v*), based on statistical information about human perception. The researchers consider the latter especially useful when defining opacity-transfer functions because the goal of such functions is to capture as much as possible of the anatomical structure visible to the human eye. "Using [the CIE L*u*v*] color space for opacity transfer-function calculations allows us to emphasize those features that are noticeable in the photographs, creating a more realistic volume rendering than using a device-derived color," says Ebert. Thus the first step in the pre-rendering process was to convert the color spaces.
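The conversion itself is standard colorimetry. A sketch, assuming linear RGB with sRGB primaries and a D65 white point (the article does not specify the camera characterization the researchers used):

```python
import numpy as np

# Linear-RGB -> XYZ matrix (sRGB primaries, D65 white; an assumption here).
RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])
WHITE = RGB_TO_XYZ @ np.ones(3)   # XYZ of the reference white

def rgb_to_luv(rgb):
    """Convert one linear RGB triple (components in [0, 1]) to CIE L*u*v*."""
    x, y, z = RGB_TO_XYZ @ np.asarray(rgb, dtype=float)
    xn, yn, zn = WHITE
    yr = y / yn
    # L* is lightness; the cube-root branch is the standard CIE definition
    L = 116.0 * yr ** (1.0 / 3.0) - 16.0 if yr > (6 / 29) ** 3 else (29 / 3) ** 3 * yr
    d, dn = x + 15.0 * y + 3.0 * z, xn + 15.0 * yn + 3.0 * zn
    up, vp = (4.0 * x / d, 9.0 * y / d) if d > 0 else (0.0, 0.0)
    upn, vpn = 4.0 * xn / dn, 9.0 * yn / dn
    # u* and v* are chromaticity differences from the white point, scaled by L*
    return np.array([L, 13.0 * L * (up - upn), 13.0 * L * (vp - vpn)])

print(rgb_to_luv([1.0, 1.0, 1.0]))   # white: L* = 100, u* = v* = 0
```

Because u* and v* are measured relative to the white point and scaled by lightness, equal numeric steps in this space correspond much more closely to equal perceived color differences than equal steps in RGB do.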

Next, the researchers developed and tested three separate opacity-transfer functions. The first looks specifically at chromaticity and luminance differences within the color data: the density of each voxel is set, in turn, to each of its separate L*, u*, and v* color components, and the data is rendered in a color that did not appear in the actual photographic data. The resulting opacity changes are easily identifiable when composited with the original photographs. Specifically, the red/green color changes in the photograph correspond strongly to the boundaries of the muscles and of the fat.
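This first transfer function can be sketched in a few lines; the function name and normalization are illustrative assumptions, not the researchers' code:

```python
import numpy as np

def channel_opacity(luv_volume, channel, scale=1.0):
    """Use one CIE L*u*v* component of each voxel directly as its density.

    luv_volume: (X, Y, Z, 3) array of per-voxel L*, u*, v* values
    channel:    0 for L*, 1 for u*, 2 for v*
    scale:      overall transparency control
    """
    c = luv_volume[..., channel].astype(float)
    rng = c.max() - c.min()
    # normalize the chosen component to [0, 1] so it can serve as opacity
    return np.clip(scale * (c - c.min()) / (rng if rng else 1.0), 0.0, 1.0)
```

Rendering the result in a flag color absent from the photographs, as the article describes, makes the opacity changes stand out when the rendering is composited with the original images.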

The second opacity-transfer function measures the volumetric color-gradient magnitude, that is, the length of the color-gradient vector at each voxel, which captures how quickly color changes between neighboring voxels. Implemented separately with both RGB and CIE L*u*v* color-gradient magnitudes, this technique captures detail in certain areas of the data, but not much across the data as a whole. The third transfer function, however, adds a substantial amount of information to the gradient-magnitude calculations. Called a dot-product transfer function, this method considers the angle of each voxel gradient relative to those of its six neighbors in three dimensions, and thus is able to capture changes in the orientation of the gradient in addition to changes in its length. This function proved capable of capturing muscle detail as well as bone, fat, and other tissue features. By combining the transfer functions, the researchers were able to compute the density of each voxel, then render the resulting volume using a modified raytracing system.
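The second and third functions can be sketched as follows; this is a minimal NumPy version under assumed simplifications (a regular grid, wrap-around at the volume edges via np.roll, and a per-voxel scalar gradient for the dot-product step), with illustrative helper names:

```python
import numpy as np

def gradient_magnitude_opacity(vol):
    """Second transfer function: opacity from the color-gradient magnitude,
    i.e. the length of the color gradient at each voxel.
    vol: (X, Y, Z, 3) volume in RGB or CIE L*u*v*."""
    gx, gy, gz = (np.gradient(vol, axis=i) for i in range(3))
    mag = np.sqrt((gx**2 + gy**2 + gz**2).sum(axis=-1))
    return mag / mag.max() if mag.max() > 0 else mag

def dot_product_opacity(vol, eps=1e-8):
    """Third ("dot-product") transfer function, sketched: compare each voxel's
    gradient direction with those of its six axis neighbors, so changes in
    gradient orientation contribute to opacity as well as changes in length."""
    gx, gy, gz = np.gradient(vol.mean(axis=-1))   # one scalar gradient per voxel
    g = np.stack([gx, gy, gz], axis=-1)
    g_unit = g / (np.linalg.norm(g, axis=-1, keepdims=True) + eps)
    misalign = np.zeros(vol.shape[:3])
    for axis in range(3):                         # six neighbors: +/- along each axis
        for shift in (-1, 1):
            neighbor = np.roll(g_unit, shift, axis=axis)
            misalign += 1.0 - (g_unit * neighbor).sum(axis=-1)
    return misalign / 12.0                        # 6 neighbors x max misalignment of 2
```

A uniform gradient, such as a linear color ramp, yields near-zero dot-product opacity: the gradient's orientation never changes, so only the magnitude term would flag it. Tissue boundaries, where the gradient direction swings, show up strongly in the dot-product term.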
Muscle, bone, and fat details are evident in varying degrees in different volume views of the same Visible Man data, thanks to techniques for measuring the length and angle of the color voxel gradient in the 64 volume-rendered slices.

While pleased with their early results, the researchers acknowledge that the technology is not yet user-ready. "The challenge," says Ebert, "is to incorporate techniques that allow the user to be able to highlight different components and aid the understanding of important features." When such capabilities are developed, he says, a wide range of medical applications stands to benefit. "Many medical applications use sectioned organs for research into disease pathology. Using volume rendering of this data would allow reconstructions that are very realistic. The technology can easily be used to create a medical atlas and could provide reference volumes for simulation and diagnosis."

In addition to the Visible Human Project, a number of other imaging-based research initiatives are turning out high-resolution datasets that are amenable to the techniques discussed here, including the Visible Embryo Project at the Armed Forces Institute of Pathology and the Whole Frog Project at Lawrence Berkeley National Laboratory. In fact, says Ebert, the technology is suitable for any research activity that involves high-resolution images of organ cross sections.

Diana Phillips Mahoney is chief technology editor of Computer Graphics World.