Volume 23, Issue 7 (July 2000)

A new view on volumes - 7/00



By Diana Phillips Mahoney

In computer graphics circles, volume rendering has long had a reputation as an "if only/then" technology. If only desktop computers had the speed, power, and memory necessary for allowing real-time interaction with the characteristically huge volume datasets, then the prospect of being able to see beneath the surface of a CG object would have a greater appeal to mainstream computer graphics users. If only mainstream computer graphics users would see the potential value of volume rendering for their applications and demand tools to capitalize on this potential, then software and hardware vendors would eagerly strive to meet the users' needs. If only software and hardware vendors would have the foresight to understand how volume rendering could address the next-generation visualization needs of users and respond a priori, then users could experiment with the new tools to develop the killer applications needed to forevermore render volumes indispensable.




Thanks to recent advances in both hardware and software technologies, many of the "if onlys" are now coming to pass. Yet to be seen is whether the "thens" will soon follow.

Three-dimensional volume rendering is fundamentally different from conventional polygon-based 3D rendering. The latter is primarily concerned with representing the surfaces of objects by placing polygons on wireframe models that can be twisted and rotated but cannot simulate the object's interior. In contrast, volume rendering is a technique for directly displaying a sampled 3D scalar field, such as CT and MRI scans or seismic data, without first fitting geometric primitives to the samples. The resulting dataset comprises a 3D array of all of the volume elements (voxels) that make up the object, including both surface and interior information. Theoretically, not only can a volume-rendered model be twisted and rotated, it can be sliced, diced, and chopped, and the resulting representation will be an accurate portrayal of the volume under those circumstances.
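For readers who prefer code to prose, the voxel idea can be sketched in a few lines. The snippet below (Python with NumPy, an arbitrary choice made purely for illustration, with a synthetic dataset standing in for a real scan) treats a volume as a 3D array of scalar samples; cutting into it is just array indexing, and the interior values are always there to be inspected.

```python
import numpy as np

# A synthetic 64x64x64 scalar field standing in for a CT or MRI volume:
# density falls off with distance from the center, like a fuzzy ball.
n = 64
z, y, x = np.mgrid[0:n, 0:n, 0:n]
r = np.sqrt((x - n/2)**2 + (y - n/2)**2 + (z - n/2)**2)
volume = np.clip(1.0 - r / (n/2), 0.0, 1.0).astype(np.float32)

# Every voxel is addressable, so interior structure is preserved.
print("voxel at the center:", volume[n//2, n//2, n//2])   # near 1.0
print("voxel near the edge:", volume[2, 2, 2])             # near 0.0

# "Slicing and dicing" is just indexing; the cut exposes interior data.
axial_slice = volume[n//2, :, :]        # a 64x64 cross-section
print("slice shape:", axial_slice.shape, "mean density:", axial_slice.mean())
```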
The sky's the limit when it comes to rendering clouds using Arete's Digital Nature Tools. The software uses volumetric techniques to render physically based models.




In the mainstream PC and workstation markets, however, volume rendering has long been considered a niche technology, particularly suited to scientific and medical-imaging applications. To be sure, 3D medical imaging stands to gain a lot from the ability to efficiently and effectively render volumetric datasets. "Surfaces aren't a natural representation for radiologists who are used to looking at things that look more like X-rays. Volume rendering satisfies that," says Bill Lorenson, visualization researcher at the GE Corporate Research and Development Center in New York.

Although medical imaging is the most talked-about application, it is but one of many diverse applications that could be well served by advanced volume rendering capabilities. Among others that have been identified are non-destructive testing, rapid prototyping, reverse engineering, oil and gas exploration, physics, astronomy, fluid dynamics, meteorology, and molecular modeling. In addition, the ability to easily and accurately represent environmental effects such as water, fire, clouds, and explosions, all defined by volumetric data, could have a significant impact on entertainment applications including feature films and interactive games.

The migration of volume rendering into these areas has been hampered by the lack of commercial tools capable of handling the massive calculations (which grow with n³ complexity) that are required and delivering the results onto a monitor within a reasonable timeframe. Within the past two years, however, new hardware and software technologies have emerged to make volume rendering more accessible to the computer graphics masses. Many of these offer faster, more efficient takes on a conventional volume-rendering technique called raycasting. With raycasting, which is traditionally implemented in software, rays representing the user's view of a volume are projected from each pixel of the image plane into the object. As each ray passes through the volume, color and opacity values are sampled at each volume element it crosses, and the accumulated values along the ray are combined to form the final image. The process is slow and computationally intensive because it involves projecting a ray from every pixel in the image plane through the volume and calculating a data value at every sample point along every ray. Consequently, raycasting has generally been considered too slow to be used for real-time or even reasonable-time rendering on standard PCs.
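As a rough illustration of that loop, the sketch below (Python with NumPy, using an orthographic view, a toy dataset, and a grayscale transfer function, all assumptions made for illustration rather than details of any product discussed here) steps one ray per pixel through a small volume and composites color and opacity front to back. The three nested loops are where the n³ cost comes from.

```python
import numpy as np

# Small synthetic volume: a soft sphere of density values in [0, 1].
n = 64
z, y, x = np.mgrid[0:n, 0:n, 0:n]
volume = np.clip(1.0 - np.sqrt((x-n/2)**2 + (y-n/2)**2 + (z-n/2)**2) / (n/2), 0, 1)

def transfer(sample):
    """Toy transfer function: map a density sample to (gray color, opacity)."""
    return sample, sample * 0.05          # low per-sample opacity

def raycast(vol):
    """Orthographic raycast along +z with front-to-back compositing."""
    h, w = vol.shape[1], vol.shape[2]
    image = np.zeros((h, w))
    for j in range(h):                     # one ray per pixel...
        for i in range(w):
            color_acc, alpha_acc = 0.0, 0.0
            for k in range(vol.shape[0]):  # ...marched through the volume
                c, a = transfer(vol[k, j, i])
                color_acc += (1.0 - alpha_acc) * a * c
                alpha_acc += (1.0 - alpha_acc) * a
                if alpha_acc > 0.95:       # early termination: ray is opaque
                    break
            image[j, i] = color_acc
    return image

img = raycast(volume)
print("rendered image:", img.shape, "brightest pixel:", img.max())
```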
Rendering realistic natural scenes such as foamy water and discrete clouds requires knowledge of how the full volume interacts with light. With Arete tools, such an understanding is achieved through the software's physically based dynamics operations




This notion is changing, however, thanks to a number of new and soon-to-debut technologies that either accelerate standard raycasting or attempt to mimic its visual effects through other techniques. Among the most significant of these is the PC-based VolumePro 500, a DRAM-laden PCI board introduced in 1999 by Mitsubishi Electric's RTViz (Real-Time Visualization) group. The board implements object-space raycasting through the use of a dedicated chip that enables the real-time (30 frames per second) rendering of 256³ volumes.

"The VolumePro is the first successful commercial attempt to implement these volume-rendering algorithms in a cost-effective piece of hardware," says Lorenson, who predicts the technology could go a long way toward popularizing volume-rendering applications. "Now that there's this great tool available, the hope is that customers will figure out new ways to use it. All it will take are some really compelling applications to drive the technology to the next level and broaden the appeal of volume rendering in general."
Water, water, everywhere, and not a drop looks fake. The physics that define how volumes of water interact with light are well understood, but the algorithms that describe them are rendering-time and memory hogs. Often, as with this water shot from Arete,




While clearly a significant development, the VolumePro is far from a one-stop volume-rendering solution. Although its performance is unbeatable, its flexibility is not. For example, it is currently limited to supporting only parallel projection, whereby all of the rays are cast parallel to each other from a given viewpoint. As such, a user is not able to "fly through" a volume or otherwise effect a change in the volume view by changing his or her own viewpoint. The lack of support for such "perspective projection" (achieved by accurately diverging the rays as they travel away from the viewer's eye) precludes the use of the board for rendering a virtual walkthrough of a geophysical volume, for example, or an interactive surgical simulation.

Additionally, the VolumePro 500 does not let users combine surface and volume data, a useful capability for a number of applications. "Sometimes you'd like a hard surface to give context to the volume rendering as a way to orient yourself," says Lorenson. In a medical application, this might mean adding skin so you can tell exactly where you are on the patient, or showing a virtual, polygonal scalpel as it penetrates the volume.

Because VolumePro customers are clamoring for this capability, RTViz is planning to incorporate it in the next-generation board, the VolumePro 1000, due later this year. "On the conceptual level, it's not that hard to do," says Hanspeter Pfister, a research scientist at the Mitsubishi Electric Research Laboratory (MERL) and chief architect of the VolumePro 500. "In essence, what we're doing [to achieve it] is rendering the polygonal scene on existing graphics hardware, and getting an image buffer and Z-buffer from the polygonal scene. We're using the Z-buffer information to determine where the volume-rendering rays stop. So the Z-buffer is a precondition for the volume rendering." As a result, with the new board, users will be able to render volumes up to and behind polygons, and the board will be able to render translucent polygons and volumes using simple multipass techniques.
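Pfister's description boils down to a simple recipe: render the polygons first, keep their depth buffer, and stop each volume ray when it reaches the stored polygon depth. The following is a minimal sketch of that idea, offered as illustration rather than RTViz code, with made-up depth units and a single hypothetical pixel.

```python
def composite_ray(samples, stop_depth, step=1.0):
    """March one ray front to back, stopping at the polygon depth from the Z-buffer.

    samples    : (color, opacity) pairs along the ray, nearest first
    stop_depth : depth of the nearest polygon for this pixel (from the Z-buffer)
    """
    color_acc, alpha_acc, depth = 0.0, 0.0, 0.0
    for c, a in samples:
        if depth >= stop_depth:        # ray has reached the polygon surface;
            break                      # everything behind it is hidden
        color_acc += (1.0 - alpha_acc) * a * c
        alpha_acc += (1.0 - alpha_acc) * a
        depth += step
    return color_acc, alpha_acc

# Hypothetical pixel: a translucent volume in front of a polygon at depth 10.
ray = [(0.8, 0.1)] * 30                # 30 uniform samples along the ray
vol_color, vol_alpha = composite_ray(ray, stop_depth=10.0)
polygon_color = 0.5
final = vol_color + (1.0 - vol_alpha) * polygon_color   # volume "over" polygon
print(round(final, 3))
```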

Also on the agenda for the VolumePro 1000 is more on-board memory. Currently, the VolumePro 500 has 256MB of RAM. Volumes requiring more RAM have to be rendered in sub-volumes. If the volume is too large to fit in the on-board memory, it has to be stored in main system memory and reloaded to the board for each and every frame. Even the gigabyte of RAM slated for the VolumePro 1000 will likely not satisfy every user's needs, at least not for long. "The size of the problem grows every year," says Pfister. For example, imaging scanners sample data at a resolution of 512² for each slice, and there are hundreds of slices. In the near future, they'll be moving to 1024² resolution. It's an uphill battle, he says, to attempt to match such dramatic growth.
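The arithmetic behind that concern is easy to reproduce. A back-of-the-envelope calculation (the slice counts and bytes per voxel below are illustrative assumptions, not figures from RTViz) shows how quickly scanner output outgrows a fixed amount of on-board memory.

```python
# Rough voxel-memory arithmetic for the dataset sizes quoted in the article.
# Slice counts and bytes per voxel are illustrative assumptions.
def volume_megabytes(width, height, slices, bytes_per_voxel=2):
    return width * height * slices * bytes_per_voxel / (1024 ** 2)

print(volume_megabytes(256, 256, 256))    # 32 MB   : a 256^3 volume, 16-bit voxels
print(volume_megabytes(512, 512, 500))    # 250 MB  : today's 512^2 slices, ~500 of them
print(volume_megabytes(1024, 1024, 500))  # 1000 MB : tomorrow's 1024^2 slices
```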

While the RTViz approach to volume rendering clearly attempts to design a technology to fit the task, a second approach is to fit the task to available technology using 2D and 3D texture mapping. This approach offers a means of retrofitting conventional polygon-based techniques to handle volume datasets. Two-dimensional texture mapping, which can be achieved on even low-end PCs equipped with standard graphics-acceleration capabilities, involves slicing a volume into a series of planes that run parallel to the image plane. Each point on each plane is calculated, and a representative texture for that plane is generated. The textured slices are then arranged in back-to-front order and rendered using standard alpha blending. The resulting texture-mapped image approximates a volume representation and can be rendered in real time. When the viewpoint changes, however, the slice direction must change so that the planes remain roughly parallel to the new image plane, and that switch can introduce unwanted artifacts.
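In code, the 2D texture-map approach boils down to blending a stack of precomputed slices from back to front. The sketch below is a software stand-in (Python/NumPy, with random slice data invented for the example) for what graphics hardware does with textured quads and alpha blending.

```python
import numpy as np

# A stack of 2D "texture" slices through a volume, each an RGBA image.
# Here: 32 slices of 64x64 pixels with random color and low opacity.
rng = np.random.default_rng(0)
slices = rng.random((32, 64, 64, 4)).astype(np.float32)
slices[..., 3] *= 0.1                     # keep per-slice opacity low

# Back-to-front "over" blending, the same operation alpha-blending hardware
# performs when the textured planes are drawn in depth order.
# Assume slices[0] is nearest the viewer, so draw the reversed stack.
framebuffer = np.zeros((64, 64, 3), dtype=np.float32)
for s in slices[::-1]:                    # farthest slice first
    rgb, a = s[..., :3], s[..., 3:4]
    framebuffer = rgb * a + framebuffer * (1.0 - a)

print("blended image:", framebuffer.shape, framebuffer.mean())
```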

Another technique-one that SGI made commercially feasible a couple of years ago through its OpenGL-based Volumizer API-avoids such artifacts by using 3D texture maps. With this approach, the entire object is represented with tetrahedra as volumetric primitives. These can be sliced at any angle and the resultant image is 3D texture-mapped in real time. The effect more closely parallels that which can be achieved through software-based raycasting.
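The key service 3D texture hardware provides is trilinear interpolation: a slice polygon at any orientation can fetch a filtered value from anywhere inside the volume. A minimal software version of that lookup, offered purely as illustration and not as Volumizer code, looks like this:

```python
import numpy as np

def trilinear_sample(volume, x, y, z):
    """Trilinearly interpolated lookup into a 3D array at a fractional position,
    the operation 3D-texture hardware performs for every fragment on a slice."""
    x0, y0, z0 = int(x), int(y), int(z)
    x1 = min(x0 + 1, volume.shape[2] - 1)
    y1 = min(y0 + 1, volume.shape[1] - 1)
    z1 = min(z0 + 1, volume.shape[0] - 1)
    fx, fy, fz = x - x0, y - y0, z - z0
    c = volume
    # Interpolate along x, then y, then z.
    c00 = c[z0, y0, x0]*(1-fx) + c[z0, y0, x1]*fx
    c01 = c[z0, y1, x0]*(1-fx) + c[z0, y1, x1]*fx
    c10 = c[z1, y0, x0]*(1-fx) + c[z1, y0, x1]*fx
    c11 = c[z1, y1, x0]*(1-fx) + c[z1, y1, x1]*fx
    c0 = c00*(1-fy) + c01*fy
    c1 = c10*(1-fy) + c11*fy
    return c0*(1-fz) + c1*fz

vol = np.arange(4*4*4, dtype=np.float32).reshape(4, 4, 4)
print(trilinear_sample(vol, 1.5, 2.25, 0.5))   # filtered value between voxels
```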
Real-time interaction with a volume-rendered skull-something still impossible on standard desktop workstations with software raycasting-can be achieved through dedicated hardware acceleration using the VolumePro board. Interactive techniques let users adj




The most obvious advantage of texture mapping for rendering volumes is that it relies on existing, polygon-based technology. Additionally, it offers a viable, unified framework for treating both volumes and surfaces. The tradeoff for such familiarity and unity, however, is in image quality and capacity. While 3D texture mapping is "good enough" for some applications, texture memory in the PC space currently comes on the order of 32MB (possibly 64MB in the near future), which is nowhere near enough for many volume graphics applications.

Additionally, hardware texture mapping is limited by the number of bits available into which information can be accumulated. Most texturing hardware uses no more than (and often less than) 8-bit operations. In contrast, the VolumePro currently uses 12 bits and is slated for more. This difference is critical for some applications, particularly medical ones, in which accuracy and precision are paramount. Another obstacle is that 3D volume textures are huge. Even a relatively small 64³, 32-bit volume texture consumes a megabyte of texture memory. Add minimal interaction, and on-board texture memory is spent. While many users expected SGI to help in this regard with its promised Windows-based Volumizer, the company canceled the release, and in fact seems to have halted future Volumizer development altogether, presumably to focus its energy on its core workstation market.

Fortunately, a light appeared at the end of the 3D texture-mapping tunnel earlier this year, when Nvidia announced that its new Volume Texture Compression (VTC) format has been licensed by Microsoft for inclusion in the DirectX 8 graphics API and will also be supported in Nvidia's future-generation chips. The announcement stirred up a lot of excitement in the visualization community. In fact, one industry insider predicts that Nvidia, with its VTC, "will eat SGI's lunch and probably dinner too, and will be the company to watch for 2D and 3D texture mapping."

VTC is a compressed format for storing volume textures. With it, 3D texture data can be stored in one-eighth the memory required for the uncompressed data, thus enabling the storage of eight times as much texture data in the same amount of memory. "It's important that the decompression of the compressed volume textures can be performed in hardware, because software decompression just wouldn't be fast enough to support real-time hardware rendering," says David Kirk, Nvidia chief scientist.

Of course the VTC technology will not initially affect the visualization community at large, but rather game developers who see volume rendering not as an end in itself, but as a means to an end: more realistic games. "One example of this," says Kirk, "is the use of a volume texture to represent volumetric fog in a game. The 3D texture can be used to show the density and light-absorbing capability of a volume of fog in a 3D space. This information can then be used as part of the process to render an environment that has fog of varying density in different parts of the scene."
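Kirk's fog example can be sketched simply: march along the eye ray, accumulate the density stored in the 3D texture, and attenuate the background by an exponential absorption law. The code below is one possible illustration of that idea, not Nvidia's implementation; the density function, extinction constant, and fog color are invented for the example.

```python
import math

# A hypothetical 3D fog-density field: denser fog near the ground (low y).
# In a game this would be a lookup into a compressed 3D texture.
def fog_density(x, y, z):
    return max(0.0, 1.0 - y) * 0.5

def fogged_color(background, ray_origin, ray_dir, length, steps=64, extinction=1.2):
    """Attenuate the background color by fog of varying density along the ray."""
    dt = length / steps
    optical_depth = 0.0
    x, y, z = ray_origin
    dx, dy, dz = ray_dir
    for _ in range(steps):
        optical_depth += fog_density(x, y, z) * extinction * dt
        x, y, z = x + dx*dt, y + dy*dt, z + dz*dt
    transmittance = math.exp(-optical_depth)          # Beer-Lambert absorption
    fog_color = 0.8                                   # gray fog
    return background * transmittance + fog_color * (1.0 - transmittance)

print(fogged_color(background=0.2, ray_origin=(0, 0.1, 0), ray_dir=(0, 0, 1), length=5.0))
```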
This griffon speaks volumes thanks to its creator's use of compressed 3D volume textures for both the marbleized surface and the light sources, which enable the description of the soft shadows on the creature. Nvidia in-house artist Adrian Niu created




Volume textures could also be used in games as part of lighting and shading calculations. Currently, Nvidia's GeForce2 GTS chip can run a pixel-shading program for each and every pixel, completing mathematical calculations and accessing up to two 2D textures as arrays of shading information. "Volume textures can extend this pixel-shading paradigm to include 3D arrays of data. This extra dimension adds a tremendous amount of flexibility to the variety of lighting and shading calculations that can be performed per pixel," says Kirk.

To many in the visualization industry, the Nvidia announcement is a sign of good things to come for the volume-visualization community at large. "If someone finds a good use of 3D texture mapping in games, more of the PC board vendors will start to embrace the technology, and we can ride that into other areas," says GE's Lorenson. At the very least, he says, "such developments will encourage more communication between the 'Siggraph-ers' and the gamers, with the hope that what game developers did for polygons-driving cost down and performance up-they'll be able to do for 3D textures."

Despite the obvious performance advantages of both 3D texture-mapping and dedicated voxel hardware, neither can match the image quality and accuracy of full, uncompromised software-based raycasting. Efforts are at hand, however, to bring the best of each of these worlds together.

For example, a young company called Kitware has developed a volume-rendering API that provides links to all three. The product, called VolView, is a turnkey volume-rendering application built on top of the open-source Visualization Toolkit developed at GE CRD.

VolView can be used with the VolumePro board to take advantage of the latter's real-time performance, but it is not limited to working with that board. A separate operational mode implements a multiresolution strategy using both texture-mapping hardware and software raycasting. "We shrink the volume to smaller representations, creating texture maps at several resolutions, and in the end we raycast the volume," says VolView codeveloper Lisa Sobierajski Avila. "As you're moving around, [the system] is texture mapping a level of detail that's small enough to enable interaction at a minimum of five or so frames per second, so at least it feels interactive. When you let go, it starts filling in with higher resolution texture-mapped images, and finally the raycast images."
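Avila's description amounts to a progressive-refinement loop: draw a cheap, low-resolution version while the user is interacting, then re-render at successively higher quality once the camera comes to rest, finishing with a full raycast. The sketch below paraphrases that strategy; the function names and timing are hypothetical, and none of it is VolView source.

```python
import time

# Hypothetical renderers, ordered from cheapest to most expensive.
def render_texture_lod(level):      # low-resolution texture-mapped pass
    print(f"texture-mapped render at 1/{2**level} resolution")

def render_raycast():               # final full-quality software raycast
    print("full-resolution raycast render")

def display_loop(is_interacting, idle_budget=0.2):
    """Draw cheap frames while the user moves, refine when the camera rests."""
    levels = [3, 2, 1]              # progressively finer texture resolutions
    while True:
        if is_interacting():
            render_texture_lod(levels[0])         # keep things interactive
        else:
            for lvl in levels:                    # refine step by step...
                if is_interacting():
                    break
                render_texture_lod(lvl)
                time.sleep(idle_budget)
            else:
                render_raycast()                  # ...and finish with raycasting
                return

# Example: the user stops moving immediately, so the loop refines, then raycasts.
display_loop(is_interacting=lambda: False)
```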

In addition to its support for existing software and hardware rendering technologies, the modular VolView architecture is designed to easily support other capabilities as they become available. In fact, because it is based on an open-source toolkit and all of the technology (minus the user interface and packaging) for the application has been added to the open-source system, it benefits from advances made by countless others. "Open source is the most scalable form of software development, because it gets a lot of the 'right' people working on the problem," says Kitware cofounder Will Schroeder. "People we don't even know go out on their own and make an improvement, either in performance or in features, and that benefits the whole community, including Kitware and its proprietary tools."
Different representations of a volume-rendered foot provide significantly more insight than could be achieved using surface rendering. The CT-based model was rendered with the VolumePro.




The cost of VolView's versatility is a steep learning curve that makes the tool more appropriate for power users and programmers than for typical end users.

Mainstream Volume Support

End users are not being left out of the volume rendering picture, however. A number of commercial animation and effects packages are taking note of the call for volume rendering and beginning to respond in kind. Arete Image Software set the standard a few years back with its Digital Nature Tools product line. The standalone product and associated plug-ins to the major animation packages enabled users to put a rendered face on the company's trademark physically based dynamics calculations for such volumetric environmental effects as water, smoke, and clouds. "Almost all of the phenomena we simulate-foamy water, discrete clouds-are full volumes," says Arete chief scientist Dave Wasson. "Unless you're able to deal with the entire volume-how it interacts with light, how light bounces around and fills it up-you can't get a realistic effect."

The challenge of doing this "right" lies not in the physics themselves, but in the computational intensity required to reproduce the physics on screen. "There are no fundamental physics to invent. You can write algorithms to describe how light interacts with an ocean wave or a cloud, but they're going to be too slow and use up too much memory," says Wasson. "The real challenge is finding good approximations that work on current computers."

While such an approach might not meet the accuracy needs of scientific applications, it's a perfect fit for the entertainment applications for which Digital Nature Tools are used. "Our products are full volumetric renderers in the sense that they do ray marching through the volume to get the light intensity at each point. But on the other hand, we're doing crude approximations to light physics to get the look we want," says Wasson. "For example, if we have a light interacting with clouds, we figure out what the important things are about how clouds look, then create algorithms to duplicate that without overburdening the system. It might mean figuring out how to simulate multiple scattering effects [the calculations for which are computationally costly] by putting 'fake things' in a simple shadowing algorithm."
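That philosophy can be illustrated with a ray marcher whose lighting is a cheap single-scattering shadow term plus a constant ambient boost standing in for multiple scattering. The sketch below is a minimal construction under those stated assumptions, not Arete's algorithm; the density field and constants are invented for the example.

```python
import math

def cloud_density(x, y, z):
    """Toy cloud: a soft blob of density centered at the origin."""
    return max(0.0, 1.0 - math.sqrt(x*x + y*y + z*z))

def light_at(x, y, z, light_dir=(0, 1, 0), steps=8, dt=0.2):
    """Cheap shadow term: march a short ray toward the light and attenuate.
    The constant ambient floor is the 'fake' stand-in for multiple scattering."""
    shadow = 0.0
    for i in range(1, steps + 1):
        shadow += cloud_density(x + light_dir[0]*i*dt,
                                y + light_dir[1]*i*dt,
                                z + light_dir[2]*i*dt) * dt
    ambient = 0.2                                  # fake multiple scattering
    return math.exp(-shadow * 2.0) + ambient

def march(origin, direction, length=4.0, steps=64):
    """Ray-march through the cloud, accumulating lit, attenuated samples."""
    dt = length / steps
    color, transmittance = 0.0, 1.0
    x, y, z = origin
    for _ in range(steps):
        d = cloud_density(x, y, z)
        if d > 0.0:
            color += transmittance * d * light_at(x, y, z) * dt
            transmittance *= math.exp(-d * 1.5 * dt)
        x, y, z = x + direction[0]*dt, y + direction[1]*dt, z + direction[2]*dt
    return color

print(march(origin=(0.0, 0.0, -2.0), direction=(0.0, 0.0, 1.0)))
```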
Where there's smoke, there's a volume. Animators at Centropolis FX production studio created this smoky plume using the volumetric rendering capabilities of a new, multipurpose rendering package called Jig from Steamboat Software. The volume rende




Although its existing raycast volumetric-rendering approach doesn't yet come close to real-time performance, Arete is working on "very crude" approximations for real-time applications. The company is also collaborating with various game developers to try to get its technology into the game environment. "We're starting to see an interest in putting more and more dynamics and render quality into games, especially with the new platforms like the Sony PlayStation2 that have real processing power," says Wasson.

More mainstream 3D packages are also seeing the volume light. NewTek has incorporated voxel-based software rendering into LightWave via a component the company calls Hypervoxel. In addition, Mental Ray supports volume shaders and Alias|Wavefront's Maya includes some basic raycasting.

A new product to watch in this arena is called Jig from Steamboat Software. It is a general-purpose, open rendering system that comprises multiple algorithms, including a volume-rendering algorithm, for meeting different rendering needs. The software employs a proprietary shading system to volume render procedural gases and particles for such effects as smoke, explosions, and fire. Test versions of the software have already been adopted and implemented by a number of Hollywood production studios, and the product itself is slated to debut commercially at this year's Siggraph conference.

While there's no question that volume rendering has a way to go before getting the same consideration as conventional graphics, it's clear from recent activities that it is moving in the right direction and will continue to do so as long as the various interests pursuing it continue to see what's in it for them. Now, if only we had a technology crystal ball, then we could see just how far it will go and what it will take to get there.

Diana Phillips Mahoney is chief technology editor of Computer Graphics World.

Arete Image Software
Sherman Oaks, CA
www.areteis.com

Kitware
Clifton Park, NY
www.kitware.com

Nvidia
Santa Clara, CA
www.nvidia.com

RTViz
Concord, MA
www.rtviz.com

Steamboat Software
Los Angeles
www.steamboat-software.com