Volume 24, Issue 11 (November 2001)

Riding the Wavelets

DIANA PHILLIPS MAHONEY

High-performance computing has given scientists an unprecedented ability to study complex theoretical and experimental problems. It's also given them an unprecedented number of new questions about how to archive, retrieve, transmit, visualize, and analyze the resulting massive datasets. "Terascale physics simulations are now producing tens of terabytes of output for a several-day run on the largest computer systems," says Mark Duchaineau, a visualization researcher at Lawrence Livermore National Laboratory. Such simulations produce surface datasets comprising hundreds of millions of polygons, numbers that cripple today's highest-end commercial storage and graphics hardware.

To enable researchers to actually see and interact with what the high-performance systems let them compute, Duchaineau and colleagues Martin Bertram of the University of Kaiserslautern and Serban Porumbescu, Bernd Hamann, and Kenneth Joy of the University of California at Davis have developed a system that reduces the size and improves the manageability of the huge datasets for interactive display.

At the heart of the system is a subdivision-surface wavelet compression technique, complemented by a view-dependent optimization scheme. Together, these minimize data-storage requirements and make efficient use of the graphics hardware. "We want to achieve high-quality compression and display of the largest surface data in the world," says Duchaineau.

Toward this end, the researchers are testing their technique on 3D scientific simulations generated on ASCI White, the world's most powerful supercomputer. One example is a recent simulation of instability in a shock-tube experiment, which produced isosurfaces consisting of 460 million unstructured triangles. For this application, says Duchaineau, "if we use 32-bit values for coordinates, normals, and indices, then we need 16 gigabytes for the storage of a single isosurface, and several terabytes for a single surface tracking through all 274 time steps of the simulation." With the gigabyte-per-second read rates of current RAID storage, he says, "it would take 16 seconds to read a single surface." The numbers also strangle high-performance graphics hardware. "Today, the fastest commercial systems can effectively draw 20 million triangles per second," says Duchaineau. To achieve interactive rates, the triangle count of such a dataset would have to be reduced almost a thousandfold.
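Duchaineau's figures are easy to sanity-check. The Python sketch below reproduces the arithmetic under one assumption the article does not state: that the mesh carries roughly one vertex per triangle.

```python
# Back-of-the-envelope check of the storage numbers quoted above.
# Assumption: roughly one vertex per triangle, which reproduces the
# 16GB figure; the true ratio for this dataset is not given.

TRIANGLES = 460_000_000        # unstructured triangles in one isosurface
VERTICES = TRIANGLES           # assumed ratio
BYTES_PER_VALUE = 4            # 32-bit coordinates, normals, and indices

per_vertex = 3 * BYTES_PER_VALUE + 3 * BYTES_PER_VALUE  # xyz + normal = 24 B
per_triangle = 3 * BYTES_PER_VALUE                      # 3 vertex indices = 12 B

surface_bytes = VERTICES * per_vertex + TRIANGLES * per_triangle
print(f"one isosurface: {surface_bytes / 1e9:.1f} GB")          # ~16.6 GB
print(f"274 time steps: {274 * surface_bytes / 1e12:.1f} TB")   # ~4.5 TB
print(f"read at 1 GB/s: {surface_bytes / 1e9:.1f} s")           # ~16.6 s
```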
An isosurface containing 460 million unstructured triangles would be impossible to store and analyze with traditional compression. Using subdivision-surface wavelets, the original unstructured mesh is converted into a regular mesh with subdivision-surface connectivity.

Using existing surface-compression techniques, such a reduction would surely compromise the quality of the dataset and, consequently, its scientific integrity. To avoid this, the researchers set their sights on high-quality wavelet-based compression, a technique commonly used to reduce the size of image data such as photographs and video.

Wavelets let a function be represented at lower resolution while retaining values, called detail coefficients, from the original dataset. The detail coefficients enable the original function to be regenerated without any loss of information, and the process can be repeated an arbitrary number of times.
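The mechanism is easiest to see in one dimension. The sketch below uses the simplest wavelet, the Haar wavelet in lifting form; it is far cruder than the subdivision-surface wavelets the team uses, but it shows the same principle: split a signal into a coarse approximation plus detail coefficients, then reconstruct exactly by reversing the steps.

```python
def haar_forward(signal):
    """One level of a Haar lifting transform (signal length must be even)."""
    coarse, detail = [], []
    for a, b in zip(signal[0::2], signal[1::2]):
        d = b - a        # predict step: detail is the prediction error
        c = a + d / 2    # update step: coarse value is the pair average
        coarse.append(c)
        detail.append(d)
    return coarse, detail

def haar_inverse(coarse, detail):
    """Exactly undo haar_forward."""
    signal = []
    for c, d in zip(coarse, detail):
        a = c - d / 2
        signal.extend([a, a + d])
    return signal

data = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
coarse, detail = haar_forward(data)
assert haar_inverse(coarse, detail) == data   # lossless reconstruction
```

Compression comes from recursing on the coarse signal and then quantizing or discarding the small detail coefficients.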

Only recently has wavelet compression been applied to 3D objects (surfaces). "It's more difficult to achieve both high quality and speed using wavelets on surfaces than it is for images," says Duchaineau. The new technology attains both the speed and the quality needed for practical interaction.

The system is unique in that it is the first wavelet-based compression of surfaces to achieve high-quality approximations with filters that enable not only high compression speed, but also ease of use on massively parallel machines. "We don't skimp on the quality of the wavelet, which would have been the quick-and-dirty way to make things fast," says Duchaineau. Instead, the researchers have devised a multistage approach that achieves both high quality and fast speeds by converting the unstructured grid defining the original dataset into a structured grid amenable to wavelet compression.
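What "structured" means here is subdivision-surface connectivity: a mesh that can be generated from a coarse base mesh by repeated one-to-four triangle splits, so every vertex position is predictable and only a small corrective offset needs to be stored. The sketch below illustrates just the connectivity rule; the names and the per-vertex displacement described in the final comment are illustrative stand-ins, not the team's actual scheme.

```python
# One-to-four triangle subdivision: the connectivity pattern that makes
# a mesh "regular" enough for subdivision-surface wavelets.

def midpoint(p, q):
    return tuple((a + b) / 2 for a, b in zip(p, q))

def subdivide(triangle):
    """Split one triangle into four children at its edge midpoints."""
    a, b, c = triangle
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

base = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))]
level1 = [child for tri in base for child in subdivide(tri)]
level2 = [child for tri in level1 for child in subdivide(tri)]
print(len(level1), len(level2))   # 4, 16: triangle count grows 4x per level

# In the full pipeline, each new midpoint vertex is also displaced to land
# on the original full-resolution surface; that displacement is what the
# wavelet transform encodes as a detail coefficient.
```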

The first step is a "lifting" procedure through which isocontour polygons are extracted using existing acceleration techniques. This surface data is then progressively reduced by collapsing edges. The remaining base mesh is shrinkwrapped to fit the original full-resolution surface, with detail progressively added back to create a mesh defined by subdivision-surface connectivity. The system then performs a wavelet transform on the converted mesh. Finally, the display is optimized using a technology for large-data display originally developed at Los Alamos and Lawrence Livermore National Laboratories. Called ROAM (Real-time Optimally Adapting Meshes), the algorithm generates triangle meshes that provide view-dependent optimization, whereby only the information visible from a given viewpoint is calculated.
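ROAM's split-queue mechanism can be sketched compactly: keep a priority queue of triangles keyed by an estimate of on-screen error, and repeatedly split the worst one until a triangle budget is met. The toy version below makes several stated assumptions; the Tri class, the error proxy, and the depth cutoff are all illustrative, and the real algorithm adds a merge queue, frame-to-frame coherence, and crack-free bintree splits.

```python
import heapq
import math
from dataclasses import dataclass

@dataclass
class Tri:
    center: tuple      # representative point, used for view distance
    error: float       # geometric error bound if this triangle is not split
    depth: int = 0

    def is_leaf(self):
        return self.depth >= 8     # assumed finest refinement level

    def split(self):
        # Stand-in one-to-two bintree split: children inherit the parent's
        # center and carry half its error bound.
        return [Tri(self.center, self.error / 2, self.depth + 1)
                for _ in range(2)]

def priority(tri, viewpoint):
    # Proxy for screen-space error: geometric error over view distance,
    # so nearby triangles are refined before distant ones.
    return tri.error / max(math.dist(tri.center, viewpoint), 1e-6)

def refine(base, viewpoint, budget):
    # Max-heap via negated priorities; the counter n breaks ties.
    heap = [(-priority(t, viewpoint), i, t) for i, t in enumerate(base)]
    heapq.heapify(heap)
    n = len(base)
    while len(heap) < budget:
        negp, _, tri = heapq.heappop(heap)
        if tri.is_leaf():          # worst triangle can't be refined further
            heapq.heappush(heap, (negp, n, tri))
            break
        for child in tri.split():  # always split the worst triangle first
            heapq.heappush(heap, (-priority(child, viewpoint), n, child))
            n += 1
    return [t for _, _, t in heap]

near, far = Tri((0.0, 0.0, 0.0), 1.0), Tri((50.0, 0.0, 0.0), 1.0)
mesh = refine([near, far], viewpoint=(0.0, 0.0, 5.0), budget=64)
print(sum(t.center == (0.0, 0.0, 0.0) for t in mesh), "triangles near the viewer")
```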
In preparation for wavelet compression, base meshes are shrinkwrapped to fit the original full-resolution surface dataset and converted to meshes defined by subdivision-surface connectivity.

The compression and remapping methods are the only ones to date that scale to huge scientific surfaces, says Duchaineau. "These methods are best for large datasets where speed of compression is as important as speed of decompression, and where scaling to huge, convoluted surfaces is critical." The downside, he notes, "is that because we are pushing hard on these fronts, our methods are not yet as fully automated as we need them to be in a production environment." Thus, enhanced automation, particularly of the shrinkwrap process, is a major goal. Another is "topological surgery," says Duchaineau. "The surfaces are so complex that we need to simplify topology in order to achieve interaction. Combining the simplification methods with wavelet surface compression is an open challenge."

Over the long term, says Duchaineau, "we want to provide efficient, on-the-fly geometry and world data that can be explored fluidly, whether over a network from a shared universe database or from a giant computer running scientific simulations." The new compression technique is an early step in this direction. More information on the project can be found at http://graphics.cs.ucdavis.edu.