Volume 24, Issue 8 (August 2001)

Seeing Sound



By Diana Phillips Mahoney

What does sound look like, and who wants to know? Mostly it looks like funky abstract art, and everyone from urban planners to VR developers to concert pianists wants to see it.

Actually, depending on who is looking at it and why, sound has many different visual guises. At Lucent Technologies' Bell Laboratories, for example, sound appears as beams of light that vary in length and intensity relative to their distance from the point of origin and the obstacles encountered along their path. At the Center for New Music and Audio Technologies (CNMAT) at the University of California at Berkeley, sound looks like oceans of alternately vibrant and muted colors with waves emanating from multiple geometric objects. And in Vancouver, students and faculty at the University of British Columbia see sound as expanding and contracting balloons.




Of course, actual, physical sound doesn't "look" like anything. Our perception of sound waves is related to a variety of acoustic characteristics, including decibel level, spread patterns (propagation), and intensity changes over time, none of which is tangible or visible. To enhance our understanding of sound, a variety of techniques exist that simulate acoustic phenomena. These result in complex numerical models, which, frankly, are not much to look at. In addition, the computational data, because of its vastness, is difficult to synthesize and comprehend. A picture, on the other hand, is worth a thousand numbers.

In this regard, there's been much noise in recent years about techniques that generate pictures of sound, both to enhance the understanding of it and to manipulate and create it.
The sights and sounds of a city are re-created using traditional geometric modeling techniques for the 3D objects and a type of raytracing for the 3D sound. VR researchers at Bell Labs and Princeton University visualize the raytraced beams to identify sou…




Computer-aided acoustic modeling and visualization tools have broad application. An architect might use such tools to evaluate the acoustic properties of a proposed auditorium design, for example, or a factory designer might rely on the technology to predict the sound levels of any machine at any position on a factory floor. Visualization can also be used to create, edit, or perform sound by interactively changing parameters of a visual acoustic representation.

In addition, says Thomas Funkhouser of Princeton University, "acoustic modeling can be used to provide sound cues to aid understanding, navigation, and communication in interactive virtual environment applications, particularly if the simulations can be updated at interactive rates." For example, he says, the voices of users sharing a virtual environment may be spatialized according to each user's avatar location.
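
As a rough illustration of that idea (not Funkhouser's actual system), the following Python sketch assigns a crude distance rolloff and left/right pan to one avatar's voice based on where the talker stands relative to the listener; the 1/r falloff, the constant-power pan law, and all names are assumptions chosen for brevity.

    import math

    def spatialize_voice(source_pos, listener_pos, listener_facing):
        """Return (left_gain, right_gain) for a talker at source_pos.

        source_pos, listener_pos: (x, y) positions in the virtual world.
        listener_facing: listener's heading in radians (0 = +x axis).
        Assumes a simple 1/r distance rolloff and constant-power panning.
        """
        dx = source_pos[0] - listener_pos[0]
        dy = source_pos[1] - listener_pos[1]
        distance = max(math.hypot(dx, dy), 1.0)   # clamp to avoid blow-up near zero
        gain = 1.0 / distance                     # crude inverse-distance rolloff

        # Angle of the source relative to where the listener is facing.
        bearing = math.atan2(dy, dx) - listener_facing
        # Map bearing to a pan value in [-1, 1]: -1 = hard left, +1 = hard right.
        pan = max(-1.0, min(1.0, -math.sin(bearing)))

        # Constant-power pan law.
        left = gain * math.cos((pan + 1.0) * math.pi / 4.0)
        right = gain * math.sin((pan + 1.0) * math.pi / 4.0)
        return left, right

    # Example: a talker 5 meters away, slightly to the listener's right.
    print(spatialize_voice((5.0, -1.0), (0.0, 0.0), 0.0))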

Finally, slightly more esoteric sound-visualization applications include the visual representation of noise pollution as a tool to enhance environmental health protection and decision making, and underwater acoustics visualization for seafloor analyses.
The "listening space" for an individual loudspeaker (the cube) in the sound spatialization theater at UCal/Berkeley is bounded by isosurface contours representing the delivery of sound from its point of origin. The red polygonal face is a wall, which inte




One of the primary challenges in acoustic modeling is the computation of reverberation paths from a sound's source position to a listener's receiving position. At the heart of the challenge is the fact that sound may travel from source to receiver via a multitude of reflection, transmission, and diffraction paths. Given the nature of this challenge, it's easy to see why acoustic modeling and computer visualization make beautiful music together. It turns out that acoustic simulation shares many of its fundamental computational methods and algorithms with graphics rendering, says Nicolas Tsingos of Bell Laboratories, who, with Funkhouser, has developed a number of innovative techniques for modeling acoustics in virtual environments. "Because both light and sound can be modeled as wave phenomena, and because historical progress in wave physics tends to belong alternately to optics or acoustics, computer graphics and virtual acoustics are able to share the same geometric tools, such as raytracing or cone tracing, in worlds described by 3D primitives such as polygons."

Tsingos and Funkhouser have adopted some of these techniques to predict how sound is going to propagate in virtual worlds, and based on those observations ultimately present, or render, the sound in a way that enhances the virtual experience. Most recently, their efforts have focused on modeling reverberant sound in 3D virtual worlds using diffraction simulations. "Realistic modeling of reverberant sound provides users with important cues for localizing sound sources and understanding spatial properties of the environment," says Tsingos. "Unfortunately, most geometric acoustic modeling systems do not accurately simulate reverberant sound." Instead, he notes, current systems model only direct transmission and reflection, ignoring diffraction entirely or relying on a crude statistical approximation.

Such an oversight can have a significant impact on the plausibility of the rendered sound. "Diffraction is a wave-like effect that is very important to sound. Without a diffraction model, when a sound source moves behind a wall, the sound is going to be cut because the source disappears from the line of sight," says Tsingos. In the real world, if someone moves around a corner, you can still hear him talking, though the sound is muffled. To accurately simulate this, Tsingos and Funkhouser have incorporated diffraction-modeling capabilities into their acoustics model. Basically, says Tsingos, "[the algorithm] figures out what's happening between the sound source and the listener and in the case of obstacles in the beam path, the signal is not cut, but rather attenuates appropriately." In order to achieve this, the system computes sound propagation paths using conventional raytracing techniques between the 3D location of the sound sources and the listener, both of which may be moving around in the virtual world. Visualizing the computed rays gives designers a clear picture of where the sound energy is going to propagate, whether and to what degree it will be incident on the listener, and the presence of delays between the source and receiver. Those factors then become the sound-design parameters for translating the basic input sound into reverberant sound.
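
A much-simplified, hypothetical version of that behavior might look like the sketch below, which is not the Bell Labs/Princeton code: it traces only the direct 2D ray from source to listener and, when a wall blocks it, attenuates the signal by a fixed factor standing in for diffracted energy rather than silencing it. The geometry test, the 1/r spreading, and the 0.25 loss factor are all assumptions.

    import math

    def segments_intersect(p1, p2, q1, q2):
        """True if 2D segments p1-p2 and q1-q2 cross (simple orientation test)."""
        def orient(a, b, c):
            return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        d1, d2 = orient(q1, q2, p1), orient(q1, q2, p2)
        d3, d4 = orient(p1, p2, q1), orient(p1, p2, q2)
        return (d1 * d2 < 0) and (d3 * d4 < 0)

    def direct_path_gain(source, listener, walls, diffraction_loss=0.25):
        """Gain applied to the source signal along the direct source-listener ray.

        walls: list of ((x1, y1), (x2, y2)) occluding segments.
        Inverse-distance spreading; each occluding wall multiplies the gain by
        diffraction_loss instead of cutting the sound entirely (a crude stand-in
        for a real diffraction model).
        """
        distance = max(math.dist(source, listener), 1.0)
        gain = 1.0 / distance
        for wall in walls:
            if segments_intersect(source, listener, wall[0], wall[1]):
                gain *= diffraction_loss        # attenuate, don't silence
        return gain

    # Listener behind a wall: the talker is muffled, not cut off.
    walls = [((2.0, -3.0), (2.0, 3.0))]
    print(direct_path_gain((0.0, 0.0), (4.0, 0.0), walls))   # 0.0625 instead of 0.25
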
Color maps and raytraced beams provide insight into how sound from a single source within a cubicle travels through an office setting. Researchers at Bell Labs and Princeton use this information to model plausible sound for virtual environments.




A secondary benefit of the visual representation, says Tsingos, is its use as a debugging aid. "Sometimes your sound-simulation program may not be producing an expected result. If that's the case, the problem can often be more quickly identified by analyzing the visualization results."

As is often the case in computer graphics, the degree of accuracy required for simulated acoustics is application-dependent. Interactive games, for example, require plausible, but not necessarily highly accurate, sound. The goal in such applications, says Tsingos, is to provide "coarse audio cues to what's happening that complement the visual cues." Though physical accuracy is not of paramount importance, audio/visual coherence and consistency are critical. Otherwise, the effect of the simulated sound will be perceptually disturbing.
Visualizing the time-varying sound pressure levels in a sound theater at the University of California at Berkeley helps determine the optimal configuration of loudspeakers for audience enjoyment.




Other applications, such as architectural design, in which decisions are being made based on the simulated sound data, obviously require a higher degree of accuracy. This is the case at CNMAT, where researchers have developed a tool for real-time visualization of acoustic sound fields for a sound spatialization theater built into the center's main performance and lecture space. The theater is unique in that it employs a flexible suspension system built primarily for loudspeakers. Each speaker hangs from a rotating beam, which runs in a track that slides along ceiling rails. The suspension cables can be adjusted for height, allowing the speakers to be moved anywhere in the room. Real-time, low-latency audio signal processing for the speaker array is performed on a multiprocessor SGI Octane workstation.

The novelty of the theater's setup requires an equally innovative acoustic-modeling system. "Most applications of spatial audio are based on a model in which source material is spatially encoded for an ideal room with a predetermined speaker geometry," says CNMAT researcher Sami Khoury. But because the arrangement of the speakers changes for each performance in the theater, there is no single, pre-determinable sound-processing ideal. Also, it would be far too time-consuming and ineffective to use a traditional trial-and-error approach to evaluate the effects of new speaker positions and respective software parameter changes for all listening positions. "It is easy to optimize the listening experience for the lucky person in the 'sweet spot' at the expense of the rest of the audience," says Khoury. "The challenge is to find a compromise where as many listeners as possible experience the intent of the sound designer and as few listeners as possible endure disastrous seats."

To meet this challenge, the researchers developed a generalized acoustic modeling system that could accommodate the unique needs of the dynamic physical environment. The system comprises real-time, interactive software for visualizing volumetric models made up of source signals, the acoustic characteristics of the room, and interpretations of the field according to perceptual models. The heart of the system is a database describing the configurable performance space, which contains information on geometric features, such as the shape of the room, positioning and orientation of sources, microphones and audience seating, live performer locations, and locations of their instruments.

Each object in the room is also described by acoustic properties, including frequency-dependent radiation patterns and the location of its acoustic "center." Custom software uses the database to process source signals to simulate the audience perception of virtual sources from arbitrary regions in space, the locations of which are controlled in real time through network-based communication.
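
A minimal sketch of what such a room database might look like in code appears below; the field names, units, and defaults are illustrative assumptions, not CNMAT's actual schema.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    Vec3 = Tuple[float, float, float]

    @dataclass
    class Speaker:
        position: Vec3                       # meters, room coordinates
        orientation: Vec3                    # aim direction (unit vector)
        radiation_pattern: str = "cardioid"  # placeholder for a frequency-dependent pattern
        acoustic_center: Vec3 = (0.0, 0.0, 0.0)  # offset of the acoustic "center"

    @dataclass
    class RoomDatabase:
        shell: List[Vec3]                    # polygonal outline of the room
        speakers: List[Speaker] = field(default_factory=list)
        audience_planes: List[float] = field(default_factory=list)  # ear heights, meters
        performer_positions: List[Vec3] = field(default_factory=list)

    # A toy configuration: two speakers hung over a small rectangular room.
    room = RoomDatabase(
        shell=[(0, 0, 0), (8, 0, 0), (8, 6, 0), (0, 6, 0)],
        speakers=[Speaker((2, 3, 3), (0, 0, -1)), Speaker((6, 3, 3), (0, 0, -1))],
        audience_planes=[1.2],
    )
    print(len(room.speakers), "speakers; ear-plane height", room.audience_planes[0])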

The visualization software, written using the freeware Visualization Toolkit (VTK) C++ data visualization library, has access to the room database and real-time parameter estimates from the spatialization software. It estimates the sound-pressure levels in the room based on the acoustic model of the room. In one application, the varying sound pressure levels of organ pipes in the room are visualized. The pressure is shown using a color map on horizontal cut planes through the space. These movable planes are typically set to the average height of the audience's and performers' ears, and multiple simultaneous cut surfaces may be necessary under certain conditions (balcony seating in large theaters is an example) to optimize the sound delivery to all positions.
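
The CNMAT software itself is written in C++, but the core pipeline, a scalar pressure field sliced by a horizontal cut plane and color-mapped, can be sketched compactly with VTK's Python bindings. The synthetic 1/r pressure field, the grid dimensions, and the 1.2-meter ear height below are assumptions for illustration only.

    import math
    import vtk

    # Synthetic pressure field on a regular grid (a crude 1/r falloff from a
    # single source near the room's center; the real system derives levels
    # from the room database and spatialization software).
    grid = vtk.vtkImageData()
    grid.SetDimensions(50, 50, 30)
    grid.SetSpacing(0.2, 0.2, 0.2)          # roughly a 9.8 x 9.8 x 5.8 meter volume

    pressure = vtk.vtkFloatArray()
    pressure.SetName("pressure")
    for k in range(30):                     # VTK point order: x varies fastest
        for j in range(50):
            for i in range(50):
                r = math.sqrt((i - 25) ** 2 + (j - 25) ** 2 + (k - 15) ** 2) + 1.0
                pressure.InsertNextValue(1.0 / r)
    grid.GetPointData().SetScalars(pressure)

    # Horizontal cut plane at an assumed average ear height of 1.2 meters.
    plane = vtk.vtkPlane()
    plane.SetOrigin(5.0, 5.0, 1.2)
    plane.SetNormal(0.0, 0.0, 1.0)

    cutter = vtk.vtkCutter()
    cutter.SetCutFunction(plane)
    cutter.SetInputData(grid)

    # Color-map the sliced pressure values and display the result.
    mapper = vtk.vtkPolyDataMapper()
    mapper.SetInputConnection(cutter.GetOutputPort())
    mapper.SetScalarRange(0.0, 0.3)

    actor = vtk.vtkActor()
    actor.SetMapper(mapper)

    renderer = vtk.vtkRenderer()
    renderer.AddActor(actor)
    window = vtk.vtkRenderWindow()
    window.AddRenderer(renderer)
    interactor = vtk.vtkRenderWindowInteractor()
    interactor.SetRenderWindow(window)
    window.Render()
    interactor.Start()
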
Isosurface contours show the locus of points for which the difference in sound-arrival times from the speakers in the UCal/Berkeley theater is one millisecond. Such differences affect people's perception of sound relative to their location in the thea…




"It is interesting to contrast this volumetric visualization with traditional audio metering of scalar signal levels," says Khoury. "Such metering is useful for managing signal levels in the electrical elements of the audio system to avoid distortion and speaker overload. However, it is difficult even for experienced sound engineers to use scalar metering to predict actual sound pressure levels in many locations in a venue."

Although developed for the Sound Spatialization Theatre, the acoustic visualization technique can be applied to any 3D room model. Currently, Khoury and CNMAT colleagues Adrian Freed and David Wessel are working with Alias|Wavefront on a technique for automatically extracting room database information from a conventional 3D CAD model.

Sound visualization is not confined to acoustic analysis and design applications. It is also being widely implemented as a medium for creating and manipulating sound. One "hands-on" example is a sound-sculpting environment developed by Axel Mulder and S. Sidney Fels at the University of British Columbia. Using a virtual object as an input device, in this case a 3D model of a balloon, users can control sound spatialization and pitch and tone parameters to define and perform sounds using datagloves and six-degree-of-freedom sensors. The system is an extension of a real-time, visual programming language called Max/FTS, which was originally designed for sound synthesis. The deformable balloon can be perceived only through its graphical and acoustic representations; the computational intensity of achieving real-time audio and visual interaction precludes tactile representation.

When users manipulate the virtual balloon, a series of processing steps ensues to achieve the output sound. The dynamic geometry of the balloon (a single layer of masses positioned in sphere-like form) and simplistic models of the user's hands are computed based on position and orientation information. The virtual object manipulations drive changes to various sound parameters, including pitch, duration, and amplitude. The proof-of-concept system, in its current state, is intentionally simplistic. The researchers plan to incorporate more sophisticated techniques for mapping a broader range of hand movements to create a wide array of distinctive sounds. In addition, according to Fels, "we hope to apply the developed interaction methodology to other domains, such as the editing of texture, color, and lighting of graphical objects."
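
One plausible (and purely hypothetical) mapping from balloon geometry to sound parameters is sketched below; it is not Mulder and Fels' Max/FTS patch, and the choice of pitch and amplitude rules is an assumption made for illustration.

    import math

    def balloon_to_sound(radii, base_pitch_hz=220.0, base_amp=0.5):
        """Map the deformed balloon's geometry to simple sound parameters.

        radii: distances of the balloon's surface masses from its center after
        the user's (virtual) hands have squeezed or stretched it.
        Returns (pitch_hz, amplitude): smaller balloons sound higher, and more
        deformed balloons sound louder.  Purely illustrative.
        """
        mean_r = sum(radii) / len(radii)
        # Deviation from a sphere, used as a crude "deformation" measure.
        roughness = math.sqrt(sum((r - mean_r) ** 2 for r in radii) / len(radii))

        pitch_hz = base_pitch_hz / mean_r          # shrink the balloon -> pitch rises
        amplitude = min(1.0, base_amp * (1.0 + 4.0 * roughness))
        return pitch_hz, amplitude

    # A gently squeezed unit balloon.
    print(balloon_to_sound([1.0, 0.9, 1.1, 0.95, 1.05]))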

Such multisensory considerations will clearly be the soundwave of the future, as processing capabilities and visual simulation tools continue to grow ever more powerful. Whether it's an architect who needs to analyze the acoustic properties of a room before it has been built, a game developer attempting to enhance the sensory experience of an interactive title, a creative artist dabbling in a new digital medium, or the many other professionals whose ears and eyes are open to the possibilities, people are more than ready to see what they've been hearing in the real world and hear what they've been seeing in the digital one for years.




Diana Phillips Mahoney is chief technology editor of Computer Graphics World.






The US Navy is diving deep to get a good look at sound. The Office of Naval Research, in conjunction with the Naval Undersea Warfare Command and the Fraunhofer Center for Research in Computer Graphics, is developing highly sophisticated acoustic data-visualization techniques for use in modern sonar systems.

Unlike the acoustic visualizations based on numerical simulations of sound, the naval application relies on actual acoustic signal measurements, gathered with state-of-the-art sonar sensing technology. This is because most theoretical and numerical methods for modeling acoustic propagation, such as raytracing and parabolic equations, are ill-suited for modeling an oceanic environment. In contrast to a room or even an outdoor environment in which the spatial boundaries are clear and physical obstacles are obvious, the undersea world is boundless and unpredictable. The lack of physical consistency over time and space renders it nearly impossible to characterize using theoretical tools.

Because of this, the Navy is intent on exploiting advances in sonar technology to accurately model undersea acoustics for performing such tasks as the real-time detection and classification of dangerous objects, including undersea mines.
In an undersea-visualization command center, multiple screens display acquired sonar data in both search-and-detect and analyze-and-classify modes, enabling users to locate and investigate potential underwater threats.




The success of these complex, time-critical operations depends on the ability of the command team members to achieve and maintain situational awareness. Unlike traditional disaster-management applications, such as air-traffic control, in which decision-makers have the benefit of national and international information assets (satellite and other data) to get a complete picture of the area of interest, undersea operations are almost entirely platform-centric, meaning the undersea team must rely exclusively on its own sensors to obtain information. Maximizing performance under these conditions requires the best possible tools for extracting and presenting the information from onboard acoustic sensors and processing systems.

Until recently, however, the high-speed, high-resolution visualization of such data has been hampered by computational limitations in general and by the specific challenges of visualizing the ocean environment, with its sheer vastness and its wide variation in types and intensity of noise and clutter. "In the past 15 to 20 years, technological advances in sensors, signal processing, and computational power have allowed sonar operators to collect an overwhelming amount of data," says Fraunhofer project leader Robert Barton. "But while we have greatly increased the amount and capabilities of sensors and processing systems, there has been very little improvement in the ability to display and interface with the data."
The spatial subdivision of underwater sonar space and low-resolution visual presentation enable fast searches over the data space to help sonar operators decide where to more closely focus their attention.




To deal with the resultant data overload, the Navy is taking advantage of the fruits of two Fraunhofer research projects: the Advanced Volumetric Visualization Display, created to enable the high-speed visualization of volumetric datasets, and its Large Scale Visualization Environment (LSVE) for performing the essential tasks of detection and classification of sonar contacts. The LSVE employs a semi-immersive Virtual Table display system to enhance users' ability to intuitively interact with and explore the huge amounts of visual data.

In the acoustic visualization application, this means manipulating the data using a variety of computer graphics tools to gain unique perspectives that couldn't be achieved in a standard static view of the acquired digital signals. "Prior to this, what the operators have had to look at were static lines and dots," says Barton. "[With the visualization system], they are able to move the data around, slice through it, and animate it to look for patterns and changes." With such insight, the operators are better able to detect unusual objects and classify them as objects of concern, such as sea mines, or simply as natural ocean artifacts.
In the "analyze-and-classify" mode, the visualization system switches to a view of the raw sonar data, with which the operator can interact and manipulate using such techniques as cutting planes and lookup tables to get a more detailed perspective.




The acoustic visualization environment itself is designed to deal with passive sonar data, which is the information collected by sensor arrays and systems that receive acoustic energy. The processing systems collect the passive data by "listening" to a series of hydrophones, which are devices that convert acoustic pressure energy into an electrical signal. The electrical signals are processed to reduce unwanted energy and generate focused beams. Of critical importance to the value of the collected data, says Barton, "is the method by which the information is presented, enabling the user to understand and extract the necessary information accurately and efficiently." The challenge is exacerbated by the complex, variable nature of the undersea environment.
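
A standard delay-and-sum beamformer is one textbook way to turn a line of hydrophone signals into a focused beam; the sketch below is that generic technique, not the Navy's processing chain, and the array spacing, sample rate, and sound speed are assumed values.

    import numpy as np

    def delay_and_sum(signals, spacing_m, steer_deg, fs_hz, c_mps=1500.0):
        """Steer a line array of hydrophones toward steer_deg (broadside = 0).

        signals: 2D array, one row per hydrophone, equally spaced in time.
        Per-element delays are applied as integer sample shifts for simplicity.
        """
        n_elems, n_samples = signals.shape
        theta = np.radians(steer_deg)
        beam = np.zeros(n_samples)
        for m in range(n_elems):
            # Extra travel time to element m for a plane wave from steer_deg.
            delay_s = m * spacing_m * np.sin(theta) / c_mps
            shift = int(round(delay_s * fs_hz))
            beam += np.roll(signals[m], -shift)
        return beam / n_elems

    # Toy example: a 50 Hz plane wave arriving from 30 degrees on an 8-element array.
    fs, c, d, angle = 4000.0, 1500.0, 1.5, 30.0
    t = np.arange(4000) / fs
    sigs = np.stack([np.sin(2 * np.pi * 50 * (t - m * d * np.sin(np.radians(angle)) / c))
                     for m in range(8)])
    print(np.max(delay_and_sum(sigs, d, angle, fs)))   # ~1.0 when steered correctly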



The new system meets the challenge head-on with a dual-component assault. The first component is a search-and-detect capability that relies on advanced visualization tools to enable sonar operators to detect undersea objects. The second component is an analyze-and-classify capability that converts the area of interest located during the detection process into a volumetric display of the raw data for interaction and interrogation.

To further aid user understanding, the system employs various computer graphics techniques, including look-up tables (through which users can interactively adjust the opacity and color of each rendered signal value), cutting planes, and volume scaling. These tools help users identify or enhance the view of objects of potential interest in the dataset.
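
The article doesn't describe Fraunhofer's implementation in detail, but the look-up-table idea, interactively mapping each signal value to a color and an opacity so that weak clutter fades while strong returns stand out, can be sketched with VTK's Python bindings and a built-in test volume standing in for sonar data; every numeric breakpoint below is an assumption.

    import vtk

    # Stand-in volume: VTK's built-in "wavelet" test source in place of the
    # beamformed sonar data (scalar values roughly in the 37-276 range).
    source = vtk.vtkRTAnalyticSource()
    source.SetWholeExtent(-30, 30, -30, 30, -30, 30)

    # Lookup tables: color and opacity per signal value, so weak clutter is
    # faded out while strong returns stay visible; breakpoints are arbitrary.
    color = vtk.vtkColorTransferFunction()
    color.AddRGBPoint(40.0, 0.0, 0.0, 1.0)     # weak values: blue
    color.AddRGBPoint(160.0, 1.0, 1.0, 0.0)    # mid values: yellow
    color.AddRGBPoint(270.0, 1.0, 0.0, 0.0)    # strong returns: red

    opacity = vtk.vtkPiecewiseFunction()
    opacity.AddPoint(40.0, 0.0)                # hide the background
    opacity.AddPoint(160.0, 0.05)              # keep clutter faint
    opacity.AddPoint(270.0, 0.8)               # emphasize strong contacts

    prop = vtk.vtkVolumeProperty()
    prop.SetColor(color)
    prop.SetScalarOpacity(opacity)
    prop.ShadeOff()

    mapper = vtk.vtkSmartVolumeMapper()
    mapper.SetInputConnection(source.GetOutputPort())

    volume = vtk.vtkVolume()
    volume.SetMapper(mapper)
    volume.SetProperty(prop)

    renderer = vtk.vtkRenderer()
    renderer.AddVolume(volume)
    window = vtk.vtkRenderWindow()
    window.AddRenderer(renderer)
    interactor = vtk.vtkRenderWindowInteractor()
    interactor.SetRenderWindow(window)
    window.Render()
    interactor.Start()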

Early tests of the new system have garnered positive feedback from naval sonar operators. And, as in most applications, "the-more-you-give-them-the-more-they-want" mantra rings true. "Everyone would love to be able to have fully multimodal interactive, large-scale, real-time volume data exploration," says Barton. "But today, visualization alone of large data volumes is hard enough. We are just beginning to explore other modalities, such as haptic feedback and speech synthesis." -DPM