Diana Phillips Mahoney
Seeing is believing. In computer visualization, seeing is also understanding: we look at a visual image, process its components, and learn more from it than words or numbers alone could convey. But what if the picture we're seeing--the one we're believing and think we're understanding--is a misrepresentation? What if the image doesn't tell the whole story or, through lies of omission, tells a misleading one?
Such deception, while not necessarily intentional, is almost inevitable given the nature of visualization technology and that of human perception. Visualization, at its core, is an abstraction. It is a way to put a face on numbers representing phenomena that are complex enough to require such treatment. Indeed, the process through which numbers are gathered, transformed, and ultimately displayed visually can be fraught with uncertainty.
|A wireframe surface depicts the areas of uncertainty in a visualization of terrain distinctions in a region of Western Australia. The red and blue areas represent crops and remnant vegetation, respectively. The visualization is based on Landsat data.|
For example, in a geological application, "sampled" data collected to understand the distribution of elements in the soil can only ever be an approximate representation of the soil distribution across the region of interest. It would be impossible to obtain measurements for every point in the region. Thus, visualizing the data involves making assumptions about the points that have not been measured, and representing those assumptions graphically.
The same goes for the visualization of dynamic information. To represent atmospheric changes over time, for instance, data may be gathered at a series of sequential points or time steps. What happens between these points can be assumed or predicted, and visual interpolations can be made on that basis, but unless measurements exist for the activity of variables at every time step, the resulting visualizations contain uncertainty.
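The gap between measured time steps can be made concrete with a small sketch. The data values and the error model below are hypothetical illustrations, not any particular system's method: linear interpolation yields a plausible value between two samples, but confidence in that value is lowest midway between them.

```python
# Sketch: interpolating between measured time steps introduces uncertainty.
# The measurements and the error model here are hypothetical illustrations.

def interpolate_with_uncertainty(t0, v0, t1, v1, t, max_err):
    """Linearly interpolate a value between two measurements, and
    estimate uncertainty as largest midway between samples, zero at them."""
    frac = (t - t0) / (t1 - t0)
    value = v0 + frac * (v1 - v0)
    # Uncertainty peaks halfway between the two measured time steps.
    uncertainty = max_err * (1.0 - abs(2.0 * frac - 1.0))
    return value, uncertainty

# Two barometric-pressure readings taken 60 minutes apart (invented values).
value, err = interpolate_with_uncertainty(0.0, 1013.0, 60.0, 1009.0, 30.0, 2.0)
print(value, err)  # midpoint of the interval: value 1011.0, uncertainty 2.0
```

A visualization built on such interpolated values could map the `uncertainty` term to, say, transparency or blur, so viewers see where the data is measured and where it is merely inferred.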
Geospatial applications are particularly vulnerable to uncertainty, since graphic "reality" is not absolute reality. It is an approximation of reality based on geographic features (topography, location, spatial relationships) observed at many points across a landscape. Because it doesn't incorporate measurements of every point in the region the ultimate map is to represent, it is an uncertain reality.
Uncertainty in this sense should not be equated with "method-produced," or measurement error, says Barbara Buttenfield, a University of Colorado geography professor and uncertainty researcher. "The issue is not only one of measurement, but also of the discretization of measurements, of digitizing a continuous reality."
Geographical visualization is not the only victim. Uncertainty is encountered in the representation of nearly all types of complex scientific and numerical data, including visualizations of molecular and computational-fluid dynamics, medical imaging, bioinformatics, and multidimensional financial information. While the existence of uncertainty doesn't necessarily negate the information value of a given visualization, an awareness of it is critical to obtaining a clear view of the data. Such information could impact the interpretation of the visualization and subsequent judgments based on that interpretation.
Despite its importance, uncertainty visualization has hardly been given the attention it deserves. "People tend to believe what they see in visual representations, and quite often those representations are not immediately backed up by the numerical information that tells them when to be cautious," says Alan M. MacEachren, a geography professor and director of the GeoVISTA Center at Penn State University.
Animated sphere glyphs are among the visualization tools UCSC researchers use to show spatial and temporal data uncertainty. The spheres highlight uncertain data. Animating them shows the variation over time.
One reason for this might be the fear that doing so may lessen the impact of the visualization itself by making it confusing or causing it to be perceived as suspect. Another reason is that it makes a less dramatic picture. People crave clear boundaries, distinct beginnings and endings, smooth transitions. Because we respond to clear, sharp images, we're impressed by isosurfaces that smooth what should be fuzzy edges and by structures with distinct, bright colors where muted color transitions would be more accurate.
As visualization technology becomes more pervasive, however, hiding uncertainty under flashy pictures becomes increasingly problematic. "It's more important than ever to somehow embed uncertainty into the visual display, especially when people are making important decisions based on these displays," says MacEachren. For example, geographical information systems are used extensively in applications such as environmental management and national security. Policy makers in these areas can only make informed decisions if they know to what extent the underlying data is uncertain.
Similarly, in a medical application, a physician making treatment decisions based on data obtained through a medical visualization must be made aware of the degree of confidence associated with the images in order to provide proper care.
Before uncertainty can be visualized, it has to be identified, then quantified. And the quantification process itself is an uncertain one. The values are generally discerned from a model of likely realities developed using one of any number of statistical techniques, such as Monte Carlo simulations. These involve simulating enough possible datasets to get an estimate of the potential distribution around some mean value. "Once you have those techniques developed, the measures are fairly well accepted," says MacEachren. "The next step is harder--deciding how much uncertainty is too much for a particular application. Acceptable levels will vary across disciplines, just as statistical confidence intervals in medicine are usually much narrower than they are in the social sciences."
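The Monte Carlo idea can be sketched in a few lines. This is a minimal illustration, assuming hypothetical soil-sample values and an invented per-sample noise level: perturb the measurements many times within their assumed error, and the scatter of the resulting means is an empirical estimate of the distribution around the mean.

```python
import random
import statistics

# Sketch of Monte Carlo uncertainty estimation: simulate many plausible
# datasets to estimate the distribution around a mean value.
# The sample values and noise level below are hypothetical.

random.seed(42)

measurements = [2.1, 2.4, 1.9, 2.2, 2.6, 2.0]   # sampled values at known points
measurement_sd = 0.15                            # assumed per-sample noise

def simulate_mean(samples, sd):
    """One simulated 'possible reality': perturb each sample within its
    assumed error and return the resulting mean."""
    return statistics.mean(v + random.gauss(0.0, sd) for v in samples)

# Many simulated realities yield an empirical distribution for the mean.
simulated = [simulate_mean(measurements, measurement_sd) for _ in range(10000)]
center = statistics.mean(simulated)
spread = statistics.stdev(simulated)
print(f"estimated mean {center:.2f} +/- {spread:.2f}")
```

The `spread` value is the quantity a designer would then decide how to display, and the application-specific question MacEachren raises is whether that spread is small enough to act on.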
What is also application-dependent is the determination of how to represent uncertainty data within a visualization. "The challenge here is figuring out how to incorporate uncertainty features into data displays in a way that is natural and intuitive in a given application, without increasing the visual complexity," says Alex Pang, a visualization researcher at the University of California at Santa Cruz (UCSC).
Comparative visualization techniques help genetic researchers understand the "fit" between gene sequences and protein structures. In this image, UCSC researchers link Lego-like glyphs to represent amino acid components. Traditional techniques such as color-mapping and side-by-side comparisons help identify areas of uncertainty.
To this end, Pang and his colleagues have developed a broad range of visualization methods for combining data with uncertainty information in specific applications. These include vector glyphs to visualize uncertainty in certain types of flow simulations, a tool for interactively visualizing distortion in map projections, a system for comparing different fluid-flow representations, and a method for depicting uncertainty in DNA and amino acid sequences and protein structures. The researchers are currently implementing the latter to help scientists involved in the Human Genome Project visualize the quality of the fit between gene sequences and protein structures.
An awareness of the spatial uncertainty of molecules is critical to effective drug design, which involves fitting the molecules of a given medication to the active site of treatment. If either or both of these elements are dynamic, there may not be a single correct fit, but rather a series of likely fits. The top image shows eight possible drug configurations of an anti-tumor medication, each with opacity determined by the likelihood of that configuration. More opaque regions represent a higher likelihood of an atom occupying that location. The bottom image shows a volume rendering of the composite space of the likely positions over time.
The UCSC researchers have also developed an application-independent visualization method called the reconfigurable disk tree (RDT). The RDT uses a setup of links and nodes that can be rearranged to organize and display hierarchical spatial and non-spatial (such as financial or statistical) data, including the relevant uncertainty information.
To enhance the user`s perception of uncertainty, each of these methods relies on such visualization techniques as side-by-side viewing, pseudocoloring, transparency, texture, and animation.
Although most researchers agree that there is no one "best" way to visualize uncertainty, a number of general perceptual considerations can help guide the process. "You first have to think about how important it is to get across uncertainty in a particular application. If critical decisions lie in the balance, where you shouldn't be making the decision in the face of uncertainty over a certain level, then you probably want to visually obscure the data--maybe make the uncertain data change into a background color, so the certain information is prominent. This way it's not possible for someone to make an incorrect assumption," says MacEachren. "In applications where people just need to be able to second-guess their decisions, maybe to make sure they're not going out too far on a limb, it's important to let them see the patterns first, and then think about how certain or uncertain they are." In such a scenario, the uncertainty representation could reside in the background and be accessible when needed.
The key, MacEachren notes, is to clearly separate the data and the uncertainty. In an application he and his colleagues developed to visualize the surface pressure predicted by meteorological models, the pressure visualization and the certainty prediction were separated graphically. "We used isobars, lines of equal barometric pressure, to represent pressure, and we used an area-shading scheme to depict the uncertainty. We then animated the result. An interesting outcome was that it made the fact that the models disagreed more in time than space quite obvious," he says. The results were important in terms of the ability to predict when a pressure system would reach a certain region.
Mapping uncertainty to pseudocolors and glyphs offers an intuitive way to understand the quality of complex data. The size and color of the spheres represent such characteristics as the amount of uncertainty and its variation over time.
Another consideration in designing an uncertainty visualization is human intuition. "We have expectations about what uncertainty means. It's where the colors get dim, or where things get fuzzy or fade out. It makes sense to take advantage of those expectations," says Penny Rheingans, a visualization researcher at the University of Maryland Baltimore County.
One of Rheingans' areas of focus is the creation of "likelihood" visualizations for biomedical applications. In a recent project, she worked with a medical researcher developing an anti-tumor drug. "The effectiveness of the drug was dependent on how it fit geometrically into the active site. One part of the drug molecule was connected by a single side chain and moved around a lot. The 'likelihood space' of the composite shape based on this movement was going to affect how it would fit," she says. "You could take a snapshot of the molecule at an instant and maybe it would fit well, but if that instance was unlikely, it might not be an effective drug." In order to view the range of what the molecule could look like, the researchers built a model based on an average of a number of possible configurations. They created a visualization to depict the likelihood of a geometric fit with respect to the molecule's movement over time.
Such an approach is more challenging when the uncertain values are ambiguous. For example, Rheingans and her colleagues recently began working with doctors at Johns Hopkins Medical Center on a method to show the likelihood that a particular tissue is cancerous. "The doctors are looking at tumors in the liver. They have a model that says the signal response [from an MRI] at a particular level is likely to be tumor or healthy tissue. The problem is that around the periphery of the tumor there are values somewhere between the two levels," she says. The researchers' objective is to visualize the tumor and partial-tumor zone by using variations in color and opacity to depict the degree to which given voxels represent healthy and diseased tissue.
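The kind of mapping Rheingans describes--intermediate signal values rendered with intermediate color and opacity--can be realized with a simple linear membership function. This is a hypothetical sketch; the signal thresholds and color choices are invented for illustration, not the Johns Hopkins model.

```python
# Sketch: mapping an MRI-like signal to tumor likelihood, then to color
# and opacity. Thresholds and colors are hypothetical, for illustration.

HEALTHY_LEVEL = 100.0   # signal typical of healthy tissue (assumed)
TUMOR_LEVEL = 200.0     # signal typical of tumor tissue (assumed)

def tumor_likelihood(signal):
    """Linear membership: 0 at the healthy level, 1 at the tumor level,
    intermediate in the ambiguous band around the tumor periphery."""
    frac = (signal - HEALTHY_LEVEL) / (TUMOR_LEVEL - HEALTHY_LEVEL)
    return max(0.0, min(1.0, frac))

def voxel_rgba(signal):
    """Blend healthy (blue) toward tumor (red); opacity tracks likelihood,
    so confident tumor voxels are vivid and ambiguous ones fade out."""
    p = tumor_likelihood(signal)
    red, green, blue = p, 0.0, 1.0 - p
    alpha = p
    return red, green, blue, alpha

print(voxel_rgba(150.0))  # periphery voxel: half red, half blue, half opaque
```

Tying opacity to likelihood exploits exactly the intuition Rheingans cites: uncertain regions literally fade out rather than presenting a crisp but unwarranted boundary.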
A delta wing pseudocolored with velocity magnitude combines the visual cues characteristic of two methods used to view fluid-flow data (streamlines and line-integral convolution). Using custom software, UCSC researchers combine the two types of representations to help identify uncertainty in the simulation data. Both methods produce slightly different results, thus highlighting areas of uncertainty.
Another way to demonstrate uncertainty in a dataset is to map an appropriate range of possible realities and animate the results, says Chuck Ehlschlaeger, a researcher in the geography department of New York's Hunter College. Ehlschlaeger is researching techniques for doing this using surface-elevation data. "The goal is to ask a question and determine a range of possible answers, then link the still images representing each possible result into an animation," he says. Such an approach not only helps researchers explore large amounts of data that couldn't be represented in a single, static view, but also provides an understanding of the effect of uncertainty in the application. "If, for example, someone has to decide whether or not to allow development in an area where an endangered species is at risk, we need to come up with a mapping of possible realities that is large enough so that the 'real' reality likely falls within that distribution. Then we have to come up with some logical way of looking at all of these possibilities," says Ehlschlaeger.
Animating between various versions of reality has to be more than a straight interpolation process, however. "We're trying to figure out ways of taking two versions of reality and morphing between them, while at the same time making sure that the morphed images have the mathematical characteristics of what could be reality," says Ehlschlaeger. This approach is not applicable across the board, however. For example, it's less useful where adjacent potential reality stills are very different. "Imagine setting up a camera in Times Square in New York, and taking one picture every hour, then displaying the series of pictures as a 30 frame-per-second animation. You wouldn't see anything specific, because each frame would be so different."
Uncertainty models can be used to simulate potentially optimal routes between two points and the results can be visualized to make clear such distinctions as the shortest or least-expensive route. This image contains data for 250 potentially optimal paths between two points. To avoid visual clutter, each of the colored paths represents a group of closely aligned optimal routes.
|(Image courtesy of C. Ehlschlaeger, Hunter College.) |
Some applications require not only that uncertainty data be visible, but that it also be highly interactive. This is the case at HRL Laboratories in Malibu, California, where researchers are developing methods for visualizing uncertainty in tactical data in order to determine the effect on a military commander's ability to make decisions. "We depict uncertainty in two ways simultaneously. A 'situation awareness' visualization shows the uncertainty associated with a specific object, such as an aircraft flying over a terrain. At the same time, we display an abstraction of the [commander's] evidential reasoning network, based on the aircraft's attributes and the uncertainty associated with them," says HRL researcher Pete Tinker.
The resulting "belief" visualization is dynamic, interactive, and animated. "A user can navigate through the network to determine sources of uncertainty and can drill down at any point to get more detail," says Tinker. This is critical if a military commander is to use the tool to get a clear understanding of data uncertainty and its underlying causes. "[Such information] can mean the difference between action and inaction, life and death."
Although uncertainty modeling isn't new, the push toward visualizing it is fairly recent. Because of this, much of the code being developed to handle it is either fully custom or developed using low-level programming capabilities of shareware visualization software or commercial toolkits, such as Data Explorer from IBM (White Plains, NY) and AVS from Advanced Visual Systems (Waltham, MA). Recently, many researchers have begun writing their applications using Sun's Java 3D API, citing the benefits of platform-independence and the easy production of Internet-ready data. "Java is well suited to some of the ways we specify which parameters of a display should be controlled by the user," says MacEachren. "And it provides a flexible method for representing temporal data, which is important to the work we're doing."
While the technical challenges involved in the development of uncertainty visualizations are significant, the more daunting obstacles may be the conceptual ones. "We understand so little about the nature of error and how to formalize our descriptions of uncertainty. Additionally, we understand little about how humans deal cognitively with uncertainty," says Buttenfield. "It's difficult to automate or mechanize something that is not completely formalized. So without formal models of uncertainty, its detection, identification, and analysis are all impeded."
Vector glyphs can illustrate the continuous range of data quality in environmental vector fields. Here, UCSC researchers used them together with traditional arrow glyphs to show uncertain wind and ocean currents. The uncertainty is depicted in direction and magnitude, as well as mean direction and length.
A related obstacle is figuring out how to get people to make decisions in the face of uncertainty, says MacEachren. "Decision makers often don't like to know anything about uncertainty. Our challenge is to find ways to present the information so they can and will incorporate it into their decision-making process."
Diana Phillips Mahoney is chief technology editor of Computer Graphics World.