Issue: Volume 25, Issue 1 (January 2002)

Meeting of the minds



BY Diana Phillips Mahoney

It doesn't take a rocket scientist to build a computer game, but rocket scientists, along with biologists, chemists, physicists, and every other professional who develops and applies visualization technology for scientific inquiry and discovery, have a lot to learn from computer game developers. They also have a lot to teach them.

Increasingly, the technology gap separating scientific visualization and gaming is being bridged by revolutionary advances in computer graphics hardware and software driven by the needs of both camps. Where scientists once ran their high-end simulations and visualizations on supercomputers and expensive graphics workstations, today they're churning out volumes of visual data on the same low-cost PCs equipped with high-performance graphics hardware that the game community relies on. They are also incorporating novel optimization techniques, driven primarily by the needs of the gaming community, to achieve real-time interaction with their large scientific datasets.

On the game side, programmers are retrofitting simulation and visualization code to achieve amazing real-time, interactive effects in their latest titles, and they are taking advantage of hardware rendering techniques born out of visualization research, including 3D textures, vertex programs, and pixel shading.
The environmental effects (above) in RealMyst, the real-time version of Cyan's popular Myst game, were built using some of the same 3D texture capabilities that University of Utah researcher Joe Kniss uses to create real-time volume renderings of MRI data.

"We are seeing more and more technology crossover between the two communities," says Jeff Brown, workstation product director for graphics hardware vendor Nvidia. For example, at the Siggraph conference last year, Nvidia presented an OpenGL/WireGL clustered graphics demo in conjunction with researchers from Stan ford. For the exhibit, Nvidia used six of its QuadroDCC workstations and the WireGL distributed graphics API to demonstrate real-time volume visualization of 3D magnetic resonance images. The ability to interact with such data in real time is obviously a boon to physicians and surgeons, but the gamers are taking note as well. "This sort of development helps to drive these technologies into volume products and can be used to make gaming experiences more lifelike and realistic," says Brown.

This coming together of the two camps is all the more intriguing given the broad differences in the underlying application objectives. "The goal of visualization is to present information in a way that helps users understand it," says Hanspeter Pfister, a graphics researcher with Mitsubishi Electric Research Labs in Cambridge, Massachusetts. "Games typically focus on entertainment, action, and distraction from reality, rather than the discovery of new knowledge."

However, while the goals are much different, the technology needs for achieving them are similar. "When we talk about interactive applications in either camp, we speak the same language," says Joe Kniss, a researcher in the visualization group at the University of Utah. In his research, Kniss explores the use of volume visualization for real-time interactive animation and rendering of anatomical data for medical and educational applications, and he relies on some of the same techniques that game developers use to achieve certain effects. For example, he says, "bump mapping is quickly becoming a standard effect in the gaming industry. I use the same bump-mapping principles to achieve the per-pixel lighting required for my volume rendering."
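
To make the shared math concrete, here is a minimal Python sketch (not drawn from Kniss's code; the function names are hypothetical) of per-pixel diffuse lighting. In a game, the surface normal comes from a bump or normal map; in volume rendering, it is typically estimated from the gradient of the scalar field. The lighting evaluation itself is identical.

```python
import numpy as np

def diffuse_shade(normal, light_dir, base_color, ambient=0.1):
    """Per-pixel diffuse lighting; the same math applies whether the normal
    comes from a bump/normal map (games) or a volume gradient (visualization)."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return base_color * (ambient + max(np.dot(n, l), 0.0))

def gradient_normal(volume, x, y, z):
    """Estimate a shading normal at voxel (x, y, z) by central differences,
    the usual stand-in for a surface normal in volume rendering."""
    g = np.array([volume[x + 1, y, z] - volume[x - 1, y, z],
                  volume[x, y + 1, z] - volume[x, y - 1, z],
                  volume[x, y, z + 1] - volume[x, y, z - 1]], dtype=float)
    return -g  # points from denser material toward less dense material
```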

Another example is the use of 3D textures in games. "The idea of 3D texturing in hardware was developed at Silicon Graphics for medical imaging. Now it is beginning to be applied to games for such things as volume light maps and space-varying fog," says graphics researcher David Ebert of Purdue University. For example, the groundbreaking water and environmental effects in RealMyst, the real-time 3D version of Cyan's popular Myst title, are built on 3D textures. Similarly, in the recent game AquaNox from Massive Development, the photorealistic environments and unique surfaces of the futuristic underwater world were generated with vertex and pixel shading.
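
The shared primitive in both cases is the 3D texture lookup. As a rough illustration (a software stand-in for what the hardware does, with hypothetical names and a NumPy array assumed as the texture), the trilinear interpolation below is the operation that volume renderers perform along view rays and that games perform per fragment for effects such as volumetric fog or 3D light maps.

```python
import numpy as np

def sample_3d_texture(tex, u, v, w):
    """Trilinearly interpolate tex (a 3D NumPy array) at texture coordinates in [0, 1]."""
    x = u * (tex.shape[0] - 1)
    y = v * (tex.shape[1] - 1)
    z = w * (tex.shape[2] - 1)
    x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    x1 = min(x0 + 1, tex.shape[0] - 1)
    y1 = min(y0 + 1, tex.shape[1] - 1)
    z1 = min(z0 + 1, tex.shape[2] - 1)
    fx, fy, fz = x - x0, y - y0, z - z0
    # Interpolate along x, then y, then z.
    c00 = tex[x0, y0, z0] * (1 - fx) + tex[x1, y0, z0] * fx
    c10 = tex[x0, y1, z0] * (1 - fx) + tex[x1, y1, z0] * fx
    c01 = tex[x0, y0, z1] * (1 - fx) + tex[x1, y0, z1] * fx
    c11 = tex[x0, y1, z1] * (1 - fx) + tex[x1, y1, z1] * fx
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz
```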

The digital connection between science and entertainment is not new. Many of the early visualization labs were staffed with technical experts lured from the movie and special effects industry. And the gaming community has long benefited from the pioneering development of graphics hardware and software driven by visualization, including such techniques as Marching Cubes, view-dependent meshes, and volume rendering. Games have also appropriated visual simulations born in the science arena, such as fluid-flow animation, cloud simulation, and physics and collision representations.

What is new is the widespread availability of low-cost, high-performance programmable graphics hardware that can be customized to meet the diverse application needs of both camps. Both groups are developing tools and techniques that exploit the new hardware's unique capabilities for applications across the board. For example, 3D textures would not be on inexpensive PC graphics boards if they weren't useful for games, but the fact that they are there at all is a result of their development for scientific applications. "The gaming community's release cycles are so short that they typically don't have the resources to develop new graphics algorithms from scratch. What they do is get ideas from Siggraph or perhaps the IEEE Visualization conference, and develop their own implementations [of existing algorithms]," says visualization researcher Theresa-Marie Rhyne of North Carolina State University. Ultimately, the graphics vendors, whose eyes are on the mass-market appeal of novel capabilities, will build them into their products, as was the case with 3D textures.
To make volume rendering more intuitive and efficient, Kniss uses high-end 3D graphics cards and direct-manipulation widgets (beneath the models) to interactively render multi-dimensional transfer functions for extracting boundaries and surface properties.
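
The idea behind those multi-dimensional transfer functions can be sketched in a few lines. The hypothetical Python example below (far simpler than the widget-driven functions described in the caption) assigns opacity from both the data value and the gradient magnitude, so that material boundaries, where the gradient is large, are emphasized over homogeneous interiors.

```python
import numpy as np

def transfer_function(value, grad_mag, center=0.5, width=0.1):
    """Map (data value, gradient magnitude) to an opacity in [0, 1]."""
    # Select a band of data values around `center`...
    in_band = np.exp(-((value - center) / width) ** 2)
    # ...and weight by gradient magnitude so boundaries dominate.
    return np.clip(in_band * grad_mag, 0.0, 1.0)
```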

This is a paradigm shift from earlier years, when graphics hardware designers listened to the visualization community and responded to their needs, while the game crowd took what they could get. "For a long time, SGI workstations held the position as the beloved platform for scientists, and the products were always heavily tied to visualization," says Rhyne. Today, as games are becoming the most commercially successful application for desktop PC and console graphics environments, board vendors such as Nvidia and ATI have aligned themselves primarily with this market. "Their mindset is, 'We're looking at what the gaming community is doing, and we're hoping we can address the visualization community as well,' but they don't focus on the visualization community specifically."

The question of who is driving the technology is not nearly as important as where it is going. "For instance, the programmability of the new cards allows a developer to create new shading and transform models that meet their needs. This is an advantage regardless of what camp you belong to," says Kniss.
The photorealistic environments in the underwater world of Massive Development's AquaNox game were achieved using Nvidia's vertex and pixel shading tools, capabilities born in visualization research.

The changes are also providing the opportunity for all developers to start challenging standard assumptions of computer graphics, which, says Kniss, have long needed an overhaul. "The Blinn-Phong shading model has been universally used in computer graphics for more than 25 years. Unfortunately, this model is not very expressive or realistic. The programmability of these cards allows developers to construct a shading model and look that is entirely their own. The net effect is going to be richer, more realistic scenes, whether you are making games or doing visualization."
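
A small illustration of what that programmability buys, using hypothetical Python stand-ins for per-pixel shader code: a classic Blinn-Phong evaluation and a cel-style alternative take the same inputs, so a developer can drop in a custom "look" rather than accept a single built-in model.

```python
import numpy as np

def blinn_phong(n, l, v, shininess=32.0):
    """Classic Blinn-Phong: diffuse term plus a specular term built on the
    halfway vector between the light and view directions (all vectors unit length)."""
    h = (l + v) / np.linalg.norm(l + v)
    return max(np.dot(n, l), 0.0) + max(np.dot(n, h), 0.0) ** shininess

def cel_shade(n, l, v, bands=3):
    """A non-photorealistic alternative: quantize the diffuse term into a few
    flat bands for a hand-drawn look."""
    d = max(np.dot(n, l), 0.0)
    return np.floor(d * bands) / bands

shade = cel_shade  # a custom model dropped in where Blinn-Phong used to be
```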

The road to those richer scenes is strewn with many performance and cultural obstacles. A sizeable one is the fact that a game must always achieve a target frame rate, which limits modeling and rendering complexity. In contrast, scientific visualization has more flexibility in this regard. "I am often quite comfortable with five frames per second, but a game running at this frequency would fail," says Kniss. "Scientific visualization is able to pursue approaches that may seem a little far out, because there is the assumption that the performance will eventually catch up."

On the other hand, the gamers' need for speed can be beneficial sometimes. "It forces programmers to reduce complex algorithms to their most essential components. This reduction can provide valuable insight for researchers," says Kniss. In fact, he notes, "optimizing algorithms can be just as important as coming up with new ones."
Numerical rain and wind models are represented using isosurfaces, contour line slices, color slices, and volume rendering created with a Java-based version of the University of Wisconsin's Vis5D visualization toolkit.

Another consideration is the fact that "the latest interactive visualizations are not real-time interactive, and thus typically require more compute power than can be applied in a real-time game," says Steve Anderson, a game developer at Electronic Arts/Los Angeles. "Even if the platform is identical, the game typically has so many other things to do that it cannot realistically consider using the new technique." For example, he says, "suppose you do a volume visualization on the latest Nvidia graphics card. If you can move around your scientific dataset, it's probably not in real time, and it's probably taking the full resources of the computer, leaving nothing for the other tasks needed to make a game." Consequently, from the game community's perspective, the latest-generation visualization code is generally not interactive enough. "When gamers steal techniques from other disciplines, the capabilities often lag the state of the art by a couple of years [because the techniques have to be optimized for real time]."

A challenge for visualization developers hoping to take advantage of new interaction or display capabilities appearing in games is that the game community does not easily or often yield code that can be redeployed as visualization tools. "You might get fragments or maybe an engine, but most real development for profit is done on a schedule and focus that makes reuse difficult, even among other games," says Anderson.

There are also imposing cultural obstacles. For one, the technology mindsets are almost polar extremes. "Scientific visualization seeks understanding and values discovery. To achieve these goals, the systems need to have flexible interfaces and programming through stable APIs," says Peter Doenges, a visualization and simulation researcher at Evans & Sutherland. "They also need accuracy in multivariate data, scalability in data, and CPU-to-graphics bandwidth."
An inside-out view of a frog is achieved using the volume-texturing capabilities of Nvidia's GeForce graphics cards, a boon to gamers and visualization researchers alike.

In contrast, says Doenges, "games focus on human performance under challenge with fast fixed-function rendering of virtual worlds and landscapes." And game developers have to achieve this under the pressure of an incredibly short release cycle.

The tight deadlines, the increasing demands from users, and the rapid-fire price/performance enhancements of graphics technology that lead to ever-changing hardware, drivers, and APIs inevitably result in products with lots of bugs. This worries visualization developers, who crave stability, says Doenges. "They can't tolerate buggy graphics drivers, API experiments on legacy application code, or database corruption on its way to the screen."

On the other hand, the gaming community can't afford to wait for the most stable APIs and drivers because they will miss out on their window of market opportunity. "Visualization researchers live with longer development cycles. They're solving problems over the course of years. Game developers don't have years. They have months," says Rhyne.

Some researchers have a different perspective on the time crunch. "As a visualization developer, I feel a lot of the same time pressures," says William Hibbard, a graphics researcher at the University of Wisconsin-Madison. For example, in 1995, Hibbard and colleagues wanted to be the leaders in Java visualization systems. To this end, the group announced the availability of its popular shareware VisAD system just one day after Sun announced the availability of the Java 3D API. "The deadlines don't get much shorter than that." The point, he says, is that even though the subject matter is different, "visualization and games share a fundamental challenge: to focus on the quality of the user experience, and to do so under deadline pressure."

Another cultural barrier that visualization researchers must contend with is the approximations and shortcuts that game developers make in the interest of time. "I wouldn't want my medical visualization done with some of the hacks that are often found in video game code, where simulation is not as important as the overall final effect," says Anderson. The scientific community should have a healthy mistrust of tools and code developed purely for entertainment, he contends. "Pick up an old book on cartography and read about the debate and preference for map projections. Then imagine that same problem in 3D, with lots of other perceptual issues such as color and lighting variations that could cloud the result of a scientific simulation." This issue is irrelevant in most games, where the objective is a good-looking, believable scene, whether or not it is a fully accurate one.

The accuracy concern is valid, but it should not be prohibitive, says Ebert. "If error metrics are maintained and quantified, then many game approximations may still be useful and can provide visually accurate results. We need to think about accuracy in terms of the image generated and how it is perceived, not in terms of floating-point value differences."

Before any of these obstacles can be overcome, however, a greater one must be: the lack of open and direct interaction among scientists, game developers, and graphics hardware and software vendors. "We need better communication opportunities," says Ebert. "From my previous work with [game company] Electronic Arts, I was impressed with the new rendering techniques and approximations being developed, but they were not necessarily being disseminated widely."

Recently, commercial graphics vendors such as Nvidia and ATI have begun to serve as enablers in this regard. Last summer, for example, Nvidia sponsored Nvidia University, a three-day conference for some of the nation's top professors and students from university graphics and visualization groups. "This was a tremendous opportunity for those in the CG research community to interact with the people who are designing the next-generation graphics hardware," says Kniss. The payback to the companies attempting such outreach is that it spawns the development of new visualization algorithms, such as those for achieving textures and shading, which are beneficial for games and entertainment. "These advancements not only add value to the current generation of graphics hardware, but provide inspiration for the next generation."

The game development and scientific visualization communities are also exploring ways to bring each other into their respective folds. For the past couple of years, the Siggraph, IEEE Visualization, and Computer Game Developers conferences have hosted panel discussions that focus on bridging the gap between the disciplines.

Once the bridge is in place, both sides stand to benefit. From a technical point of view, says MERL's Hanspeter Pfister, "the visualization community can learn a lot about efficient resource management, highly optimized rendering algorithms, and effective use of modern graphics hardware and multi-user networked applications." With respect to art and design, "visualization experts can learn about effective user interfaces and communication methods." On the flip side, he says, "game designers can learn about large data management, geometry simplification, physics simulations, rendering algorithms, and more efficient data structures." And in terms of APIs and cross-platform development, the gaming crowd could benefit from the more architecture- and hardware-independent development used in visualization. "Instead of squeezing every bit of performance out of hardware X, they could learn to take a global view of algorithms, software engineering, and cross-platform development that has far more value in the long run."

Undoubtedly, the willingness to explore the possible synergies would be greatly aided if a convincing case could be made for potential commercial benefits. "It will take a few commercially successful examples of combining visualization and games for vendors to really open their eyes," says Pfister. "Successful synergistic products could even lead to the creation of new visualization paradigms or a new game genre."

In fact, says Peter Doenges, that new paradigm or genre might just be the elusive 3D killer application. "Computer games have been pushing the envelope, but they're still not the killer application that will serve the widest interest."

Joe Kniss believes the crossover killer app might involve simulation for educational games and experiences. To illustrate the potential value of his own volume rendering research in entertainment, he imagines a game version of Fantastic Voyage, in which the player might defend the body from the invading pathogens, destroy cancer, and repair a brain aneurysm. "The techniques that we have been developing could make this possible, and a game developer's performance and story-telling savvy would be critical," he says. "The result would be a product with enough action and gore to appeal to kids and enough intellectual value that parents will buy it."
Typical physics simulations of hundreds of interacting bodies can cripple the most powerful workstations. Shown here are the results of an efficient multi-body physics simulation by Mitsubishi Electric researcher Brian Mirtich.

Additionally, an application such as this could make good use of existing scientific assets. "Why create models of the human body from scratch, when there are so many great resources out there?" says Kniss. The Visible Human Project, for example, has a full account of human anatomy. "While this is overkill for games, a working subset could be used to guide the development of realistic anatomy." Similarly, there are many other scientific datasets, such as physics simulations, planetary animations, and biological reconstructions that could serve as the basis for realistic models for use in games.

Scientific visualization researchers have another reason to support cooperation and collaboration with gamers: if they don't, they could be left out in the cold. "Some researchers are concerned that game graphics and the operating systems and libraries favored by gamers will gradually hamstring academic computing and 3D visualization research," says Doenges. To the extent that scientific visualization favors OpenGL, this API is now being driven by 3D game innovations, though at a considerable lag relative to Direct3D capabilities. "OpenGL visionaries want more sophisticated memory management connected with graphics, textures, and frame buffers, but these improvements are slow to come," he says. This could lead to the predominance of other programming interfaces, which could strand scientific visualization without needed features.

"This gulf must be answered with collaboration, as well as strategic commitment of game hardware and API experts to look further out at the convergence of visualization and games," says Doenges. Perhaps the best way to ensure this would be simply to foster and encourage respect between the different disciplines. "Pixar takes pride in nurturing a culture where artists and technical programmers live at the same level of recognition and status. Why not have game graphics developers put sci/viz developers on their teams as the visionaries helping to lead games to the next level?" In which case, maybe it will take a rocket scientist to build a computer game.

Diana Phillips Mahoney is chief technology editor of Computer Graphics World.