By Jenny Donelan
ATI, Nvidia, and 3Dlabs all unveiled new graphics chips or related technologies around the time of SIGGRAPH 2003. ATI also introduced the FireGL T2-128 card (top) and the hard-to-miss FireGL X2-256 (bottom). Both cards come with sizable amounts of onboard memory.
At first glance, it might seem as if the high-end 3D graphics accelerator market hasn't changed much in the past year. The big three manufacturers—ATI, Nvidia, and 3Dlabs—are still the big three. Programmable hardware continues to be the hot topic it was last year, and the year before. The DirectX graphics API, which arrived on the scene a while back, is still with us, but so is OpenGL, which some people thought might be eclipsed by DirectX.
Strangely enough, the most exciting development of late may be that the big news of 2001—programmable hardware—is still big news. Instead of being pushed aside or superseded by some other technology, programmable hardware not only has made its mark on the gaming industry, but also is gaining momentum in the high-end workstation arena as both hardware and software vendors work to establish standards and create products that will take advantage of it.
"We're in the middle of this very important trend toward programmable graphics," says Peter Glaskowsky of the Microprocessor Report. "The real-time guys [game developers, for example] are getting the capabilities that only studios like Pixar had in off-line rendering." Before the advent of programmable pixel and vertex shaders, he notes, "you basically had a choice of a dull matte finish or a shiny metal finish. Therefore, all the games really looked alike. Game developers wanted to differentiate their products, and programmable hardware has helped them solve this problem."
More sophisticated-looking games are only the beginning, however. The power of programmable pixel and vertex shaders is beginning to reach higher-end markets, such as product design and digital content creation for film. Although progress is being made, it isn't racing along, because there are a lot of players and a lot of standards involved. Here's an overview of where they all stand at the moment.
Microsoft's DirectX graphics API, now in Version 9, has been seen by some experts as a challenge to OpenGL, historically the open-standards graphics API of choice for high-end applications. Certainly development proceeds at a stately pace for OpenGL, because it's an open standard whose revisions must be agreed to by the members of the OpenGL Architecture Review Board (ARB), an independent consortium of hardware and software vendors. Though incremental revisions do appear on a regular basis, OpenGL 2.0, announced more than a year ago, is still in development and not likely to be available for several more months. Microsoft, on the other hand, as Glaskowsky points out, is able to update DirectX more rapidly because it controls the API on its own. Nonetheless, he notes, "There is no apparent risk of OpenGL going away."
Meanwhile, the big three vendors are trying to sort things out with shader languages, drivers, and just about anything that will make programmable hardware accessible to developers working with their products. Just prior to SIGGRAPH, 3Dlabs announced that it had shipped an open-source OpenGL Shading Language Compiler, the first implementation of this high-level shading language that was recently ratified by the ARB as an official extension to OpenGL. The new shading language is designed to enable programmers to use a C-like syntax to code imaging algorithms that can be directly compiled to the graphics processor. Nvidia has already introduced Cg, a programming language it describes as "C for graphics." And ATI and 3Dlabs have worked together to release RenderMonkey, which Neil Trevett, 3Dlabs' senior vice president of market development, describes as "like Visual Basic for shader programs."
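To give a sense of the C-like syntax the article describes, here is a minimal OpenGL Shading Language fragment shader sketch that computes simple per-pixel diffuse lighting. This example is illustrative only; the variable names and the lighting model are the author's assumptions, not code from 3Dlabs' compiler release.

```glsl
// Illustrative GLSL fragment shader: per-pixel diffuse shading.
// All names (surfaceNormal, lightDir, baseColor) are hypothetical.
varying vec3 surfaceNormal;   // normal interpolated across the surface
uniform vec3 lightDir;        // direction toward the light source
uniform vec4 baseColor;       // material color set by the application

void main()
{
    // Lambertian term: clamp the cosine of the angle between
    // the normal and the light direction to zero.
    float diffuse = max(dot(normalize(surfaceNormal),
                            normalize(lightDir)), 0.0);
    gl_FragColor = vec4(baseColor.rgb * diffuse, baseColor.a);
}
```

A program like this is compiled by the driver and runs on the graphics processor for every pixel, which is precisely the kind of per-pixel control that fixed-function hardware could not offer.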
The chip vendors do work together, then, sitting on bodies like the ARB and even cooperating on joint projects from time to time. But they remain competitors. Relations among the members of the 3D graphics triumvirate have changed only slightly during the past 12 months.
3Dlabs, acquired by Creative Technology a year ago, continues to produce its Wildcat line of cards, which is aimed at the high- and ultra-high-end markets. ATI and Nvidia, which outpace each other with new products every several months, have begun to eat into that high-end niche somewhat. For the meantime, however, says Trevett, "We are the only vendor that is developing exclusively for the workstation market. Our sole focus is on the specific needs of the professional." One of the company's latest offerings is the Wildcat VP990 Pro, loaded with 512MB of graphics memory—the largest amount in the industry. The extra memory makes the Wildcat especially adept at handling large models and large amounts of texture data.
OpenGL 2.0 has not been released yet, but development is proceeding. This rendering of a glass statue was created using 3Dlabs' front-end OpenGL 2.0 compiler with Wildcat VP optimized drivers.
Image courtesy 3Dlabs.
Chip-maker Nvidia and chip- and board-maker ATI develop products for the gaming enthusiast as well as the professional. The focus at ATI, according to its director of workstations, Dinesh Sharma, is to approach hardware rendering through applications.
"At ATI, we support open standards through the applications that users know," says Sharma. In other words, the company is working on OpenGL 2-based drivers for Softimage and Alias's Maya, and DirectX 9-based drivers for Discreet's 3ds max.
ATI recently announced two additions to its FireGL line—the high-end FireGL X2-256 and the FireGL T2-128. The former comes with 256MB of graphics memory and is based on the company's FGL Visual Processing Unit, or VPU. The FireGL X2-256 comes with four geometry engines and eight parallel rendering pipelines. Advanced pixel shader support, according to the company, will enable 3D professionals to render models, scenes, and effects in hardware, and in real time, as opposed to slower software rendering.
Nvidia also has released new products—the Nvidia Quadro FX 3000 and Nvidia Quadro FX 3000G. Both cards will be available from OEMs, VARs, and systems builders this fall; the latter adds genlock and framelock capabilities. The FX 3000 cards come with 256MB of graphics memory and offer full-screen anti-aliasing. The company is promoting them as products for such markets as medical and satellite imaging, automotive design, oil and gas exploration, video postproduction, and broadcast. "The bandwidth is twice as wide as the 2000 [the previous generation]," says Jeff Brown, director of workstations for Nvidia. "It will allow users to work with large models and really is going to enable us to address this area."
So is the new world of real-time rendering upon us? Many of the new product descriptions claim real-time rendering capabilities made possible by programmable hardware. It's not quite there yet, not in every case. Says Trevett, "In 2005, you'll have the capability of taking short production films and running them in the hardware, in some cases in real time."
Programmable hardware has enabled filmmakers, designers, and others to use Nvidia technology to create compelling characters.
Sharma, on the other hand, is a little more optimistic. "Last year at SIGGRAPH there was a lot of talk about cinematic rendering," he says, "but we didn't see much content. The developers really didn't know how to do it." The difference this year, he points out, is that applications-based implementations are going to make the technology more accessible. "The shader side of it is a whole new ball game," he says. "There is going to be a whole bunch of people creating shaders."
Looking even a bit farther ahead, says Glaskowsky, the programmable GPU is going to be so powerful that it will serve purposes beyond mere imaging. "For example, it will do the analysis for collision detection," he says. Tasks that would once have been done by the workstation's CPU will be performed directly at the GPU. "People will be thinking of it as a co-processor that happens to be connected to the frame buffer," he says. Until then, however, most of us will have to render and wait.
Jenny Donelan is a contributing editor at Computer Graphics World.
Want to know which 3D graphics card is the fastest? It would seem like a simple operation: Log on to SPEC/GPC's well-organized Web site at www.spec.org/gpc and check the results of the numerous benchmark tests posted there. But as just about everyone knows, when it comes to benchmarks, nothing is simple.
First of all, there are two main sets of benchmarks on the SPEC Web site: SPECviewperf and SPECapc. The former are often referred to as "synthetic" benchmarks, and involve operations created purely for the purpose of benchmarking. The latter are sometimes called "real world" benchmarks, because they're based on actual operations performed on relatively recent versions of actual programs. Which is more meaningful?
"To be useful, you really need both types," maintains Peter Glaskowsky of the Microprocessor Report. The two sets tax workstations and graphics cards in different ways, and the synthetic variety is designed to be updated more quickly, so it can better stay ahead of vendors that constantly optimize their products to score well in benchmark tests.
Coming up: a new SPEC benchmark based on Alias's Maya. — JD