Volume 24, Issue 9 (September 2001)


By George Maestri

Over the past two years, Nvidia has come from nowhere to become the leading vendor of PC graphics cards. Its boards are technically advanced, fast, and reasonably priced. For the average consumer, they are hard to beat. The GeForce3 is based on Nvidia's new architecture, which builds on the successful GeForce2 with the addition of faster memory and new features including vertex and pixel shaders.

Nvidia doesn't sell directly to consumers, so the card I reviewed is available only through OEM sources; you might find it inside a Dell or Gateway computer. Or you can get GeForce3 graphics processor-based cards from board vendors such as Elsa, Leadtek, Visiontek, and Creative.

Unfortunately, my review had a shaky start. I powered down the machine, popped open the case, and proceeded to swap my old GeForce2 card for the GeForce3. When the new card touched the AGP slot, I saw a spark and a small puff of smoke. Before I knew it, both the motherboard and the GeForce3 card were dead. I've replaced dozens of cards before and this has never happened, but I probably should have unplugged my workstation completely: apparently a small trickle current runs through the board, and it caused the short when I inserted the card. After several phone calls, a new motherboard, and a new video card, I got down to the review.

In terms of technology, the GeForce3 is impressive. The chip has 57 million transistors, 15 million more than Intel's Pentium 4 CPU. This hefty dose of silicon offers up some powerful features, giving the card the ability to render images in hardware that look much like high-end software renders. The GeForce3 is touted as an excellent card for gaming, and it has received good reviews from gaming publications. This review, however, will focus on using the card in a production environment.
Two heatsinks and a fan help keep the GeForce3 from overheating.

The big new features for the GeForce3 are programmable vertex and pixel shaders. The vertex shader doesn't really shade vertices. Instead, it allows developers to manipulate geometry on the video card rather than the CPU. This can include deformations such as facial animation and skinning; lighting, shading, and bump mapping; and even motion blur and realistic fur. Complementing vertex shaders are pixel shaders, which allow for a wide variety of effects, including realistic shadows and reflective bump maps.
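To make the idea concrete, here is a minimal sketch in ordinary Python, not actual GeForce3 shader code: it shows the kind of per-vertex math (two-bone skinning, one of the deformations mentioned above) that a programmable vertex shader moves off the CPU and onto the graphics chip.

```python
# Illustrative sketch only -- plain Python, not shader code. Shows the
# per-vertex work (linear-blend skinning) a vertex shader would run on
# the graphics chip instead of the CPU.

def mat_vec(m, v):
    """Multiply a 3x3 matrix (nested lists) by a 3-vector."""
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

def skin_vertex(v, bone_a, bone_b, weight):
    """Blend a vertex between two bone transforms (two-bone skinning)."""
    pa = mat_vec(bone_a, v)
    pb = mat_vec(bone_b, v)
    return tuple(weight * a + (1.0 - weight) * b for a, b in zip(pa, pb))

# Example: a vertex weighted halfway between an unmoved bone and a bone
# scaled by 2 lands halfway between the two transformed positions.
IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
SCALE2 = [[2, 0, 0], [0, 2, 0], [0, 0, 2]]
print(skin_vertex((1.0, 0.0, 0.0), IDENTITY, SCALE2, 0.5))  # (1.5, 0.0, 0.0)
```

On the GeForce3, this math would run once per vertex on the card itself, freeing the CPU for animation and application logic.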

Another interesting feature is the GeForce3's support for higher-order surfaces, which lets the card render patch-based surfaces in real time, tessellating them on the fly. Anti-aliasing has also been improved with the introduction of a new technology called Quincunx. It renders only two samples per pixel but filters each pixel across five sample positions, providing image quality approaching that of normal 4x oversampling at roughly twice the speed. Quincunx finally allows you to use anti-aliasing that looks good without a serious performance hit.
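The sampling trick can be sketched as follows; the 1/2 and 1/8 filter weights here are my assumption for illustration, not published Nvidia figures. The key point is that corner samples sit on pixel boundaries and are shared among four neighboring pixels, so each pixel only has to render two samples of its own.

```python
# Sketch of the Quincunx idea (weights are assumed, not Nvidia's spec):
# five filter taps arranged like the five pips on a die -- the pixel's
# center plus four corner samples shared with neighboring pixels.

def quincunx_pixel(sample, x, y):
    """Filter one pixel's value from five quincunx-pattern sample taps."""
    center = sample(x + 0.5, y + 0.5)  # the pixel's own center sample
    corners = [sample(x, y), sample(x + 1, y),          # corner samples,
               sample(x, y + 1), sample(x + 1, y + 1)]  # shared by 4 pixels
    return 0.5 * center + 0.125 * sum(corners)

# Sanity check: a constant scene filters to the same constant,
# since the weights (0.5 + 4 * 0.125) sum to 1.
print(quincunx_pixel(lambda sx, sy: 1.0, 0, 0))  # 1.0
```

Because the corner taps are reused by adjacent pixels, the card pays for two samples per pixel while each final pixel is filtered from five.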

The last new feature is called Lightspeed Memory Architecture, a fancy name for a fast memory bus. This is not to be overlooked, as many video cards ship with a memory bus that can't keep up with the graphics processor. Nvidia has come up with a novel and elegant solution: the card contains not one but four memory controllers, cross-wired so they can all access memory simultaneously, allowing for much higher overall bandwidth.

All these features are amazing, but if there is no software to exploit them, the card loses its advantage. I tested the GeForce3 with a number of standard 3D applications: Discreet's 3ds max, Alias|Wavefront's Maya, and Softimage XSI, as well as Adobe's Photoshop, Premiere, and After Effects. I found 2D image quality excellent and much better than that of the GeForce2 card it replaced.

The card also performed well in 3D, but wasn't as fast as I thought it would be. In terms of OpenGL speed, the GeForce3 is not Nvidia's fastest. A few months ago, I reviewed Elsa's line of Nvidia-based Quadro2 cards. Running the same Viewperf AWadvs tests with the GeForce3 on the same 800MHz dual-processor machine with 512MB of RAM gave me a score of 46.54, well below the 63.72 for the Quadro2 Pro-based Gloria III card and the 51.30 for the Synergy III (Quadro2 MXR) card (higher scores are better). Interestingly, 3ds max performance was a bit better for the GeForce3. Using OpenGL, the card ran the standard max benchmark file Texture2.max at 45 fps, compared with the Gloria III's 48 fps and the Synergy III's 42 fps.

These tests, however, did not really tap into the true power of the GeForce3. Several tests supplied by Nvidia, most notably the Aquanox Direct3D benchmark, showed that the card can be up to three times faster than a GeForce2. Sadly, benchmarks don't pay the production bills. At this point, gamers are the card's prime audience. For those in production who use OpenGL regularly, the card is a decent performer, but I hope performance all around will improve as software matures to match the card's power.

George Maestri is a writer and animator living in Los Angeles.

Price: $350 to $400
Minimum System Requirements: Intel or AMD-based machine with an AGP slot supporting Windows 98/NT/ME/2000