One constant in the world of workstations is that they are always getting faster. Artists, animators, editors, and technical directors are constantly demanding more computing power to create extremely complex and highly realistic images. These days, there’s a new way to add more processing power to a workstation, and that’s to add more processors. Dual-core processor systems are now the norm, and quad-core systems are poised to steal their thunder. So what are vendors waiting for? Adding more processor cores to a system is not a simple task. It involves much more than basic mathematics.
Widening the Road
Think of a single processor as a one-lane road. You can raise the speed limit so cars traverse the route faster, but to move twice as many vehicles down the road in the same amount of time, you have to double the speed limit. In processor terms, the speed limit is the clock rate, and it used to be that you could measure a processor’s performance simply by its clock rate: to double the speed, you doubled the clock. Clocking a processor faster and faster, however, generates a lot more heat and consumes a lot more power, so you can only raise the speed limit so far before the speeds become insane. “Insanity” hit the CPU world somewhere between 3 and 4 GHz.
So, instead of continually upping the speed limit, processor companies chose to make the road wider, adding more processor cores so the CPU could carry more data with a reasonable speed limit. The first step was to create dual-core processors, which had two processors on a single chip. This essentially doubled the processor power, but wasn’t a new idea. The concept of having dual processors in one workstation has been a reality for at least a decade. The big difference with a dual-core processor is that both processors are on a single chip. This saves a lot of expensive real estate on the motherboard and is much more efficient from a power standpoint.
It stands to reason that if two cores are better than one, then four must be better than two. In late 2006, less than 18 months after the introduction of its dual-core chips, Intel announced its Core 2 Quad and quad-core Xeon chips. These processors are basically a two-for-one arrangement, packing two of Intel’s dual-core processor chips into a single-processor package.
AMD is currently running a little behind Intel in the quad-core race, but that is because AMD’s new Opteron chips will be more fully integrated, with all four cores on the same piece of silicon. Whether this makes a difference in performance remains to be seen, but AMD’s chips should be available sometime this year.
Of course, if four cores are better than two, then it seems reasonable that eight must be better than four. To this end, several manufacturers are shipping machines with two quad-core processors installed, for a grand total of eight processor cores. HP’s xw8400 workstation offers dual Xeon quad-core processors, and Apple’s Mac Pro workstations can also support dual quad-core processors.
Another important factor in this new technology is the advent of 64-bit processing. A 64-bit data path essentially doubles the amount of information the computer can process at a time. The one catch is that the operating system needs to be 64-bit capable. All the current operating systems, such as Windows Vista, OS X, and Linux, are available in 64-bit versions, but older software may be 32-bit only. Updating your operating system and applications to the most current versions should ensure 64-bit performance.
Not every software application is set up to benefit from dual cores, though parts of Autodesk’s Maya can indeed utilize the technology.
Rendering is one DCC task that benefits greatly from multiple cores.
Not So Fast
In some ways, the argument for quad core is simple: More processors mean more computing power. This would lead you to believe that an application running on four cores will run circles around the same application running on a single processor. But reality is not so simple. The vast majority of applications are written to run on only one processor at a time. Word processing and e-mail apps, for instance, simply don’t need more than one processor, and writing multi-threaded software that takes advantage of all those processors adds considerable complexity.
In the area of animation, video editing, and visual effects, however, the benefits of multi-threaded applications are clear. Productions are constantly adding more effects shots, and the resulting images are getting more complex, but the deadlines don’t change. Artists creating these images need all the power they can get. Thankfully, most of the major software companies have been threading their applications for quite a while. Now that most new systems have multiple cores, there’s even more pressure to write software that efficiently takes advantage of all the processing power available. Still, even with excellent multi-threading, an application’s performance will never scale linearly with each new core that is added: the application spends some overhead managing the threads themselves, and any work that can’t be parallelized still runs on a single core.
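The diminishing return described above is formalized by Amdahl’s law (not named in the article, but it captures the point exactly): if only a fraction p of a program’s work can run in parallel, n cores can never deliver more than a 1/((1 − p) + p/n) speedup. A quick sketch, using an assumed, purely illustrative 90-percent-parallel workload:

```python
def speedup(p, n):
    # Amdahl's law: the serial fraction (1 - p) caps the gain,
    # no matter how many cores n you add.
    return 1.0 / ((1.0 - p) + p / n)

# Assume 90% of the work parallelizes -- an illustrative figure,
# not a measurement from any real application.
for cores in (1, 2, 4, 8):
    print(cores, round(speedup(0.9, cores), 2))
```

With those assumptions, eight cores yield only about a 4.7x speedup rather than 8x, which is why "eight cores" never means "eight times faster."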
Even within an application, some parts of the code may not be threaded. According to Rob Hoffman, product manager for Autodesk Media and Entertainment, 3D applications such as Maya and 3ds Max are multi-threaded in those parts of their code that really need it, such as the simulation of particles, fluids, hair, and cloth. Multi-threading these processor-intensive tasks makes the packages much more interactive, showing artists a clearer picture of what the final product will look like. With multiple cores, animators will be able to animate characters more fluidly and see hair and clothing in near real time. Effects artists can more closely simulate crowds, fluids, and many other complex effects.
Applications such as image editing, compositing, and video editing can also take advantage of multiple cores. In a compositing application, for example, one core may be performing color correction, another tracking motion, another reading video frames off the array, and another rendering finished frames. Even an image-editing package such as Adobe’s Photoshop can use multiple cores.
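That kind of task-level parallelism, where different cores handle different stages at once, can be sketched with Python’s standard thread pool. The stage functions here are hypothetical stand-ins, not any real compositor’s API; in a real application these stages would be heavily optimized native code, and the sketch only shows the submission pattern:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stage functions standing in for a compositor's real work.
def color_correct(frame):
    return f"color-corrected {frame}"

def track_motion(frame):
    return f"motion-tracked {frame}"

# Submit independent stages concurrently; with multiple cores available,
# each stage can be scheduled onto its own core.
with ThreadPoolExecutor(max_workers=2) as pool:
    cc = pool.submit(color_correct, "frame_0042")
    tm = pool.submit(track_motion, "frame_0042")
    print(cc.result())
    print(tm.result())
```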
One big advantage for multiple cores comes in the area of rendering. At NAB2007, Apple was touting one of its eight-core Mac Pro machines as a “renderfarm in a box.” In some ways, this is very true. Rendering is a great way to leverage all the processing power of the many cores, simply because rendering can be divided into as many streams as you want, with each frame rendered on its own core. Another benefit of quad-core processors is economic: Applications such as Mental Images’ Mental Ray, one of the more popular third-party renderers, are licensed per CPU socket. This means that a quad-core processor can run four instances of Mental Ray, while a single-core CPU would run only one instance, giving the user four renderers for the price of one.
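Because frames are independent of one another, this kind of work maps naturally onto a process pool with one worker per core. A minimal sketch, where render_frame is a hypothetical stand-in for a real renderer:

```python
from multiprocessing import Pool

def render_frame(frame):
    # Stand-in for a real renderer: each frame is independent work,
    # so frames can be farmed out one per core.
    return f"frame_{frame:04d}.png"

if __name__ == "__main__":
    with Pool(processes=4) as pool:  # one worker per available core
        finished = pool.map(render_frame, range(8))
    print(finished)
```

The pool dispatches frames to whichever worker is free, which is essentially what a renderfarm does across machines, here compressed into a single box.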
One final argument for multiple cores is that when running multiple applications, the system will have the resources so that there’s no dip in performance. This makes the system run more smoothly with fewer delays, resulting in more productivity for the artist.
Within Maya and 3ds Max, parts of the code dealing with simulations,
for example, are multi-threaded.
Feeding the Beast
Four processors can chew through a lot of data very quickly, so feeding those processors requires faster and faster subsystems. Memory is one of the most critical subsystems, and quad-core systems will have much faster memory, with a wider bus and more channels than their predecessors.
Just a few years ago, 32-bit systems topped out at 4GB of RAM, but the new crop of 64-bit systems can address millions of terabytes of memory, so the address space is no longer the limiting factor. Still, there’s a practical limit to the number of memory modules you can install, and most quad-core systems provide anywhere from 16GB to 64GB of RAM. How much RAM you’ll actually need depends on the application. According to Mike Diehl, product manager for HP workstations, most people using a package such as Maya or 3ds Max will need approximately 4GB to 8GB of RAM, though some will need more, particularly those working in data-intensive fields such as oil and gas exploration.
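The “millions of terabytes” figure follows directly from the address width: a 32-bit address reaches 2^32 bytes, the familiar 4GB ceiling, while a 64-bit address reaches 2^64 bytes, roughly 16 million terabytes. The arithmetic:

```python
# Address-space arithmetic behind the 32-bit vs. 64-bit RAM limits.
GB = 2 ** 30
TB = 2 ** 40

addr_32 = 2 ** 32   # bytes addressable with 32-bit addresses
addr_64 = 2 ** 64   # bytes addressable with 64-bit addresses

print(addr_32 // GB)   # the old 4GB ceiling
print(addr_64 // TB)   # about 16 million terabytes
```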
In addition to having more memory, systems need faster memory, and this is addressed in two ways. First, the memory clock speed is faster. Second, just as with processor cores, chipset makers now add more memory channels to widen the road and relieve the bottleneck. Intel has switched its current generation of multi-core servers and workstations to a new memory technology called FB-DIMM (fully buffered dual inline memory module). Intel’s latest chipset supports up to six channels, though many motherboard manufacturers opt for only four. AMD will stick with the venerable DDR2 for the time being, switching to FB-DIMMs in the future.
Another factor in this equation is cache memory in the processor, which helps stage the data for execution within each core. Processors now sport as much as 8MB of cache on the chip itself. “Intel has done a good job with utilizing cache to increase performance,” says Diehl.
In addition to memory, the demands on other subsystems are also greater. The advent of HDTV and higher-resolution images will demand larger and faster disk arrays. For graphics cards, the fairly new PCI Express bus will also be upgraded to Version 2, which doubles the available bandwidth. The bottom line is that the entire system, and not just the processor, needs to be fast.
Increasing the Numbers
As for what the future holds, the general consensus is that the number of processor cores will continue to increase. Microsoft’s director of strategy, Ty Carlson, spoke about this recently during the Future in Review 2007 conference in San Diego: “You’re going to see in excess of eight, 16, 64, and beyond processors on your client computer.” He also suggested that operating systems such as future versions of Windows would have to be “fundamentally different” in order to take full advantage of this new technology. Most operating systems are geared for four, perhaps eight, processors, but 64 processors may require some very new technology.
For the end user, however, a lot of this new technology may be transparent. An artist or technical director will simply see a faster machine and get the job done more quickly. Autodesk’s Hoffman puts it very simply when he says: “Processor power is kind of like pizza and beer; you can never have too much of it.”
Indeed, if this technology lives up to its promise, then, hopefully, artists will have a little more free time to enjoy their pizza and beer.
The Gates Planetarium in Denver is using the power of quad-core processing to render vast quantities of scientific information and complex calculations.
George Maestri is a contributing editor for Computer Graphics World and president/CEO of RubberBug animation studio. He can be reached at firstname.lastname@example.org