Viewpoint
By: Ken Pimentel, director of the Visual Communication Group at Autodesk
Issue: Volume 33, Issue 2 (February 2010)

The Rendering Revolution: A Desired Disruption
Today it can take hundreds of minutes to compute a single HD-resolution, photoreal image or animation frame using a typical workstation and existing software renderers. In many cases, it might even take several minutes before there is the slightest indication that something isn’t right in the image, forcing the whole process to start over. Artists rarely render just once to get a final result; instead, they render 10 or even 100 times to get the look they’re searching for.

As such, long render times and delayed feedback have a large impact on both the creative process and the cost of producing a photoreal image. Customers overcome these limitations by building large, dedicated render farms, potentially with hundreds or even thousands of computing nodes, to create a more responsive, interactive solution or simply to meet deadlines.

But, what if you could generate photoreal image quality in under a minute? What if you could render a finished frame of an animation in seconds—perhaps of a lesser quality, but still good enough for most purposes? Disruption occurs in any technological field when an existing process improves by a factor of 10 or more. So, if an artist could iterate in “human time”—with the creative feedback loop measured in seconds or fractions of a second and entire animated sequences produced in minutes—then a rush of new business models and applications could open up for artists using the disruptive technology.

We’re on the cusp of something like this happening with rendering. Here at Autodesk, we’re calling it the “Rendering Revolution.”

Under the Hood
The technology underpinning the Rendering Revolution is twofold. First, CPU cores have multiplied in workstations, with eight cores now available for less than $2000. Second, a dramatic increase in the computing capabilities of GPUs during the past couple of years has finally made it practical to move some of the compute-intensive calculations required for raytracing to the GPU, which is looking more and more like a supercomputer on a chip.

Also, many of the shading and lighting effects found in a traditional software renderer can now be delivered through GPU techniques. Witness the quality of recent game titles that leverage state-of-the-art game engines to produce images that are increasingly close to photoreal. While these images may not always be physically accurate, they can often be good enough for those who do not require ultimate pixel fidelity from their rendering.

These two factors have driven a myriad of interactive or progressive rendering solutions to market and to trade-show floors as technology demonstrations.

Layered on top of the new hardware are sophisticated software algorithms that scale almost linearly with the increase in hardware resources. Using 16 cores is practically twice as fast as using eight, and adding a second GPU can almost double the performance of a single-GPU solution. Couple this with advances in virtualization and cloud computing, and you approach a point where you can instantly scale your rendering resources according to deadlines and other requirements. No longer is a dedicated render farm an absolute requirement for getting results quickly.
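To make the scaling claim concrete, here is a minimal sketch, with a toy shading function standing in for a real ray tracer and no resemblance to any vendor's code: because every tile of the image can be computed independently, handing tiles to a pool of worker processes speeds rendering up almost in proportion to the number of workers, until the machine runs out of physical cores.

```python
# Illustrative only: rendering is "embarrassingly parallel" -- each pixel or
# tile can be computed independently -- which is why throughput scales
# almost linearly with core count.
import time
from multiprocessing import Pool

WIDTH, HEIGHT, TILE = 320, 240, 32

def shade(x, y):
    # Stand-in for a real ray-trace: burn a little CPU per pixel.
    v = 0.0
    for i in range(100):
        v += ((x * 31 + y * 17 + i) % 255) / 255.0
    return v / 100.0

def render_tile(origin):
    tx, ty = origin
    return [(x, y, shade(x, y))
            for y in range(ty, min(ty + TILE, HEIGHT))
            for x in range(tx, min(tx + TILE, WIDTH))]

def render(num_workers):
    tiles = [(tx, ty) for ty in range(0, HEIGHT, TILE)
                      for tx in range(0, WIDTH, TILE)]
    start = time.perf_counter()
    with Pool(num_workers) as pool:
        pool.map(render_tile, tiles)
    return time.perf_counter() - start

if __name__ == "__main__":
    for workers in (1, 2, 4, 8):
        print(f"{workers} worker(s): {render(workers):.2f} s")
```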


Caustic Graphics illustrates the result of integrating Brazil RT with 3ds Max, using the CausticOne rendering accelerator.

Along with the availability of computing resources, a lot of attention is being paid to the process of locking in the finished look: making the decisions about materials, lights, and cameras that drive the creative value of the image. New progressive rendering solutions enhance the iterative experience to accelerate these creative decisions. Instead of waiting for the image to appear piece by piece, the entire image is rendered progressively at increasing resolution, morphing from grainy to photoreal, akin to watching a Polaroid instant photograph develop.
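To see how this differs from traditional bucket rendering, consider the minimal sketch below; it is a generic illustration of progressive refinement, not any particular product's implementation. Each pass adds one noisy sample per pixel to a running average, and the whole frame is displayable after every pass, so the preview sharpens everywhere at once rather than appearing region by region.

```python
# Illustrative only: progressive refinement keeps a running average of noisy
# per-pixel samples, so the whole image "develops" from grainy to clean.
import random

WIDTH, HEIGHT = 160, 120

def sample_pixel(x, y):
    # Stand-in for one Monte Carlo sample from a real renderer:
    # the true pixel value plus some noise.
    true_value = (x / WIDTH + y / HEIGHT) / 2.0
    return true_value + random.uniform(-0.5, 0.5)

accum = [[0.0] * WIDTH for _ in range(HEIGHT)]

for spp in range(1, 33):                      # samples per pixel so far
    for y in range(HEIGHT):
        for x in range(WIDTH):
            accum[y][x] += sample_pixel(x, y)
    preview = [[accum[y][x] / spp for x in range(WIDTH)]
               for y in range(HEIGHT)]
    # display(preview): the noise shrinks roughly as 1/sqrt(spp), so the
    # first few passes already reveal composition, lighting, and materials.
```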

While the time it takes to get a finished image may not change much, the fact that critical creative decisions can now be made in the first tens of seconds of rendering is revolutionary for many artists.


ArtVPS’s Shaderlight raytracer enables artists to interactively make changes to fully rendered images, such as this car model.


New World of Rendering
During the past 18 months, we have started to see products and prototypes that take advantage of these new capabilities, and there is definitely more to come throughout 2010.

For instance, just days ago, Mental Images released Iray, the first interactive and physically correct photorealistic rendering solution. Using a new path-tracing approach, Iray leverages the GPU to deliver physically accurate rendering with zero setup. It is a great example of a progressive renderer that delivers results in seconds, not minutes. Iray makes the rendering process interactive by progressively refining an image until the desired detail is achieved, and it gets faster as more GPUs are employed. This allows the play of light, shadow, and reflection to be studied, making interactive realism a reality.

Shaderlight, new raytracing software from ArtVPS, allows interactive changes to be made to fully rendered images. Shaderlight renders intelligent pixels that understand where they fit in a 3D image. When changes are made to materials, environments, lights, or textures, the information embedded in each pixel is used to update the image without the need to re-render. ArtVPS calls these MELT (materials, environments, lights, and textures) changes.
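The general idea behind such "intelligent pixels" resembles a deep framebuffer: the expensive render stores, for every pixel, enough scene information to re-run shading later. The sketch below illustrates that idea only; it is not ArtVPS's implementation, and the PixelRecord fields and the simple Lambertian shade function are assumptions made for the example.

```python
# Illustrative only: re-shading from stored per-pixel data instead of
# re-rendering. The expensive visibility pass records what each pixel
# "knows"; later material or lighting edits re-run only the cheap shading.
from dataclasses import dataclass

@dataclass
class PixelRecord:      # hypothetical record an "intelligent pixel" keeps
    position: tuple     # world-space hit point
    normal: tuple       # surface normal at the hit
    material_id: str    # which material to look up at shade time

def shade(rec, materials, light_dir, light_intensity):
    # Simple Lambertian shading driven entirely by the stored record.
    albedo = materials[rec.material_id]
    n_dot_l = max(0.0, sum(n * l for n, l in zip(rec.normal, light_dir)))
    return tuple(a * n_dot_l * light_intensity for a in albedo)

# One pixel's record, produced once by the original (expensive) render:
rec = PixelRecord(position=(0.0, 1.0, 3.0), normal=(0.0, 1.0, 0.0),
                  material_id="car_paint")

materials = {"car_paint": (0.8, 0.1, 0.1)}
print(shade(rec, materials, light_dir=(0.0, 1.0, 0.0), light_intensity=1.0))

# The artist repaints the car and dims the light: no rays are re-traced,
# only the shading re-runs over the stored records.
materials["car_paint"] = (0.1, 0.3, 0.8)
print(shade(rec, materials, light_dir=(0.0, 1.0, 0.0), light_intensity=0.5))
```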

Chaos Group recently released V-Ray RT for 3ds Max, which provides quick previews of images. Performing the actual rendering outside of 3ds Max, V-Ray RT follows the user’s actions while working on the scene and progressively generates a photorealistic preview of it. Even more impressive, the company has also demonstrated a research effort that uses the GPU to deliver a 10x to 20x speed improvement over the CPU version; that work is expected to reach its final development stage soon.

Similarly, at SIGGRAPH 2009, Caustic Graphics showed Brazil RT integrated with 3ds Max and running in an interactive mode with the firm’s CausticOne hardware rendering accelerator. The firm claims that the next iteration of its chip will be 10 to 12 times faster than its already interactive results.

Finally, Autodesk is exploring these new computing trends with a solution called Showcase, which offers both interactive GPU-based visualization and a near-seamless transition to CPU-based raytracing when higher quality is needed. Manufacturers use Showcase to evaluate the styling and functionality of their CAD designs. Autodesk, along with other companies, believes that these new approaches to producing photoreal images will open up a larger opportunity, as quality and capabilities improve to address additional requirements outside of CAD visualization.

Imagine the Possibilities
As is often true with new technology innovations, the whole is greater than the sum of its parts. The Rendering Revolution is just under way, and as with any disruptive event, it is difficult to predict exactly what new visions it will enable. I’m sure that artists, engineers, and designers will welcome the creative freedom enabled by iterating in such an expressive fashion. This could drive a shift from using static images to project ideas toward using animations and fully interactive experiences.

With game-quality graphics closing in on film-quality graphics, the use of GPUs and interactive rendering techniques to produce finished frames for TV episodics and architectural visualizations is growing. Given the enormous time and cost pressures associated with creating animated content for TV, the ability to render a 30-second shot in 30 seconds versus 30 hours has significant competitive implications.

We can only imagine the sea changes that the Rendering Revolution will bring. Only one thing is certain: pixels are about to get very cheap.