Imagine a world where the filmmaker shoots a scene blindfolded without the ability to see who or what is actually being captured on film until the footage is in review. A workflow that limiting would require inordinate amounts of labor—tightly controlled scripting of all actors and objects on the set and camera locations, angles and lighting predetermined long before the cameras roll, and more.
This scenario may sound far-fetched, but 3D animators and artists face similar workflow challenges when setting up and adjusting lighting, materials, shadows, and camera angles to achieve the desired end result. Realistically evaluating the creative look and production quality of an animated shot is a time-consuming process: it requires rendering out layers and assembling everything in a compositing application.
The initial setup of a scene doesn’t demand this kind of sequential workflow; the pain comes with changes. Even a small modification, such as adding sheen to a character’s hair, requires artists to go back and re-render the entire sequence before they can actually see how it looks. In fact, almost any change to visual appearance, no matter how subtle, becomes time-consuming and costly from a creative standpoint: adjust, render, review, and repeat, one discrete step after another.
This type of iterative sequential workflow is the norm for 3D CG animation, and it resembles the challenges video editors faced 20 years ago, before real-time non-linear editing programs became widely available. Back when tape-to-tape editing was standard practice, each piece of video had to be laid down to the recorder in a linear sequence. Once the process was under way, making a simple change was not possible without completely re-editing all the footage that followed it. But once non-linear editing tools entered the mainstream, video workflow changed forever. Instead of building a program one shot at a time, non-linear systems allowed film and video makers to work with any piece of footage and see the results as they unfolded. This shift gave editors far greater flexibility and control over the final footage, along with hefty time savings in production.
Processor and professional graphics-accelerator technology is advancing at an astounding rate, putting ever more powerful CPUs and GPUs behind final rendering in commercial rendering and raytracing programs. Many accelerated rendering solutions, such as Chaos Group’s V-Ray, Caustic Graphics’ CausticRT, Mental Images’ Mental Ray, and others, continue to raise the bar for realistic visualization. And the promise of even faster final rendering on the new line of graphics accelerator boards, like Evergreen from AMD and Fermi from Nvidia, will continue to drive innovation.
Faster final rendering is only one part of the equation, however. The real breakthrough comes with acceleration of the entire workflow, allowing artists to work in a non-linear environment to make and view changes as fast as the artist (or client) can envision them.
Imagine being able to add a light to a scene on the fly, make adjustments, and see the shadows, glows, blooms, AO, reflections, and so on, almost exactly as they will appear in a final render. No need to look at the interface and type in a number. Instead, move a slider and watch the image change directly. Scrub the timeline and as the camera moves, dynamically manipulate lighting, depth of field, motion blur, materials, or turn compositing layers on or off without having to re-render the scene. Jump to an arbitrary time point in a scene, make a change, and see the results without having to click a render button or even wait for a progressive scan render. This kind of non-linear, real-time creative control has the potential to dramatically improve the way 3D CG animation, architectural visualization, product design, and film pre-visualization are created and delivered.
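The difference between the two workflows can be pictured in a few lines of code. This is purely an illustrative sketch, not any product’s API: the `Scene` class, its parameters, and the cheap per-frame shade step are all hypothetical. The point is only that in a real-time workflow, every parameter change immediately yields an updated frame instead of queuing a batch re-render.

```python
# Illustrative sketch of a real-time, non-linear workflow: a "scene" whose
# parameters can be tweaked on the fly, with a cheap re-shade step standing
# in for a GPU-accelerated redraw. All names here are hypothetical.

class Scene:
    def __init__(self):
        self.params = {"key_light_intensity": 1.0, "motion_blur": 0.0}
        self.frame = None     # the current rendered "image"
        self.redraws = 0      # how many times we re-shaded

    def shade(self):
        # Stand-in for a real-time GPU pass: fast enough to run on every tweak.
        self.redraws += 1
        self.frame = dict(self.params)  # the image reflects current parameters

    def set_param(self, name, value):
        # Non-linear workflow: every slider move re-shades immediately,
        # so the artist sees the result without a separate render step.
        self.params[name] = value
        self.shade()

scene = Scene()
scene.set_param("key_light_intensity", 1.4)  # nudge a light...
scene.set_param("motion_blur", 0.5)          # ...then dial in motion blur
```

After each `set_param` call, `scene.frame` already reflects the change; there is no distinct render button anywhere in the loop, which is the essence of the workflow described above.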
The eureka moment for this type of workflow came to me while working with and observing some of the incredible real-time graphics rendering with today’s most advanced game engines. It made me wonder: Why can’t we apply the same type of GPU-accelerated real-time rendering technology used in 3D gameplay to create a real-time creative 3D animation workflow?
This was the launching point of development for MachStudio Pro, a real-time, non-linear 3D compositing and finishing program. The idea behind MachStudio Pro is that light and materials become brushstrokes on the canvas of the camera, providing artists with immediate visual feedback. The approach is similar to how artists use Adobe Photoshop and Apple Final Cut Pro, but for animated 3D lighting, materials, cameras, and rendering.
Processes that were previously separated in a manufacturing assembly-style workflow (adjust, render, and repeat) are now available in a single work space. In essence, the roles of the shader technical director (TD), lighting TD, and compositor are consolidated, so the artist can creatively explore and control all aspects of the images and animations together in one place.
People often wonder how difficult it is to create a great render. In traditional rendering programs, you spend a great deal of time setting up, previewing, modifying, re-previewing, and waiting for a final render. I have been working in this field for more than 20 years, and I know artists who don’t even look at a scene; instead, they run a script to process and render a scene, which is hardly an artistic process.
But in a non-linear, real-time 3D work space, there are no discrete setup, preview, and render steps. The interface is more intuitive, allowing you to move sliders and dials interactively, and instantly see the results. There is no need to memorize “the perfect render settings.” In fact, you may not even need to look at the numbers. By moving the slider, you are able to see, in real time, changes to lighting or effects as they appear in a 3D scene, thus opening the door to experiment with new artistic treatments and set up more complex lights, fog, shadows, AO, and more throughout the creative pipeline.
With a 3D workflow, artists can concentrate on full scenes rather than individual shots. In shot-based animation rendering pipelines, different lighters work on different shots that are cut against each other. In the best-case scenario, that means re-creating light rigs across shots. In the worst-case scenario, different artistic styles lead to mismatched files assembled together in a 2D compositing application. However, in a non-linear 3D real-time work space, the same lighter can work on all the shots in the scene using a common set of light rigs and render passes. This approach helps improve consistency and reduce logistical errors, production time, and production costs.
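One way to picture the shared-rig idea: if every shot in a scene references the same light-rig object, rather than carrying its own copy, a single adjustment propagates to every shot automatically. The sketch below is hypothetical and not any program’s actual data model; it simply contrasts shared references with the per-shot duplication described above.

```python
# Hypothetical sketch of scene-level lighting: shots share one light rig
# by reference, so a single tweak stays consistent across every shot.

class LightRig:
    def __init__(self, key=1.0, fill=0.5):
        self.key = key    # key-light intensity
        self.fill = fill  # fill-light intensity

class Shot:
    def __init__(self, name, rig):
        self.name = name
        self.rig = rig    # shared reference, not a per-shot copy

rig = LightRig()
shots = [Shot("sh010", rig), Shot("sh020", rig), Shot("sh030", rig)]

# The lighter brightens the key light once, at the scene level...
rig.key = 1.3

# ...and every shot picks up the change, so cuts between shots stay consistent.
keys = [shot.rig.key for shot in shots]
```

With per-shot copies, the same tweak would have to be repeated (and exactly matched) in every shot, which is precisely the source of the mismatches and logistical errors described above.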
Which Render Style Is Best?
Is the quality of real-time non-linear rendering sufficient for production use? Does every project need to be raytraced? This is, of course, a straw-man question. The real goal of any project is to choose the techniques that produce the best-quality images, with the desired look, as quickly as possible. The “gold standard” of film rendering combines many different techniques and rendering styles to achieve a unified look and feel.
More importantly though, with a non-linear 3D workflow, the image can be developed based on an artist’s instinctive reaction to a scene. Clicking a render button and getting a physics-defined, raytraced image doesn’t require artistic vision. Instead, artistic creativity is the ability to make a change, quickly observe the results, and fine-tune the image to achieve the desired look. It is this type of interactive dialog with the image that makes great 3D CG animation possible.