Viewpoint
Issue: Volume 33, Issue 7 (July 2010)

For the movie The Last Airbender, Industrial Light & Magic (ILM) was challenged with bringing to life, among other things, the fire-bending effects from the animated series of the same name (see “Well Bent,” pg. 26). One of the specific challenges we faced was to develop a pipeline to handle the fire-bending requirements of the show, both at the aesthetic level (our client wanted to anchor these effects in reality, while retaining some creative freedom) and at the production level (the shot count involving fire bending was fairly high).


For The Last Airbender, ILM had to create digital fire that met the film’s unique VFX requirements. This led to development of a new 3D fluid solver and volume renderer utilizing Nvidia’s CUDA.

Fire has always been an important part of the visual effects landscape. At ILM, mid-ground to foreground pyrotechnics typically would be done practically. However, it is increasingly difficult to tailor practical elements to their final motion and look in the shot. That said, filmed elements display an incredible amount of detail across a broad range of scales, which is one of the main reasons it is difficult to simulate and render fire that holds up to the organic richness of practical elements: The simulation resolutions needed to capture that full range of scales are usually impractical for a given production schedule. In other words, the computational power required to fit a large quantity of hero fire simulations and renders into a production schedule has not really been available. One of the key factors for us is the number of iterations an artist is able to produce and show to the client or supervisor before a shot gets finaled. Moreover, the computational fluid-dynamics models behind such simulations are complex enough that they often lead to a certain lack of control, which defeats one of our main goals: giving our clients as much creative control as possible. For these reasons, digital fire has been particularly challenging, and it is an area that we have targeted in our research and development efforts. The advent of graphics processing units (GPUs) as high-performance computing devices helped us clear a sizable portion of this hurdle.

For Harry Potter and the Half-Blood Prince, we successfully harnessed the power of Nvidia graphics boards for high-performance computing purposes. The solver/renderer developed for that show, dubbed Verte, produced highly detailed fire renders in a fraction of the time they would have taken on traditional multicore workstations. Unfortunately, that frustum-based solution was not appropriate for the requirements of The Last Airbender's fire-bending effects. To satisfy the filmmaker's vision, we knew we were going to have cameras orbiting around the fire as well as fireballs coming straight at the camera, and the fire was going to have a good deal of interaction with other elemental forces, such as earth, air, and water.

By August 2008, Nvidia had released Version 2.0 of CUDA, its high-performance computing development framework, which greatly simplifies access to the power behind GPUs. With the framework maturing, and given the success of our previous experience on Harry Potter, we decided to develop a more general-purpose 3D fluid solver and volume renderer in CUDA, which we named Plume. From the very start, we decided to limit low-level hardware optimization to a strict minimum and instead built a stable, easy-to-use "grid-based computing construct" in which we could express most of the algorithms we were going to need, at the risk of not always getting optimal performance. This approach allowed us to focus primarily on the fluid-dynamics and rendering algorithms, and we rarely had to face hardware-level issues.
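To give a sense of what such a construct might look like, here is a minimal sketch; the names and layout below are purely illustrative, not Plume's actual code. The idea is that a single generic launcher maps a per-cell operation over a 3D grid, so solver stages can be written as small functors with no block/thread bookkeeping:

    #include <cuda_runtime.h>

    struct Grid3 { int nx, ny, nz; };   // grid extents

    // Generic launcher: invokes op(i, j, k) once per cell.
    template <typename Op>
    __global__ void forEachCellKernel(Grid3 g, Op op)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        int j = blockIdx.y * blockDim.y + threadIdx.y;
        int k = blockIdx.z * blockDim.z + threadIdx.z;
        if (i < g.nx && j < g.ny && k < g.nz)
            op(i, j, k);   // solver code never touches thread indices
    }

    template <typename Op>
    void forEachCell(Grid3 g, Op op)
    {
        dim3 block(8, 8, 4);
        dim3 blocks((g.nx + block.x - 1) / block.x,
                    (g.ny + block.y - 1) / block.y,
                    (g.nz + block.z - 1) / block.z);
        forEachCellKernel<<<blocks, block>>>(g, op);
    }

    // Example stage: buoyancy, written as an ordinary per-cell functor.
    struct AddBuoyancy {
        float* w; const float* temperature; float alpha, dt; Grid3 g;
        __device__ void operator()(int i, int j, int k) const {
            int idx = (k * g.ny + j) * g.nx + i;
            w[idx] += alpha * temperature[idx] * dt;   // hot cells rise
        }
    };

With a construct along these lines, adding a new solver stage amounts to writing one more functor, and the launch configuration lives in a single place.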

After six months of development, we were able to run fairly high-resolution fire simulations: Our benchmark simulation had a grid resolution of 640 x 320 x 320 and ran in 25 minutes on an Nvidia Quadro FX 5800, more than 10 times faster than a roughly equivalent simulation running on our multi-CPU solution. We also implemented an artist-friendly volume renderer in CUDA, with features such as self-shadowing and multiple scattering for the smoke, along with render-time detail-enhancement controls. The performance gain allowed our artists to move from setting off overnight runs and waiting until the following morning to see the result, to seeing multiple iterations per day.
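To illustrate the self-shadowing feature, here is a rough sketch of a classic emission/absorption ray marcher; it is our own illustrative code, not Plume's. The inner loop marches from each sample point toward the light, and the resulting transmittance darkens volume regions occluded by the fire and smoke itself:

    #define MAX_STEPS    512
    #define SHADOW_STEPS 32

    // Nearest-cell density lookup in grid coordinates, clamped at the
    // borders (a production renderer would use trilinear filtering).
    __device__ float sampleDensity(const float* d, int nx, int ny, int nz,
                                   float3 p)
    {
        int i = min(max((int)p.x, 0), nx - 1);
        int j = min(max((int)p.y, 0), ny - 1);
        int k = min(max((int)p.z, 0), nz - 1);
        return d[(k * ny + j) * nx + i];
    }

    // One thread per ray: accumulate radiance L while attenuating the
    // view-ray transmittance T; the march toward the light gives
    // self-shadowing.
    __global__ void raymarch(const float* density, int nx, int ny, int nz,
                             float3 camPos, const float3* rayDirs, int nRays,
                             float3 lightDir, float ds, float sigma,
                             float* outRadiance, float* outAlpha)
    {
        int r = blockIdx.x * blockDim.x + threadIdx.x;
        if (r >= nRays) return;
        float3 dir = rayDirs[r];
        float T = 1.0f;   // view-ray transmittance
        float L = 0.0f;   // accumulated (monochrome) radiance
        for (int s = 0; s < MAX_STEPS && T > 0.01f; ++s) {
            float3 p = make_float3(camPos.x + dir.x * s * ds,
                                   camPos.y + dir.y * s * ds,
                                   camPos.z + dir.z * s * ds);
            float rho = sampleDensity(density, nx, ny, nz, p);
            if (rho <= 0.0f) continue;
            float Tl = 1.0f;   // transmittance toward the light
            for (int t = 1; t <= SHADOW_STEPS; ++t) {
                float3 q = make_float3(p.x + lightDir.x * t * ds,
                                       p.y + lightDir.y * t * ds,
                                       p.z + lightDir.z * t * ds);
                Tl *= expf(-sigma * ds * sampleDensity(density, nx, ny, nz, q));
            }
            float a = 1.0f - expf(-sigma * rho * ds);  // sample opacity
            L += T * a * Tl;
            T *= 1.0f - a;
        }
        outRadiance[r] = L;
        outAlpha[r]    = 1.0f - T;
    }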


Using the new solver, ILM generated smoke and fire together at a grid resolution of 640 x 320 x 320.

Another feature we pushed forward was the integration of simulation and rendering. During the initial development phase, it quickly became apparent that once the resolution gets high enough, saving simulation data to disk and loading it back for rendering takes more time than the two tasks combined; we therefore designed the system so that it can optionally render as it simulates. Most of the time, we found that artists decided not to save their simulations at all, preferring to evaluate their work directly in the render rather than through some intermediary simulation visualization. This reduced the iteration time even further. It also made the system accessible to less technical artists, since the integration of simulation and rendering meant they didn't have to learn the many steps of our effects pipeline in order to be productive.
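The control flow this enables might look like the following host-side loop. This is a sketch under assumed names (FluidFields, simulateStep, writeCache, renderFrame are stand-ins, not real Plume calls); the essential point is that the fields stay resident in GPU memory from solve to render, and the disk cache becomes optional:

    // Illustrative declarations; a real system supplies these.
    struct FluidFields;   // device-resident density/temperature/velocity
    struct Camera;
    void simulateStep(FluidFields&, float dt);
    void writeCache(const FluidFields&, int frame);
    void renderFrame(const FluidFields&, const Camera&, int frame);

    void runShot(FluidFields& fields, const Camera& camera,
                 int startFrame, int endFrame, float dt,
                 bool saveCache, bool renderInline)
    {
        for (int frame = startFrame; frame <= endFrame; ++frame) {
            simulateStep(fields, dt);               // kernels only, no host copies
            if (saveCache)
                writeCache(fields, frame);          // optional; often skipped
            if (renderInline)
                renderFrame(fields, camera, frame); // ray march the live fields
        }
    }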

For Airbender, most of the fire visual development was split between traditional assault fireballs and what we called fire trails, whereby a character displaces, or bends, fire from an existing source toward another character or object, forming a trail. We had to produce special fire events here and there, but these two types represent the vast majority of our work. For the fireballs specifically, the artists used a feature that enabled the simulation domain to move through space. For the fire trails, early look direction had us try to make the fire twist like a tornado.
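A moving domain can be implemented in several ways; one simple scheme, sketched below with our own names and limited to whole-cell moves, shifts each field by the window's displacement and clears the newly exposed cells. When the window tracking a fireball advances by (si, sj, sk) cells, a world-space point that lives at new index (i, j, k) was previously at (i + si, j + sj, k + sk):

    __global__ void shiftField(const float* src, float* dst,
                               int nx, int ny, int nz,
                               int si, int sj, int sk)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        int j = blockIdx.y * blockDim.y + threadIdx.y;
        int k = blockIdx.z * blockDim.z + threadIdx.z;
        if (i >= nx || j >= ny || k >= nz) return;
        int oi = i + si, oj = j + sj, ok = k + sk;   // old-grid index
        bool inside = oi >= 0 && oi < nx &&
                      oj >= 0 && oj < ny &&
                      ok >= 0 && ok < nz;
        // Copy surviving values; cells newly exposed by the move start empty.
        dst[(k * ny + j) * nx + i] =
            inside ? src[(ok * ny + oj) * nx + oi] : 0.0f;
    }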

Particles were the most popular choice for driving and sourcing these simulations. Their dynamics were authored in our effects pipeline using Zeno, our proprietary 3D platform. In both cases, since the simulated quantities travel mostly in one direction, we were able to chain simulations together, which in effect gave us extremely high-resolution simulations. Shortly after full production started, we realized that artists were occasionally going to need more control over the simulation, so we provided them with an expression-based framework in which they could author simulation stages that run on the GPU alongside the hard-coded ones. This level of customization proved invaluable for certain shots.
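As a rough illustration of particle sourcing (again, our own sketch rather than Plume's actual scheme), each Zeno-authored particle can deposit an emission amount into the grid cell containing it; heat or velocity can be injected the same way:

    __global__ void sourceFromParticles(const float3* pos, const float* amount,
                                        int nParticles, float* density,
                                        int nx, int ny, int nz, float cellSize)
    {
        int p = blockIdx.x * blockDim.x + threadIdx.x;
        if (p >= nParticles) return;
        int i = (int)(pos[p].x / cellSize);
        int j = (int)(pos[p].y / cellSize);
        int k = (int)(pos[p].z / cellSize);
        if (i < 0 || i >= nx || j < 0 || j >= ny || k < 0 || k >= nz) return;
        // atomicAdd resolves particles landing in the same cell; float
        // atomicAdd requires compute capability 2.0, so older parts would
        // emulate it via atomicCAS.
        atomicAdd(&density[(k * ny + j) * nx + i], amount[p]);
    }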

Throughout the movie, Plume was additionally used as a velocity-field generator for our air-bending pipeline, providing a fast way for artists to generate motion for the many particles that would get rendered for that bending effect. We also ended up using it to simulate and render mist in some of our ocean shots, as well as smoke and dust.
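In its simplest form, using a fluid solve this way means advecting render particles through the simulated velocity field. A minimal sketch (illustrative names; a production version would use trilinear sampling and a higher-order integrator rather than the nearest-cell forward-Euler step shown here):

    __global__ void advectParticles(float3* pos, int nParticles,
                                    const float3* vel,
                                    int nx, int ny, int nz,
                                    float cellSize, float dt)
    {
        int p = blockIdx.x * blockDim.x + threadIdx.x;
        if (p >= nParticles) return;
        // Sample the velocity at the particle's enclosing cell...
        int i = min(max((int)(pos[p].x / cellSize), 0), nx - 1);
        int j = min(max((int)(pos[p].y / cellSize), 0), ny - 1);
        int k = min(max((int)(pos[p].z / cellSize), 0), nz - 1);
        float3 v = vel[(k * ny + j) * nx + i];
        // ...and step the particle forward by one time increment.
        pos[p].x += v.x * dt;
        pos[p].y += v.y * dt;
        pos[p].z += v.z * dt;
    }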

Moving forward, the system put in place for this film reinforced our belief in using GPUs in production for simulation and rendering purposes. It is hard to imagine any future effects development that does not consider harnessing their power in one way or another. The performance gain, coupled with the integration of simulation and rendering in the same tool, has really helped democratize a traditionally more technical task among our artists.