BackDrop (May 2007)
Issue: Volume: 30 Issue: 5 (May 2007)

Spider-Man 3 challenged not only modelers and animators but also compositors, who had to integrate complex CG characters, live-action plates, CG cityscape sets, and effects into more than 900 compelling CG and live-action shots.

In all, 12 facilities contributed to the film’s compositing, assisting the team at Sony Pictures Imageworks, which was responsible for approximately 85 percent of the final shot work. According to Matt Dessero, compositing lead on SP3 at Imageworks, the compositing work on this film proved especially difficult; the central challenge was establishing a pipeline that allowed for nearly infinite flexibility.

“In a typical composite workflow, an artist receives a plate and can start working on the final composition of the shot right away. You may not have all the elements needed for the shot, but at least you have a plate to which everything should eventually lock,” Dessero explains. “On this show, bluescreen plates were shot in the VistaVision format (which, after scanning, is 1.5x larger than the typical full-ap plate). The advantage of the VistaVision plate is that it allows us to reposition the plate without loss of resolution. The disadvantage is that we now needed to animate and compose almost every bluescreen plate before a final shot camera could be approved. With this pipeline, we needed to augment [the process] to allow for unlimited flexibility. As a result, every plate had to run through multiple departments before it could be composited.”

According to Dessero, the crew shot 80 to 90 percent of the bluescreen plates (though not the entire film) in VistaVision, providing additional pixel space in which to move the characters around during shot design. This was done so that live-action plates of the characters could be re-animated to make them look as if they were moving through space faster than humanly possible, while avoiding the physical constraints of on-set camera rigs and stage limitations.
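
To picture how the oversized scan buys that repositioning room, here is a minimal sketch in Python (the resolutions are illustrative, not the actual scan sizes): a delivery-sized window is simply cropped out of the larger plate at an animated offset, so the original pixels are never resampled.

```python
import numpy as np

# Illustrative resolutions only -- not the actual production scan sizes.
out_w, out_h = 2048, 1556                          # hypothetical delivery frame
vv_w, vv_h = int(out_w * 1.5), int(out_h * 1.5)    # oversized VistaVision-style scan

plate = np.zeros((vv_h, vv_w, 3), dtype=np.float32)  # stand-in for the scanned plate

def reposition(plate, out_w, out_h, dx, dy):
    """Crop a delivery-sized window out of the oversized plate at an offset.

    Because the window never exceeds the scan, the framing can be slid
    around shot by shot without scaling (and therefore without softening)
    the original pixels.
    """
    x = int(np.clip(dx, 0, plate.shape[1] - out_w))
    y = int(np.clip(dy, 0, plate.shape[0] - out_h))
    return plate[y:y + out_h, x:x + out_w]

frame = reposition(plate, out_w, out_h, dx=300, dy=120)  # shift framing ~300 pixels
print(frame.shape)   # (1556, 2048, 3): still a 1:1 pixel crop, no resampling
```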

“In this film, the action is intensified with many fast-moving and long-sweeping shots,” Dessero explains. “It would have been impossible to capture the final composition in camera with one take.” In the Final Battle sequence, for example, where Spider-Man and Harry battle a gigantic Sandman, live-action plates of Spider-Man and Harry were shot independently and then re-animated on tiles in 3D; that data, says Dessero, would be sent back out to compositing. To avoid softening the original plate during the re-animation process, 3D camera and tile data were exported from Maya to Bonsai, Imageworks’ in-house compositing package.

Then, the team had to do the usual tasks of unwarping the plate, removing the lens distortion, sending it to matchmove for plate prep, and creating a camera for the plate; meanwhile, the plate also would be sent to precomp, where the group created alpha channels for the bluescreen element. Later, the element, bluescreen channel, VistaVision plate, and camera move would be sent to the animation department for tile preparation, where the team would place the characters on a 3D tile and animate these live-action bluescreen characters inside the 3D world.
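
The tile idea itself can be pictured with a simple pinhole-camera sketch, as below; the camera values and card dimensions are hypothetical, and in production the exported Maya camera and tile data, not hand-written math, drive the warp inside Bonsai.

```python
import numpy as np

# A hypothetical stand-in for the tile setup: the bluescreen element is mapped
# onto a flat card in 3D, and the card's corners are projected through the shot
# camera.  Camera values and tile dimensions are invented for illustration.

def project(points_world, cam_pos, focal_px, width, height):
    """Pinhole projection of 3D points (camera looking down -Z) into pixels."""
    pts = points_world - cam_pos
    x = focal_px * pts[:, 0] / -pts[:, 2] + width / 2.0
    y = focal_px * pts[:, 1] / -pts[:, 2] + height / 2.0
    return np.stack([x, y], axis=1)

# A 2 x 3 unit card holding the live-action character, animated 10 units away.
tile_corners = np.array([[-1.0, 0.0, -10.0], [1.0, 0.0, -10.0],
                         [1.0, 3.0, -10.0], [-1.0, 3.0, -10.0]])

screen_corners = project(tile_corners, cam_pos=np.array([0.0, 1.5, 0.0]),
                         focal_px=1800, width=3072, height=2048)
print(screen_corners)   # 2D corners the comp would use to warp the plate into place
```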

This post-animation process was introduced on the first Spider-Man in a handful of shots, increased to about 10 percent in the second film, and implemented extensively on this latest iteration because it proved so successful. “It allows the director and animators more flexibility so they can push the performance even further,” says Dessero. “They can speed up the characters and shift them around in 3D space.”

Further complicating the process was a new post depth-of-field (DOF) node written for Bonsai. The node allowed the artists to choose any pixel in an image; that pixel’s depth coordinate would then be plotted onto a curve, which could easily be adjusted to give it either more or less focus.
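
The Bonsai node itself is proprietary, but the behavior described, a user-editable curve mapping each pixel’s depth to an amount of defocus, can be sketched along these lines (the control points, the cheap multi-level blur, and the load_exr helper are all illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def post_dof(rgb, depth, curve_z, curve_blur, levels=8):
    """Blur each pixel by an amount read off an editable depth-to-blur curve."""
    # Evaluate the artist's curve at every pixel's depth.
    blur_map = np.interp(depth, curve_z, curve_blur)

    # Approximate a spatially varying blur by blending a handful of uniformly
    # blurred copies -- far cheaper than rendering true 3D depth of field.
    radii = np.linspace(0.0, float(blur_map.max()), levels)
    step = max(radii[1] - radii[0], 1e-6) if levels > 1 else 1.0
    out = np.zeros_like(rgb, dtype=np.float64)
    weight = np.zeros(depth.shape)
    for r in radii:
        blurred = rgb if r == 0 else gaussian_filter(rgb, sigma=(r, r, 0))
        w = np.clip(1.0 - np.abs(blur_map - r) / step, 0.0, 1.0)  # hat weights
        out += blurred * w[..., None]
        weight += w
    return out / np.maximum(weight, 1e-6)[..., None]

# Example: keep objects ~12 units deep in focus, defocus near and far planes.
# rgb and z would come from the rendered layers; load_exr is a hypothetical loader.
# rgb, z = load_exr("shot.exr")
# result = post_dof(rgb, z, curve_z=[0, 12, 100], curve_blur=[6, 0, 10])
```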

“The VFX supervisor could point to a portion of a frame, and the artist could go back to his or her desk and visually dial in the DOF curve with the interface we created,” explains Dessero. As a result, Imageworks did not have to render full 3D DOF in every shot, which can get expensive. Rather, the artists could apply it on the back end and adjust the DOF in the comp, saving about 10 percent in rendering. In addition to Bonsai, the group used Apple’s Shake for paint solutions and for optical flow-based retiming through The Foundry’s Furnace plug-ins, as well as Autodesk’s Flame system.

“Every composite in this film was difficult; environments were huge, as were the characters,” says Dessero. “If (as a compositor) you were not managing a large number of layers for the environments, you were depth-compositing the sand renders.”

Some of the more complicated comps involved the Sandman, who required upward of 20 layers per shot. The main layers included a beauty pass of Sandman, flowing sand, falling sand, and volumetric renders for dust, all rendered with 3D holdouts for depth compositing. According to Dessero, it was difficult to achieve a balance in the sand, particularly with grain size: if the grains were too large, the sense of scale was lost; too small, and Sandman did not feel menacing enough.

“Rendering the sand layers with holdouts saved days of render time,” says Dessero. “Depth compositing is difficult, especially when it comes to applying post DOF and post motion blur. The key here was communication between all the teams: FX, lighting, and composite.”
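
At its simplest, the depth compositing Dessero describes comes down to a per-pixel “Z-merge” of layers that each carry a depth channel; the sketch below shows only that core, with illustrative layer names.

```python
import numpy as np

# The core of depth ("Z") compositing: instead of baking holdouts into every
# render, each layer carries its own depth channel and the comp decides per
# pixel which layer is in front.  A production Z-merge also handles antialiased
# edges, volumes, and deep samples; this sketch shows only the basic idea.

def z_merge(rgba_a, z_a, rgba_b, z_b):
    """Merge two premultiplied RGBA layers using their depth channels."""
    a_in_front = (z_a <= z_b)[..., None]
    near = np.where(a_in_front, rgba_a, rgba_b)
    far = np.where(a_in_front, rgba_b, rgba_a)
    # Standard 'over' of the nearer layer on top of the farther one.
    return near + far * (1.0 - near[..., 3:4])

# Layer names below are illustrative, echoing the Sandman passes described above:
# comp = z_merge(sandman_beauty, sandman_z, flowing_sand, flowing_sand_z)
```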

Throughout the show, Imageworks looked to its India facility, acquired during the movie’s production, for some paint and roto work; before this could be done, however, the studio had to establish a special pipeline for efficiently transferring files between the facilities. Imageworks also sent some sequences out of house. Here, Scott Gordon, visual effects supervisor at CafeFX, Fred Pienkos, visual effects artist and supervisor at Eden FX, and John Vegher, co-founder and VFX supervisor at Giant Killer Robots—with assistance from their respective artists—discuss the sometimes “sticky” situations they dealt with while compositing certain sequences.



What was the extent of your compositing work on Spider-Man 3?
 
Gordon: We did 81 shots. The bulk of our work was in the Crane Disaster sequence, where a construction crane goes out of control. We created CG buildings, environments, the crane and the beam suspended from it, breaking glass, falling furniture, breaking columns and window supports, animated damage reveals, concrete, Sheetrock, and metal debris, and dust and smoke.

Pienkos: Eden FX worked on a series of 40 shots that ranged from technical adjustments, to shots that were never supposed to be VFX shots, to multi-element bluescreen comps.

Vegher: We did about 10 shots that entailed bluescreen extraction and set extension for the Bell Tower sequence.

Did you do any other major scenes?

Gordon: We also completed shots in the Final Battle sequence, where we added backgrounds to shots of Spider-Man and Marko, and augmented the shots by replacing heads, replacing eyes, making eyes water, and removing tears. We also completed several shots that take place in the subway and required the addition of CG water sprays to a bursting water pipe.

Which scenes proved the most difficult, and why?

Gordon: Our most complicated shots involved integrating full-scale photography with miniatures and lots of CG debris. These shots also had fast-moving cameras with complex moves for which we had to alter the timing. We had to re-time and re-project all of the filmed elements, both full-scale and miniature, and integrate them with our backgrounds, a collapsing building, falling furniture, breaking glass, debris, and smoke. Most of the shots involved integrating CG elements (building, furniture, or debris) alongside their live-action counterparts, which were on screen at the same time. Other shots that proved difficult were from the Marko Atomized sequence, during which Marko runs from the police on a foggy night; our task was to add electrical towers and a city skyline in the background. The live-action plates were shot under different lighting conditions, some over black, some over bluescreen, some with fog, and some without, and we had to tie them all together as seamlessly as possible.

Pienkos: We did a sequence that had to be digitally manipulated for continuity. There were scenes with cigars and smoke that needed to be changed from cut to cut. This meant painting out areas where there should not be smoke, and adding smoke where there should be, which proved challenging. We also did some facial replacement shots, where they wanted to use a different facial performance for a shot but keep the original actor’s body. Often the new performances were shot under different lighting conditions, which meant extensive tracking, warping, and color correcting to lock the new facial performance to the original (the actor’s head).

Vegher: The set extensions were difficult due to a lack of reference photography of the bell tower set.

How did you overcome those challenges?

Gordon: For the Crane Disaster shots, we re-timed and re-projected a lot of the footage, and in general, integrated all the CG elements, paying attention to the smallest details of every frame. For the Marko Atomized shots, we spent a huge amount of time animating fog densities and color corrections frame by frame.

Pienkos: The tool set within Eyeon’s Fusion 5 provides color correction, tracking, masking, and keying; it was a perfect fit.

Vegher: We did so through meticulous painting of existing plates.

Which scenes are the most complex in terms of layers?

Gordon: Many of our scenes had one or more live-action plates to which we added a constructed background (made from tiled photos and enhanced with movement, like moving traffic and waving flags), the building exterior, a crane arm with a beam suspended on a cable going out of control, breaking glass, falling furniture, breaking columns and window supports, animated damage reveals, concrete, Sheetrock, and metal debris, and dust and smoke. Usually each of these elements consisted of several layers (ambient, diffuse, specular, reflection, refraction, various lighting passes, depth, normals, motion vectors, etc.) to allow as much control in compositing as possible.
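
The point of splitting each element into passes is that the comp can rebalance the components and sum them back into a “beauty” without asking for a re-render; a minimal, hypothetical recombination looks like this (the exact formula varies from show to show):

```python
import numpy as np

def rebuild_beauty(passes, gains):
    """Sum weighted render passes (ambient, diffuse, specular, ...) into a beauty."""
    return sum(passes[name] * gains.get(name, 1.0) for name in passes)

# Tiny dummy arrays stand in for the real renders; pass names echo the list above.
passes = {name: np.full((4, 4, 3), 0.1, dtype=np.float32)
          for name in ("ambient", "diffuse", "specular", "reflection", "refraction")}

# Dial specular down and reflection up in the comp instead of re-rendering.
beauty = rebuild_beauty(passes, gains={"specular": 0.7, "reflection": 1.2})
print(beauty.shape)   # (4, 4, 3)
```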

What hardware and software did you use?

Gordon: Eyeon’s Digital Fusion for compositing and Autodesk’s Combustion for paint.

Pienkos: Fusion 5 for compositing on dual dual-core PCs.

Vegher: Shake, Maya, Mental Ray, and Photoshop on AMD-powered workstations.

In what way was this project unique for you?

Pienkos: We received reference QuickTimes from the FX editor that were very accurate representations of what Sony wanted us to return in the final shot. Usually if a question arose, the answer would be, ‘Make it like the reference,’ which was a good thing because we were running low on time.

Vegher: In most cases, we were working with sequence leads rather than directly with the VFX supervisor or director, which worked out fine.

Did you learn something new from this project?

Gordon: We’re always pushing the envelope, and with that comes new ideas, tools, and techniques. Most of the compositing growth was evolutionary rather than revolutionary: streamlining and standardizing the use of layers from 3D, using motion vectors, and improving some of our keying and spill suppression, depth of field, and even film grain.

Pienkos: We learned how smoothly a production can actually go with the right amount of planning and attention to detail. The work by Sony’s farm-out team made our job easier. With very little room for confusion, it was easy for us to go in and do our work and be confident that what we were submitting was the right effect for the right shot, sometimes even on the first version of the comp.