Volume 25, Issue 7 (July 2002)

Attack of the Clones - 7/02




The second of a two-part series

By Barbara Robertson

Ironically, although they have the title role, the CG characters no one talks about in Star Wars Episode II: Attack of the Clones are the clones. "In all the critical reviews of Episode II, I have never heard any criticism of the clones," says Jeff Light, motion capture supervisor at ILM. "That was also true of the droids in Episode I. No one mentions them." He's right. Some reviewers note that the clones are based on Jango Fett (Temuera Morrison), and that's about it. They often comment on such CG characters as Yoda, Jar Jar, or the Kaminoans, but when it comes to the clones, the reviewers are silent.

Light takes this silence as a compliment because that was the goal: to create performances for the digital characters that make the audience believe the clones are humans wearing uniforms. Whether assembled en masse, fighting on the battlefield, or talking individually with actors, the uniformed clones are always CG characters. Their performances were created with motion capture data massaged by animators. No one wore a clone suit in the film; the crew didn't even build a clone suit.
Character modelers used dimensions from actor Temuera Morrison to create a body for the clones; hard-surface modelers created the suit. The clones, which were animated with the help of motion-captured data, appear in 156 shots.

© Lucasfilm Ltd.




"I wanted to build a clone suit," says Ben Snow, visual effects supervisor. "It makes a lot of sense when we have a clone talking to an actor, for example. But George [Lucas] said, 'Look, this morning you showed me a CG R2D2 that looked absolutely believable. It's fine. You'll be able to do it.' He and Rick [McCallum, producer] love throwing down the gauntlet like that."

In one scene, for example, a clone trooper runs after Padmé (Natalie Portman), who has tumbled out of a vehicle and rolled down a hill during the clone war. He kneels down to ask if she's OK; the interaction continues as she gets up. There's no clue that the trooper helping her is digital. "We worked and worked to come up with every footstep, every reaction," says Light. "We did more takes of that performance than any [motion-captured] scene in the film."
The models for Episode II battle droids had already been created for Episode I, but some of the performances were newly motion-captured. When droids are sliced in half, rigid-body simulations control the breaking apart of these hard-surface models.




Light's crew started with rough blocking from an animatic created in Maya that showed what the clone was supposed to do; measurements from a camera match-move showed where Portman was on the bluescreen stage. Using those measurements, the crew put a volleyball on the mocap stage to represent Portman's head when she was lying down, and used a tripod with a marker so the performer would look in the correct direction when she stood up.

Walks Like a Clone
It's one of many motion capture performances created for Episode II. The shot in which the clones get their helmets, with hundreds waiting in line around a carousel while sergeants direct traffic, is entirely CG. The droids are all performed with motion capture, as in Episode I, by James Tooley, technical animation supervisor, who supervised the cloth and rigid-body simulation teams. A shot in which Count Dooku (Christopher Lee) ducks under a hallway after arriving in his evil (CG) ship was created with a digital double and motion capture because changing perspectives made a bluescreen shot too difficult. "We studied videotapes of Christopher Lee to understand how he moved," says Light. "An animator can make some corrections, but if the essence of the performance doesn't work, you have to reshoot it. You have to think like a director."

In addition, for scenes with thousands of clones and droids, mocap coordinators had to consider which actions would hook with others so performers could do them sequentially. "If we wanted to get a transition from a march to a run and separated those motions by even an hour [during the capture session], it would look wrong," says Light.
From the top: 1. A photograph shot with a 35mm camera. 2. The CG terrain's underlying wireframe. 3. The photo is in the foreground, CG terrain is in the midground, and a painting is in the background. 4. Digital characters and ships are positioned.




For the capture, the team used a Vicon8 optical system with around 50 markers per character arranged in an asymmetrical pattern. "We use more on one side than the other to form odd-shaped pentagons, which helps the system determine which marker is where," Light says. The mocap team cleaned up the data with House of Moves' Diva software, applied it to characters, edited it further in Softimage 3D, and then sent the characters on to animators. All told, the team captured 1400 performances. Of those, they selected 700 as "keepers" and then fitted 450 to characters, 420 of which were used in the film-most, of course, in battle scenes.

Battleground
To set the stage for the battle, compositors combined matte-painted sky, rendered terrain, film taken of miniatures, and scanned photographs to create a panorama. The stills, taken of a desert in the Southwest, were tiled together and used for distant horizons. "I like having a sliver of reality in a shot," Snow says.

For terrain, they used USGS data to generate height fields, then built displacement shaders and other textures on top. Geometry was generated on the fly depending on what the camera sees.
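The height-field idea can be sketched in a few lines. This is a generic illustration, not ILM's pipeline code: a coarse elevation grid (standing in for real USGS samples) is upsampled to a dense height field by bilinear interpolation, which a renderer could then displace and texture.

```python
import numpy as np

def height_field(samples, out_size):
    """Upsample a coarse elevation grid (stand-in for USGS DEM data)
    to a dense square height field via bilinear interpolation."""
    h, w = samples.shape
    ys = np.linspace(0, h - 1, out_size)
    xs = np.linspace(0, w - 1, out_size)
    y0 = np.floor(ys).astype(int).clip(0, h - 2)
    x0 = np.floor(xs).astype(int).clip(0, w - 2)
    fy = (ys - y0)[:, None]          # fractional row offsets
    fx = (xs - x0)[None, :]          # fractional column offsets
    a = samples[np.ix_(y0, x0)]      # four surrounding corner grids
    b = samples[np.ix_(y0, x0 + 1)]
    c = samples[np.ix_(y0 + 1, x0)]
    d = samples[np.ix_(y0 + 1, x0 + 1)]
    return (a * (1 - fy) * (1 - fx) + b * (1 - fy) * fx
            + c * fy * (1 - fx) + d * fy * fx)

# Synthetic 4x4 elevation grid standing in for real survey data.
coarse = np.array([[0., 1., 2., 1.],
                   [1., 3., 4., 2.],
                   [2., 4., 3., 1.],
                   [1., 2., 1., 0.]])
dense = height_field(coarse, 64)
print(dense.shape)  # (64, 64)
```

In a production setting, the "generated on the fly" part would amount to evaluating such a field only over the region the camera sees, at a density tied to screen-space detail.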

"Once we had the panorama, Ben would pick the area where he would stage the scene and put in the CG camera," says Marshall Krasser, one of three compositing supervisors. Then, using ILM's CompTime software, compositors would color-correct individual pieces, blend and fade the painted sky, and add elements such as rocks and highlights to make the scene seamless and interesting. "When we composite, we start working on a shot one frame at a time because that's the best way to build it up," Krasser says. "We do all the composite work and color tweaks on that frame and then run it over the whole sequence."
The giant battleship on the left and "crawler" on the right are 3D models, as are the clones. The hard-surface modeling crew worked for 18 months to create approximately 190 models for the film.




Once the stage was set, compositors began bringing in CG elements-rendered clones, ships, droids, and so forth, starting with foreground elements and working backwards. "We like to put rendered elements into the composite untouched so the effects supervisor can judge the initial quality of the renders," he says. "It's hard to match 100 percent, though. We do the last 5 percent."

One of the side effects of shooting the film with high definition (HD) digital cameras was that compositors often needed to artificially create a Z-depth. "The HD chip is about the size of a 16mm frame, so you get four times the depth of field at the same f-stop [as a film camera]," says John Knoll, visual effects supervisor. Thus, to help create the illusion of distance, compositors added increasing amounts of haze with a traveling depth matte that moved backwards in space.
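The depth-matte trick is essentially classic depth-cued fog. A minimal sketch of the idea (generic, not ILM's compositing code): each pixel is faded toward a haze color by an amount driven by a per-pixel depth value, so distant pixels read as farther away even when the lens keeps them sharp.

```python
import numpy as np

def add_haze(rgb, depth, haze_color=(0.7, 0.75, 0.8), density=0.02):
    """Fade pixels toward a haze color with distance, using a
    per-pixel depth matte: farther pixels get more atmosphere."""
    # Classic exponential fog: transmittance falls off with depth.
    t = np.exp(-density * depth)[..., None]          # (H, W, 1)
    haze = np.asarray(haze_color).reshape(1, 1, 3)
    return rgb * t + haze * (1.0 - t)

# Toy 2x2 black image: the near pixel keeps its color,
# the far pixel goes almost fully to the haze color.
img = np.zeros((2, 2, 3))
depth = np.array([[0.0, 10.0], [100.0, 1000.0]])
out = add_haze(img, depth)
```

A "traveling" matte like the one described would simply animate the depth values (or the density) over the shot rather than hold them fixed.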

Finally, compositors added the "eye candy": lasers, tracers, smoke, and dust. "We tried using particles for dust clouds, but it was difficult to get them to look great without a lot of compositing," says Snow. Instead, they used Mental Ray to render volumes of dust. "The volumetric shaders gave us a better hand-held, documentary feeling," he says.
Everything is digital in this shot of the droid manufacturing plant, except C3PO, the far background wall, and the haze. The pools of light in the haze help create the illusion of depth.




Once the composite for the keyframe was complete, the compositors launched a proprietary program called "Do All." "You plug in parameters, and it knows what software to call to do different portions of the composite and the render you want," says Krasser. Sometimes the parameters from one frame work for the whole shot; sometimes they need to set more keyframes.
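A parameter-driven driver of this kind might be organized as below. This is purely a sketch of the concept; the step names and parameter keys are invented, not taken from ILM's tool.

```python
def do_all(params, frames):
    """Hypothetical 'Do All'-style driver: given one set of keyframe
    parameters, plan which tools to run for every frame of a shot."""
    steps = []
    if params.get("render"):
        steps.append("render")
    if params.get("composite"):
        steps.append("composite")
    if params.get("color_correct"):
        steps.append("color_correct")
    plan = {}
    for frame in frames:
        # Per-frame overrides stand in for "sometimes they need
        # to set more keyframes": a later keyframe can switch
        # individual steps on or off for specific frames.
        overrides = params.get("keyframes", {}).get(frame, {})
        plan[frame] = [s for s in steps if overrides.get(s, True)]
    return plan

plan = do_all({"render": True, "composite": True,
               "keyframes": {12: {"composite": False}}},
              frames=range(10, 14))
```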

Droid Machines
Interestingly, the team originally developed the volumetric rendering techniques used for dust on the battlefield to create smoke in the droid manufacturing plant. But Krasser, who was also compositing supervisor for Pearl Harbor, discovered a way to use compositing techniques instead. "Marshall would take the background, blur it to death, and mix that into practical smoke elements," Snow says.

Krasser explains: "By putting a blurred version of the smoke back over itself, it gives a nice subtle fall-off so you get a feeling of time elapsed. When we needed more of a fall-off, I would create a little garbage matte for the area I wanted the smoke plume to fill as it dissipates, create a similar shape with a 2D smoke generator based on fractal noise patterns, and feather the edges so it feels like it's been burning for a while."
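Krasser's blur-and-mix trick can be sketched generically. The box blur below stands in for whatever blur node a compositing package provides; the point is the mix of the smoke element with a blurred copy of itself, which softens hard edges into a gradual fall-off.

```python
import numpy as np

def box_blur(img, r):
    """Separable box blur of radius r; a stand-in for a
    production blur node."""
    k = 2 * r + 1
    pad = np.pad(img, r, mode="edge")
    # Average across columns, then across rows.
    cols = np.mean([pad[:, i:i + img.shape[1]] for i in range(k)], axis=0)
    return np.mean([cols[i:i + img.shape[0], :] for i in range(k)], axis=0)

def soften_smoke(smoke, mix=0.5, r=2):
    """Put a blurred copy of the smoke element back over itself
    for a soft fall-off, as described above."""
    return smoke * (1 - mix) + box_blur(smoke, r) * mix

smoke = np.zeros((16, 16))
smoke[6:10, 6:10] = 1.0          # hard-edged smoke element
soft = soften_smoke(smoke)
```

The garbage-matte step would follow the same pattern: a fractal-noise alpha, feathered at the edges, multiplied over the plume to shape where it dissipates.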

"When you look at the factory, you think 'wow, that's a vast area,' but it's all a matter of layering and getting nice lighting fall-offs," he says. In fact, the far distant wall of the factory is a miniature. Everything else is CG except, of course, for Padmé, the caldron when she's inside it, and, in a few shots, C3PO and R2D2, although the robots are often digital as well.
Inset: A CG R2D2 jumps off a ledge in the droid factory. At right: The 3D elements in wireframe include R2D2 and thousands of droids on conveyor belts being manufactured by various machines. At bottom right: The final shot includes sparks created with particles.




Snow used the miniature to get that slice of reality he likes as a foundation for building an image. Also, he and his team studied a real manufacturing plant during a field trip to a foundry. "After we got back, we brainstormed about how the droid factory would work," he says. Modelers created a dozen different digital machines-some with moving parts, five different types of heads, curved and straight sections of track, and some towers. Using these models, layout artists assembled the factory as if they were working with a three-dimensional train set-a curved section here, a straight section there, a machine with moving parts over there. "Each piece of track would know what could be added to it," says Russell Paul, digital model supervisor. "Layout artists could get the information with a right mouse click."
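The "train set" idea boils down to attaching compatibility data to each piece type, so the layout tool can answer "what can go here?" for any selected piece. A hypothetical sketch; the piece names are invented for illustration:

```python
# Each kind of factory piece records what may attach to it,
# mirroring "each piece of track would know what could be
# added to it." Names are illustrative, not from the film's assets.
PIECE_RULES = {
    "straight_track": {"curved_track", "straight_track", "stamper"},
    "curved_track":   {"straight_track", "curved_track"},
    "stamper":        {"droid_head_bin"},
    "droid_head_bin": set(),
}

def attachable(piece_type):
    """What a layout artist would see on a right-click query."""
    return sorted(PIECE_RULES.get(piece_type, set()))

print(attachable("straight_track"))
```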

Conveyor belts were attached to tracks with splines, and droids in the making were added, around a thousand per conveyor. The factory is huge: One shot, for example, has 30 tracks. "One of the biggest problems was administering this amount of geometry," says Snow. "The TDs broke it into layers, and we used RIB [RenderMan Interface Bytestream] archiving to be able to process it." With RIB archiving, pre-baked sections pulled into the scene at render time reduced the computing load on the CPUs.
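RIB archiving is part of the standard RenderMan Interface: heavy geometry is baked once into archive files, and the per-frame scene file merely references them with ReadArchive, so the full data is inlined only when the renderer runs. A sketch of a script emitting such a scene file; the file names are invented:

```python
def write_scene_rib(path, archives):
    """Emit a minimal RIB scene that pulls in pre-baked geometry
    layers via ReadArchive instead of carrying them inline."""
    with open(path, "w") as f:
        f.write("FrameBegin 1\nWorldBegin\n")
        for arc in archives:
            # ReadArchive tells the renderer to inline the baked
            # file's contents at render time.
            f.write(f'ReadArchive "{arc}"\n')
        f.write("WorldEnd\nFrameEnd\n")

write_scene_rib("factory_frame.rib",
                ["track_layer_01.rib", "droids_layer_01.rib"])
```

Splitting the factory into per-layer archives like this is what lets a scene with 30 tracks and thousands of droids stay manageable in the pipeline.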

Models, Models Everywhere
The droid factory and the battlefield were but two of several nearly all-CG environments in Episode II. To manage the modeling, ILM divided the work into two units, a character unit supervised by Geoff Campbell (see "Attack of the Clones, Part 1," June 2002, pg. 16) and a hard-surface unit supervised by Paul. The hard-surface group handled the environments, vehicles, props, C3PO, and any new droids. "I think we had around 190 different models and many variations of those," says Paul.

The modeling group, which ranged from 6 to 12, primarily used Alias Studio software, with an assist from Maya and Softimage 3D. "Studio is really good, especially in workflow, for the complex mechanical shapes," says Paul.

Because the digital models were often extensions for miniature sets or based on sculptures, Paul's group transferred 3D files back and forth with the physical modeling shop. "They started using Rhino in addition to AutoCAD and laser cutting tools," Paul says, "and they had a copy of Studio. We'd send 3D data to them, and we could use their 3D designs for animatics or layouts."

One of the most important models was the Republic gunship. "We'd see it in shots from every possible angle for long periods of time when it was just sitting behind the actors taking the place of a set," Paul says. "It was used for long shots, close-ups, and was even crashed. It had to be very detailed."
For the speeder chase on Coruscant, the hard-surface modeling team created 10,000-foot tall buildings with architectural continuity between building sections and enough detail to provide scale.




Also particularly interesting for the team were shots on Coruscant. For the speeder chase, the team built 27 vehicle models to simulate city traffic-taxis, buses, trucks, vans, and, of course, the speeders. In addition, they created the skyscrapers, some as tall as 10,000 feet. To give a sense of scale, they added such elements as air conditioning units and handrails. To make the chase more interesting, they created neighborhoods.

"George insisted that if you did a 10-minute chase, the architecture had to be different from beginning to end," says Knoll, who supervised the sequence. The chase starts in the upper city, moves into an industrial area, speeds through a financial district, and ends in an entertainment district. "For each area there were about 12 buildings, each designed so you could turn it and make it look different," Knoll says. Of those, only the buildings in the industrial field and entertainment district were miniatures.

"People did amazing models for this show," says Paul, "and not only the modelers but also the artists who painted the textures. From Anakin's little sphere in Padmé's apartment, which wasn't as easy as it sounds, to the hero vehicles in the clone war. It's amazing how much stuff went into this film."

It would be difficult to imagine another live-action film with as many digital elements or as many all-CG scenes. As Snow says, when describing how he created camera moves for digital shots in the battle scenes, "It's almost like an animated film."

But it's not.




Barbara Robertson is Senior Editor, West Coast for Computer Graphics World.


