Killer Machines
Issue: Volume 34, Issue 6 (June/July 2011)

What could be more fun than watching giant robots smashing through the streets of Los Angeles? Director Michael Bay had a few ideas. Send the transforming Autobots and their nemeses, the Decepticons, to the moon. Create a 200-foot-long snake-like Decepticon and wrap it around a skyscraper in Chicago. Give fans the moment they’ve been waiting for: a fight between Optimus Prime and Megatron. Show the film in stereo. And more, enough to set box-office records.

Steven Spielberg was executive producer along with Bay on Paramount Pictures’ release of the third film in the franchise, Transformers: Dark of the Moon. Industrial Light & Magic (ILM) took the visual effects lead for the third time by creating the Autobots and Decepticons, with Digital Domain again putting some of the quirkier robots on the screen.

At ILM, the artists met the new challenges: working with stereo, re-creating Chicago, and building the biggest robot yet. But, they also influenced the storytelling in new ways. They applied lessons learned from previous films to simplify methods for building and animating the robots. And, they incorporated new technology developed at the studio to improve the look of the robots, old and new.


At top: ILM digimatte artists built large sections of Chicago from thousands of photographs taken on site at various times of the day, and then derived CG models from those photos so that digital cameras could fly through the city, following robots and spaceships. At bottom: In addition to explosions, smoke, robots, and so forth, the studio placed animated water into the shots.

“We had to do a crash course in stereo control and manipulation,” says Scott Farrar, visual effects supervisor. “The decision came a little bit late.” In fact, Bay wasn’t a fan until Farrar created a test shot in stereo.

“I created a stereo version of Bumblebee,” Farrar says, referring to the loveable yellow Camaro-based Autobot. “It knocked everyone’s socks off. We could see all the detail in the thousands of parts. It’s a lot different from seeing animated characters with simple shapes or humans in stereo. You can see inside the robots. It’s truly something you have not seen before. That helped everyone get excited. So we started shooting in stereo and, son of a gun, Michael [Bay] fell in love with it.”

The original plan to shoot with the Cameron-Pace Group’s stereo 3D system expanded from 10 days to more than 80 days. “We ended up with half the movie in native stereo,” says Nigel Sumner, digital production supervisor. “So, when we had native stereo, we would do a stereo render, composite, and final, and when we had mono plates, we did a single-eye render and a single-eye comp.” For the latter, ILM sent the final shots in layers to vendors, which did the 2D-to-3D conversion. In the final film, shots created with the two methods intercut with one another.
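For readers curious what a “stereo render” actually involves, the sketch below derives left- and right-eye cameras from a single mono camera using an interaxial separation and a convergence distance. The function, parameter names, and default values are purely illustrative; they are not ILM’s pipeline or the Cameron-Pace rig’s settings.

```python
import numpy as np

def stereo_cameras(cam_pos, cam_right, interaxial=0.065, convergence=10.0):
    """Derive left/right eye positions and toe-in angles from a mono camera.

    cam_pos: world-space camera position (3,)
    cam_right: unit vector pointing to the camera's right (3,)
    interaxial: eye separation in scene units (illustrative default)
    convergence: distance to the zero-parallax plane
    """
    half = 0.5 * interaxial
    left_pos = cam_pos - half * cam_right
    right_pos = cam_pos + half * cam_right
    # Each eye rotates ("toes in") so its axis crosses the mono camera's
    # axis at the convergence distance; sign convention is illustrative.
    toe_in = np.degrees(np.arctan2(half, convergence))
    return (left_pos, +toe_in), (right_pos, -toe_in)

# A mono plate is rendered once from cam_pos; a native stereo shot is
# rendered twice, once per eye, and composited per eye as well.
left, right = stereo_cameras(np.array([0.0, 1.7, 0.0]),
                             np.array([1.0, 0.0, 0.0]))
```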

“The intention was to simplify the process, but these are complex images,” Sumner says. “Sometimes we sent 40 to 50 layers per shot.”

Farrar estimates that the decision to create the movie in stereo added between 20 and 30 percent more work, all of which happened in a truncated time schedule. “It was as short or shorter than the last movie,” he says.

And still, they needed to build Chicago along with 24 new robots, some with their vehicle forms as well.

Big Shoulders


The ILM crew decided that rather than building Chicago by starting with geometric models, the digimatte department would create the city using the studio’s proprietary photomodeling tools. That meant photographing each building in the areas they would re-create, from top to bottom, at different times of the day, from morning to evening.

“We tried to be as real as possible,” Farrar says. “We had a team of two people with cameras, plus helpers and PAs who spent weeks photographing a portion of Chicago, and I was up in a helicopter for weeks. If you can start with real, you can end up with real as long as you cover it properly for the time of day.”


The Decepticon Colossus wrapping itself around the glass-fronted building is the largest robot that ILM has built. Artists at ILM needed a 12-core, 48GB machine to open the scenes above, which contained 100,000 pieces of geometry. In its full form, Colossus has 86,000 pieces of geometry.

Using the photographs taken of the buildings from different viewpoints, the in-house photomodeling software created large sections of Chicago in 3D that a CG camera could fly through. The photographs also worked as a reference for sections of the buildings that the digimatte artists needed to create in CG: mullions on windows, for example, or sometimes entire buildings from scratch. Even though they usually had storyboards or animatics to guide them, the team photographing the buildings sometimes missed a shot.
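ILM’s photomodeling tools are proprietary, but the underlying idea, recovering 3D structure from overlapping photographs, can be sketched with off-the-shelf OpenCV calls. The toy two-view version below is for illustration only: the camera matrix K is assumed to come from calibration, and a production pipeline would chain many views, bundle-adjust, and fit clean building geometry to the resulting point cloud.

```python
import cv2
import numpy as np

def two_view_points(img_a, img_b, K):
    """Triangulate sparse 3D points from two overlapping photographs."""
    orb = cv2.ORB_create(5000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])

    # Relative camera pose from the essential matrix.
    E, mask = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts_a, pts_b, K, mask=mask)

    # Projection matrices for the two views, then triangulation.
    P_a = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_b = K @ np.hstack([R, t])
    pts_h = cv2.triangulatePoints(P_a, P_b, pts_a.T, pts_b.T)
    return (pts_h[:3] / pts_h[3]).T   # N x 3 points, up to scale
```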

“We canvassed the area and shot as much as we could,” Farrar says. “But there’s always something we didn’t get, an angle we didn’t shoot. We might have only gotten the front face of a building, so we’d have to supplement it, but I think the average person won’t know.”

Big Robots


These invisible environmental effects set the stage for the mechanical stars of the show, the returning heroes and villains, and the 24 new robots. “I think the first movie surprised everyone,” says Sumner. “The second one was a visual feast, exciting and action-packed, maybe almost too much. For the third one, we’ve gone back to what we did on the first—concentrated on the characters, details, and complexity.”

The legacy robots got a touch-up. “It’s shocking that in each of these films, the camera moves somewhere and we see parts we didn’t finish because we didn’t see them before,” Farrar says. As the camera moved closer and deeper, the robots’ hidden parts got new paint jobs.

In addition, Bumblebee and Optimus received a slight redesign. “They didn’t get a face-lift,” Farrar says. “It’s more like a new car model. Optimus has a 12-pack rather than a six-pack. He looks more heroic.”  

Among the most complex robots are three SUV-based Decepticons, the “Dreds.” “They are so complex, with so many pieces, and the pieces are so small that the SUV design is not apparent,” says Scott Benza, animation supervisor. “And that made those robots more complicated to model, rig, and animate.” The number of pieces and the disconnect between the character’s robotic and vehicular forms made the transformation from one to the other especially difficult. Animators used the same method as before to create the transformations: brute force.

“Usually we scrunch the robot into as much of a car form as we can, and use a recognizable car part as a road map to go from A to B, but with the Dreds, we couldn’t find a road map,” Benza says. “We couldn’t tell if the head was supposed to be in the rear end facing up or the front end facing down. It was a time-consuming process for Keiji [Yamaguchi], who did most of these transformations, but they are some of the most interesting because they’re so complex.”

The Dreds appear during a car chase, attacking Sam [Shia LaBeouf] at 60 miles per hour. It’s a scene that Benza suggested Bay add to the film. “The version that ended up in the movie is trimmed down,” Benza says. “The original chase involved a number of robots. But, it was really rewarding to design the scene and cool to be able to pitch an idea and work with Michael [Bay] on it.”    

Yamaguchi and the other animators created the transformations for all the robots anew, even the legacy ’bots. “A transformation from the previous film might not look good from a new camera angle,” Benza says, “or not as fresh and new. And, we wanted to have the flexibility to change quickly based on what Michael asked for. We couldn’t do that with a procedural simulation or pre-baked transformations.”

Big Rigs


The animators also stretched the robots’ performances in other ways by adding, at one extreme, more fights, and at the other extreme, a more emotional performance for some of the robots. “I think you’ll notice more fighting on screen than we had in the previous films,” Benza says. “We have robots full frame, up close to the action, with lots of interesting fight moves and fight choreography. And the robots do a lot of speaking in this movie.”

During a sequence set in Africa, Megatron (Hugo Weaving) explains story points to Decepticons Soundwave (Frank Welker) and Starscream (Charles Adler). And, in another sequence, Autobots Optimus Prime (Peter Cullen) and Sentinel Prime (Leonard Nimoy) have an emotional confrontation filled with dialog. For this reason and because Sentinel Prime represents Optimus Prime’s father figure, ILM gave him a more advanced facial rig.

“His face is more organic in his movement than any of the other robots, and definitely more so than Optimus,” Benza says. “His face is still metal, but it appears to deform.”

To accomplish the more human-like deformations on the hard-metal face, the character developers created two layers of rigging. The underlayer is a mesh with points that the animators moved, much as they would for a fleshy character. That layer drove an upper layer of rigging, which moved the metal parts.

“So, when we raise the corner of [Sentinel Prime’s] mouth, it affects the plates on his cheeks automatically,” Benza says. “The animators loved animating that character because they could get interesting poses out of that face relatively easily. On Optimus, an artist grabs the metal parts of his face and constrains them to imply there’s a muscle structure underneath.”
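The two-layer idea can be illustrated in a few lines of Python: a deformable underlayer that animators pose directly, with each rigid metal plate bound to its nearest underlayer points so it follows along. The binding scheme below is a guess at the general approach, for illustration only, and is not ILM’s facial rig.

```python
import numpy as np

class PlatedFace:
    """Toy two-layer facial rig: a deformable underlayer drives rigid plates."""

    def __init__(self, flesh_rest, plates, k=3):
        # flesh_rest: (N, 3) rest positions of the animatable underlayer
        # plates: list of dicts, each with a rest 'center' for a metal plate
        self.flesh_rest = flesh_rest
        self.plates = plates
        for p in plates:
            d = np.linalg.norm(flesh_rest - p["center"], axis=1)
            p["drivers"] = np.argsort(d)[:k]          # k nearest flesh points
            w = 1.0 / (d[p["drivers"]] + 1e-6)
            p["weights"] = w / w.sum()                # normalized weights

    def plate_offsets(self, flesh_posed):
        """Given posed underlayer points, return a translation per plate."""
        deltas = flesh_posed - self.flesh_rest
        return [p["weights"] @ deltas[p["drivers"]] for p in self.plates]
```

Raising a mouth-corner point of the underlayer then drags every cheek plate weighted to it, which is the behavior Benza describes.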

For the first Transformers film, the character riggers had developed a system with which animators could arbitrarily animate any part of a robot and any group of parts. “Any cog could turn,” says Sumner. “But the implementation was heavy. We would transform a hierarchy of multiple nodes whether or not a piece animated.”

Thus, the riggers created a more efficient system for this film: An animator could work with a more basic rig until the full transformation hierarchy became necessary.

“It was a fundamental change,” Sumner says. “We removed the underlying hierarchy but made it dynamic. So, an animator working in [Autodesk’s] Maya can select the topology to animate separately from the primary rig and, with the click of a button, insert the transformation hierarchy independently from the rig itself.”

The studio named the system Dynamic Rigging. The switch to this system, along with other efficiencies, meant that, for example, animators could load the robot Ironhide in 45 seconds rather than waiting five minutes, according to Sumner, who provides another example: “Optimus Prime had 8000 pieces and 17,000 nodes, so the actual dataset we loaded into our scenes was complex. Now, he still has 8000 pieces, but we don’t carry the additional nodes.”

The primary rig still handles collision problems—that is, prevents the kind of collisions that might occur when a robot bends his leg, for example. “The rigging in these robots handles 90 percent of those problems,” Benza says. “When we push the rig beyond its capabilities, that’s when we go to the Dynamic Rigging, which allows us to create rigging on the fly at the artist level.”
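Sumner doesn’t spell out how Dynamic Rigging is implemented, but the core idea, loading the robot without per-piece transform nodes and inserting them only when an animator asks for a piece, can be sketched as a lazily built hierarchy. Everything below, names included, is illustrative rather than ILM’s code.

```python
class DynamicRig:
    """Toy model of on-demand per-piece transforms over a lightweight base rig."""

    def __init__(self, pieces):
        # pieces: mapping of piece name -> geometry reference; no transform
        # nodes are created up front, which keeps loading fast.
        self.pieces = pieces
        self.extra_nodes = {}            # transforms inserted on demand

    def animate_piece(self, name):
        """Insert a transform hierarchy for one piece the first time it is
        animated independently of the primary rig."""
        if name not in self.extra_nodes:
            self.extra_nodes[name] = {"translate": [0.0, 0.0, 0.0],
                                      "rotate": [0.0, 0.0, 0.0]}
        return self.extra_nodes[name]

# Only the pieces an animator actually touches pay the cost of extra nodes.
rig = DynamicRig({"left_cog_12": None, "chest_plate": None})
rig.animate_piece("left_cog_12")["rotate"][2] = 45.0
```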

In addition to creating a method for animators to use the simpler rigs most of the time without losing the advantages of the more complex system, the team implemented other efficiencies.

“We made concerted use of subdivision surfaces and displacements rather than polygons,” Sumner says. “But the downside of gaining efficiency is that we added more complexity to the robots.”

Small Details


That complexity manifested primarily in the surface textures and materials, and in the lighting: The studio changed the shading model it uses for most of its films to one that produces more natural lighting, albeit with an increase in rendering time.

“To get a glimmer across a compound curve is still difficult in computer graphics; all those sun glints you see on the complicated geometric surfaces in a car are still difficult to get on the robots,” Farrar says. “But, we have better control on sheen, glints, and things like that than we had on the first film. And we have better control of the lighting. We did more things with mirror boards, slash shadows, and intricate lighting patterns.”

To accomplish this, the lighting team switched from the standard environmental lighting model with key lights and normalized ambient occlusion, to a model that takes advantage of global illumination with importance sampling. “Depending on the composition of a shot, we could go to full geometric reflection,” Sumner says. “We’d use lower sampling for the non-geometric reflections with basic occlusion, and then for the final lighting, say on Optimus’s face, we’d switch on the raytraced reflections to get extra detail and fidelity within the cracks and to bring out the displacement details of the robot.” For this, they used Pixar’s RenderMan.
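As a rough illustration of what importance sampling an environment means, the sketch below draws sample directions from a latitude-longitude HDR map in proportion to pixel luminance weighted by solid angle, so bright regions such as the sun receive most of the samples. It is a textbook construction, not ILM’s RenderMan setup.

```python
import numpy as np

def sample_env(hdr, n_samples, rng=np.random.default_rng()):
    """Pick (row, col) texels of a lat-long HDR map with probability
    proportional to luminance * sin(theta), the usual importance weighting."""
    h, w, _ = hdr.shape
    lum = hdr @ np.array([0.2126, 0.7152, 0.0722])        # per-pixel luminance
    theta = (np.arange(h) + 0.5) / h * np.pi               # row -> polar angle
    pdf = lum * np.sin(theta)[:, None]                     # solid-angle weight
    pdf_flat = pdf.ravel() / pdf.sum()
    idx = rng.choice(pdf_flat.size, size=n_samples, p=pdf_flat)
    rows, cols = np.unravel_index(idx, (h, w))
    return rows, cols, pdf_flat[idx]       # return pdf for unbiased weighting
```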


The Autobots Bumblebee (at top), Optimus Prime (bottom), and other legacy robots visited the body shop before making their third screen appearance, to make sure all their internal parts were primed and painted for the stereo cameras, which can see deep inside the complex robots.

However, for the shot of the giant snake-like robot wrapping itself around a skyscraper, the lighters switched to Mental Images’ Mental Ray. “We figured that Mental Ray would be more efficient at raytracing the details and complexity in the high-density metal structure and the breaking glass,” Sumner says. “For that sequence, we used both renderers throughout, sometimes in conjunction.”

To match the lighting used on set, the effects team captured a full range of HDRI images and created spheres for the environments. “Then, we used energy-conserving [RenderMan] shaders that are much smarter about balancing the dynamic range than the shaders we used in the old days,” says CG supervisor Kevin Barnhill. “Before, specular was clamped. Now, with EXR [file format] and extended-range images, we have more flexibility in terms of the dynamic range within the specular. Before it was up to an artist to determine whether something was too dark or bright. Now, the shader does the right thing in terms of rolling from one value into the other. The shader considers reflections and specular as one thing.”
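Barnhill’s point about energy conservation and unclamped specular can be shown with a toy shading split: diffuse and specular share, rather than exceed, the incoming energy, and the result stays in floating point, as an EXR render would store it, instead of being clamped at 1.0. This is a generic illustration, not the production shader.

```python
import numpy as np

def shade(incoming, specular_weight, diffuse_albedo):
    """Toy energy-conserving split: energy given to specular is removed from
    diffuse, so the surface never reflects more than it receives. Values stay
    in float (no clamp), as an EXR render would keep them."""
    specular_weight = np.clip(specular_weight, 0.0, 1.0)
    specular = incoming * specular_weight
    diffuse = incoming * (1.0 - specular_weight) * diffuse_albedo
    return specular + diffuse    # may exceed 1.0 for bright HDR sources

# A sun glint 20x brighter than white survives into the comp instead of
# being clamped at 1.0, which is the flexibility Barnhill describes.
print(shade(np.array([20.0, 20.0, 20.0]), 0.3, np.array([0.8, 0.2, 0.1])))
```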

That means the artists could spend more time working with light creatively, rather than becoming as mired in technical details as before. “We had an environment sphere czar whose task was to gather the HDRI images, stitch them into environment spheres, and publish the spheres in a library,” Barnhill says. “That was a good starting point; it set the standard so that the characters fit into the environment from the get-go. After that, it was a matter of sweetening the shots.”    

For example, for shots with sunlight, the artists might paint out the hot core of the sun but keep the glow and then place a light where the sun’s hot spot had been. “That gives us control in terms of shadowing,” Barnhill explains. “We can change the size and shape. We might want a rectangle if we wanted the side of a building to reflect onto the robot.”
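A minimal sketch of that sun-replacement trick, with an assumed luminance threshold, might look like this: find the hot core of the environment map, record its direction and total energy for a stand-in CG light, and knock the core back in the map while keeping the glow.

```python
import numpy as np

def extract_sun(env, threshold=50.0):
    """Locate the sun's hot core in a lat-long env map, return a direction
    and total energy for a replacement light, and soften the core in place.
    The threshold is illustrative and assumes the map contains a sun core."""
    h, w, _ = env.shape
    lum = env @ np.array([0.2126, 0.7152, 0.0722])
    core = lum > threshold                             # the painted-out region
    energy = env[core].sum(axis=0)

    rows, cols = np.nonzero(core)
    theta = (rows.mean() + 0.5) / h * np.pi            # centroid -> direction
    phi = (cols.mean() + 0.5) / w * 2.0 * np.pi
    direction = np.array([np.sin(theta) * np.cos(phi),
                          np.cos(theta),
                          np.sin(theta) * np.sin(phi)])

    env[core] *= 0.05                                  # keep the glow, lose the core
    return direction, energy
```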

Sometimes they would add area lights to control light bouncing onto the robots from buildings. “Michael likes his robots shiny,” Barnhill says. “He doesn’t want any dead spots, so we put a lot of movie lighting in place.”

For example, they used digital versions of “mirror boards.” In the real world, a lighting crew might crumple aluminum foil and then unfold it so that the resulting facets cause light to bounce off at different angles. ILM’s lighting team created a digital version of the crumpled aluminum. “We’d hit a robot with light bouncing from a mirror board as it walked past, which produces interesting lighting effects and liveliness,” Barnhill says.
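One way to build a digital mirror board, not necessarily ILM’s, is a grid of randomly tilted facets whose normals scatter a bounce light over a range of angles as the board or the robot moves. The sketch below generates such a facet field; the crumple amount is an assumed parameter.

```python
import numpy as np

def mirror_board_normals(rows, cols, crumple=0.35, seed=7):
    """Generate per-facet normals for a 'crumpled foil' bounce card.

    Each facet starts facing +Z and is tilted by a random amount, so a
    single bounce light breaks up into shifting glints on the subject."""
    rng = np.random.default_rng(seed)
    tilt = rng.uniform(-crumple, crumple, size=(rows, cols, 2))   # x/y tilts
    normals = np.dstack([tilt[..., 0],
                         tilt[..., 1],
                         np.ones((rows, cols))])
    return normals / np.linalg.norm(normals, axis=2, keepdims=True)
```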

Barnhill helped develop the look of the robot Sentinel Prime early in the show, trying different colored materials on a robot positioned on a turntable. “We have a standard setting in which the lighting never changes that we use to review all the assets,” Barnhill explains. “That’s something we lock down at the beginning to be sure we get a consistent look for all the characters.” Like most of the robots, the crew created Sentinel Prime using many types of materials.

“We have brass, painted metal, body panels with color, tinted glass,” Barnhill says. “We even used chip maps that reveal an underlying primer beneath a painted surface, and displacement or bump maps that show the depth.” As battles progress and the robots become wounded, the materials change. But, the artists had even locked down these materials early in look development with the robot on the turntable.
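A chip map, as described here, is essentially a mask that swaps in the primer, plus a bit of recess, wherever the paint has been knocked off. A toy blend, with all texture names assumed for illustration, might look like this:

```python
import numpy as np

def battle_damage(paint, primer, chip_mask, bump, chip_depth=1.0):
    """Blend a painted surface toward its primer wherever the chip mask
    says the paint is gone, and push the bump map down in those spots."""
    m = chip_mask[..., None]                       # (H, W, 1) mask
    color = paint * (1.0 - m) + primer * m         # reveal primer under paint
    displacement = bump - chip_depth * chip_mask   # chips read as recessed
    return color, displacement
```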

“I remember the old days when every shot was turmoil when we made material changes,” Barnhill says. “Now that we have the standardized environment, the robot looks good from the beginning when we drop it into a shot. Even though we might have different lighting in the sequence than we did for the turntable, we rarely touched the asset. We could control the look with the lighting.”

In fact, for CG sequences, such as those taking place on the planet Cybertron, Barnhill’s team started with the standardized environment, substituting that for the HDRI spheres. Then as digimatte artists created Cybertron, a huge planet with interlocking latticework structures and harsh lighting, the lighting team used those images to create a more accurate HDRI-like environment sphere.

In addition to more fighting, better lighting, more dialog, and new body work, the team added pyro gags and hydraulics to amp up the action in 3D.

“We have lots of hydraulic liquids spewing,” Farrar says, “brown, green, blue, red liquids. Starscream spits like crazy, and at one point, it even sticks to the lens. We have rocket trails, all that typical stuff that shoots toward the lens or past it to get those 3D moments.” All told, the crew ran more than 12,800 simulations inside the studio’s proprietary Plume software during the course of postproduction.

“That’s the benefit of having GPUs,” Sumner says. “What would have taken a day before takes an hour or less now.”

Big Shots


All that, plus the complex robots, the digital environments, and the ensuing mayhem, created hugely complex scenes. “We measured the scene in which the skyscraper tilts over, and it had 100,000 pieces of geometry,” Sumner says. “Thankfully, our hardware capabilities have increased since the last movie, but only the artists with 12-core, 48GB machines could open it. And, once they had it open, the interactive speeds were very slow. They couldn’t open the scene and render it on one machine.”

The creature that wraps around the skyscraper is larger than the Devastator, which was the biggest robot on the previous film. “We created Devastator from six other robots,” Sumner explains. “He had 52,000 pieces of geometry and 11.7 million polygons for rendering. Colossus has 86,000 pieces of geometry and 30 million polygons. He’s like 2.5 Devastators. There were times when we had to lock off parts of the renderfarm to be sure these shots could get finished in time.”

Farrar and Benza received Oscar nominations for the first Transformers, and it’s possible the well-honed effects in the third film will send them to red carpet land again. “I’d have to say our crew did incredible work,” Farrar says. “So did Digital Domain. Their shots look fabulous. Everyone as practitioners of effects and as craftspeople did exquisite work. I think theatergoers will enjoy it.” 

Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at BarbaraRR@comcast.net.