Weighty Matters
Volume 32, Issue 7 (July 2009)


Director Michael Bay's sequel, Transformers: Revenge of the Fallen, stars nearly 60 CG robots thanks to ILM, which built two-thirds of the mechanical beasts, and Digital Domain, which built the rest.


Huge. You can talk about Transformers: Revenge of the Fallen, Michael Bay’s sequel to his 2007 film Transformers, six ways to Sunday, and it still comes back to huge.


“It’s a big movie,” says Industrial Light & Magic’s (ILM’s) Scott Farrar, who supervised the visual effects for the DreamWorks and Paramount presentation in association with Hasbro, as he had for Transformers. “Nobody shoots a movie like this anymore. The effects are huge. The locations are huge. The style of the project is huge. It’s a big adventure that travels all over the world.”

As before, the plot centers on the evil Decepticon robots that want to rule the world, the Autobots that are determined to stop them, and one unlikely human, Sam Witwicky (Shia LaBeouf), who holds the key to the Decepticons’ success. And once again, Autobot leader Optimus Prime and the Autobot Bumblebee, Sam’s Chevy Camaro, protect the teen from the Decepticons, but this time they have more company.

In Transformers, these two much-loved ’bots stood out among the 15 good and bad bone-crushing machines, all giant, mostly bipedal robots that transformed to and from a variety of vehicles. In Revenge of the Fallen, another 58 otherworldly robots in a universe of sizes and shapes join them. The robots are more emotional than before; they have more dialog, display more heavy-metal brutality, and interact with the environment more explicitly. Bumblebee cries, but it also rips a Decepticon robot apart. Jetfire, an old Autobot new to the franchise, walks with a cane, hitches up his pants, carries a scene—and spits to show his disapproval. Optimus Prime crashes through a forest, breaking tree branches as it runs.



Jetfire (top), an old robot that walks with a cane, and Wheelbot (bottom), a non-bipedal 'bot, join more than 30 new and 14 returning heavy-metal characters created at ILM.

Three studios created the digital effects. People cite a varying number of robots, depending on whether they count a ’bot reappearing in another form as one or two, for example, but the crew generally agrees that ILM, which shouldered the bulk of the work as before, created 46 robots. Digital Domain, the studio owned by Bay and others, created another 13. In addition, Asylum added non-robot effects.

The smallest robots are Microcons, ball-bearing-sized Decepticons that assemble themselves into a large robot called Reed Man, created at Digital Domain. The largest is ILM’s Devastator, also a Decepticon. “He was daunting,” says Jeff White, ILM’s associate visual effects supervisor. “He’s the heaviest asset we’ve ever made…by far.”

Including the vehicles that the robots transform to and from, and various buildings and bridges they destroy, ILM built 240 assets for the film, but the robots occupied most of the modelers’ time. “We had the largest modeling crew this facility has ever had,” says Farrar of ILM. “They worked for nine months producing the hundreds and hundreds of objects that had to be modeled and painted.” For most films, ILM assigns five to seven modelers and texture painters to a crew. For this movie, the modeling and painting team grew to 25.

Big ’Bot Love

Fourteen of ILM’s robots from the first film returned and received what model and viewpaint supervisor Dave Fogler calls “a round of love.” “The art department upped the complexity on the new robots,” he says. “When you’re dealing with organic creatures, to fool the eye into believing something is real, you add wrinkles and hair. With robots, the formula we’ve come up with is that when you add enough detail to confuse your brain, it believes something is real. So, most of the former robots dropped right in, but they all needed some cleanup and improvement.”

The modelers provide complexity with individual mechanical parts, which they combine until they’ve created the robot’s shape. “You take for granted how the robots look now, but in the beginning, they could have transformed into something with car panels and a closed look,” Fogler says. “After going round in circles for a while, Michael Bay said we needed car parts, not alien insides—things that belong in car engines. That ties the robots to our world and gives them a sense of scale.”

At ILM, modelers sculpt in Autodesk’s Maya. Painters create texture maps in Adobe’s Photoshop, and use the studio’s proprietary Viewpaint software to apply the texture maps to the models. To make it easier for the modelers faced with building more than 30 new, unique robots with thousands of mechanical parts in each, Fogler leaned on his “kit-bashing” background in a traditional model shop. In the physical model shop, crews often build new miniatures by using parts from an existing model kit. To move that idea into the virtual world, Fogler’s digital model shop created a library of parts that modelers, in effect, copied and pasted into their robots.

 “Once a part was modeled and the look developed with textures and material assignments, we would drop it into a database so that it was available to everyone,” Fogler says. Modelers building the robots could use any of these parts repeatedly, and they did. Fogler estimates that approximately 15 percent of the parts in all the robots originated in the parts library. Each part had a marker and ID; at render time, the system automatically looked for the assigned materials and textures for pieces checked out from the library.
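
In code terms, the idea reduces to a lookup keyed on each part's library ID. The sketch below is purely illustrative, with invented names and structure; ILM's actual parts database and renderer hooks are proprietary.

    # A minimal, hypothetical sketch of a parts library with render-time
    # material lookup. Part names, fields, and the lookup are invented;
    # ILM's actual database and pipeline are proprietary.
    from dataclasses import dataclass

    @dataclass
    class LibraryPart:
        material: str   # look developed once, when the part entered the library
        texture: str

    PARTS_LIBRARY = {
        "gear_small_01": LibraryPart("brushed_steel", "tex/gear_small_01.tif"),
        "piston_02":     LibraryPart("oily_chrome",   "tex/piston_02.tif"),
    }

    def resolve_look(checked_out_ids):
        """At render time, fetch materials and textures for library parts."""
        for pid in checked_out_ids:
            part = PARTS_LIBRARY.get(pid)
            if part:
                print(f"{pid}: material={part.material}, texture={part.texture}")
            else:
                print(f"{pid}: custom part, needs its own hand-painted look")

    resolve_look(["gear_small_01", "piston_02", "devastator_scoop_01"])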

Devastator

This procedure was especially important for the Devastator. The massive machine reaches 100 feet when it stands, although more often, it moves on all fours like a gorilla. At five times the size of Optimus Prime, it has 52,632 parts made from nearly 12 million polygons. “If we took all the parts and stacked them lengthwise, it would be as tall as 58 Empire State Buildings,” says Jason Smith, digital production supervisor.

Smith has another comparison to share: “An average car has 5000 parts. Devastator has ten times that.”

And, another. “If you used all the gold mined in the entire earth, in the entire history of man, it would fill only half of Devastator,” Smith says. “I know that’s a bizarre stat, and you might think, ‘well, it’s in the computer, it doesn’t matter.’ But with that scale comes a demand for detail at our level. We killed ourselves putting this thing together, and when we showed it to Michael [Bay], he said, ‘Huh. It almost looks like a toy.’” Bay’s comment forced the crew to add even more detail, and they continued adding detail to it during the entire postproduction process.

Devastator

  • Number of geometric pieces: 52,632
  • Total number of polygons: 11,716,127
  • Total length of all pieces: 73,090 feet or 13.84 miles
  • Gigabytes of textures: 32
  • Total textures: 6467
  • Devastator is as tall as a 10-story building.
  • Laid out end to end, Devastator’s parts would be almost 14 miles long.
  • When Devastator punches the pyramid, its hand is traveling at 390 miles per hour. 


Devastator builds itself by smashing into one mining-construction robot after another; six in all, each as complicated as Optimus Prime. Its head is a cement mixer. A crane becomes one arm, and a scoop loader the other. One dump truck becomes the torso, and it attaches another as a leg. A second leg is a bulldozer. “And, these aren’t normal construction scoops,” Smith says. “They’re for mining. The scoop isn’t a normal backhoe scoop; it’s eight feet tall, and we get close with the camera, so we needed to have the rivets, and the dirt, and dents, and scratches, and shiny metal—and that’s just the scoop. You start working your way up the arm, and pretty soon you have a heart attack.”

So, having the parts library helped. By the time the modelers needed to build Devastator, they had already built the construction vehicles and the robots they transform into, and could use library parts from those models to assemble the giant, adding new parts as needed to get the form right. “I think maybe 40 percent of Devastator painted itself,” Fogler says, referring to the library parts with material and paint assignments that the system automatically applied at render time.


The Autobot leader Optimus Prime delivers more dialog than in the first Transformers and fights with additional brutality.

“So now we have the heaviest asset we’ve ever built,” White says. “There are guns on Devastator’s arms. He has cranes and cables—a phenomenal amount of detail and geometry. And on top of that, we had to work at IMAX resolution, which is eight times the resolution of normal 35mm film. And, every time we thought we were done, we’d get a shot like this.”

White pulls up a scene on the computer screen. Devastator fills the screen. Crawling up the monstrous robot’s back are two other robots. “They’re twins, new characters that move and climb all over him,” Smith says. Created from small cars, the twin Autobots are approximately 11 feet tall.

“We had so much visual complexity in the frame, our challenge was to separate the robots so you can actually see the small ones,” Smith says. To do so, they used such techniques as putting smoke behind a character to provide visual separation.

Sucking Sand

And then, on top of that, during a sequence that takes place in the desert, Devastator opens its huge cement-mixer mouth and sucks in everything in its path, creating a vast vortex of sandy dust, trucks, and debris. The vortex is a particle simulation, but rather than running that simulation repeatedly for approval, the technical directors shaped a cone that connects to the creature’s “mouth” and to the ground. They blocked in movement for the particle simulation by running a cloth simulation on the cone. “That gave us the fluttering and motion we wanted to integrate into the dust,” White explains. “The particle sim would have taken overnight to run. We could run the cloth sim in under an hour, and get a buy-off on the overall flow.”
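
The trade the TDs made is the classic proxy-first pattern: iterate on something cheap, commit to the expensive solve only after buy-off. The toy below stands in for that idea with a strip of spring-linked points driven by an oscillating force to produce flutter; every constant is invented, and it is a conceptual stand-in, not ILM's cloth solver.

    # A toy "cheap proxy": a strip of spring-linked points driven by an
    # oscillating force produces flutter in seconds rather than overnight.
    import math

    N, DT, STEPS = 20, 0.05, 200
    pos = [float(i) for i in range(N)]    # rest positions along the cone seam
    vel = [0.0] * N
    K, DAMP = 40.0, 0.92                  # invented stiffness and damping

    for step in range(STEPS):
        wind = math.sin(step * DT * 3.0)  # simple driving force: the flutter
        for i in range(1, N - 1):
            target = 0.5 * (pos[i - 1] + pos[i + 1])  # pull toward neighbors
            vel[i] = (vel[i] + (K * (target - pos[i]) + wind) * DT) * DAMP
        for i in range(1, N - 1):
            pos[i] += vel[i] * DT

    print("flutter offsets:", [round(p - i, 2) for i, p in enumerate(pos)])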

Fun Facts from ILM:

  • Optimus Prime will be life-size on IMAX screens in many forest-fight shots.
  • All the robot parts laid out end to end would stretch from one side of California to the other, about 180 miles.
  • If all the texture maps on the show were printed on one-square-yard sheets, they would cover 13 football fields. 

The team then streamed the particle simulation over the surface. Because this was an IMAX shot, which would project at 4k on a multi-story screen, ILM used grid rendering and wavelet technology to add details. “In the past, we’d instance clouds of sprites or other particles to add dust, and you sometimes see little dust bunnies popping on and off,” says Smith. “For this show, we used volumetric rendering that treats the density as values in a 3D volume, a grid.”

First, the TDs ran a low-resolution fluid simulation in a 3D grid. Then, they emitted dust material into the grid and added “octaves of noise.” Smith explains: “Imagine that each cell might say, ‘I’m zero percent dust,’ or ‘I’m 50 percent dust.’ When we add velocity, the cell would say, ‘I’ve got 50 percent dust moving east,’ and the grid to the east would say, ‘let me take that dust from you.’ We call that density.” To add complexity, they increased the resolution of the fluid grid by dividing each cell into eight, sometimes even 64 cells.
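
Smith's cell-by-cell description maps almost directly onto code. This toy version, with invented numbers, moves half of each cell's dust "east" per step, the same hand-off he describes.

    # A toy version of the density hand-off: each cell passes dust to its
    # eastern neighbor. Illustrative only, not ILM's solver.
    density  = [0.0, 0.5, 0.5, 0.0, 0.0]   # fraction of dust per cell
    velocity = [0,   1,   1,   1,   0]     # 1 = "moving east"

    def step(density, velocity):
        new = density[:]
        for i, v in enumerate(velocity):
            if v == 1 and i + 1 < len(density):
                moved = density[i] * 0.5      # half the dust advects per step
                new[i] -= moved
                new[i + 1] += moved           # the eastern cell takes that dust
        return new

    for _ in range(3):
        density = step(density, velocity)
        print([round(d, 3) for d in density])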

Then, the team added noise. “We interpolate the directions and velocities in the fluid sim along these new tiny cells, and at the same time, we add Kolmogorov noise using research on wavelet turbulence from a SIGGRAPH paper last year,” Smith says. “The noise gets pulled through the fluid sim from frame to frame so it’s coherent, and we get a nice movement with swirls at certain frequencies.” By using volumetric rendering, in which every pixel is actually a stack of pixels going back into the camera with different densities, they could attach details from the sim to the camera view.
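
The paper Smith refers to is presumably Kim et al.'s "Wavelet Turbulence for Fluid Simulation" from SIGGRAPH 2008. The gist, sketched below with simple sine "octaves" standing in for true wavelet noise, is to interpolate the coarse simulation onto a finer grid and then layer band-limited detail on top.

    # A minimal sketch of the upsample-then-add-noise idea. The sine octaves
    # are a stand-in for wavelet noise; a real implementation also advects the
    # noise with the flow so it stays coherent frame to frame, as Smith says.
    import math

    coarse = [0.0, 0.4, 0.9, 0.3]         # one row of coarse-grid velocities
    REFINE = 4                            # split each cell (8 or 64 cells in 3D)

    def upsample(row, factor):
        fine = []
        for i in range(len(row) - 1):
            for s in range(factor):
                t = s / factor
                fine.append(row[i] * (1 - t) + row[i + 1] * t)  # interpolate
        fine.append(row[-1])
        return fine

    def add_octaves(fine, octaves=3, amp=0.1):
        # Each octave adds finer, weaker swirls at a doubled frequency.
        return [v + sum(amp / 2 ** o * math.sin(2 ** o * i * 0.7 + o)
                        for o in range(octaves))
                for i, v in enumerate(fine)]

    print([round(v, 3) for v in add_octaves(upsample(coarse, REFINE))])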

Smith calculates that the 400-frame shot would take someone three years to render once on a PC equipped with a 2GHz processor. “We rendered it on eight-processor machines, so it renders faster. Not eight times faster, but faster.”
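
The reason eight processors don't deliver an eightfold speedup is the familiar Amdahl's-law effect: whatever fraction of the work stays serial caps the gain. A quick illustration, with invented serial fractions:

    # Amdahl's law: speedup = 1 / (s + (1 - s) / n) for serial fraction s
    # on n processors. The serial fractions below are purely illustrative.
    def speedup(n_procs, serial_fraction):
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

    for s in (0.05, 0.15, 0.30):
        print(f"serial fraction {s:.0%}: 8 procs -> {speedup(8, s):.2f}x")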

Breaking Bricks

That is but one simulation in a sequence that sends the action way over the top—over the top of a pyramid in Egypt, as it happens. Devastator has learned that the machine everyone wants to find is inside the pyramid. The robot climbs up the side of the pyramid and smashes through the top, sending bricks flying and falling—122,000 bricks, in fact, all simulated using rigid-body dynamics (see Viewpoint, pg. 16).

Until this film, the biggest simulation ILM had done using rigid-body dynamics destroyed a valley in Indiana Jones and the Kingdom of the Crystal Skull (see "Keys to the Kingdom," June 2008). This simulation was eight times bigger, and to accomplish it, CG supervisor Chris Horvath designed a rigid-body dynamics engine that Cliff Ramshaw, in R&D, helped build in CUDA on Nvidia GPUs.

By utilizing thousands of separate computation streams running at once, the engine could calculate the movement and collisions of thousands of objects through time. To destroy a clock tower in Paris, however, they ran a similar simulation in their in-house software, which is based on Stanford University’s PhysBAM, to access high-level controls that were not needed for the pyramid destruction.
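
Whatever the hardware, such engines repeat the same loop: integrate the bodies forward, detect contacts, and resolve them with impulses. The deliberately tiny sketch below resolves one elastic contact between two equal-mass "bricks"; it is illustrative only, nothing like ILM's CUDA engine or PhysBAM.

    # Integrate, detect one sphere-sphere contact, resolve it with an
    # equal-and-opposite impulse (elastic collision, equal masses).
    import math

    bodies = [          # [x, y, vx, vy] for two unit-radius "bricks"
        [0.0, 0.0, 1.0, 0.0],
        [2.5, 0.0, -1.0, 0.0],
    ]
    R, DT = 1.0, 0.1

    for step in range(10):
        for b in bodies:                      # integrate positions
            b[0] += b[2] * DT
            b[1] += b[3] * DT
        a, c = bodies
        dx, dy = c[0] - a[0], c[1] - a[1]
        dist = math.hypot(dx, dy)
        if dist < 2 * R:                      # contact found
            nx, ny = dx / dist, dy / dist     # contact normal
            rel = (c[2] - a[2]) * nx + (c[3] - a[3]) * ny
            if rel < 0:                       # impulse only if approaching
                a[2] += rel * nx; a[3] += rel * ny
                c[2] -= rel * nx; c[3] -= rel * ny

    print(bodies)                             # the bricks bounce apart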




Rendering such complex robots as Bumblebee (top) and Optimus Prime (bottom), other CG objects, and simulations took 80 percent of ILM's rendering capacity. Including artists' workstations, the studio can utilize 7700 processor cores; the newest machines are dual-processor, quad-core systems.

“We love destruction,” says Smith. “I remember showing Michael Bay some death-and-destruction simulations, and he said, ‘You know, if I worked here, I would do that.’”

Smith doesn’t have a rendering analogy for shots of the Devastator with 52,632 moving parts causing 122,000 bricks in the pyramid to tumble down, but he does offer an overall fun fact. “If you looked at all the rendering we did for this film and tried to do it on one computer, a one-processor 2GHz PC, and you wanted to be done by the release date, June 24, 2009, you would have had to kick off the renders 16,000 years ago if you ran the computer 24 hours a day,” he says. “Every night, we were pushing years and years of rendering time. It was really insane. We have hallways and hallways and racks and racks of hundreds of these machines going all the time. We broke all the ILM records.” The 2007 Transformers needed 20TB of disk space. This film used 154TB.
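
As a rough sanity check, spreading that single-CPU total across the roughly 7700 cores cited above gives the farm-scale figure; this ignores scheduling overhead and the fact that capacity grew during the show.

    # Back-of-envelope only: divide the quoted single-CPU render time across
    # the approximately 7,700 cores cited for ILM's farm and workstations.
    years_single_cpu = 16_000
    cores = 7_700
    print(f"~{years_single_cpu / cores:.1f} years of wall-clock rendering")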

Crashing into Trees

In addition to the Devastator sequences, Bay used IMAX cameras to film a forest fight between Optimus and two Decepticons. Animation director Scott Benza and his team choreographed the fights for the sequence, and Benza and Farrar supervised the plate photography in New Mexico. “One of the things we thought we could improve on from the first film was the fight choreography, the brutality of the fight,” Benza says. “Michael really pushed making the fights as brutal as possible because they are machines. You can tear them from limb to limb and it’s not as violent or disturbing as it would be if they were actors.”

Having Benza and Farrar involved in shooting plate photography helped the crew in postproduction tear the trees apart from limb to limb. “When you have giant robots fighting in a forest in a Michael Bay movie, you’re going to be breaking trees,” Smith says. To break the trees as the robots crashed into them, technical directors strung rigid-body objects together with springs so the objects would have mass and weight, and could collide and bend like a tree branch. To splinter a tree, the TDs used springs that snapped when hit hard enough. “We could tune the stiffness and mass on the springs,” Smith says. The idea is akin to that of the TDs at Pixar who used strings of rigid bodies to capture Kevin in a net in Up (see “The Shape of Animation,” June 2009); similarly, the ILM TDs created a net that holds two Transformers down until one slices through it with a sword.
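
The snapping-spring idea is simple to sketch: connect segments with springs, and remove any spring whose force exceeds a threshold. All the numbers below are invented.

    # A toy breakable-spring branch: segments joined by springs that "snap"
    # past a force threshold. Stiffness and threshold are invented.
    segments = [0.0, 1.0, 2.0, 3.0]                 # rest positions, rest length 1
    springs = {(i, i + 1): True for i in range(3)}  # intact flags
    K, BREAK_FORCE = 50.0, 30.0

    def apply_impact(positions, hit_index, push):
        positions[hit_index] += push                # a robot slams into a segment
        for (i, j), intact in springs.items():
            if not intact:
                continue
            stretch = abs(positions[j] - positions[i]) - 1.0
            force = K * stretch
            if abs(force) > BREAK_FORCE:
                springs[(i, j)] = False             # the spring snaps: splinters
                print(f"spring {i}-{j} snapped (force {force:.0f})")

    apply_impact(segments, 2, 1.5)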

To help the animators, look developers, and TDs work with the giant robots, ILM created a multi-res pipeline that would automatically look at the shape of individual parts and change the amount of detail in the geometry—the resolution—for some more than others. Because the creation of this geometry at different resolutions was procedural, modelers didn’t have to create parts with various levels of detail. All told, the animators could choose any of seven resolutions, from “pawn” resolution, which was handy for the rigid-body simulations, to 1300k.
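
A resolution picker for such a pipeline might look something like the following; the level names and selection rule are hypothetical, and only the seven-level range and the pawn-proxies-for-simulation convention come from the article.

    # A hypothetical resolution picker for a seven-level, multi-res rig.
    LEVELS = ["pawn", "low", "med-low", "medium", "med-high", "high", "full"]

    def pick_resolution(screen_coverage, for_sim=False):
        """screen_coverage: rough fraction of the frame the character fills (0-1)."""
        if for_sim:
            return LEVELS[0]                 # rigid-body sims run on pawn proxies
        idx = 1 + int(screen_coverage * (len(LEVELS) - 2))
        return LEVELS[min(idx, len(LEVELS) - 1)]

    print(pick_resolution(0.05))                 # distant robot -> "low"
    print(pick_resolution(1.0))                  # full-frame Devastator -> "full"
    print(pick_resolution(1.0, for_sim=True))    # -> "pawn"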


The Doctor Bot, which transforms from a microscope, is the smallest robot that ILM created, but the smallest 'bots in the film are Microcons created by Digital Domain.

“We could block a shot with a light version of a character and as the shot progressed, incrementally increase the level of detail,” Benza says. Fifty animators worked on the film, more than twice the number who animated the robots in the previous release. As before, they used a dynamic rigging system that allowed them to group parts together on the fly. Any part, even on the Devastator, could move.

Although the animators created most of the action with keyframing, to help give the robots a little extra reality, procedural animation and simulations helped them sweat, cry, spark, drip, release gas, and squirt their version of blood when injured. “We used anything we could to make them look like what would happen if hoses were cut and wiring was chopped on a physical machine,” says Farrar.

Damage Control

All that fighting, chopping, and cutting created surface damage, as well, and that created extra work for the modeling and painting crews. “One of the things I’m most proud of is the damage on Optimus Prime,” says Fogler. “He gets covered with dirt, gouges, and scrapes, and one thing that intimidates me most is bending metal. Most of the tools we have to bend polygonal meshes are for creatures, and they’re great for creating a knuckle. But a dent in a car door has a specific shape, with a soft area and sharp corners, and unless you model that specific shape, it doesn’t look real.”

Aaron Wilson, the artist assigned to Optimus, created close to 90 percent of the damage with textures. “It’s surprising how much you can do in texture, especially using ZBrush as a displacement tool,” Fogler says, referring to the Pixologic product. “We added the damage to the robots as shapes in order to bend and distort pieces of metal, and had an additional texture flag. A TD could load levels of damage.” Wilson, for example, developed a routine for adding texture to Optimus depending on the damage. “He even had clumps of dirt and grass, as if he had fallen into the sod like a football player,” Fogler says.

Throughout the process, ILM’s role was more of a collaborator than a service bureau. The ability of the studio’s digimatte department to replicate locations digitally gave Michael Bay camera moves he otherwise couldn’t do or might think of later, and for all the CG scenes, the studio put a virtual camera into Bay’s hands on a motion-capture stage so that he could frame shots of CG actors in the same way he might frame shots of actors on location.

The ILM artists also looked for other ways to bring Bay’s style to their shots. In an all-CG scene that takes place on another planet, Megatron visits the Fallen, who, we learn, provides energy for newborn Decepticons. “We loaded the environment with dust, drips, goo, and mist,” White says. “We had drifting stuff, droplets falling onto the ground. We were thinking about how Michael would make this an interesting environment for the characters.” They even added “edge corners,” which are artifacts of 35mm film, to create the illusion that they filmed the characters on a real set. Similarly, for the all-CG underwater shots, they used an Apple Shake plug-in developed by Horvath that replicates light transmission underwater, and then added dust and swirling elements in the comp to mimic reality. In this sequence, which happens early in the film, a Doctor Robot that transforms from a microscope revives Megatron, the Decepticon leader that “died” in the first film, by borrowing parts from other robots.


Animators at ILM created the performance for Ravage, a catlike Decepticon, and all the other robots using keyframing; however, procedural animation built into the rig helped prevent interpenetration.

Small but Sharp

Doc Bot was ILM’s smallest robot, but it isn’t the smallest robot in the film. Digital Domain created those, the ball-bearing-size Microcons. Matthew Butler, visual effects supervisor at Digital Domain, provides the context, which centers on the All Spark, the otherworldly cube that can bring mechanical and electronic objects to life—obviously a great prize to a Decepticon interested in ruling the world. “The All Spark is held under close surveillance by the military, so one of the Decepticons, called Ravage, that ILM created, spews out a bunch of [Digital Domain’s] Microcons, which fall down a vent, swarm into patterns, and hatch into mean little insects that assemble at breakneck speed into razor-blade-like surfaces to form a character called Reed Man. He’s so sharp that when he aligns his body in a particular direction, he becomes nearly invisible.”

Getting the little Microcons to behave properly was a complicated process, involving particle simulations for the tiny robots rolling around on the floor, and animated walk and climb cycles stamped onto a particle system that became a crowd simulation. All of this ran in Side Effects’ Houdini. To create the illusion that the Microcons were climbing up and forming the praying-mantis-shaped Reed Man, Digital Domain ran a rigid-body simulation system backward. “We knew where everything had to be in the end,” says Lou Pecora, compositing supervisor. “We knew where everything had to be assembled in Reed Man. So, it was easier to put them there and deconstruct with reverse rigid-body simulation and then play it forward.”
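
The trick is easy to express in code: simulate the pieces falling apart from the assembled pose, cache the frames, and play them in reverse. The sketch below uses invented dynamics and positions.

    # Reverse-simulation sketch: scatter pieces from the assembled pose under
    # gravity, then play the cached frames backward so they fly together.
    import random

    random.seed(1)
    assembled = [(i * 1.0, 5.0) for i in range(5)]   # target positions on Reed Man

    def simulate_falling(points, steps=30, dt=0.1):
        frames = [list(points)]
        vel = [[random.uniform(-1, 1), 0.0] for _ in points]  # scatter sideways
        for _ in range(steps):
            pts = []
            for (x, y), v in zip(frames[-1], vel):
                v[1] -= 9.8 * dt                              # gravity
                pts.append((x + v[0] * dt, max(0.0, y + v[1] * dt)))
            frames.append(pts)
        return frames

    playback = simulate_falling(assembled)[::-1]   # reversed: pieces assemble
    print("playback start:", [(round(x, 1), round(y, 1)) for x, y in playback[0]])
    print("playback end:  ", [(round(x, 1), round(y, 1)) for x, y in playback[-1]])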

CG supervisor Paul George Palop explains: “It was a bit of a hack. Once we had a base system, we could change the way they climbed down or, in the final simulation, climbed up.” The Microcons transformed into razor blades when they reached the leading edge of their original destination.

“They’d go, ‘OK, I’ve been here for 20 frames, time to go flat,’” Palop says. “If you look at one guy, it doesn’t seem like much is going on because it’s a very simple behavior. But when you have 20,000 actors making decisions based on position, on who’s in front, it’s really cool.”
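
That per-agent rule is essentially a tiny state machine, as in this hypothetical sketch:

    # A hypothetical version of the rule Palop quotes: a per-agent state
    # machine that flips to the flat "blade" state after 20 frames in place.
    class Microcon:
        def __init__(self, name):
            self.name, self.frames_in_place, self.flat = name, 0, False

        def tick(self, arrived):
            if arrived and not self.flat:
                self.frames_in_place += 1
                if self.frames_in_place >= 20:
                    self.flat = True
                    print(f"{self.name}: been here 20 frames, going flat")

    bot = Microcon("microcon_0001")
    for frame in range(25):
        bot.tick(arrived=True)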

To help the animators line up all the razor blades for Reed Man’s invisible moments, the technical directors wrote scripts that turned all the blades to the camera using some spring dynamics for secondary motion. “We gave them a simple tool that they ran as a simulation once they had keyframed the pose,” Palop says. “But, they could tone it down if it looked too heavy-handed.”
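
The aim tool Palop describes amounts to a damped spring pulling each blade's angle toward the camera-facing angle, with the overshoot supplying the secondary motion. A one-blade sketch with invented constants:

    # One blade aiming at the camera via a damped spring; the overshoot is
    # the "secondary motion." Stiffness, damping, and target are invented.
    import math

    target = math.radians(90)            # camera-facing angle
    angle, ang_vel = 0.0, 0.0
    K, DAMP, DT = 30.0, 4.0, 1.0 / 24    # per-frame step at 24 fps

    for frame in range(24):
        accel = K * (target - angle) - DAMP * ang_vel
        ang_vel += accel * DT
        angle += ang_vel * DT

    print(f"angle after 1 second: {math.degrees(angle):.1f} degrees (target 90)")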

To light the creature, the team used a Maya-to-RenderMan pipeline. To composite it, they used The Foundry’s Nuke. Pecora managed the depth of field in the composite, using it to help lead the audience’s attention through the shot. “It was a nail-biter right to the last minute,” Palop says. “It wasn’t the rendering time; it was the simulation that would take all day. If we wanted to change how fast the Microcons were moving or reveal the blades earlier, or move anything closer to the camera, it would take a whole day.”

Alice, Callous Alice

The second headache for the crew was Alice. “That’s just complicated stuff,” says Butler. “We started on her in November, and we were still working on the shot in May. She’s a pretender. When Sam goes to college, she masquerades as a hot chick and hits on him, but she’s a nasty ’bot, of course. Her tongue tries to strangle him, a metallic tail comes out, and all her skin rips off and reveals the inner mechanics of a Decepticon.”

Because she’s human at the beginning, Digital Domain needed to track the actor’s moving skin and dress, and transform her into a complex array of mechanical moving plates. To do this, the team used a combination of keyframe animation and procedural animation in Maya and Houdini.

“Once we tracked her, we baked that geometry as a cache for the Houdini team, and they went crazy,” Palop says. “Michael [Bay] didn’t know exactly what he wanted, only that he wanted little tiles that moved away to reveal the robot inside.”

Disk Space

  • Transformers utilized 20TB of disk space. Transformers: Revenge of the Fallen utilized 154TB, which would fill 35,000 DVDs. Stacked one on top of the other without storage cases, those discs would stand 145 feet tall.

So, the team projected Alice’s image onto 3D geometry, processed the geometry to create the tiles, ran a simulation in Houdini that animated the tiles, and rendered them out for compositing.

“The human model, Alice’s geometry, was basically a point cloud,” says Palop. “Each tile was a little model; we cloned the tiles and completely covered her body with 200,000 tiles; we stamped tiles on all those points across the geometry. Then, we controlled their behavior, based on rules, with control maps that told them when to flip and move away.” On one side, they had the projected image of Alice; on the flip side, they had the projection of the robot.
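
The control-map logic reduces to a per-tile threshold test: tiles whose map value the reveal has passed show the robot side, while the rest still show Alice. A hypothetical sketch:

    # A hypothetical control-map rule: each tile compares its map value with
    # the reveal's progress and flips from "skin" to "robot" when passed.
    FRAMES = 10

    def tile_side(control_value, frame):
        """control_value in [0, 1]: lower values flip earlier in the reveal."""
        return "robot" if frame / FRAMES >= control_value else "skin"

    row = [i / 4 for i in range(5)]    # one row of tiles, map values 0.0-1.0
    for frame in (0, 3, 6, 9):
        print(frame, [tile_side(c, frame) for c in row])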

To finish the shots, Pecora’s team took over, working with lighting passes to match the plate and adding geometric complexity—cuts in the skin and so forth—as the tiles moved. “We used Nuke’s robust tool set a lot for her face. It was a laborious shot but really cool. We balanced between having the detail too fine and not fine enough, until we finally found the magic number with the right number of tiles at the right size.”

Cool-to-Expletive Ratio

Not all the robots Digital Domain created were as difficult, though. Soundwave, a Decepticon, attaches itself to communication satellites using Houdini-driven tentacles of glass tubes, which reflect and refract light using raytracing in Side Effects’ Mantra. “We didn’t have much time to create the effect, but it came together quickly, and Michael [Bay] loved it, so it became the least of anyone’s worries.”

The studio also transformed a whole series of household appliances into Blenderbot, Disposalbot, Dysonbot (a vacuum cleaner), Espressobot, Toasterbot, Microwavebot, CiscoRouterbot, and Mixerbot, all of which went from looking new as appliances to looking scratched and damaged as robots.


The Twins, created by ILM, prove double the trouble for Devastator.

“We worked on a cool-to-expletive ratio with Michael,” says Pecora. “If we stayed with something realistic and pristine, the expletive ratio was really high, but when we went from pristine to a robot with battle scars, the expletive ratio took a nose dive.”

Because the team had only 10 frames for the transformation, they didn’t try to create anything realistic. They did try to add real motion blur, though, and discovered that Michael Bay had specific motion-blur tastes.

“He likes sharp and crisp,” Pecora says. “But if you use appropriate motion blur, things look smeared. So we rendered motion-blur wedges. I won’t tell you the magic number, but we found one that we wouldn’t have guessed. Even when we were baking motion blur in the 3D world, we’d bake in that shutter angle and it got past him.”
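
A "wedge" is simply the same frame rendered across a parameter range so the director can pick. Sketched below for shutter angle; the angles listed are common values, not the production's undisclosed magic number.

    # Rendering wedges over shutter angle: the larger the angle, the longer
    # the shutter stays open per frame and the heavier the motion blur.
    def blur_fraction(shutter_angle_deg):
        """Fraction of the frame interval the shutter is open (360 = full)."""
        return shutter_angle_deg / 360.0

    for angle in (45, 90, 180, 270):
        print(f"shutter {angle:>3} deg -> {blur_fraction(angle):.2f} frame of blur")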

Having Fun

One robot, Wheelie (not to be confused with ILM’s giant Decepticon Wheelbot), provided comic relief—both at Digital Domain and in the film. It’s a Decepticon that transforms from a remote-control car, and it develops a crush on Sam Witwicky’s girlfriend, Mikaela (Megan Fox). It has bug eyes on poles and can skate on two wheels, or walk or spin around in circles. Butler describes a scene with Wheelie in Mikaela’s chop shop. “He’s running around the room and tries to crack the safe, but Mikaela pins him with giant pincers against a lathe,” Butler says. “She sticks him in a toolbox and keeps him as a pet. He’s rude and funny.” And, it humps Mikaela’s leg like a dog.

“At first, Michael [Bay] thought he was a cute, ‘I wov you’ kind of guy, then he decided Wheelie should be schizophrenic and high strung,” says animation director Dan Taylor. “Then he shifted into a little troublemaker. It was great collaborating with Michael. He pushes as hard as he can to get the most from people, but he is also very collaborative,” Taylor says. “We actually suggested editorial changes. He’s one of the most creative people I’ve worked for.”

Farrar reinforces that opinion. “Michael was here at ILM and 100 percent involved,” Farrar says. “He might hit 20 people in the animation area and then start going through the TD shots. He knows people by name. He solicits ideas. He asks, ‘What should this character say? What’s the line?’ If someone has a good idea, he’ll put it in. Every one of the people involved [in every area]—animation, lighting, everything—helped him create this movie. If you want to make movies and be involved, it doesn’t get any better than this.”

Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at BarbaraRR@comcast.net.