Issue: Volume 33, Issue 10 (November 2010)

Mind Over Matter

By: Barbara Robertson

Why would a filmmaker choose to create a superhero action movie with animation rather than live action? One out-of-this-world reason is to turn the genre on its ear.

“That’s the fun of this movie,” says Tom McGrath, who directed DreamWorks Animation’s Megamind. “It turns the superhero genre upside down. It isn’t a parody or a satire. It’s a comedy.” And yet, the heart of the drama is a love story between two unlikely partners, Megamind and Roxanne.

Megamind, the star of the film, is a big-headed, blue alien voiced by Will Ferrell, and he’s a supervillain, not a superhero. With the help of his alien sidekick, Minion (David Cross), a fish-headed robot gorilla, Megamind wants to conquer Metro City. But his evil plans never succeed, thanks to superhero Metro Man (Brad Pitt). Roxanne (Tina Fey), a reporter in Metro City, is Metro Man’s girlfriend and frequent Megamind kidnap victim. The plot twists when Megamind accidentally defeats Metro Man and a new villain emerges. “It was nice to have a small cast of characters,” says McGrath, who previously directed the ensemble cast in Madagascar: Escape 2 Africa (see “Home Is Where the Art Is,” October 2008). “We could focus on their relationships and on putting a new twist on the dynamics. It’s a huge, epic action story, but we have intimate relationship stories going on.”

Move It, Move It, Move It

Jason Schleifer, head of character animation, led the team of 40 animators at PDI/DreamWorks in Redwood City, California, and 20 at DreamWorks Animation in Glendale, California, who created the action and the emotion.

“Our biggest challenge,” Schleifer says, “was making a likeable villain that the audience wants to follow. In the first five minutes, Megamind kills the hero, but we had to make people root for him.”

The first test performances for Megamind produced a crazy-eyed, wicked villain who lifted his arched brows maniacally and frowned a lot. Typical evil behavior. Then, the artists realized that even though his dialog might be wicked, if he smiled and looked happy rather than evil, he became appealing. “He is a villain because he likes the challenge, the excitement of ‘bad guy versus good guy,’” Schleifer says. “He enjoys taking over the city. When we played with that enjoyment factor, he became super appealing.” The second performance challenge for the animation team was in differentiating Metro Man and Tighten, two superheroes who are physically identical from the neck down. “We had to find a way to differentiate their silhouettes,” Schleifer says.

At top, director Tom McGrath usually recorded the dialog tracks for Megamind (Will Ferrell) and Roxanne (Tina Fey) separately, but on three occasions, the actors improvised the shots together. At bottom, Minion, the fish inside the bowl, provided interesting opportunities for stereo 3D artists to dive deeply behind the glass.

They decided that Metro Man, who was born a superhero, knew how to control his muscles when he used his superpowers. “If he wanted to heat a cup of coffee, he’d say ‘screw the microwave,’ tense his abs, position his legs and head, and use his laser vision,” Schleifer says. “The force would kick his head, but he could absorb the power.”

Tighten, on the other hand, is a former cameraman without Metro Man’s lifetime of experience with superpowers. He would react differently. “Tighten would shoot backwards,” Schleifer says. “But as he became more powerful and gained control of his power, we played with using his physicality differently.”

To accent these superhero performances, the animators could activate a muscle-based system that made it possible to scale and flex the muscles. “We wanted Tighten and Metro Man to have crazy, powerful poses,” Schleifer says.

All the characters in this film moved thanks to the studio’s new rigging tool called Rig. “On How to Train Your Dragon, the dragons used Rig, but the humans had the old system,” Schleifer says. “So this was the first show that used Rig for all the characters. The background characters had the same rig as the hero characters, which made it easier for the animators. Because they had consistent controls for every character, they could concentrate on the character’s personality. They didn’t have to spend time learning new tools.”


Tighten, in red, whispering in Megamind's ear, is a superhero-turned-villain. To animate his and others' capes, character effects artists filmed themselves wearing capes while flying down zip lines.

The character with the most personality is Megamind, the villain—who is Schleifer’s favorite. “He ended up being such a compelling character,” Schleifer says. “A lot of that has to do with how the animators performed him. He puts his heart on his sleeve and is such a wonderful character to watch. I love him.”

Mega Characters

Mark Donald was the character lead who oversaw the performance of that compelling character, and he was also one of seven supervising animators who managed teams working on particular sequences for the film. “We’ve had supervision [of teams] based on sequences in the past,” Schleifer says. “It’s important to help every animator grow as an artist over the course of a show. But, we also had character leads as a resource for animators so they didn’t have to figure out the characters on the fly.”

The character leads worked with McGrath to create a library of facial expressions and poses that the animators could use directly, partially, or as reference. All the hero characters’ faces have several hundred controls for moving predesigned shapes from the library. To create a smile, for example, an animator might dial in a shape and then use layers of controls to improve it. Donald began refining Megamind’s facial expressions once the riggers had created basic controls.
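Dialing in a library shape and then layering controls on top can be pictured as weighted blendshape deltas applied over a neutral face. This is a hypothetical sketch of the idea, not DreamWorks' rig or its several hundred actual controls; all names here are invented.

```python
# Sketch of a blendshape-style face control: a library pose is dialed
# in as a weighted delta from the neutral face, then layered tweaks
# refine it. Illustrative only; not DreamWorks' rigging system.

def apply_shapes(neutral, library, dials):
    """neutral: list of vertex positions; library: name -> per-vertex
    deltas; dials: name -> weight in [0, 1]."""
    result = list(neutral)
    for name, weight in dials.items():
        deltas = library[name]
        result = [
            tuple(v + weight * d for v, d in zip(vert, delta))
            for vert, delta in zip(result, deltas)
        ]
    return result

neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
library = {
    "smile": [(0.0, 0.1, 0.0), (0.0, 0.1, 0.0)],
    "brow_raise": [(0.0, 0.0, 0.1), (0.0, 0.0, 0.0)],
}
# Dial in a full smile, then layer a subtle brow raise on top of it.
face = apply_shapes(neutral, library, {"smile": 1.0, "brow_raise": 0.5})
```

An animator-facing rig would expose each dial as a slider, with further layers of controls refining the blended result rather than replacing it.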

“Megamind had a range of expressions two or three times greater than the other characters, and he had a huge head,” Donald says. “We worked with the character technical directors to tweak expressions and fine-tune the rig based on directions from Tom [McGrath]. We wanted to push his face into cartoony shapes, but we also needed a fine degree of control to sell the dramatic and emotional scenes, which are the opposite of a cartoon. The smallest muscle twitch. The smallest movement in the brow or mouth. This character had that range.”

On screen, the animators see a shaded, textured version of the characters, with shadows but not with final lighting. They could also look at any shot in stereo 3D. “We all have a tendency to cheat to the camera,” Donald says, “to maybe curl or stretch an arm that you wouldn’t see in 2D. But in 3D, you see it all instantly.”

To help with the subtle performances, the crew filmed actor Will Ferrell as he performed the dialog, using a small lipstick video camera affixed to the corner of the recording booth. “The challenge is making it feel like his voice is actually coming from the character,” Donald says. “When you achieve that, you stop seeing it as an animated performance.”

For the broader performances, the animators often filmed themselves. “Sometimes mechanical things have to happen, so if you video yourself, you can study that and see what you’re doing,” Donald explains. “And, you can act in front of the camera and show the director how you intend to animate the shot.”

In fact, to see how the supervillains and heroes might fly and how their capes would flow around them as they did, several people on the team filmed themselves “flying” on a zip line.

“It was super fun,” Schleifer says. “One of the things we wanted to do with these characters was to push the superhero iconic look, so we worked on trying to get cool graphic poses, to tweak the bodies to get the arcs in the legs, and sculpt muscles to get graphic lines. And, we wanted to have dynamic poses with the capes.”

Damon Riesberg, head of character effects, led the team charged with creating exciting yet believable capes. “They got someone to teach them to sew,” Schleifer says. “They built their own capes out of various fabrics—cotton, silk, linen. Then, they filmed themselves on the zip line and saw how non-heroic the capes really looked. Realism is not heroic. It’s the exact opposite of heroic.”

But, by applying the properties that they discovered in the real world to a cloth solver in Autodesk’s Maya, Riesberg’s team created a solution that worked for the film. “We could have the capes simulated with realistic physical properties, but we could also sculpt specific shapes and animate them,” Schleifer says. “We could go with real-world physics, and then when the superheroes blew up the real-world physics by moving 1000 miles per hour and stopping in a matter of frames, we could blend in keyframe shapes to sculpt the performance of the capes.”

The animators could start with the base simulation, tweak and hand-animate whatever they needed to create the silhouette they wanted, and then blend back into a simulation.
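The sim-to-keyframe workflow Schleifer describes amounts to a per-frame weighted mix between simulated and sculpted cape shapes. This is a minimal sketch of that blending idea; the function names and ramp frames are invented for illustration and are not DreamWorks' tools.

```python
# Hypothetical sketch of blending a cloth simulation with sculpted
# keyframe shapes, as described for the cape rig. Not studio code.

def blend_cape(sim_points, key_points, weight):
    """Mix simulated and hand-sculpted vertex positions.

    weight = 0.0 -> pure simulation (real-world physics)
    weight = 1.0 -> pure sculpted keyframe shape
    """
    return [
        tuple(s + weight * (k - s) for s, k in zip(sp, kp))
        for sp, kp in zip(sim_points, key_points)
    ]

def blend_weight(frame, ramp_in=(10, 14), ramp_out=(20, 24)):
    """Ramp the sculpted shape in as the hero snaps to a stop,
    hold it, then release back to the simulation."""
    a0, a1 = ramp_in
    b0, b1 = ramp_out
    if frame < a0 or frame > b1:
        return 0.0
    if frame < a1:
        return (frame - a0) / (a1 - a0)
    if frame <= b0:
        return 1.0
    return 1.0 - (frame - b0) / (b1 - b0)
```

In use, the simulation runs untouched for most of the shot, and the sculpted silhouette only takes over for the few frames where real-world physics would break down.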

Mega Crowds

In addition to the five main characters, hundreds of background characters fill Metro City. “In some of the shots, we had crowds of over 70,000 people,” Schleifer says. “They are Megamind’s guiding rod within his character arc; the way the city responds to him helps define his arc through the film. So the crowds needed to feel special and unique.”


Metro Man is the only character who can fly through the enormous CG city and fall without flattening himself on the pavement. The crew constructed and textured buildings in the city procedurally and animated crowds using Massive software.

Thus, the studio decided to use Massive software for the first time to control the all-important crowds. Four animators created motion cycles so the individual Massive agents would cheer, run in terror, boo, stroll around, and react in other appropriate ways depending on the situation. A crowd team built the network of brains that triggered the cycles. “We had a huge number of motion cycles for the crowd,” Schleifer says. “The nice thing is that we can use the brains and cycles for future shows, which will save us a ton of time.”
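The "network of brains" that triggers cycles can be pictured as rule-based agents mapping a perceived situation to a motion cycle. The sketch below is a toy in that spirit; the rules, event names, and function are invented and do not reflect Massive's actual API.

```python
# Toy rule-based crowd agent in the spirit of Massive's "brains":
# each agent perceives the scene and triggers a motion cycle.
# Rules and names are illustrative, not Massive's interface.

def choose_cycle(agent, event):
    """Map a scene event to a motion cycle for one agent."""
    rules = [
        (lambda e: e == "villain_wins", "run_in_terror"),
        (lambda e: e == "hero_saves_day", "cheer"),
        (lambda e: e == "villain_gloats", "boo"),
    ]
    for condition, cycle in rules:
        if condition(event):
            return cycle
    return "stroll"  # default idle behavior

# Some shots held crowds of over 70,000 people.
crowd = [{"id": i} for i in range(70000)]
cycles = [choose_cycle(a, "hero_saves_day") for a in crowd[:3]]
```

Because the brains and cycles are data rather than per-shot animation, the same setup can be reused on future shows, as Schleifer notes.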

To provide an appropriately large stage for Megamind and the enormous crowds, the effects team built an elaborate city. “We had to use a matte painting to do a city for Madagascar,” McGrath says. “Now, we have a [3D] city the size of Chicago, with enormous overpasses, fire hydrants, and details like tar filling in the cracks in the streets. It’s a human-based world. It’s important that our characters live in a tangible, real world. There has to be jeopardy and real stakes. We don’t flatten characters if they fall. Metro Man is the only character who can fall and live.”

The environments fell under the purview of visual effects supervisor Philippe Denis, as did modeling, surfacing, effects, and lighting—everything except character previs, animation, and camera layout. “We tried to be smart working with the previsualization team, and they did a good job, but even so, I stopped counting after 75 environments and sets,” Denis says.

Mega Stereo

This is Phil “Captain 3D” McNally’s 10th stereo 3D movie and his sixth at DreamWorks this year, counting the Shrek conversions. We talked with him about how stereo 3D has progressed and how the crew used stereo for Megamind.

Where in the process did you begin working on the stereo version of Megamind?
We worked closely with Kent Seki, the director of previsualization, who was kind of like a cinematographer. We have superheroes flying about a city, so that was a great situation for maxing out the 3D. Kent did a great job of composing in a way that really uses depth and in thinking of ideas that will be spatially interesting.

Can you give us an example?
Megamind has an idea wall—a clothesline with bits of paper hung up on clothespins. There are so many things hanging in space that it gives you interesting spatial composition. It’s a simple idea, but strong. Also, Minion is a fish in a goldfish bowl. The bowl is the character’s head. It is an invisible wall with a transparent watery surface, which is interesting. You can really see the 3D in the refractions in the water.
Shiny things, shiny paint, shiny windows are great in 3D. When you ask someone about reflections in a mirror, they tend to think the mirror is 2D until they really look. Of course, it isn’t. The image is not on a surface. Stereoscopic imaging can really hold a lot of detail that might be distracting in 2D. That’s why filmmakers use shallow focus so much in 2D, to simplify the shot. So, reflections are rich spatial environments that we can use.

How, then, do you focus the audience’s eye in a richly detailed stereo image?
We use other things. Think of a theater environment. They use sound, lighting. If one person is talking, the focus is obvious. Maybe some people move and others don’t. We don’t have a problem knowing what to focus on in real life.

Did the stereo artists use your “Happy Ratio” software to set the stereo cameras?
Well, I trust myself by using tools with my opinion in them. The way to think of the Happy Ratio is that it’s the ratio between interaxial and convergence that makes me happy; the balance between comfort and a 3D effect and an emotional position. In the software, the camera can measure how far things are, so the artists position a plane to measure the nearest object. A calculation then runs in the background that drives the stereo attributes in a way that sets up the shot similarly to the way we have done shots like that on previous movies. It sets up the amount of roundness for a particular distance and lens, and positions the whole shot based on input from the scene.
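One way to picture the calculation McNally describes: measure the distance to the nearest object, put that object at the screen plane, and derive the interaxial separation from a fixed "happy" ratio, adjusted for the lens. The formulas below are assumptions for illustration only; the real tool's math is not public.

```python
# Illustrative "Happy Ratio"-style stereo rig setup. This sketch just
# shows the idea of driving interaxial and convergence from the
# measured nearest-object distance; the actual tool's math is unknown.

def stereo_settings(near_distance, focal_length, happy_ratio=30.0):
    """Return (interaxial, convergence) for a shot.

    near_distance : measured distance to the nearest object (scene units)
    focal_length  : lens focal length (mm); longer lenses flatten
                    depth, so widen the interaxial to compensate
    happy_ratio   : assumed fixed ratio of convergence distance to
                    interaxial separation (the "opinion in the tool")
    """
    convergence = near_distance  # place the near object at the screen plane
    interaxial = (convergence / happy_ratio) * (focal_length / 35.0)
    return interaxial, convergence

ia, conv = stereo_settings(near_distance=300.0, focal_length=35.0)
```

Since the settings derive entirely from measurable scene inputs, a first pass could run automatically across a sequence, which is exactly the "point and shoot for stereo" direction McNally mentions next.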

Are you planning to add to “Happy Ratio” in any way?
We’re looking at automating it further. We could almost run an automatic first pass on a sequence to give us more time to make creative decisions. Just like autofocus on a camera means you can concentrate on position. It would be like point and shoot for stereo.

How is the stereo pipeline set up at DreamWorks?
We have a pool of artists who work on camera and final layout; there might be 20 artists all working on the movie at one time. In addition to [the] camera [work], they prepare files for animation and set dressing. Each person who opens a shot to work on camera does the stereo settings. At final crunch, we tend to have a smaller group with an eye for stereo work on the shots. And at the end, I jump in. We’re at the stage where all the leads do not yet have full stereo skills and experience. So, I might partner with the final layout lead, or head of layout or previs, to make up the stereo gap. But, as the stereo experience increases, the artists take up more and more of that work.

How are you seeing the use of stereo changing?
We refine the craft every time we work on an animated film at DreamWorks. Megamind shows that we’ve got the craft dialed in. You’ll see not just neutral stereo that doesn’t hurt; you’ll see stereo smoothly integrated into the whole story. It’s an almost invisible contribution. The automated settings and the efficiency have given us more time to craft the 3D the way we want it. With the craft dialed in and the internal tools developed, everyone is confident about their choices. We can do half the movie automatically. So, on each show going forward, the artists can think of their style of stereo, what they want to do creatively. That’s the whole point.


Mega City

The city was the biggest environment, and likely the biggest ever built at DreamWorks; it had to accommodate superheroes flying overhead during action sequences. The crew constructed it entirely with 3D geometry. But, rather than modeling a few dozen buildings to replicate and place in various configurations, the artists decided to construct the buildings procedurally.



At top, animators learned that if they showed Megamind enjoying himself while conquering the city, he became more appealing, a necessary trait for a villain who stars in a film. At bottom, Megamind's lair was nearly as complicated an environment to create as was the city.

“We went with this approach because we were really concerned about rendering a city,” Denis says. “We had never built such a big set, and we wanted to be sure we could change it at will.” They wanted, for example, to easily allow the crew to shorten a tall building if the director noticed that it cast a shadow on a character.

For each block in the city, the crew first mapped large areas with buildings of a particular type and height. “We had to find the rules that make a city look like a city,” Denis says. “Cities have an organic aspect, but they’re organized. You don’t want too much variation.” The team based the maps for Metro City on Paris, creating, in effect, arrondissements (neighborhoods) that spiraled out from the center.

Jonathan Gibbs, who had been chief effects architect on Monsters vs. Aliens, supervised the city development, working with groups from the art department who helped define the design and architectural rules that created windows and other details in the right proportions for various building sizes and styles. “Then, some of the effects artists created a language to describe the buildings,” Denis says. “The advantage we had by going with the procedural approach is that we could manage the size of the buildings. We could decide to make a building wider or taller at any point in time. We could also manage the level of detail in the model, the surface, and the textures.” The procedural system worked so well that the crew ended up creating fewer hero buildings than planned.
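A "language to describe the buildings" suggests a small parametric grammar: a building is a set of parameters plus proportion rules, so resizing it regenerates windows and details automatically. The sketch below illustrates that idea; every rule and value in it is invented, not the studio's actual system.

```python
# Toy procedural building description in the spirit of the article's
# rule-based approach: changing width or height regenerates windows
# in proportion, so a building can be resized at any point in time.
# All rules and constants here are invented for illustration.

def build(width, height, style="brick"):
    floor_height = 4.0                      # assumed meters per floor
    window_width = 2.0 if style == "brick" else 3.0
    floors = max(1, int(height // floor_height))
    windows_per_floor = max(1, int(width // window_width))
    return {
        "style": style,
        "floors": floors,
        "windows": floors * windows_per_floor,
    }

tower = build(width=20.0, height=120.0)
# Director notices the tower casts a shadow on a character?
# Just rebuild with a new height -- no remodeling pass needed.
shorter = build(width=20.0, height=60.0)
```

This is why the procedural route made the city changeable at will: the model is the rules, not a fixed mesh, and level of detail can be derived from the same description.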

Because the procedural architecture had a consistent UV structure, the artists could switch textures at any time. “We could easily make a building that effects needed to blow up out of concrete,” Denis says.

For such demolition, the effects crew created a system based on the Blast Code plug-in for Maya. “It took nine months of preparation,” Denis says. “We knew we had a lot of destruction, and we wanted to be sure we had something ergonomic enough for the effects artists to use. We looked at a lot of footage of building demolition, really looking at it to see the response, the size of the detail.” The system works with texture maps to define the shattering.
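Using texture maps to define shattering can be sketched as a strength map over the wall: each texel stores how tough that region is, and an impact breaks every cell whose strength it exceeds. This toy version is invented for illustration and is not the Blast Code plug-in's actual interface.

```python
# Sketch of texture-driven shattering: a grayscale map painted over
# the wall decides how easily each cell breaks under a given blast.
# Invented for illustration; not the Blast Code plug-in's API.

def shatter(strength_map, impact_strength):
    """Return which cells of a wall break under a given impact.

    strength_map    : 2D list of values in [0, 1]; higher = tougher
    impact_strength : scalar force of the blast, on the same scale
    """
    return [
        [impact_strength > cell for cell in row]
        for row in strength_map
    ]

wall = [
    [0.2, 0.8, 0.3],
    [0.9, 0.4, 0.1],
]
broken = shatter(wall, impact_strength=0.5)
```

Painting the map lets artists art-direct where the fine debris concentrates, matching the size of detail they observed in real demolition footage.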

Shaders written in the studio’s proprietary rendering system managed the baked texture maps and created volumes inside the buildings. “We wanted to be sure the windows didn’t have mirror-like reflections,” Denis says. “We wanted to always see something inside, so for each window, we could have a code that defined the volume behind. As the camera moved, you’d have parallax and you’d see the city come alive. At night, we barely had to light the city because of the lights that are on in the interiors.”

Also, cars driven with Massive added lights—headlights, taillights—to the nighttime city. “Massive was very successful for the cars because it’s a procedural simulation, and that’s what traffic is all about,” Denis notes. “You define some rules and go. When we’re really far above, we just generate particles of light, but light is always moving in the city. It was something we were especially concerned about.”

Rather than trying to place hundreds of thousands of individual lights in the city, the lighting artists used point-based global illumination (PBGI) as a bounce element. “We could light the street and get light bouncing back from the buildings,” Denis explains. “It was expensive, so we rendered layers to manage it in compositing, which isn’t the way we usually work. But, it was interesting for the night sequence.” Although the studio relies on a proprietary compositor, the compositing artists worked with The Foundry’s Nuke to polish the images.
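In point-based approaches like the one Denis describes, lit surfaces are baked into a cloud of emitting points, and each shading point on a building gathers their contribution with distance falloff. The sketch below is a drastically simplified illustration of that gather, ignoring occlusion and point orientation; it is not the studio's renderer.

```python
# Minimal point-based bounce-light gather in the spirit of PBGI:
# the lit street is baked as emitting points, and each shading point
# on a building facade sums their contribution with falloff.
# Ignores occlusion and disk orientation; purely illustrative.

def gather_bounce(shade_point, cloud):
    """cloud: list of (position, radiance) baked from lit surfaces."""
    total = 0.0
    for pos, radiance in cloud:
        d2 = sum((a - b) ** 2 for a, b in zip(shade_point, pos))
        total += radiance / (1.0 + d2)   # softened inverse-square falloff
    return total

street = [((float(x), 0.0, 0.0), 1.0) for x in range(-5, 6)]  # lit street
wall_lo = gather_bounce((0.0, 2.0, 0.0), street)   # just above the street
wall_hi = gather_bounce((0.0, 20.0, 0.0), street)  # high on the building
```

The expense Denis mentions comes from evaluating such gathers at every shading point, which is why the team split the result into layers and rebalanced it in compositing.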

In addition to cloth simulation and environments, the artists on Denis’s team also created more typical effects—fire, dust, smoke, and other atmospherics. “We used a lot of 3D simulations and a lot of particles with 3D textures for detail,” Denis says. “For fire, we used a 3D volumetric simulation. The effects are almost characters, but this is not a cartoon. We really wanted the dramatic action sequences to be tangible, to feel big and dangerous. And, the best way to tackle that is to use a lot of fluid simulation to get the details.”

In addition to being able to turn the genre upside down, this type of realism, evidenced in movement and detail rather than in photorealism, is another reason why McGrath enjoys creating a superhero movie in CG rather than live action. “When you look at a live-action superhero movie, you kind of know when the effects turn to CG,” McGrath says. “But, when the entire world is CG, you never feel like you’ve switched to a new world. Characters can even do their own stunts. That’s really exciting.”

Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at BarbaraRR@comcast.net.

