Monsters of the Deep
Volume 32, Issue 3 (March 2009)


You’d need to have a mighty inventive mind to think of combining Mad Magazine with 1950s monster movies and the film The Dirty Dozen, but that’s exactly how two DreamWorks Animation directors started when they created the studio’s latest feature animation, Monsters vs Aliens.

Conrad Vernon, one of Shrek 2’s directors, had the idea of making a film told from monsters’ points of view albeit in Mad Magazine style. Rob Letterman, one of the Shark Tale directors, wanted to create an animated feature with a Dirty Dozen plot. Put the two ideas together, and you have a story in which an Army officer, General W.R. Monger (Kiefer Sutherland), releases a team of imprisoned monsters to fight an alien menace.

“We thought it was an interesting idea to give monsters personalities, hopes, and dreams, and take them down a comedic path,” says Vernon.

Fighting for Earth on the monster team are Dr. Cockroach, PhD (Hugh Laurie), Insectosaurus (Jimmy Kimmel), The Missing Link (Will Arnett), Ginormica (Reese Witherspoon), the former Susan Murphy from Modesto, California, now a 49-foot-11-inch-tall woman, and BOB the blob (Seth Rogen). On the other side is the alien overlord Gallaxhar (Rainn Wilson).

To create the look of the film, the directors worked with production designer David James and character designer Craig Kellman. “We put together material from old 1950s B-movies, horror flicks, and sci-fi films,” Vernon says. “DreamWorks gave us a budget to buy 150 films on DVD. So, everyone checked them out and watched the acting, lighting, and the cameras. But, our biggest influence was Jack Davis, the way he handled humans. They were caricatures, but you believed them in serious situations.” Davis, a Mad Magazine cartoonist, also drew for Tales from the Crypt.

The alien, in particular, represented an amalgam of these influences. “Craig drew 30 or 40 different designs,” Vernon says. “Sometimes we’d see brains through the skull, sometimes antennae, sequined disco shirts with belts, and squid-like legs. At one point, he had a sinister moustache, pointy beard, and sharp teeth.” They dumped the moustache, beard, and teeth, but kept the other attributes. “He has a disco shirt, ‘dealiebobs’ on his head, squid legs, and four eyes,” Vernon adds, “all the 1950s sci-fi attributes. But, he’s still quite unique to our film.”

Similarly, the monsters trace their origins to classic sci-fi films, but are unique to this one. Dr. Cockroach is the team’s homage to The Fly. “He has a huge head, giant eyes, antennae, elegant hands, and skinny feet,” says head of character animation Dave Burgess. “He looks like a toothpick with a giant olive on top.” Insectosaurus is a Godzilla-sized bug; he looks like a 350-pound fuzzy grub. The Missing Link is a cross between the Creature from the Black Lagoon and King Kong. And Susan Murphy is almost as tall as Nancy Fowler Archer in the 1958 film Attack of the 50 Foot Woman.

All these monsters are bipedal variations, although two have tails (Link and Insectosaurus) and Dr. Cockroach has antennae. BOB is another story. BOB is a cross between a character and a fluid simulation. Character technical director Terran Boylan worked for two years to create a rig that animators could use to control him and still have the creature look amorphous.


Susan Murphy, aka Ginormica, leaves a stream of particle system-created destruction in her wake as she waves good-bye to the alien robot chasing after her.

What About BOB?
“Our concept evolved over time,” Boylan says. “He started out as a CG blob with a face mapped on. As we progressed, he became a metaball-like character that acts, has a face with characteristics of a blob in certain circumstances, and a head and arms that can come off.”

But, Boylan still needed to give animators control over this amorphous character, which would have no consistency from frame to frame. “My throw-down to the character TDs was that I wanted the BOB rig to be so simple anyone could use it,” Burgess says. He and Boylan wanted to give the animators a traditional interface, one that was similar to the rigs they used to control other characters.

“The trick, if there is a trick, was to have two versions of BOB,” Boylan says, “one made of patches and another that’s an implicit surface made of blobs.”

The rig for the patch version, the one the animators used, looked like a series of rings with an eyeball in the middle. “He’s elaborate and flexible,” Burgess says. “By default, his arms are off, but we can pop them out when we need to and rotate his eye around his head. We could also turn his face on and get eye and mouth controls. Everything else was a simulation that shows up when he’s rendered.”

Dick Walsh, who received a Scientific and Technical Academy Award in 2002 for developing PDI’s facial animation system, rigged BOB’s face. “I was proud that we really challenged Dick [Walsh] to fulfill a lot of unusual requests for our movie,” Burgess says. “When you can tell him to give me a guy with one eye, a huge mouth, multiple chins, brow ripples that come and go, and to swivel the eye around the head, then he gets excited.”

In his blobby state, BOB was a consistent set of metaballs. “Imagine 30 flat metaballs that could separate running up and down a spine,” says Boylan. “His head can separate between his mouth and his eye, or under his mouth.” To create arms and fingers, the crew added curve primitives to the studio’s metaball system, which previously used ellipsoid primitives.
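
In code terms, a metaball field of this sort might look something like the following minimal sketch, which sums contributions from blob and segment-based curve primitives. Spheres stand in here for the ellipsoid primitives, and the falloff, threshold, and names are illustrative assumptions, not DreamWorks’ actual system.

    #include <cmath>
    #include <vector>

    // Illustrative implicit field summed from blob and curve primitives.
    struct Vec3 { double x, y, z; };

    static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3   sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

    // Smooth falloff: 1 at a primitive's core, 0 at radius r and beyond.
    static double falloff(double d, double r) {
        if (d >= r) return 0.0;
        double t = 1.0 - (d * d) / (r * r);
        return t * t;
    }

    struct Ball    { Vec3 c; double r; };      // classic metaball
    struct Capsule { Vec3 a, b; double r; };   // curve primitive: segment plus radius

    // Distance from p to segment ab, used by the curve primitive.
    static double distToSegment(const Vec3& p, const Vec3& a, const Vec3& b) {
        Vec3 ab = sub(b, a), ap = sub(p, a);
        double denom = dot(ab, ab);
        double t = denom > 0.0 ? dot(ap, ab) / denom : 0.0;
        t = t < 0.0 ? 0.0 : (t > 1.0 ? 1.0 : t);
        Vec3 q = {a.x + t * ab.x, a.y + t * ab.y, a.z + t * ab.z};
        Vec3 d = sub(p, q);
        return std::sqrt(dot(d, d));
    }

    // The surface is wherever this field crosses a chosen threshold; the
    // contributions overlap and merge, which is what keeps BOB amorphous.
    double field(const Vec3& p, const std::vector<Ball>& balls,
                 const std::vector<Capsule>& capsules) {
        double f = 0.0;
        for (const Ball& b : balls) {
            Vec3 d = sub(p, b.c);
            f += falloff(std::sqrt(dot(d, d)), b.r);
        }
        for (const Capsule& c : capsules)      // arms and fingers as curves
            f += falloff(distToSegment(p, c.a, c.b), c.r);
        return f;
    }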

Then, to bring the two versions together, the character TDs developed a system that mapped from the metaball version to the patch version, and used the patch version, that is, the motion system side, to warp the metaball version. In other words, it was like wrapping a lump of Jell-O with a net, and then pulling on the net to squash the Jell-O into specific shapes.
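
A drastically simplified sketch of that net-and-Jell-O idea: bind each metaball center to its nearest point on the rest-pose patch surface, then let the animated patch drag the centers along. All names are hypothetical, and a closest-vertex lookup stands in for whatever mapping the studio actually used.

    #include <cstddef>
    #include <vector>

    // Simplified "net over Jell-O": bind each metaball center to its nearest
    // rest-pose patch vertex, then let the animated patch drag it along.
    struct Vec3 { double x, y, z; };

    static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3 add(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    static double len2(const Vec3& v) { return v.x*v.x + v.y*v.y + v.z*v.z; }

    struct Binding { std::size_t vertex; Vec3 offset; };   // patch vertex plus rest offset

    // Bind once, against the rest pose.
    std::vector<Binding> bindToPatch(const std::vector<Vec3>& blobCenters,
                                     const std::vector<Vec3>& restPatch) {
        std::vector<Binding> bindings;
        for (const Vec3& c : blobCenters) {
            std::size_t best = 0;
            double bestD = len2(sub(c, restPatch[0]));
            for (std::size_t i = 1; i < restPatch.size(); ++i) {
                double d = len2(sub(c, restPatch[i]));
                if (d < bestD) { bestD = d; best = i; }
            }
            bindings.push_back({best, sub(c, restPatch[best])});
        }
        return bindings;
    }

    // Every frame, the animated patch (the "net") squashes and pulls the blobs.
    std::vector<Vec3> warpBlobs(const std::vector<Binding>& bindings,
                                const std::vector<Vec3>& animatedPatch) {
        std::vector<Vec3> centers;
        for (const Binding& b : bindings)
            centers.push_back(add(animatedPatch[b.vertex], b.offset));
        return centers;
    }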


BOB has popped his arms out of his amorphous body to make a point, thanks to a sophisticated rigging system that gave animators control over his blobby form. Behind him, The Missing Link and Dr. Cockroach use more traditional bipedal rigs.

“It all comes back to having two different representations and blending between them,” Boylan points out. “We blend between one patch and one tessellated surface; the patch drives the tessellated isosurface.”

In practice, BOB is always both versions at the same time—both versions plus one. A second patch version made it possible for BOB to lose his head and rotate his eye. “In order to make that gag work, we needed two different patch versions that work in concert,” Boylan explains. One goes with the head; one stays behind.

Similarly, a surface patch version of the eyeball model can orbit BOB’s head. The system blends between the patch surfaces for the eyeball and the head.

The animators saw a simple version of BOB with rigid polygons parented to joints, but they could dial in more details as they wished. “One of the true challenges from the production side was how to make a character with an implicit surface interactive enough for animators,” Boylan says. “It took a lot of special optimization code written in C++ and a lot of work from R&D, as well, to get that performance. But, the animators could go all the way from a stand-in version in one-tenth of a second to a final version that took two or three minutes.”

For BOB’s face, animators could work with a surface representation, but as soon as the character needed to be blobby, they switched the view. Special deformers represented BOB’s contact with the ground, but when BOB touched something, animators needed to work with the higher-resolution versions. In addition, because bubbles inside the blob added to BOB’s performance, animators could use the rig to control a bubble simulation.

“One challenge was making sure all his gags played well together,” Boylan says. “At one point, I was describing how BOB worked to our lighting supervisor, and he said, ‘You know, you guys could have said no.’ But, we didn’t want to say no.”

Blob Lighting
As it turned out, creating a patch version of BOB provided advantages beyond animation. “One of the things that characterizes BOB from a CG standpoint is that we have a polygonal mesh made of triangles that can change in every frame, and there’s no consistency from frame to frame,” Boylan says. “That becomes a problem for lighting. So, for purposes of lighting, we had a mapping that occurred between the implicit surface and the reference model.” Helping that along was a new polygonal model format developed internally that allowed the technical artists to specify arbitrary attributes at a vertex level.
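
A minimal sketch of what such a format might look like, with arbitrary named attributes stored per vertex; the field names and layout here are assumptions, not the studio’s internal format.

    #include <cstddef>
    #include <map>
    #include <string>
    #include <vector>

    // Minimal sketch of a polygonal mesh that carries arbitrary, named
    // per-vertex attributes.
    struct PolyMesh {
        std::vector<double> positions;            // xyz triples, one per vertex
        std::vector<int>    faceVertexCounts;     // 3 for a triangle, 4 for a quad...
        std::vector<int>    faceVertexIndices;    // flattened index list

        // Arbitrary per-vertex data: attribute name -> tupleSize values per vertex.
        std::map<std::string, std::vector<double>> vertexAttributes;

        std::size_t vertexCount() const { return positions.size() / 3; }

        void addVertexAttribute(const std::string& name, std::size_t tupleSize) {
            vertexAttributes[name].assign(vertexCount() * tupleSize, 0.0);
        }
    };

    // Usage idea: store normals mapped over from the stable reference model so
    // the ever-changing triangle mesh still shades consistently.
    //   PolyMesh bob = ...;                               // per-frame mesh
    //   bob.addVertexAttribute("referenceNormal", 3);     // hypothetical attribute
    //   bob.vertexAttributes["referenceNormal"][3 * v + 0] = nx;  // then ny, nz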


Stereo 3D helped the directors emphasize the size difference between the 49-foot-11-inch-tall Ginormica and the normal-sized General W.R. Monger because they could push her deep into the screen.

At first, the artists wrote a special shader to transfer normals at render time from the patch surface to the polygonal version of the model, but the result had artifacts that caused “chattering.”

“We could smooth them out and reduce them, but we couldn’t get them to go away,” Boylan says. “So, we decided to steal the normals that worked in the patch model and apply them to the polygonal model.”
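
Conceptually, “stealing” the normals is a transfer pass: for each vertex of the frame’s polygonal mesh, look up the corresponding point on the stable patch model and copy its normal. Here is a brute-force sketch of that idea, with a closest-vertex lookup standing in for a true closest-point query and all names my own.

    #include <cstddef>
    #include <vector>

    // Brute-force normal "stealing": each vertex of the per-frame polygonal
    // mesh borrows the normal of the nearest patch-model vertex, so shading
    // stays stable even though the triangulation changes every frame.
    struct Vec3 { double x, y, z; };

    static double dist2(const Vec3& a, const Vec3& b) {
        double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return dx*dx + dy*dy + dz*dz;
    }

    std::vector<Vec3> transferNormals(const std::vector<Vec3>& polyVerts,
                                      const std::vector<Vec3>& patchVerts,
                                      const std::vector<Vec3>& patchNormals) {
        std::vector<Vec3> result(polyVerts.size());
        if (patchVerts.empty()) return result;
        for (std::size_t i = 0; i < polyVerts.size(); ++i) {
            std::size_t best = 0;
            double bestD = dist2(polyVerts[i], patchVerts[0]);
            for (std::size_t j = 1; j < patchVerts.size(); ++j) {
                double d = dist2(polyVerts[i], patchVerts[j]);
                if (d < bestD) { bestD = d; best = j; }
            }
            result[i] = patchNormals[best];    // the "stolen" normal
        }
        return result;
    }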

To sell the idea of a liquid, the lighting team used refraction. Refraction and reflections, it turned out, became an interesting predicament once viewed in stereo 3D.

“You can render specular and reflections correctly, but sometimes when you get them on the screen, given current projection technology, they can read differently in the two eyes and create a shimmer,” says Ken Bielenberg, visual effects supervisor. “It’s physically correct, but it’s not pleasing.” So, rather than calculating only the true reflections and specular for the two eyes, the CG artists also simulated the result from a center camera, a version they could substitute for both the left and right cameras (eyes).

“It has the effect of changing the depth,” Bielenberg says of the simulation. “In the extreme, you see a reflection mapped on something, instead of seeing the depth. However, we have a creative knob so we can dial between true and fake reflections. We almost always use true reflection. But BOB has such variation in his shape, as well as lumps and blobs, that the reflections and specular differences were extreme between the two eyes. So, we found a happy medium.”
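
That happy medium can be pictured as a per-shot blend between each eye’s true reflection and one shared reflection computed from a camera midway between the eyes. The sketch below is an interpretation of that idea, not the studio’s shader code.

    // Blend each eye's true reflection with one computed from a center camera,
    // so strong left/right differences that shimmer on screen can be dialed
    // down per shot. The lerp and the names are assumptions.
    struct Color { float r, g, b; };

    static Color lerp(const Color& a, const Color& b, float t) {
        return { a.r + (b.r - a.r) * t,
                 a.g + (b.g - a.g) * t,
                 a.b + (b.b - a.b) * t };
    }

    // trueReflection:   computed from this eye's camera (left or right)
    // centerReflection: computed once from the center camera
    // fakeness:         0 = physically correct, 1 = identical in both eyes
    Color reflectionForEye(const Color& trueReflection,
                           const Color& centerReflection,
                           float fakeness) {
        return lerp(trueReflection, centerReflection, fakeness);
    }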

In the Deep
The crew was approximately three months into development when Jeffrey Katzenberg, DreamWorks’ CEO, decided that Monsters vs Aliens would be the studio’s first film authored in stereo 3D from the beginning.

“Jeffrey said, ‘We’ll show you what it entails,’” Vernon relays. “And then the studio gave us 300-page books with a lot of math. I became an artist because I don’t understand this stuff.”

Phil (Captain 3D) McNally jumped to the rescue. “He took us into the theater and broke it down for us,” Vernon says. McNally explained that the more the cameras converge, the less the 3D effect, and the less they converge, the more the 3D effect. Using a laser pointer, he showed the directors where objects were in relation to the floating stereo window frame. 
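
McNally’s rule of thumb follows from textbook stereo-camera geometry: screen parallax is zero at the convergence distance (the stereo window), grows behind it, flips sign in front of it, and its overall spread shrinks as the cameras converge more. A small sketch of that relationship, with invented numbers and no connection to DreamWorks’ camera rig:

    #include <cstdio>

    // Screen parallax for off-axis stereo cameras: zero at the convergence
    // distance (the stereo window), positive behind it (into the screen),
    // negative in front of it (out toward the audience).
    double screenParallax(double interaxial,           // camera separation (scene units)
                          double convergenceDistance,  // distance to the stereo window
                          double objectDistance) {     // distance to the object
        return interaxial * (1.0 - convergenceDistance / objectDistance);
    }

    int main() {
        std::printf("%f\n", screenParallax(0.065, 20.0, 20.0));  // 0: sits on the window
        std::printf("%f\n", screenParallax(0.065, 20.0, 80.0));  // positive: reads behind the screen
        std::printf("%f\n", screenParallax(0.065, 20.0, 10.0));  // negative: pokes out of the screen
        return 0;
    }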

“He taught us all the tricks of the trade,” Vernon notes. “We learned that 3D wasn’t just about poking things out of the screen.”


Stereo 3D forced the effects department to throw comp’ing tricks such as 2D heat-wave distortions and 2D glows overboard and instead rely almost entirely on 3D CG and volumetric rendering, as it did to create this light beam.

The directors decided to use the depth to bring the audience into the screen as much as possible. “We started looking at 3D in our layout pass when we had really rough, blocky shapes and forms,” Vernon says. “If anything hurt our eyes or overtly popped out, we’d take it out. First, we made the composition and camera nice, and then pushed 3D later on. We constantly looked at each shot, each sequence, and then the entire movie in 3D to make sure we used 3D to its full potential without calling attention to itself. We cherry-picked the places where we could poke out from the screen.”

Even though the directors used 3D with a deft touch, Vernon believes that in many ways this is a perfect film for stereo 3D. For example, stereo 3D helped dramatize the characters’ size—especially that of Susan/Ginormica, the almost 50-foot-tall woman. “We could actually push far back, way into the screen, to get that depth,” Vernon explains.

Similarly, when General Monger buzzes around Susan/Ginormica as he leads her through the monster prison, pulling him out into the audience and pushing her deep into the screen enhanced the size difference between the two. But while they were determined to keep the stereo 3D effects subtle, the directors did embrace a few possibilities for what Vernon calls the “boo” factor. For example, when Insectosaurus shoots silk out his nose, it zooms right out into the audience. 

Managing the Effects
Working in stereo 3D affected all the departments. The layout department had to think about camera setups that maximized the effect, but that also worked in 2D. Animators needed to be careful about eye lines and about positioning characters that touched something. “We all had a constant dialog as we looked for what our stereo sensibility is,” says Bielenberg, who notes that stereo 3D challenged the creative side of effects more than other films had.

“We designed the effects to have the maximum impact in stereo,” Bielenberg explains. “We’d look for opportunities—lightning bolts, smoke, and sparks—where the story motivated the effects and the audience was so immersed in the scene that the effects didn’t take them out of the picture. If they were too front and center, or felt too deliberate, we’d pull them back. We wanted the effects to make the story more dramatic. This wasn’t a movie about stereo, like a theme-park ride.”

Even so, Bielenberg singles out two great stereo 3D effects moments: During first contact, when the military is attacking the alien robot, bits of debris with streaming smoke behind them fall into the audience space. And in a later sequence, when Susan dives to the bottom of a ship with all the monster characters, she and the characters come off screen. “When the ship explodes, they are silhouetted against the explosion and all the sparks behind them,” he says. 

To create these effects, the crew needed to work almost entirely in 3D CG. “Comp’ing tricks didn’t hold up,” Bielenberg says. “In the past, for example, we could do heat-wave distortions in 2D, but in stereo, they wouldn’t have the right depth. Sometimes, if we matted correctly, we could do 2D glows, but usually we had to put in 3D beams and use our volumetric renderer.”

In addition to the challenges of stereo 3D, the film’s scope taxed the studio: it was the largest effects film DreamWorks Animation had yet done. Typically, approximately 20 people join an effects crew for an animated feature. For this film, Bielenberg supervised 45 effects artists and nearly 80 lighting artists.

“We had a lot of effects sequences,” Bielenberg says. “Destruction was big. A chase in San Francisco had a lot of building destruction. We destroyed the Golden Gate Bridge. We also had gunfire and explosions.”

For particle effects, the studio used a combination of Autodesk’s Maya and proprietary tools. For destruction, they used Maya plug-ins from Blast Code for the first time. And, for fluid simulations, the effects team relied on the studio’s award-winning proprietary software originally developed for Antz. “Scope and creativity were the challenges, not new software,” Bielenberg says.

In terms of the character effects, other than developing the right reflections for BOB, Susan’s hair provided the most interesting problems. “Getting the right styling and lighting for her white hair was a massive effort,” Bielenberg says.


Reflections such as those in Dr. Cockroach’s eyes caused the studio to create new tools to keep them from shimmering in stereo 3D.


Jeffrey “JJ” Jay, one of the character TDs who worked on setting up the controls to move Susan’s hair, explains: “I don’t think we had done hair dynamics at that scale before. It wasn’t a major issue. We didn’t have to create new technology; we could use our proprietary system. It was more a matter of finding settings that worked.”

Jay also worked on rigging for “generic woman A,” who appeared in crowds as secretaries in the war room, as soldiers, and so forth. “They’re generally off in the distance and don’t have speaking roles,” he says. Jay, who has moved on to become the character TD supervisor for Shrek Goes Fourth, notes that the studio incorporated new rigging techniques for a few characters in this film. “We like to fold a bit of the new technology in to prove it out and see how it works,” he notes. “Ideally, animators would like to animate in real time, to scrub and see characters move. We’re on that path.”

For clothes, the studio again primarily used the same tools as usual—Maya Cloth and Syflex software—but this time, they folded in a Maya plug-in, Kualify, from a Korean company. “We really wanted to raise the bar with simulated clothing,” Bielenberg says. “In the past, we were selective about how much we simulated and how much we rigged and animated procedurally. This time, we pushed pretty much all the clothing through the simulation pipeline, even the crowd characters.”

Light at the End of the Tunnel
Of all the groups on the pipeline, the lighters and CG supervisors probably felt the greatest impact from working in stereo 3D. “There were twice the number of frames to render, and there was zero tolerance for anything broken in the rendering,” Bielenberg says. “A shadow that worked in the left eye might not work in the right. An incorrect comp’ing order that might have worked in 2D didn’t work in 3D. The attention to detail hit the lighters and CG supervisors the most, and they had double the amount of work.”

To help at the very end of the process, the studio used Autodesk’s Lustre for digital color correction. “We corrected the color in mono in Lustre, then copied to the other eye, looked at it in stereo, and then did a pass to brighten and warm the shots,” Bielenberg says. “The [stereo 3D] eyeglasses add a green tint that darkens the image.”
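
As a rough illustration only, that brighten-and-warm compensation amounts to boosting exposure and pulling back the green cast. The gain values below are invented; the real correction was graded by eye in Lustre.

    // Rough illustration of a brighten-and-warm pass: boost overall exposure
    // and counter the green tint so the image reads correctly through the
    // darkening, green-tinted stereo glasses. Gains are hypothetical.
    struct Color { float r, g, b; };

    Color compensateForGlasses(Color in) {
        const float exposureGain = 1.15f;  // brighten (invented amount)
        const float redGain      = 1.05f;  // warm the image slightly ...
        const float greenGain    = 0.97f;  // ... and pull back the green cast
        const float blueGain     = 1.00f;
        return { in.r * exposureGain * redGain,
                 in.g * exposureGain * greenGain,
                 in.b * exposureGain * blueGain };
    }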

Besides darkening the image, stereo 3D has a second side effect: ghosting, an outline that sometimes appears around objects and characters. This happens most often when a bright object lies over a dark background. To fix ghosting problems, the studio devised proprietary contrast-reduction software. “One of our PhDs wrote software that evaluates the difference between the two images,” Bielenberg says. “We can dial in the amount of contrast reduction we want. If that doesn’t work, we go back to Lustre and reduce contrast or shift lighting. When a planet was ghosting, we put a glow around it. We’ll use whatever tools we can.”
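
The contrast-reduction tool, as described, evaluates the difference between the left and right images and dials the contrast down where ghosting is likely. Here is a simplified per-pixel sketch of that idea; the luminance weighting and the blend toward the mean are assumptions, not the studio’s algorithm.

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Where the left and right images differ strongly (a bright object over a
    // dark background in one eye), pull both pixels toward their mean by a
    // dialable amount. Assumes the two images are the same size.
    struct Pixel { float r, g, b; };

    static float luminance(const Pixel& p) {
        return 0.2126f * p.r + 0.7152f * p.g + 0.0722f * p.b;
    }

    static Pixel towardMean(const Pixel& p, const Pixel& mean, float t) {
        return { p.r + (mean.r - p.r) * t,
                 p.g + (mean.g - p.g) * t,
                 p.b + (mean.b - p.b) * t };
    }

    // amount: the artist-dialed contrast reduction, 0 (off) to 1 (full).
    void reduceGhosting(std::vector<Pixel>& left, std::vector<Pixel>& right,
                        float amount) {
        for (std::size_t i = 0; i < left.size(); ++i) {
            float diff = std::fabs(luminance(left[i]) - luminance(right[i]));
            float t = std::min(1.0f, diff) * amount;   // bigger difference, more reduction
            Pixel mean = { 0.5f * (left[i].r + right[i].r),
                           0.5f * (left[i].g + right[i].g),
                           0.5f * (left[i].b + right[i].b) };
            left[i]  = towardMean(left[i],  mean, t);
            right[i] = towardMean(right[i], mean, t);
        }
    }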

During a San Francisco prescreening for Monsters vs Aliens, Katzenberg said, “This is not my father’s 3D. Until now, people made films in 2D and converted them into 3D. That’s analogous to shooting in black and white and colorizing. When you author in 3D, you start to enter a new, creative world. Because filmmakers have to think about dimension in every shot, the film is more cinematic. But, it’s as if we’d been speaking English and woke up and had to speak Russian.”

Fortunately, the inventive minds on the production crew were more than up to the challenge for the comedy/action-adventure Monsters vs Aliens, the first DreamWorks animated feature authored in the new language.

“Jeffrey is out there touting the importance of 3D,” Bielenberg says. “We had to make sure the movie had the right stereo impact, that it didn’t feel deliberate, and that it was really story-driven.” And that, you can be sure, was no joke.

Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at BarbaraRR@comcast.net.