Poetic Justice
Volume 31, Issue 3 (March 2008)

On the fifteenth of May, in the jungle of Nool,
In the heat of the day, in the cool of the pool,
He was splashing…enjoying the jungle’s great joys…
When Horton the elephant heard a small noise.

So begins the 1954 Dr. Seuss classic Horton Hears a Who! The small noise, of course, comes from a speck of dust blowing through the air. The dust carries the entire microscopic city of Who-ville and its inhabitants, and the Mayor of Who-ville is yelping for help. In the story, Horton, who can hear the Whos but not see them, rescues the tiny creatures and protects them, despite the efforts of a disbelieving kangaroo, some jungle monkeys, and a black-bottomed eagle. Because, as Horton says, “…a person’s a person, no matter how small.”

As with all his stories, Dr. Seuss—the pen name for Theodor Seuss Geisel—illustrated this book with fanciful pen-and-ink drawings of outlandish animals and fantasy creatures, inventive contraptions, and slightly surreal backgrounds. Over time, the drawings have come to life as short animated films, and former Looney Tunes animator Chuck Jones turned one, How the Grinch Stole Christmas, into a TV special.

But, while Dr. Seuss stories became live-action films, no one had attempted to interpret his style with 3D computer graphics until now. Twentieth Century Fox’s Horton Hears a Who!, directed by Jimmy Hayward and Steve Martino, and created at Blue Sky Studios, is the first.

Blue Sky Studios, a wholly owned unit of Fox Filmed Entertainment, has been at the forefront of 3D computer graphics since several former Tron crew members founded the company in 1987. The studio’s short film Bunny won an Oscar in 1999, and Ice Age, Blue Sky’s first feature, received an Academy Award nomination in 2003, as did two shorts, Gone Nutty (2004) and No Time for Nuts (2007). The Looney Tunes type of animation used for the character Scrat, who starred in the recent shorts, and the rendering style used for Bunny would both come into play in Horton to create characters, sets, props, and effects in Dr. Seuss’s style.

“One of the really difficult things, and I think we did it successfully, was to stay true to Theodor Geisel’s vision,” says Carl Ludwig, vice president and chief technology officer at Blue Sky Studios.

Elastic Models

Sixteen modelers under the supervision of Dave Mei worked on sculpting characters and environments in Autodesk’s Maya to capture the Dr. Seuss world in three dimensions. It was a world in which characters assumed extreme poses and environments had no 90-degree angles.

Work began with the two hero characters, Horton (Jim Carrey) and the Mayor (Steve Carrell). “The Mayor got the most attention,” says Mei. “We based the entire look of the Who world on that asset, so he went through many iterations. Horton, too. They both took a solid year of modeling and sculpting development.”

Once modelers finalized the Mayor’s look, they created six unique council members. “Some were short with no neck, and some were lanky,” says Mei. “But, for the most part, the Whos were pretty much the same.”


Animators used a 2D style of animation based on isolated movement in which, for example, they would pose a character and then move only the arm or a hand.
 
To create the rest of the Whos, the modelers worked from a dozen “daughter” models and four generic male and female models to spawn 96 daughters and thousands of Who-ville citizens. They used rigging controls to sculpt variations in geometry and relied on materials and fur to create more variation. “We had thousands of Whos in the movie,” Mei says, “but probably only 30 or 40 Who assets on the system.”

Despite his size, Horton the elephant—like all the characters in the film, large and small—needed to be light on his feet, balletic, and funny. In addition, he needed to function as a quadruped and a biped, and his trunk and his ears needed to contribute to the performance. “His trunk always held the clover, so his ears became his hands,” Mei describes. “His smiles were huge; his trunk inflated.” Similarly, the other main jungle of Nool characters—a kangaroo with a baby in her pouch, monkeys, and a black-bottomed eagle named Vlad—needed to adopt extreme poses to match Dr. Seuss’s drawings.

Thus, modelers paid close attention to the characters’ topology. “The three-dimensional meshes needed to do more than we’ve ever done before,” Mei says. “The topology had to be light; just enough to do what it had to do.”

Moreover, because the rigging team had to set up so many characters, they devised a new, component-based rigging system, which affected the models. “Because of the interchangeable parts, we had to be strict in the way we set up our rows so they could apply the rigs efficiently,” Mei explains.

Some of the same rigging components used for the characters appeared in the buildings as well, but the main difficulty for modelers constructing the sets and props was in adapting to the style. Mei describes the buildings as balloons filled with pudding. “It was really challenging,” he says. “Everything is limpy, blobby, sexy, and cool. Nothing has a corner. Every building, every vase, every plant is its own soft sculpture.” To help sculpt these shapes more quickly, modelers often started with Pixologic’s ZBrush or Autodesk’s Mudbox, and then moved the models into Maya to refine them and add animation rigs.

Drawing with Rigs

During much of the story, Who-ville is on the move, carried by one character or another. “Pretty much everything in this movie moved or deformed,” Mei says. “Who-ville is on a clover, so it’s getting jostled all the time; the buildings sway with Horton’s physics. So we had to tweak the models to make them friendlier for the rigs. The buildings weren’t rigid boxes. They’d have some poetry in them, some squash and stretch, as they left the ground and landed again.”

The characters, however, demanded the most intricate rigs. “I think people will be blown away by what these characters can do,” says Steve Unterfranz, rigging supervisor. “The animation is very 2D-driven. They could draw whatever they wanted frame by frame. It was more than a milestone for us.” Animators, that is, could position the characters in any frame in Seussian poses, as if they were drawing them by hand.

For facial animation, the riggers developed a system based on blendshapes, knowing that a muscle-based system wouldn’t handle the broad range of expressions. But for the characters’ bodies, a large number of characters needed to be fitted with rigs that could accommodate the extreme poses, so the rigging team built a new system. “We nailed down the functions animators always needed and packaged them into modules that we could assemble on a sort of template level,” Unterfranz says.
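A blendshape system of this kind reduces to a simple rule: each expression target stores per-vertex offsets from the neutral face, and the animated face is the neutral pose plus a weighted sum of those offsets. The C++ sketch below illustrates only that rule; the structure and names are generic, not Blue Sky’s code.

    #include <cstddef>
    #include <vector>

    // Minimal blendshape evaluator, an illustrative sketch rather than
    // Blue Sky's production code. Each expression target stores
    // per-vertex deltas from the neutral face; the animated face is the
    // neutral pose plus the weighted sum of all active target deltas.
    struct Vec3 { float x, y, z; };

    struct BlendshapeRig {
        std::vector<Vec3> neutral;                    // neutral face vertices
        std::vector<std::vector<Vec3>> targetDeltas;  // one delta set per expression
        std::vector<float> weights;                   // animator-driven, typically 0..1

        std::vector<Vec3> evaluate() const {
            std::vector<Vec3> out = neutral;
            for (std::size_t t = 0; t < targetDeltas.size(); ++t) {
                float w = weights[t];
                if (w == 0.0f) continue;              // skip inactive expressions
                for (std::size_t v = 0; v < out.size(); ++v) {
                    out[v].x += w * targetDeltas[t][v].x;
                    out[v].y += w * targetDeltas[t][v].y;
                    out[v].z += w * targetDeltas[t][v].z;
                }
            }
            return out;
        }
    };

The broad Seussian range then becomes a question of how many targets the riggers author and how freely the animators can mix the weights.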


Vlad, the black-bottomed eagle, flies over clover fields grown with Blue Sky’s fur technology.
 
The procedural rigging system, which Erik Malvarez and Scotty Sharp developed, had three components: a script that automated the installation of a rig, whether it used IK, FK, or some mixture; a node that represented the rig; and an interface to set options for that part of the character. “You just put the nodes together to create a blueprint,” Unterfranz says. Animators could then select modules from a library of options that applied specific controls to the blueprint rig.
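In outline, that blueprint idea maps onto a small amount of code: data nodes describe each component, and an installer script walks the blueprint and builds the corresponding rigs. The C++ sketch below is a schematic reading of that description; every type and function name in it is invented for illustration.

    #include <iostream>
    #include <string>
    #include <vector>

    // Schematic sketch of a component-based rigging pipeline. All names
    // are hypothetical; the real system ran inside Blue Sky's own tools.
    enum class SolverType { IK, FK, Blended };

    struct RigNode {                 // a node representing one rig component
        std::string name;            // e.g., "leftArm"
        SolverType solver;           // chosen through the options interface
        int jointCount;
    };

    struct Blueprint {               // nodes wired together describe a character
        std::vector<RigNode> nodes;
    };

    // The "script" component: walk the blueprint and install each rig.
    void installRigs(const Blueprint& bp) {
        for (const RigNode& n : bp.nodes) {
            const char* s = n.solver == SolverType::IK ? "IK"
                          : n.solver == SolverType::FK ? "FK" : "IK/FK blend";
            std::cout << "Building " << n.jointCount << "-joint " << s
                      << " rig for component '" << n.name << "'\n";
        }
    }

    int main() {
        Blueprint who;
        who.nodes = { {"spine", SolverType::FK, 6},
                      {"leftArm", SolverType::Blended, 4} };
        installRigs(who);  // assembling from modules, not rigging by hand
    }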

One module named the “bendo,” for example, provided extra layers of controls with which animators could twist a Who’s arm into a pretzel. “We used it for most characters, not just Whos,” says Mike Thurmeier, senior supervising animator. “The bendo is a traditional rig with controls built in to pull geometry off the rig; it gave the animators the ability to create any shape. It was so versatile that we could tie it into a knot.” That versatility meant the riggers and animators used bendo beyond posing limbs: The riggers even put a bendo rig on the edge of the kangaroo’s pouch.

“These components are more about functionality than the parts of the character,” Unterfranz states. “We built them for a purpose; we got away from naming them after parts of the body.” Thus, the same rig built initially for a shoulder became useful in a hip and under Horton’s ear. Another rig component that found its way into Horton’s tail also moved a clover stem.

“Once we had a library of modules, we could rig something that might have taken six weeks to four months by hand in just under six minutes,” Unterfranz says. Moreover, when a component changed, the team could easily distribute the fixes or features all at one time to update many characters simultaneously.

Boundless Performance

While the modelers and riggers perfected the characters’ shapes and animation controls, the animation team worked on developing the performance styles. During production, the team of approximately 40 animators grew to nearly 70.

“It’s so complex,” says Thurmeier. “This type of animation has no boundaries, especially in Who-ville. We had a couple philosophies when we started, but we all leaned toward the graphic style of Chuck Jones. It’s a fun style.”

For reference, the animators plastered the walls with pages photocopied from every Dr. Seuss book, watched Looney Tunes cartoons, and, especially, Chuck Jones’ Grinch. “There’s a feel to how his team of animators animated the characters,” says Thurmeier. “The posture and posing is very particular. So, we came up with a philosophy based on isolated movement.” Animators would hit a pose for a character, and then animate only the arm, or maybe a hand or the face. “We used that 2D style of held poses,” Thurmeier says. “We tried to hit familiar poses from the book.”

Sometimes, in fact, the animators lifted entire scenes from the book. For example, animators tried to match Geisel’s drawings of Whos banging pots and pans to be heard, and the iconic illustration of Horton looking through the clover.

“It was good to get the weight and motion right, but as long as something was entertaining, the physics didn’t have to look real,” Thurmeier notes.

Artistic Effects

The effects team needed to match the animation style using cartoon physics, as well, to create water, clouds, rain, fire, dandelion seeds exploding into the air, fields of clover, smoke, dust, flapping flags, a waterfall, and so forth. “We wanted the effects to look like the drawings in the book,” says Kirk Garfield, effects supervisor.

For fluids, the team used a combination of the studio’s proprietary tools, Next Limit’s RealFlow, Maya particles, and hand animation. RealFlow helped create 2D simulations for the water surface in the beginning scenes when Horton is in the cool of the pool in the jungle of Nool. “For the iconic shot of Horton spraying water out his trunk, we had to cheat the forces,” Garfield points out. “We changed the gravity angle to direct the water into the shapes we wanted, and used custom wind forces.”
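Force cheats like these are easy to picture in code: instead of world-space gravity, the solver integrates a rotated gravity vector plus an art-directed wind term. The toy C++ step below illustrates only the idea; it is not RealFlow’s interface.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Toy particle step showing "cheated" forces: gravity tilted off
    // vertical to steer water into an art-directed shape, plus a custom
    // wind term. Illustrative only; this is not RealFlow's API.
    struct Vec3 { float x, y, z; };

    Vec3 tiltedGravity(float angleRad) {
        // Rotate the usual (0, -9.8, 0) gravity about the z axis.
        return { 9.8f * std::sin(angleRad), -9.8f * std::cos(angleRad), 0.0f };
    }

    void stepParticles(std::vector<Vec3>& pos, std::vector<Vec3>& vel,
                       const Vec3& gravity, const Vec3& wind, float dt) {
        for (std::size_t i = 0; i < pos.size(); ++i) {
            vel[i].x += (gravity.x + wind.x) * dt;
            vel[i].y += (gravity.y + wind.y) * dt;
            vel[i].z += (gravity.z + wind.z) * dt;
            pos[i].x += vel[i].x * dt;
            pos[i].y += vel[i].y * dt;
            pos[i].z += vel[i].z * dt;
        }
    }

    // e.g., stepParticles(pos, vel, tiltedGravity(0.35f), {2.0f, 0.0f, 0.0f}, 1.0f / 24.0f);

Tilting the gravity vector a few degrees is often all it takes to steer a spray toward a drawn shape.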


Instancing techniques in the renderer helped the team populate a variety of trees over a wide landscape. The artists could still edit the procedurally generated points by hand.
 
3D simulations in RealFlow also poured coffee out of cups and sloshed water in fish bowls. But, when the artists couldn’t tweak the simulation enough to, for example, help a character cry or spray water, they hand-animated the fluids using spheres in Maya.

For the waterfall, though, programmers wrote a custom particle generator in C++ based on new particle replication technology. The new technology helped them avoid creating the puffs and splotches inherent in particle replication that fills spaces uniformly.

“We created a procedure that added complexity and details to the particle simulation at render time by intelligently interpolating extra particles between existing ones to fill in the gaps,” Garfield explains. “We actually look at how far the particles are from each other and the direction they’re moving to fill in particles.” A point-particle rendering system handled the resulting volume.
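A naive version of that rule fits in a page: for each pair of nearby particles traveling in roughly the same direction, insert an interpolated particle between them at render time. The C++ fragment below is an O(n²) illustration of the core idea only, far simpler than the production generator.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Naive sketch of render-time particle replication: where two nearby
    // particles travel in similar directions, add an interpolated fill
    // particle between them. Blue Sky's C++ generator did this far more
    // efficiently; this shows only the fill rule.
    struct Particle { float px, py, pz, vx, vy, vz; };

    static float dist(const Particle& a, const Particle& b) {
        float dx = a.px - b.px, dy = a.py - b.py, dz = a.pz - b.pz;
        return std::sqrt(dx*dx + dy*dy + dz*dz);
    }

    std::vector<Particle> replicate(const std::vector<Particle>& in,
                                    float gapThreshold) {
        std::vector<Particle> out = in;
        for (std::size_t i = 0; i < in.size(); ++i) {
            for (std::size_t j = i + 1; j < in.size(); ++j) {
                // Only bridge pairs close enough to read as one stream.
                if (dist(in[i], in[j]) > gapThreshold) continue;
                // Compare travel directions via normalized dot product.
                float dot = in[i].vx*in[j].vx + in[i].vy*in[j].vy + in[i].vz*in[j].vz;
                float li = std::sqrt(in[i].vx*in[i].vx + in[i].vy*in[i].vy + in[i].vz*in[i].vz);
                float lj = std::sqrt(in[j].vx*in[j].vx + in[j].vy*in[j].vy + in[j].vz*in[j].vz);
                if (li > 0.0f && lj > 0.0f && dot / (li * lj) < 0.8f) continue;
                // Insert one midpoint particle with averaged velocity.
                out.push_back({ (in[i].px + in[j].px) * 0.5f,
                                (in[i].py + in[j].py) * 0.5f,
                                (in[i].pz + in[j].pz) * 0.5f,
                                (in[i].vx + in[j].vx) * 0.5f,
                                (in[i].vy + in[j].vy) * 0.5f,
                                (in[i].vz + in[j].vz) * 0.5f });
            }
        }
        return out;
    }

A production version would use a spatial data structure rather than the all-pairs loop, but the distance-and-direction fill rule is the same.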

For the dandelion explosion and many of the clover sequences, the effects team collaborated with the fur department, using fur to put fluffy tufts on each little dandelion seed and to grow the “hundred miles wide” fields of clover. “The goal was to develop simulations that could be directable, even after the fact,” Garfield says. “We could remove clovers or dandelion seeds we didn’t like, change them, move them around.”

Blue Sky’s custom system renders fur, whether used for seeds, clover, or characters, by accumulating and averaging the position, density, and colors of semitransparent curves in voxel space. A hero clover could easily have 100,000 curves and a low-resolution version, 1000. So to manage shots with millions of clovers, the team developed C++ plug-ins that dynamically built hair directly in the voxel system using level of detail based on camera position. The number of hairs in final shots with clover ranged from a few thousand to 2.5 billion.
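The level-of-detail step can be pictured as a single function mapping camera distance to a curve count, sliding between the hero figure of roughly 100,000 curves and the low-resolution figure of roughly 1,000. A minimal C++ sketch, with invented names and thresholds:

    #include <algorithm>
    #include <cmath>

    // Hypothetical LOD rule in the spirit of the article: a hero clover
    // near camera gets ~100,000 curves, a distant one ~1,000, with a
    // smooth falloff in between. The real plug-ins built the hair
    // directly into Blue Sky's voxel system.
    int curvesForClover(float distanceToCamera, float heroDist, float farDist) {
        const int heroCurves = 100000;
        const int farCurves  = 1000;
        float t = (distanceToCamera - heroDist) / (farDist - heroDist);
        t = std::clamp(t, 0.0f, 1.0f);
        // Interpolate in log space so counts fall off gradually, not linearly.
        float logCount = std::log((float)heroCurves) * (1.0f - t)
                       + std::log((float)farCurves) * t;
        return (int)std::exp(logCount);
    }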

To control the simulations, technical directors used Maya particles to, for example, blow wind through the field. In addition, they could change the look of the clover using fur-grooming tools.


A bendo rig gave animators the flexibility to move a character’s limbs into extreme positions. For facial animation, the animators used blendshapes.
 
Similarly, the effects team used the re-engineered voxel system to shoot dandelion seeds, which they generated from fur, into the air. “We positioned the seeds around a core and wrote collision rules,” Garfield explains. “When a giant nut rolled off a tree and hit the dandelion, the seeds flew off based on the magnitude of forces from that collision. The stalk bent, and as it recoiled, we threw the seeds off.”
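A collision rule of that kind can be as simple as a per-seed impulse threshold; the hypothetical C++ fragment below shows one way to express it.

    #include <vector>

    // Sketch of a seed-release rule like the one described: seeds
    // positioned around a core detach when the impulse from a collision
    // exceeds a per-seed threshold. Names and values are illustrative.
    struct Seed {
        float threshold;     // impulse needed to knock this seed loose
        bool attached = true;
    };

    void applyCollision(std::vector<Seed>& seeds, float impulseMagnitude) {
        for (Seed& s : seeds) {
            if (s.attached && impulseMagnitude > s.threshold)
                s.attached = false;  // seed flies free; the sim takes over
        }
    }

Varying the thresholds from seed to seed means a harder hit frees more seeds, which matches the magnitude-of-forces behavior Garfield describes.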

These same techniques—the particle replication, the point-particle rendering, and the use of fur technology for effects—helped the crew create such other effects for the film as smoke, steam, clouds, and dust.

Moving Assets

A three-person team led by Tim Speltz, a production engineer who had previously worked in the effects department, helped manage the movement of assets between departments. “We didn’t know we needed this department until Horton,” he says. “But, we’re at 300 people now, and we needed a more coordinated effort to build libraries, write tools, and avoid duplication.” And the three-person team soon grew to five.

Sometimes, the engineers tried doing production work themselves to see whether they could improve the process. Often, they’d attend meetings and walk around the studio looking for ways they could help, perhaps by writing tools to automate tedious processes.

Working with R&D, for example, the team developed ways to move crowd assets into Maya. “We made sure the proper materials would be on the characters and helped create tools so the animators could switch colors on various Whos,” Speltz says. “We are the liaison between R&D and production. We know both sides.”


Blue Sky implemented such rendering techniques as translucency and subsurface scattering throughout the film.
 
They also helped the assembly department with set dressing, again working with R&D, but this time, helping to improve the forest propagation system. “Then we used the same technology to propagate the town of Who-ville,” Speltz says. “Our renderer is very good at instancing. We can create a variety of trees from 12 and populate them over a huge landscape and still render them with a reasonable memory footprint. And yet, we can still edit each point by hand after it has been procedurally generated.”
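The arithmetic behind that memory footprint is the essence of instancing: the 12 tree models are stored once, and each of the thousands of placements is only a transform plus a model index. The C++ sketch below illustrates the pattern, including the hand-edit override Speltz mentions; all names are invented for the example.

    #include <cstddef>
    #include <cstdint>
    #include <random>
    #include <vector>

    // Schematic instancing sketch: a handful of source trees are shared
    // by thousands of placements, each of which is only a position,
    // rotation, scale, and model index. Artists can later override any
    // procedurally generated point by hand.
    struct Placement {
        float x, y, z;       // position on the landscape
        float rotY, scale;   // cheap per-instance variation
        uint8_t modelIndex;  // which of the 12 tree models to draw
        bool handEdited = false;
    };

    std::vector<Placement> scatterTrees(int count, unsigned seed) {
        std::mt19937 rng(seed);
        std::uniform_real_distribution<float> pos(-5000.0f, 5000.0f);
        std::uniform_real_distribution<float> rot(0.0f, 6.2831853f);
        std::uniform_real_distribution<float> scl(0.8f, 1.3f);
        std::uniform_int_distribution<int> model(0, 11);

        std::vector<Placement> out(count);
        for (Placement& p : out)
            p = { pos(rng), 0.0f, pos(rng), rot(rng), scl(rng),
                  (uint8_t)model(rng), false };
        return out;
    }

    // A hand edit simply rewrites one point and flags it so the
    // procedural scatter won't overwrite it on regeneration.
    void editPoint(std::vector<Placement>& pts, std::size_t i, float x, float z) {
        pts[i].x = x; pts[i].z = z; pts[i].handEdited = true;
    }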

Transcendent Rendering 

Remarkably, Blue Sky rendered all the leaves in the jungle with translucency and subsurface scattering. “Quite a lot of work and computational intensity went into that,” says Ludwig. Indeed, keen to duplicate the richness they had achieved by using translucency and subsurface scattering for some shots in Ice Age 2, the studio decided to implement the techniques throughout Horton. Blue Sky renders with CGI Studio, a custom, object-oriented raytracing system originally developed by Ludwig and Eugene Troubetzkoy.

“We don’t use the classic dipole method,” Ludwig says, referring to a common technique for gathering light from a scattering object. “Because our renderer is based on a raytracer, we use sampling techniques; we don’t need to store points on surfaces.” To optimize the process, the renderer selects the items to sample.

“What really matters is what the eye sees,” Ludwig says. “You need to know what you can throw out and what you must absolutely keep, and have a robust solution that works regardless of the geometric or lighting situation to get real optimization.”

Even more remarkably, Blue Sky implemented radiosity for this film—the first use of true radiosity, Ludwig believes, in a feature film. “This is true Monte Carlo sampling of environments at render time,” he says. “It’s not just mapped on. This is very difficult and expensive to do unless you really optimize it, and we managed to do that.”
 
Horton’s gray skin became the perfect surface on which to show the results. “When Horton passes under a translucent green leaf, he picks up the color from that leaf,” Ludwig says.

“The artists didn’t have to struggle to achieve that effect with fill lights. We sampled the environment at every point on Horton’s surface to see what the surface sees. And then, we gathered that light and added it to the natural sources.”

As Horton moves, his skin picks up the brownish color of the earth or a tree, the color of the leaves, and even a blush of pink caused by subsurface scattering of light shining through his ears. “And as his ear moves, the color changes,” Ludwig says. “It all acts according to the position of things at that particular frame. The way the light plays over a surface is what defines its shape and gives depth to the frame. But, you can’t get this subtle effect unless you sample in real time because the environment is constantly changing.”
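Read as an algorithm, sampling “what the surface sees” means shooting rays over the hemisphere above each shading point and averaging the light they return, then adding that gathered term to the direct lights. The C++ sketch below is a textbook Monte Carlo gather in that spirit; it is a generic illustration, not Blue Sky’s optimized renderer.

    #include <cmath>
    #include <random>

    // Textbook Monte Carlo irradiance gather in the spirit of the
    // technique Ludwig describes: at each shading point, sample the
    // hemisphere, trace rays into the scene, and average the color the
    // surface "sees". Everything here is illustrative.
    struct Vec3 { float x, y, z; };

    // Stand-in for the renderer's ray query; it returns a constant
    // "leaf green" so the sketch stays self-contained.
    Vec3 traceRadiance(const Vec3& /*origin*/, const Vec3& /*dir*/) {
        return {0.2f, 0.5f, 0.2f};
    }

    Vec3 gatherIndirect(const Vec3& p, const Vec3& n, int samples,
                        std::mt19937& rng) {
        (void)n;  // world-space frame construction around n omitted for brevity
        std::uniform_real_distribution<float> u(0.0f, 1.0f);
        Vec3 sum{0.0f, 0.0f, 0.0f};
        for (int i = 0; i < samples; ++i) {
            // Cosine-weighted direction on the hemisphere about the normal.
            float r1 = 6.2831853f * u(rng), r2 = u(rng), s = std::sqrt(r2);
            Vec3 d{ std::cos(r1) * s, std::sin(r1) * s, std::sqrt(1.0f - r2) };
            Vec3 L = traceRadiance(p, d);
            sum.x += L.x; sum.y += L.y; sum.z += L.z;
        }
        // The averaged gather is added to light from the natural sources.
        return { sum.x / samples, sum.y / samples, sum.z / samples };
    }

Because each frame re-samples the live environment, a leaf passing overhead tints the result on that frame alone, which is exactly the moving-color effect described above.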

At first, the lighting team used radiosity only for Horton, but then they began applying it elsewhere.

“We’ve been itching to do this for a long time, but everyone was a little afraid because of potential render times, potential noise issues,” Ludwig says. “But we wanted to push our look further with this film, so we decided to give it a shot. Thank God, we have that attitude: That’s what allows progress. This business of prudent risk-taking is so important to making discoveries in any discipline. It’s important to put things in front of yourselves that you think you can’t achieve. It stretches you and makes you grow.”

Ludwig points to other challenges in the film that the crew met, as well: the extreme animation, the art-directed effects, the need to respect Dr. Seuss’s style.

“We challenge ourselves constantly,” Ludwig says. “And we love it. Because when you accomplish it, you have done something special.”

Finally, at last! From that speck on that clover
Their voices were heard! They rang out clear and clean.
And the elephant smiled. “Do you see what I mean?…”

Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at BarbaraRR@comcast.net.