Fast Forward
Volume 30, Issue 4 (April 2007)

The theme of Walt Disney Pictures’ second 3D animated film, Meet the Robinsons, is “keep moving forward.” You could as easily apply that theme to the production of the film.

In the movie, Lewis, an orphaned boy and brilliant inventor, wants to find his birth mother. He hopes that his latest invention, a memory scanner, will help. But a scoundrel wearing an evil bowler hat named Doris steals the invention. Just as Lewis nearly gives up hope, Wilbur Robinson appears out of nowhere, whisks him into the future to look for the Bowler Hat Guy, and introduces Lewis to the wacky Robinson family.

In the studio, the artists who created Chicken Little rolled from that film straight onto this one. “We pretty much used the entire Chicken Little team,” says Robinsons’ producer Dorothy McKim, a veteran Disney production manager. When she joined the project, director Stephen J. Anderson had already boarded the entire movie and it was up on reels.

Anderson, who had been the storyboard artist on Tarzan and head of story for The Emperor’s New Groove, had identified with this story from the moment he saw the script, derived from William Joyce’s children’s picture book A Day with Wilbur Robinson.

“It’s a story about an orphan boy who wants to be adopted,” Anderson says. “I was adopted. You couldn’t pry the script away from my hands. The theme of letting go of the past and looking to the future came from my experiences.”

New Tools

When Mark Hammel, technical supervisor for Meet the Robinsons, evaluated the future for the crew assigned to create the film, one thing stood out. “We were scheduled to release in December 2006, a year after Chicken Little,” he says. “We ended up releasing in March, but a year between releases—or even a year and three months—is a tight time frame.”

With no time to make big changes to the pipeline, which might have altered a familiar workflow, the technical team looked for other ways to improve efficiency. Disney Feature Animation uses Autodesk’s Maya enhanced with in-house tools, plug-ins, and add-ons; Pixar’s RenderMan for rendering; Side Effects Software’s Houdini for particle animation and physical simulation; Next Limit’s RealFlow for fluid simulations; and Apple’s Shake for compositing. New tools helped “lookdev” (look development) artists create complex environments quickly, character riggers set up characters swiftly, and technical animators move cloth efficiently.

For look development on Chicken Little (see “The Sky’s the Limit,” November 2005), the studio had developed two tools, XGen and Shader Expressions. For Meet the Robinsons, the technical staff developed a proprietary 3D paint system. And then, they unified all three tools. “We created a consistent system among all our lookdev tools,” says Hammel. 

For Robinsons, look development artists used XGen to grow hair and grass, sprinkle pebbles and dirt on a rooftop, stain sidewalks, and more. “It’s such a central tool for look development, we started using it anytime we needed an efficient way to make something lush and detailed,” says Marcus Hobbs, CG supervisor. The crew even used it to sculpt topiaries and to plant trees.

“It’s an arbitrary primitive instancer generator,” says Hammel. “It’s a tool that can procedurally generate anything.”
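In spirit, such an instancer can be sketched in a few lines. The following Python toy is purely illustrative—XGen itself is proprietary and vastly more capable, and every name and parameter here is invented—but it shows the core idea: scatter instances of an archetype across a region, generating per-instance attributes procedurally.

    import random

    # Illustrative sketch of procedural primitive instancing: scatter
    # instances of an archetype over a region with generated attributes.
    def scatter(archetype, count, region, seed=1):
        rng = random.Random(seed)
        (x0, x1), (y0, y1) = region
        return [{
            "archetype": archetype,   # what to instance: pebble, hair, leaf...
            "position": (rng.uniform(x0, x1), rng.uniform(y0, y1)),
            "scale": rng.uniform(0.5, 1.5),
            "rotation": rng.uniform(0.0, 360.0),
        } for _ in range(count)]

    # Sprinkle a thousand pebbles on a 5x5 rooftop patch.
    pebbles = scatter("pebble", 1000, region=((0, 5), (0, 5)))
    print(pebbles[0])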

Shader Expressions, the second tool in the new unified look development system, allowed artists to create procedural expressions on their own without writing procedural shaders.

And with the third tool, the 3D paint system, artists could work in 3D or import and export views into an external painting program like Adobe’s Photoshop. “We extended and added our shader expressions to the 3D paint program and combined that with a similar feature in XGen, which gave us the consistent system,” Hammel explains.

As a result, artists working in the 3D paint system could use the expressions, for example, to generate a texture map with a pattern and then, using the same language, create expressions in XGen to drive the twisting and drooping parameters for hair or leaves. “If the artists know how to use the expressions in one application, they know how to use them in the others,” Hammel says. “They are very adept at finding uses for tools we don’t think of, so the more general we can make a tool, the better.”
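A minimal Python sketch of that kind of sharing follows. The expression syntax, function names, and sample expressions are invented for illustration—Disney’s expression language is proprietary—but the pattern is the same: one expression vocabulary, evaluated per texel in a paint context and per instance in an instancing context.

    import math
    import random

    def evaluate(expr, **variables):
        """Evaluate a lookdev-style expression against a shared vocabulary
        of functions. A stand-in for a proprietary expression language."""
        vocabulary = {
            "sin": math.sin, "cos": math.cos, "abs": abs,
            # Deterministic pseudo-noise, seeded per sample point.
            "noise": lambda x: random.Random(int(x * 1000)).random(),
        }
        vocabulary.update(variables)
        return eval(expr, {"__builtins__": {}}, vocabulary)

    # The same expression text can serve two different tools.
    stripe = "0.5 + 0.5 * sin(u * 40)"    # a painted texture pattern
    droop = "0.2 + 0.6 * noise(u * 10)"   # a per-hair droop parameter

    # 1) "3D paint" context: bake the expression into a texture map.
    width = height = 8
    texture = [[evaluate(stripe, u=x / width, v=y / height)
                for x in range(width)] for y in range(height)]

    # 2) "Instancing" context: evaluate the same language per guide hair.
    guide_droop = [evaluate(droop, u=i / 100.0) for i in range(100)]

    print(texture[0][:4], guide_droop[:4])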

Inventing Characters

There are three primary characters in the film—Lewis, Wilbur, and the Bowler Hat Guy—but there are dozens of secondary characters such as the Robinson family, the dinosaur, and Carl the robot, as well as crowds of schoolchildren and other background characters. “We’ve had as many as 65 animators working on the film,” says Mike Belzer, animation supervisor. 

With many characters and little time, the crew developed techniques and tools to prepare the characters quickly for animation and rendering, starting with the models. “We had only two different sets of geometry for the head and hands for the entire cast of characters,” says Corey Smith, CG supervisor. All the characters, except for two who needed more facial detail, the Bowler Hat Guy and Grandpa, have the same topology—the same underlying geometry with the same number of points and the same ordering.

“Obviously we moved the points into different positions for each character,” says Smith, “but having the same topology saved us tons of time. We could transfer blendshapes from character to character and weights for the facial setup, and lookdev could transfer weight maps and UV maps for texturing.”

Modelers built blendshapes for Lewis first and then, using a tool called Blend Shape Adaptation, moved subsets of those shapes to other characters’ heads. “We had about 90 percent accuracy,” notes Smith. “The modelers and animators worked together to make adjustments, but it saved us so much time.”
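Because every head shares the same point count and ordering, transferring a shape can be as simple as reapplying per-vertex deltas. The NumPy sketch below shows that basic idea; the actual Blend Shape Adaptation tool is not public and presumably also compensates for differences in scale and local orientation—hence the hand adjustments the team still made.

    import numpy as np

    def transfer_blendshape(src_neutral, src_shape, dst_neutral):
        """Move a blendshape to a character with identical topology by
        reapplying the per-vertex offsets from the source character."""
        deltas = src_shape - src_neutral   # per-vertex offsets
        return dst_neutral + deltas        # same offsets, new head

    # Toy data: four-vertex "heads" with identical topology.
    lewis_neutral = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], float)
    lewis_smile = lewis_neutral + [[0, .1, 0], [0, .1, 0], [0, 0, 0], [0, 0, 0]]
    wilbur_neutral = lewis_neutral * 1.1   # same topology, points repositioned

    wilbur_smile = transfer_blendshape(lewis_neutral, lewis_smile, wilbur_neutral)
    print(wilbur_smile)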

For facial animation, Disney animators used a combination of blendshapes and deformers. “Deformers are good for tweaking, but the animators always want that shape they worked with the modeler to get,” says Smith. “We try to hit the main shapes that the animators want, and then layer the deformation rig over that.” Although Lewis had approximately 45 shapes, most characters had fewer than 30. The goal was never to have a dead spot on the face.

“The blends got us 60 percent of the way there,” says Belzer. “The deformers pushed the facial animation further. We tried to push it as much as we could to make the characters more believable and fleshy.”

To quickly rig characters, three members of the model development team, Jesus Canal, Ryan Roberts, and Russ Smith, devised the Studio Rigger tool. With this tool, character riggers created a template that became a starting point for rigging different characters with similar characteristics. “Where we really pushed the efficiencies was in the character setup,” says Smith.

The riggers started with the template, moved the bones into the right place for a character, and told the system to “build rig”; the tool then applied all the template information to the new character, from the deformers in the hands to the way the head squashed and stretched. “We have low-res proxy modes for the model in animation,” says Hammel. “But if the animators need to see the characters in detail, they can crank it up.”
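That workflow suggests a data-driven design: the template describes a rig, and “build rig” instantiates it against a character’s adjusted skeleton. A hypothetical Python sketch of the pattern (all names invented; Studio Rigger’s internals are not public):

    from dataclasses import dataclass

    @dataclass
    class RigTemplate:
        """A rig described as data, so one template can drive many
        characters with similar characteristics."""
        bones: dict       # bone name -> default position
        deformers: list   # e.g., hand clusters, head squash-and-stretch

    def build_rig(template, bone_positions):
        """Apply everything in the template to a new character, using
        that character's adjusted bone placements where given."""
        bones = {name: bone_positions.get(name, default)
                 for name, default in template.bones.items()}
        return {"bones": bones, "deformers": list(template.deformers)}

    biped = RigTemplate(
        bones={"head": (0.0, 1.6, 0.0), "hand_L": (-0.7, 1.0, 0.0)},
        deformers=["hand_cluster_L", "head_squash_stretch"],
    )

    # The rigger moves the head bone to fit a taller character, then builds.
    wilbur_rig = build_rig(biped, {"head": (0.0, 1.75, 0.0)})
    print(wilbur_rig)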

Using a system called “Shelf Control,” also built into Maya, technical directors designed user interfaces for the entire rig so that the animators could see the character with buttons and controls. “Because the animators have the same interface, they can move from character to character quickly,” Smith points out.

Doris, the villainous hat, had one of the most complex rigs. “She flies and walks like a spider on six little legs that look like metal folding blades,” says Smith. “Sometimes her top opens up and a claw comes out, she unscrews things, sometimes a lens or a harpoon comes out of the hat. She’s the Swiss Army knife of hats.”
 
 
At top: A new Studio Rigger tool devised by the model development team helped the riggers quickly set up multiple characters with similar characteristics from one template. At bottom: The rig for Doris, the villain, handled the complexities of a character that is shaped like a hat but walks on six legs and discharges a variety of clever tools.

Wrinkle While You Work

To help wrinkle the characters’ clothes, the technical animation team built a new tool, named Shar-Pei, into the rig. “We didn’t want to run cloth simulation on every character,” says Smith.

For this process, the cloth-animation team printed the characters in different poses, animators drew wrinkles onto those poses, and then riggers and modelers worked together to paint texture maps that defined the wrinkles. “They painted little height maps that displaced a surface in different directions,” explains Hammel.

Animators could select which wrinkles they wanted to animate—for example, wrinkles at an elbow when a character bent its arm—and then dial in the animation with sliders, using the wrinkle page in that character’s rigging user interface. When the animators selected a wrinkle, the rig automatically increased the model’s resolution. Thus, Shar-Pei made it possible for the animators to see the final wrinkled silhouette.
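In outline, the system amounts to slider-weighted blending of pre-painted displacement maps. A one-dimensional Python sketch of that idea follows; the map names, values, and flat “surface” are invented for illustration, and Shar-Pei itself also swapped in higher-resolution geometry, as described above.

    import numpy as np

    # Pre-painted height maps, one per wrinkle, sampled along a strip of cloth.
    wrinkle_maps = {
        "elbow_bend": np.array([0.0, 0.3, 0.6, 0.3, 0.0]),
        "shirt_pull": np.array([0.2, 0.0, 0.0, 0.0, 0.2]),
    }

    def displace(surface, sliders):
        """Sum each selected wrinkle map, weighted by its slider value,
        and apply the result as a displacement of the rest surface."""
        offset = np.zeros_like(surface)
        for name, weight in sliders.items():
            offset += weight * wrinkle_maps[name]
        return surface + offset

    rest = np.zeros(5)   # a flat patch of cloth at rest
    # The animator dials the elbow wrinkle to 80 percent for a bent-arm pose.
    print(displace(rest, {"elbow_bend": 0.8}))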

“We were trying to be efficient,” says Hammel, “but we ended up giving the animators a tool they probably would have loved to have had anyway if they’d thought to ask.”
 

Hairy Problems

In addition to speeding cloth animation, the technical crew needed to find ways to work with new types of hairstyles, from Grandma’s curly hair to Wilbur’s slick hair. “We had to enhance our hair system to grow long hair, and also had to invent a number of grooming tools,” says Hammel.

Disney had designed its Maya-based hair system primarily for furry characters, not for human characters. For grooming the Robinsons’ characters, the technical team provided tools for drawing profiles and sculpting rough shapes. “A sculpted hair shape came with the model, so the animators didn’t get bald models,” says Hammel. “We could use that shape to help define where guide hairs would live.” When hairstyles were too complicated to represent with simple surfaces, the groomers drew curves in space to outline shapes that turned into representations for XGen. XGen handled the instancing—that is, grew the hair.
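The essence of guide-hair instancing can be sketched briefly: render hairs are generated as blends of nearby sculpted guides. The Python toy below is a deliberate simplification, not XGen’s algorithm—it blends whole guide curves with random convex weights rather than weighting by root proximity.

    import numpy as np

    def grow_hairs(guides, n_hairs, seed=0):
        """Generate hairs as weighted averages of sculpted guide curves.
        Each guide is an array of points along the hair's length."""
        rng = np.random.default_rng(seed)
        guides = np.asarray(guides, float)        # (n_guides, points, 3)
        hairs = []
        for _ in range(n_hairs):
            weights = rng.random(len(guides))
            weights /= weights.sum()              # convex blend of guides
            hairs.append(np.tensordot(weights, guides, axes=1))
        return np.array(hairs)

    # Two guide curves sculpted by a groomer, three points each.
    guides = [[[0, 0, 0], [0, 0.5, 0.1], [0, 1, 0.3]],
              [[1, 0, 0], [1, 0.5, 0.0], [1, 1, -0.2]]]
    print(grow_hairs(guides, 4).shape)            # -> (4, 3, 3)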

“We have a broad range of technical skills within the lookdev staff,” says Hobbs. “Some people are mad geniuses with technical skills, and some are artists and painters focused mostly on how stuff looks. So, sometimes we control color and density with texture maps, and sometimes we like to use expressions to get a look or hair behavior we like. We hop back and forth.”

Fading the opacity at the hair tips and making every hair slightly transparent helped add a sense of depth, and the transparency enhanced the backlighting. “We rely on backlighting to punch the hair silhouette and help separate the characters from the background,” explains Hobbs. Painted density maps controlled the amount of subsurface scattering on the characters’ skin.

Here Today, Here Tomorrow

Lewis moves back and forth between the present and the future, so the environments for each needed to be distinct and instantly recognizable, but not so dissimilar that it looked like he occupied two different films. “In our present day, everything is dark, dreary, and rectangular,” says McKim. “But the future has open spaces, blue skies, and beautiful colors.”

Although painters created a few backgrounds, modelers built most of the environments in 3D—the present-day orphanage and its rooftop, the Robinsons’ house inside and out, the invention company, the school gym, the present-day city, the future city, the evil future city, and others. “I’d say we had a dozen major environments, but the most complex were the future city and its counterpart, the evil future city,” says Smith. “Those were massive.”

Modelers blocked out those virtual sets using simple shapes, working with layout artists to position the camera. Then, they added details in areas where the camera would spend the most time. “They fleshed out the shapes to get closer to the designs the art directors wanted,” says Smith. “We don’t have a separate set-dressing department.”

In addition to environments, modelers also built a flying car, a time machine, and several props. “We spent a lot of time detailing and rigging those,” says Smith. “Anything the characters interacted with, even something as simple as a peanut butter jar, had to have a simple rig in order for its lid to come off.” Moreover, because Lewis is an inventor, modelers built many of his inventions so that he could put them together, take them apart, and even have them explode.


Much of the film used ambient occlusion for soft lighting and shadows. Selective raytracing added realism by giving elements such as the toaster off-screen reflections.


“Effecting” Change

Although the effects team created a sprinkler system and sent lava spurting out of a volcano in a science exhibit, most of the effects happen during the climax of the film. “One of the possible futures is evil, so instead of a paradise of blue skies, puffy clouds, and grass, we have tons of smoke and smog,” says Hobbs.

To art-direct the pollution belched by thousands of fiery smokestacks, the team used sprites. “We used RealFlow to create the volumes, and then baked the results onto cards,” Hobbs explains. The team also used RealFlow to simulate other fluids—water, jelly, peanut butter, and so forth.

For the swarms of evil Doris hats, the VFX artists used Houdini, and relied on a combination of Maya and Houdini to create the time-travel effect for the spaceship. “We know how to integrate layers to transition and reveal certain elements, so we use the right package to get the right look,” says Hobbs. “We might have Maya particles on one layer and Houdini particles on another.”

The team often repeated elements and textures to help keep the kids in the same spatial, if not temporal, universe. For example: “The orphanage bedroom is painted the same in the present as it is in the future, and the elements have a common texture language,” says Hobbs. “The memory scanner, Lewis’ invention, is the same object in the present and the future.” And so, too, are the children.

Thus, to help distinguish the present from the future, the artists changed the lighting. In the future lab, the light from the blue sky pours through a giant dome-like window and reflects on the memory scanner. The bedroom, by contrast, has only one light source and one window. The future is a happy place; the present—soon to be the past—is a place to leave behind.

Much of the film is raytraced, and the lighters used ambient occlusion throughout, finding efficient ways to do both. “Reflections are huge,” says Hobbs. “But we got good about when to turn them off. We also got good at off-screen reflections and at faking off-scene stuff.” It’s efficient not to build geometry behind the camera in a 3D set, but that meant reflective objects in the set had nothing to reflect. So, the lighters put something there—perhaps a painting on a card or a bit of animation from another scene.

Also, to speed rendering yet use ambient occlusion throughout the film, the lighting and rendering team watched for areas where the camera moved, but the background didn’t. “We’d raytrace once, get the occlusions, and save them as texture maps,” explains Hobbs. “For the next frame, we’d look up the values in a texture map instead of re-raytracing.”
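The caching pattern they describe is straightforward: pay for the raytrace once, store the result keyed by surface location, and read it back on later frames. A minimal Python sketch of the idea, where occlusion_raytrace is an invented stand-in for the renderer’s expensive query:

    import random

    def occlusion_raytrace(point):
        """Stand-in for an expensive raytraced occlusion query;
        deterministic per point so the bake is reproducible."""
        return random.Random(point).random()

    class BakedOcclusion:
        def __init__(self):
            self.cache = {}   # surface point -> baked occlusion value

        def lookup(self, point):
            if point not in self.cache:       # first frame: raytrace and bake
                self.cache[point] = occlusion_raytrace(point)
            return self.cache[point]          # later frames: texture lookup

    ao = BakedOcclusion()
    frame1 = [ao.lookup(p) for p in range(5)]   # pays the raytracing cost
    frame2 = [ao.lookup(p) for p in range(5)]   # reuses the baked values
    assert frame1 == frame2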
 



The rectangular present day (at top) uses warm, autumnal colors; the future world of new beginnings (at bottom) is spherical, splashed with the colors of spring.


Forward Thrust

Three years into the project and a few months after production had ramped up, Disney bought Pixar Animation Studios, and as a result, Disney Feature Animation had three new sets of eyes evaluating the film: Pixar’s John Lasseter, Ed Catmull, and Andrew Stanton.

“We were about 85 percent finished with animation when Pixar saw the movie,” says Anderson. “After we got the notes from the group, we had a six-hour note session. It was murder. But at the end of the day, John, Ed, and Andrew said, ‘You heard the notes. Now go away and figure out which ones will help you make the movie better.’ They put the control back in my hands, and all the clouds parted.”

One of the major changes was to make Doris, the Bowler Hat Guy’s hat, the real villain, but Pixar’s influence extended beyond that. “They helped us plus the story,” says Anderson. “We redid about 60 percent of the movie.”
 

Layers of particle effects created in Autodesk’s Maya and in Side Effects Software’s Houdini, and then combined in Apple’s Shake, helped effects artists create a dark, climactic sequence.

The animators were hardest hit by the story tweaks. “We dropped from over 80 percent animated to about 30 percent,” says Belzer. “The characters and sets didn’t change, but a lot of things hit the editing floor.”

For the rest of the crew, the impact was minimal—even helpful. “It only affected a couple environments,” says Smith. “While Steve [Anderson] worked on story notes, we did optimization passes. We sat with the animators and technical directors and cleaned up any issues and problems with the rigs. That little window gave us a great opportunity to catch up.”

In addition, the technical wizards at Disney and Pixar began exchanging ideas. “We didn’t have an opportunity to integrate anything new in terms of tool sets for this film, but we could see how Pixar implemented hardware,” says Hammel. “We could see what they were doing with their disk system, and that helped us make our choices. I’ve always said that I’d love to spend six months or a year at another studio to see how their pipeline works, and now we’re getting that opportunity at Pixar.”

“I’m really energized about the future. We’re moving forward,” Hammel adds. “And I just realized I quoted my own movie.”
 

Deep into Storytelling

As with Chicken Little, Disney is releasing Meet the Robinsons in stereo 3D and mono versions simultaneously. Phil McNally, who helped convert Chicken Little into stereo while at Industrial Light & Magic, led a stereoscopic team at Disney for Robinsons. His earlier involvement in this project resulted in new ideas for enhancing stories with stereo. 

“I thought stereo 3D was just a gimmick,” says Meet the Robinsons director Stephen J. Anderson. “But Phil quickly pointed out that we could use it to tell the story.”

Rather than arriving at the tail end of the process and converting a completed film into a stereo 3D version, McNally began working with Anderson to pick scenes and shots best suited for stereo. “I made what looks like a heart-rate printout for the movie, with red zones and green zones,” McNally says. The red zones—the big chase scenes and other exciting shots—were targets for stereo 3D. The green zones gave the audience a chance to relax.

McNally and a team of eight at Disney used three methods to set up the stereoscopic camera and control the appearance of objects in stereo: depth (how far back or forward the object appears), position (in front of or behind the screen), and framing.

The two most common techniques are depth and position. “We control the depth in a way that doesn’t require anything to be remodeled,” says McNally. “The separation between the cameras, the interocular distance, puts depth into the scene. Positioning the zero parallax point determines whether characters are in front of or behind the screen.” (The zero parallax point is the point at which the two cameras line up perfectly.)
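Those two controls fall out of a simple camera model. In the Python sketch below (illustrative numbers, not production values), the interocular separation scales the parallax, and the zero-parallax distance decides what reads in front of or behind the screen; positive parallax places a point behind the screen, negative in front.

    def screen_parallax(depth, interocular, zero_parallax_depth):
        """On-screen horizontal offset between the left- and right-eye
        images of a point at `depth`, for a converged-camera setup."""
        return interocular * (1.0 - zero_parallax_depth / depth)

    eye_sep, screen_plane = 0.065, 10.0   # meters; zero parallax at 10 m
    for depth in (5.0, 10.0, 40.0):
        p = screen_parallax(depth, eye_sep, screen_plane)
        where = "in front of" if p < 0 else ("on" if p == 0 else "behind")
        print(f"point at {depth:>4} m -> parallax {p:+.4f} m ({where} the screen)")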

The unique technique implemented for this film is an optical floating stereoscopic window frame. Although it goes unnoticed, a black edge always frames the stereoscopic window—that is, the hole through which you look deeply into space, or from which something flies out at you. The frame, typically placed into the image, makes it look like the screen moves. McNally’s team separated the frame from the image.

By moving the frame separately, they were able to use stereo for shots that otherwise would have been more difficult, and they were able to increase excitement and tension. For example, rather than move the Bowler Hat Guy from a great distance toward the camera, they moved the frame away from the camera. “The audience won’t see the frame move,” says McNally. “They’ll see the camera move toward us.”
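In practice, floating the window means compositing the black frame with its own parallax, split between the two eye images. A self-contained sketch of that bookkeeping, with sign conventions and values invented for illustration:

    def frame_offsets(parallax):
        """Split a target frame parallax (right-eye x minus left-eye x)
        between the eyes; negative parallax floats the frame in front
        of the screen, toward the audience."""
        return (-parallax / 2.0, +parallax / 2.0)   # (left eye, right eye)

    # Animate the window toward the audience over a shot: the scene is
    # untouched, but viewers read the shift as the camera moving closer.
    for frame in range(3):
        parallax = -0.01 * frame                    # grows more negative
        left, right = frame_offsets(parallax)
        print(f"frame {frame}: left eye {left:+.4f}, right eye {right:+.4f}")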

During a shot in which a dinosaur starts chasing people, however, McNally’s crew purposely allowed the dino to break through the frame to make it seem as if the animal had jumped out of the screen and into the theater.

Once the Disney team completed setting up the camera for the stereoscopic work, they sent RIB files for the mono movie (the left eye) to Digital Domain, where a crew rendered and composited the final image for the right eye and applied the floating window.

With each film, the potential audience has grown. For Chicken Little, Real D had installed its projection system in only 84 theaters (see “Supersized,” January 2007, pg. 24). The goal for Meet the Robinsons is 700 theaters. The potential for using stereo 3D to help tell stories is growing as well. Look for Meet the Robinsons to open some eyes.

“You can control stereo from shot to shot and sequence to sequence,” Anderson says. “You can dial it back on dialog scenes and pump it up for the climax. It’s really exciting to see your story that way.”


 
Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at BarbaraRR@comcast.net.