If you were to dismiss Disney’s family comedy Beverly Hills Chihuahua as just another film starring real animals with digital lip-synching, you would fall for the con. Although it’s true that most of the dogs in the film are real with digital muzzle replacements, two of the leading animal characters, Manuel the rat (Cheech Marin) and Chico the iguana (Paul Rodriguez), are not.
Raja Gosnell of Scooby-Doo fame directed the film; Michael McAlister, who won an Oscar for Indiana Jones and the Temple of Doom and received an Oscar nomination for Willow, supervised the visual effects. Two London-based studios handled the muzzle replacements (Cinesite) and digital environments (Svengali). Tippett Studio in Berkeley, California, created the digital characters: the stunt-dog doubles, crowds of Chihuahuas, and Manuel and Chico. The rat and the iguana are always CG, con men who dupe Chloe, the lead Chihuahua (Drew Barrymore), out of her jewelry. Manuel is the twitchy leader of the two; Chico is the mellow one.
“We could never have gotten the personality and performance with real animals,” McAlister says of the rat and iguana. “You can train dogs, but….”
On set, stuffies gave the camera operators something to frame and supplied lighting reference for the postproduction crew. At Tippett, real animals—a wood rat in a local wildlife rescue facility and three iguanas that visited the studio—provided modeling and animation reference.
“None of us had seen a wood rat,” says James Brown, who supervised the 16 animators on the film. “We watched him burrow and climb around in his cage, and then used his mannerisms, his fast metabolism, and high tempo.”
The opposite was true for the iguana. “They use as little energy as possible,” Brown says. “Most animals settle themselves into a comfortable position when they stop, but if an iguana is crooked, if its elbows are bent funny, it doesn’t matter. They don’t move. We used that to our advantage; we really played up the contrast between the two.”
The technical challenges concerning the two characters centered on Chico’s scales and Manuel’s fur. Tippett uses a pipeline based on Autodesk’s Maya, Apple’s Shake, and Pixar’s RenderMan; in addition, modelers use Pixologic’s ZBrush and Autodesk’s Mudbox. Scott Liedtka, CG supervisor, managed the technology, working with R&D to build solutions where the commercial packages fell short out of the box.
For Chico’s scales, the technical crew started by working with a procedural shader to cover his skin quickly, and then realized it wouldn’t work. “He’s an organic animal and every scale was different from every other depending on where it was and how the anatomy flowed,” Liedtka says. “So, because we had spent time on the shader, we thought maybe the painters could use it as a starting point. But, we ended up throwing that shader out.”
Instead, the painters put scales onto the creature using displacement. Then, to create the undercutting that is so difficult to achieve with traditional displacement, the technical crew devised a technique they call “vector displacement.”
“One of the ways we [create the undercutting] is to use ZBrush or Mudbox to create two models,” Liedtka explains. “Then, we difference the two models. We developed a shader that can take the difference and create a displacement along a vector rather than along a normal. It’s a way to capture model detail in a displacement map. The modeler models. We process the models and turn them into vector displacement maps.”
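The differencing step Liedtka describes can be sketched in a few lines. This is a minimal illustration that assumes the base and sculpted models share vertex order; Tippett’s production shader bakes these vectors into a displacement map sampled at render time rather than working on raw vertex arrays:

```python
import numpy as np

def vector_displacement(base_verts, sculpt_verts):
    # Difference the two models: one displacement *vector* per vertex.
    # Unlike scalar displacement along the surface normal, these vectors
    # can point sideways or back under the surface, capturing undercuts.
    return sculpt_verts - base_verts

def apply_displacement(base_verts, disp_vectors, amount=1.0):
    # At render time, offset each point along its stored vector.
    return base_verts + amount * disp_vectors
```

With `amount=1.0` the base model reconstructs the sculpt exactly; intermediate values blend between the two shapes.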
A second Chico challenge was his dewlap. “Every time the iguana moves, this flappy piece of skin has to move realistically underneath him,” Liedtka says. Because the crew didn’t want to spend computing resources simulating the entire iguana model, the technical directors removed the dewlap, simulated it separately using Maya nCloth, and then stitched it back onto the model. For the creature’s muscles, the TDs used the cMuscle plug-in for Maya, now offered by Autodesk.
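The remove-simulate-stitch workflow can be sketched generically. The function names and the boundary-welding approach here are illustrative assumptions, not Tippett’s actual tools:

```python
import numpy as np

def extract_patch(body_verts, patch_ids):
    # Pull the dewlap vertices out as a standalone mesh so the cloth
    # solver only simulates this small piece, not the whole iguana.
    return body_verts[patch_ids].copy()

def stitch_patch(body_verts, patch_verts, patch_ids, boundary_mask):
    # After simulation, weld the patch back on: boundary vertices snap
    # to the animated body so the seam stays attached, while interior
    # vertices keep their simulated positions.
    out = body_verts.copy()
    out[patch_ids] = patch_verts
    out[patch_ids[boundary_mask]] = body_verts[patch_ids[boundary_mask]]
    return out
```

Pinning the boundary keeps the simulated skin from visibly tearing away from the body as the iguana moves.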
Manuel presented more problems. “Our main concern was that he was too ratty,” Brown says. “We were afraid people would be disgusted by him and wouldn’t empathize with him.” Because animators work with a malleable rig, they adjusted the rat’s behavior to make him more appealing, rather than asking the modelers to make changes. “We dipped his head down to make his eyes more prevalent and brought his ears forward so you’d see him as a bigger, rounder shape, to make him a little cuter overall,” Brown says. “We also didn’t let his arms get too far from his body, because as soon as you do that, he becomes too human.”
The effort to make the rat more adorable also put pressure on the pipeline. “The art department kept pushing us to have more and more hair,” Liedtka says. “They wanted him to look softer and softer, cuter and cuter. We were running out of memory, but once the director signed off on the character, we had to figure out how to get the rat to render.”
First, Liedtka and his crew hunted down the other memory hogs, cataloging all the problems they needed to solve. One issue was the number of render layers.
“We do breakouts for compositors when we create characters, and we were up to 27 breakouts between the fill lights, specular, shadows, color codes for different features, and so forth,” Liedtka says. “If we turned all of them on, sometimes the system would crash.” In addition, they looked for ways to reduce render times for Manuel close-ups, motion-blurred action shots, and other shot-specific problems. Lastly, they discovered that a new version of RenderMan had a slightly higher memory requirement, which in itself wouldn’t have created a problem, but coupled with everything else became another ingredient in the soup.
“We normally don’t render in 64-bit mode because it is a little wasteful,” Liedtka says. “But, we did switch to the 64-bit version of RenderMan for a few shots to access more memory and hardware. It was the most straightforward solution.”
Tippett created emotional performances for the two digital characters Chico (left) and Manuel (right) with support from a technical team that grappled with the creatures’ CG scales and fur.
Tippett grooms and moves the hair with Furocious, a proprietary fur system that the R&D department is currently rewriting. “It really looks good,” Liedtka says. “We have a dog with a collar, a rat picking up and carrying things, interaction between the characters, and a CG animal riding on the back of a live-action animal.” As with many fur systems, the model department creates guide splines for a rough groom, the paint department adds further grooming and color, and then the TDs run simulations on the guide hairs.
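The guide-spline stage common to such fur systems can be sketched as a simple interpolator: each dense hair blends the shapes of its nearest guides. This toy version is an assumption about how that interpolation typically works, not Furocious itself:

```python
import numpy as np

def grow_hairs(guide_roots, guide_curves, hair_roots, k=2):
    # guide_roots:  (G, 3) root position of each groomed guide spline
    # guide_curves: (G, S, 3) points along each guide spline
    # hair_roots:   (H, 3) follicle positions scattered over the skin
    # Each guide's shape is stored relative to its own root.
    offsets = guide_curves - guide_roots[:, None, :]
    hairs = []
    for root in hair_roots:
        d = np.linalg.norm(guide_roots - root, axis=1)
        idx = np.argsort(d)[:k]              # k nearest guides
        w = 1.0 / (d[idx] + 1e-6)            # inverse-distance weights
        w /= w.sum()
        blended = (w[:, None, None] * offsets[idx]).sum(axis=0)
        hairs.append(root + blended)         # grow the hair at this follicle
    return np.array(hairs)                   # (H, S, 3)
```

Because only the sparse guides are simulated, the dense coat inherits the motion cheaply at render time.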
“Our fur tool has gotten really robust,” says Colin Epstein, compositing supervisor. “Before, we’d be in the 30 percent range, so we’d do a softening pass in compositing to make it look less prickly. But, now, we’re in the 90 percent range. The fur comes out of rendering looking so much more organic. It frees us to do fine-tuning and finessing; our work is subtler by comparison. We just check focal depth, edge detail, and tweak a little bit here and there to get the animals to sink into the image—that last 10 percent gives us the seamless results.”
Going to the Dogs
After the lessons learned on Manuel, the technical crew persuaded the art department to keep the fur count down for the digital stunt doubles and to secure the director’s approval of those characters at a more reasonable number of hairs. “We didn’t have any problem with the digital stunt doubles,” Liedtka says. “The big German shepherd dog with shaggy hair looks great.”
For the most part, Tippett’s digital stunt dogs appear in action shots. “We got the shots that real dogs couldn’t do—jumping from a train, one dog carried in the mouth of another dog, and so forth,” Brown says. “We had to match the real dogs, but Tippett is very good at matching live action and making it look the same.”
For reference, they used DVDs, YouTube, and brought live dogs into the studio. “Having live dogs is best,” Brown says. “You see little nuances you might not see in a video, and you can feel what the dog is doing. You get to hold the dog, feel how muscular it is, and really discover what makes the dog move.”
In addition to the digi-dog performances, animators also created cycles for the studio’s proprietary crowd system to populate scenes with dogs hanging out, cheering, barking, and chanting “No más.”
As many as 240 Chihuahuas appear in some crowd shots, and all of those dogs are digital. “It was a bit more complicated than usual,” Epstein says. “We were going back and forth between other studios, and sometimes we were adding crowds to backgrounds that didn’t exist yet, so that kept us on our toes.”
The compositors received crowds rendered in small sections so that animation changes, if necessary, could happen more easily. “If we had three dogs facing away from the camera, we wouldn’t have to re-render the entire crowd,” says Epstein. Compositors also adjusted the colors to, for example, dim white dogs that might draw attention away from Chloe, and change the color of black dogs that, when grouped together, looked like dark holes. And, they often fixed shadows.
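Rendering the crowd in small sections amounts to compositing independent RGBA layers with the standard “over” operator; re-animating three dogs then means re-rendering only their layer. A minimal sketch, assuming premultiplied alpha (the grouping into layers mirrors the workflow Epstein describes; the code itself is illustrative):

```python
import numpy as np

def over(fg, bg):
    # Standard premultiplied-alpha 'over' operator.
    a = fg[..., 3:4]
    return fg + bg * (1.0 - a)

def composite_crowd(background, group_layers):
    # Each crowd section renders to its own RGBA layer; swapping one
    # group's animation regenerates only that layer, never the rest.
    out = background
    for layer in group_layers:
        out = over(layer, out)
    return out
```

The same layer separation is what lets compositors dim or recolor one group of dogs without touching the others.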
“We had a massive temple set covered with Chihuahuas,” Epstein says. “We didn’t have time to finesse each dog into each section of the plate on the 3D side, so we used our 2D magic. Sometimes it’s easier for us to apply a rotoshape or garbage matte to the dogs to cast a shadow that the TD didn’t do or match a shadow falling onto a character.”
In addition, the compositing team created a shot with a German shepherd and a Doberman squaring off by repurposing an existing plate. In the original plate, a real German shepherd acted as if it were trapped under Styrofoam rocks, but the shot design changed to put the dog beneath an intact archway instead. Rather than rebuild the set and bring back the canine actor, Tippett built a digital set with an intact archway, placed the dog’s digital stunt double beneath it, and then composited the result into the original plate with the Doberman.
A quick, last-minute change like that might seem like just another day on the job to the Tippett crew, but it’s a sign of how far the visual effects pack of artists and animators has evolved.
That doesn’t mean we should take the sleight of hand that Tippett performed for granted. “The scenes with Chico and Manuel are some of the most amazing scenes we’ve done,” says Brown, who received a VES nomination for his work on the chipmunk Pip in Enchanted. “These characters hang onto each other, touch, sit on each other, and protect each other. You can see the weight and mass of each character as they interact.”
It’s a sweeter moment, though, that pleases Brown the most. “There’s an intimate scene, a tight shot, when Chico persuades Manuel to do something and Manuel realizes he needs to,” he says. “And then Chico congratulates him for doing a good thing. You aren’t watching CG characters. You’re watching two friends have a moment. You believe that these two characters care for each other. That’s what we strive for. I think we knocked it out of the park.”
Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at