On the other side of the fence, digital artists overcame their own obstacles to enable the animals to speak and deliver their lines with feeling. Many of the film’s digital effects, distributed among four studios, were of the talking animals, ranging from Goose the gangster pelican to a cranky Shetland pony named Tucker. And creating lip sync for such a menagerie presented its own challenges, says Pierre Raymond, president of Hybride Technologies, a Quebec, Canada, postproduction facility that was hired by Digiscope to animate the manic CG horsefly duo Buzz and Scuzz, the wise goat Franny, the lazy bloodhound Lightning, Tucker, and the star, Stripes.
In fact, this project was Hybride’s first foray into lip-syncing animals. So before bidding on the project, the studio ran internal tests on some nontalking animals it had created for Spy Kids 2. “We started working on expression for that project, and realized that we were on the right track,” says Raymond.
The Hybride artists then refined their lip-sync technique and applied it to their assigned characters, using the same process for each one, whether a live animal, such as Stripes, or a fully CG character, such as Buzz and Scuzz. The process involved digitally replacing not only the mouths of the characters, but also their eyes and ears, so they would be more expressive while they spoke.
To accomplish this, the team generated approximately 75 expressions for each animal. “Some needed more expression than others, depending on their morphology,” says Raymond. “So each detail had to be worked out differently depending on the animal and the animal’s personality.”
In order to free up as much time as possible to focus on the expressions, the artists wrote a script that automated the actual lip sync on the first pass, so the artists wouldn’t have to adjust the lip movement frame by frame. According to Raymond, the proprietary software recognizes each phoneme on a recording of the actor’s voice and then generates the corresponding mouth shape from one of the 75 preestablished expressions created for each animal.
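The automated first pass can be imagined roughly as follows. This is a minimal, hypothetical sketch, not Hybride’s proprietary tool (which ran inside XSI): the phoneme names, expression names, and data shapes are all illustrative assumptions. Each phoneme detected on the dialogue track is looked up in a bank of preestablished expressions, producing keyframes an animator can refine later.

```python
# Hypothetical expression bank: phoneme -> preestablished expression name.
# In production this bank would hold ~75 entries per animal.
EXPRESSION_BANK = {
    "AA": "jaw_open",
    "EE": "wide_smile",
    "OO": "lips_round",
    "M": "lips_closed",
    "S": "teeth_together",
}

def first_pass_lip_sync(phoneme_track):
    """phoneme_track: list of (start_frame, phoneme) pairs, assumed to come
    from an earlier audio-analysis step. Returns (frame, expression) keyframes,
    falling back to a neutral pose for unrecognized phonemes."""
    keyframes = []
    for frame, phoneme in phoneme_track:
        expression = EXPRESSION_BANK.get(phoneme, "neutral")
        keyframes.append((frame, expression))
    return keyframes

# Example: a short line of dialogue.
track = [(0, "M"), (4, "AA"), (9, "S"), (12, "OO")]
print(first_pass_lip_sync(track))
# [(0, 'lips_closed'), (4, 'jaw_open'), (9, 'teeth_together'), (12, 'lips_round')]
```

The point of such a pass is only to guarantee that the lips hit the right shapes in time with the audio; the result is mechanically accurate but lifeless, which is exactly the problem the article describes next.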
“That rapidly gave us a talking animal whose lips moved in perfect relation to the audio track, but the animals were very robotic in their movements,” Raymond says. “So we designed sliding bars and adjustments that let us alter the expressions in the first draft, and then all we had to do was polish it.”
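The “sliding bars and adjustments” Raymond mentions might work along these lines. Again a hedged sketch with invented names: per-frame expression weights from the robotic first pass are smoothed against their neighbors and scaled by artist-controlled slider values, taking the mechanical edge off before hand polishing.

```python
def apply_sliders(weights, intensity=1.0, smoothing=0.0):
    """weights: per-frame expression weights (0..1) from the first pass.
    intensity scales pose strength; smoothing (0..1) blends each frame
    toward the average of its neighbors. Both are hypothetical sliders."""
    out = []
    for i, w in enumerate(weights):
        prev = weights[i - 1] if i > 0 else w
        nxt = weights[i + 1] if i < len(weights) - 1 else w
        # Blend with neighbors, then scale and clamp to the valid range.
        blended = (1 - smoothing) * w + smoothing * 0.5 * (prev + nxt)
        out.append(max(0.0, min(1.0, blended * intensity)))
    return out

# A jittery first pass softened into something an artist can then polish.
print(apply_sliders([0.0, 1.0, 0.0, 1.0], intensity=0.8, smoothing=0.5))
```

The design choice mirrors the article’s workflow: the automation owns timing accuracy, while a handful of global controls gets the motion most of the way to natural, leaving only polish for the artist.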
Hybride’s lip-sync program runs within Softimage’s XSI, which the studio used to create and animate the mouth, ear, and eye models for all the animals it handled. The group then surfaced the models using textures projected from the film plates of the live animals. After tracking each animal’s head with Science-D-Visions’ 3D-Equalizer, the group used Discreet’s Inferno to rotoscope the eyes, ears, and mouths out of each frame and to composite in the CG replacement objects.
According to Raymond, the learning curve on the project was steep, mainly because it was the Hybride artists’ first attempt at talking animals. Having proved it can compete in this arena, the studio plans to refine its techniques, Raymond says, making them even more efficient so that the team can reach the finish line even faster next time. -Karen Moltenbrey