Volume 24, Issue 9 (September 2001)

Send in the Clones


"Reduce, reuse, recycle." The battle cry of environmentalists may soon be the mantra of computer animators. The expression is an apt description of a technique developed by researcher Jun-yong Noh under the direction of professor Ulrich Neumann in the Computer Graphics and Immersive Technologies (CGIT) lab at the University of Southern California. Noh has created a system for efficiently transferring existing facial-animation sequences from one character model to another, regardless of differences in geometric proportions and mesh structure.

The system, called expression cloning, preserves the relative motions, dynamics, and character of the original model by extracting critical vertex information. It then retargets the collected "skeletal" data onto the new model.

Rather than supplanting existing facial-animation techniques, expression cloning is designed to complement them. "Many techniques can produce very nice facial animations. Toy Story and more recently Final Fantasy are good examples," says Noh. But such high quality does not come cheap or easy. "Depending on the techniques employed, [such animation] requires computationally demanding physical simulation, non-intuitive parameter estimation, artistic talent, and manual tuning," he says. "And most of these efforts are not transferable across models, so the same process has to be repeated for new face models, even for similar animations."

To clone facial-animation sequences, a new technique extracts critical vertex information from source models (top row) and calculates the corresponding vertices of target models of various shapes and sizes. The system then applies the motion information to the target models.

By contrast, expression cloning enables animators to reapply the facial animations they've painstakingly created using any technique to countless models. "Since it is a fast, almost fully automated process, [expression cloning] lets animators save previous efforts to compile a high-quality facial animation library for later use," says Noh.

The heart of the new system is an algorithm that computes surface correspondences between two different face models, ensuring that the source model's motions are applied to the correct regions of the target model. "Although it is obvious to humans where the forehead, nose, and mouth are, it is not the case with computers," says Noh. Thus, the algorithm is applied to find such features on the face, align them between the two models, and estimate the surface correspondences.
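The correspondence step can be sketched, under heavy simplification, as a landmark-driven warp followed by a nearest-vertex match. Everything here — the linear RBF kernel, the function names, and the data layout — is an illustrative assumption for this article, not the paper's actual algorithm:

```python
import numpy as np

def rbf_warp(src_landmarks, tgt_landmarks, points, eps=1e-9):
    """Warp `points` with a radial-basis-function deformation fitted to
    landmark pairs (illustrative linear kernel phi(r) = r)."""
    n = len(src_landmarks)
    # Pairwise distances between source landmarks form the RBF system matrix.
    phi = np.linalg.norm(src_landmarks[:, None] - src_landmarks[None, :], axis=-1)
    # Solve for weights that carry each landmark onto its counterpart.
    w = np.linalg.solve(phi + eps * np.eye(n), tgt_landmarks - src_landmarks)
    # Apply the fitted deformation to arbitrary points.
    d_pts = np.linalg.norm(points[:, None] - src_landmarks[None, :], axis=-1)
    return points + d_pts @ w

def dense_correspondence(src_verts, tgt_verts, src_lm, tgt_lm):
    """For each source vertex, the index of the closest target vertex
    after the faces are aligned by warping the source landmarks."""
    warped = rbf_warp(src_lm, tgt_lm, src_verts)
    d = np.linalg.norm(warped[:, None] - tgt_verts[None, :], axis=-1)
    return d.argmin(axis=1)
```

With landmarks at matching features (eye corners, nose tip, mouth corners), each source vertex lands near its anatomical counterpart on the target, giving the dense correspondence the motion transfer needs.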

After this preliminary information has been collected, the system transfers the facial motions for each corresponding face point. Because the geometric proportions of the face models are typically different, the system must adjust the motion at each face point. "The motion adjustment is crucial. Without it, the resulting animation may not be well suited to the target model," says Noh. "The adjustment guarantees that a smile of the person with the big mouth will be scaled down for the person with the smaller mouth."
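A minimal sketch of that kind of adjustment, assuming per-axis scaling derived from the bounding boxes of corresponding local patches — the names and the bounding-box heuristic are illustrative stand-ins, not the paper's actual formulation:

```python
import numpy as np

def local_scale(src_patch, tgt_patch, eps=1e-9):
    """Per-axis ratio of target to source local extent, estimated from
    the bounding boxes of corresponding vertex neighborhoods."""
    src_ext = src_patch.max(axis=0) - src_patch.min(axis=0)
    tgt_ext = tgt_patch.max(axis=0) - tgt_patch.min(axis=0)
    return tgt_ext / np.maximum(src_ext, eps)

def adjust_motion(displacement, scale):
    """Scale a source displacement so it fits the target's proportions:
    a big-mouth smile shrinks for a smaller mouth."""
    return displacement * scale

# Source mouth is 4 units wide; target mouth is only 2 units wide.
src_mouth = np.array([[0., 0.], [4., 0.], [0., 1.], [4., 1.]])
tgt_mouth = np.array([[0., 0.], [2., 0.], [0., 1.], [2., 1.]])
scale = local_scale(src_mouth, tgt_mouth)   # [0.5, 1.0]
smile = np.array([0.8, 0.2])                # source lip-corner motion
retargeted = adjust_motion(smile, scale)    # [0.4, 0.2]
```

The horizontal component of the smile is halved to match the narrower mouth, while the vertical component, whose local extents match, passes through unchanged.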

Among the challenges faced in developing the cloning technique, perhaps the most obstinate was handling the contact line between the upper and lower lips, says Noh. "Although they are closely positioned, motion directions are usually opposite for upper and lower lip points. Severe visual artifacts occur when a target point belonging to the lower lip happens to be moved by a source upper lip, or vice versa." Because a misalignment of the lip-contact lines results in lip motion in the wrong direction, Noh developed a special routine that carefully aligns the contact lines prior to the motion transfer.
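One way to guard against that artifact — sketched here with assumed names and a simple up-axis sign test, not the paper's actual routine — is to classify each lip vertex against the aligned contact line before any motion is transferred:

```python
import numpy as np

UP = np.array([0., 1., 0.])  # assumed face-up direction

def classify_lip(vertex, contact_line, up=UP):
    """Label a lip vertex 'upper' or 'lower' by which side of the
    lip-contact line it lies on, measured along the up direction, so
    that lower-lip points never receive upper-lip motion."""
    nearest = contact_line[np.linalg.norm(contact_line - vertex, axis=1).argmin()]
    return "upper" if (vertex - nearest) @ up >= 0 else "lower"

# A contact line running along the x-axis; points above it are upper lip.
line = np.array([[-1., 0., 0.], [0., 0., 0.], [1., 0., 0.]])
```

For example, `classify_lip(np.array([0., 0.1, 0.]), line)` returns `"upper"`, while a vertex just below the line returns `"lower"`, restricting each source motion to lip points on the matching side.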
The geometry-independent expression-cloning system preserves the relative motions, dynamics, and character of the original model by extracting and re-targeting computed vertex information.

The potential applications of expression cloning are numerous. "The technique can be applied anywhere conventional facial animation is called for, such as games, broadcast, 3D avatar chatting, and so forth," says Noh. In particular, the movie industry will be very interested in the technology, he notes. "Notice that in Final Fantasy, there are not many characters. There are likely various reasons for that, but clearly a major one is the tremendous amount of manual work involved in keyframing for each character model." In contrast, with expression cloning, "generating animations for many models is as easy as generating for one. And better yet, if the animations were compiled as a library in advance, any component sequence could be grabbed off-the-shelf for cloning onto new models."

Expression cloning is an ongoing CGIT project in association with USC's Integrated Media Systems Center, the school's NSF-sponsored engineering research center. Currently, the researchers are focusing their efforts on transferring exactly the same expressions from a source to targets. In the future, says Noh, "it would be useful to include control knobs that amplify or reduce a certain expression on all or part of a face." Such an interface would open the possibility of mixing the motions of a set of expressions to produce a variety of speech and emotion combinations for any target model. "The flexibility provided by control knobs could provide varied target animations from just a few source animations," he says.

In addition to expression cloning, Noh and his colleagues are also investigating color and texture cloning. "Texture contains lots of information and easily replaces or complements complicated geometric structures with a simple image. Consequently, we're considering texture-manipulation techniques for animating such things as vascular expression, wrinkle generation, and aging effects," says Noh.

With an eye on commercial possibilities, USC has already filed for a provisional patent on the expression-cloning technology and is currently seeking possible licensees. Noh foresees the system being packaged as a plug-in for commercial 3D software or as a standalone application.

More information on the expression cloning research can be found on the project Web site at