Volume 23, Issue 8 (August 2000)

A Matter of Style



Motion-capture techniques are only as good as the motion they're used to capture. If the subject's performance is lifeless, so is the resulting motion data. Similarly, the moves of a less-than-stellar dancer will be equally dismal when transferred to the digital realm. In such cases, an animator's only recourse is to hand-edit the digital motion frame by frame, a nightmare when dealing with long, unsegmented motion-capture sequences.

To eliminate the need for such hands-on dirty work, researchers Matthew Brand of Mitsubishi Electric Research Laboratories and Aaron Hertzmann of New York University have developed a system that learns motion patterns from a varied set of motion-capture (mocap) sequences and generates new sequences in a broad range of styles. Called the style machine, the system builds a statistical motion model that encompasses the range of motion in a given dataset and identifies common choreographic and stylistic elements across sequences. The smart model can then synthesize new motion data in any interpolation or extrapolation of styles.
A pirouette (above) and promenade (below) can be re-created in styles ranging from ballet to modern dance using a new system that resynthesizes mocap data.
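The article doesn't give the math behind that interpolation and extrapolation, but the idea is easy to picture if each named style is summarized as a coordinate in the learned style space. The following sketch is purely illustrative: the style vectors, their dimensionality, and the linear blending rule are all assumptions, not details from Brand and Hertzmann's model.

```python
import numpy as np

# Hypothetical style coordinates in a learned 3D "style space".
# In the real system these would come from training, not be set by hand.
styles = {
    "ballet": np.array([1.0, 0.2, -0.5]),
    "modern": np.array([-0.8, 0.9, 0.1]),
}

def blend_styles(a: str, b: str, t: float) -> np.ndarray:
    """Linearly blend two style vectors.

    t in [0, 1] interpolates between styles a and b; t outside that
    range extrapolates ("more ballet than ballet", as it were).
    """
    return (1.0 - t) * styles[a] + t * styles[b]

halfway = blend_styles("ballet", "modern", 0.5)  # interpolation
beyond = blend_styles("ballet", "modern", 1.5)   # extrapolation past "modern"
```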

While quality motion capture is still the "gold standard," says Brand, it is often beyond the reach of animators: it can be difficult to elicit exactly the desired motion from actors during a limited studio session, and the studio time itself is expensive. In addition, he says, "real mocap often suffers from poorly calibrated cameras, inconsistent marker placements, occlusions, and ad hoc methods for smoothing data dropouts and noise." In contrast, a style machine can resynthesize an entire performance in a broad range of styles and can even generate totally synthetic motion data.

In addition to the ability to generate large amounts of quality motion from a modest amount of mocap data and to improve "bad" actor performances, the style machine can be used to create huge crowds of people by producing thousands of unique motion choreographies. It can also be used to change such defining factors as the mood and energy level of a motion sequence. And because it can be driven by a range of input technologies, including but not limited to motion capture, computer vision, data gloves, and even noise, it provides a more flexible, affordable alternative to traditional mocap.

Implementing the style machine involves a one-time training session during which the system analyzes mocap sequences. From this analysis, it extracts fine-grained motion primitives and a description of how the primitives can be linked to form plausible choreography. "For each motion primitive (the kick-off of a pirouette, for example), the system learns how the motion is performed in a ballet style, a jazz style, a modern style, and so forth, then finds a low-dimensional 'style space' that contains all the variations," says Brand.
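Brand and Hertzmann's actual model is more elaborate than anything that fits in a short example, but the shape of the training step, learning motion primitives as hidden states and then placing each style in a low-dimensional space, can be sketched with off-the-shelf tools. Everything below is an illustrative assumption: the feature dimensions, the state count, the use of one independent hidden Markov model per style (the real system ties primitives across styles, which independent models do not), and the PCA-based style space.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM       # pip install hmmlearn
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
N_STATES, POSE_DIM = 10, 60                # "primitives" and joint-angle features

# Stand-in for per-style mocap: {style: list of (frames, POSE_DIM) arrays}.
corpus = {style: [rng.normal(size=(200, POSE_DIM)) for _ in range(3)]
          for style in ["ballet", "jazz", "modern"]}

# One HMM per style; hidden states play the role of motion primitives.
style_summaries = {}
for style, seqs in corpus.items():
    X = np.concatenate(seqs)
    lengths = [len(s) for s in seqs]
    hmm = GaussianHMM(n_components=N_STATES, covariance_type="diag", n_iter=20)
    hmm.fit(X, lengths)
    style_summaries[style] = hmm.means_.ravel()   # style = its emission means

# A crude "style space": PCA over the stacked per-style parameter vectors.
P = np.stack(list(style_summaries.values()))
style_coords = PCA(n_components=2).fit_transform(P)  # one 2D point per style
```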

To apply the style machine, an animator chooses a point in the style space. The system then synthesizes appropriately styled motion data from any sequence of primitives. A style machine can also be set up to extract choreography from an arbitrary signal. "So far, we've gotten reasonable to very good results driving style machines with mocap of a pretty bad dancer, low-res video, and even noise," says Brand. "From the bad mocap, we got much more graceful dancing. From the video, we got cheap virtual mocap. And from the noise, we got novel, plausible-looking choreography in ballet and modern-dance styles."
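How a chosen style point actually modulates the synthesized motion isn't described in the article; one simple way to picture it is to sample a state path from the learned transition matrix (the "plausible choreography") and emit one pose per frame from style-shifted distributions. The trained quantities below are faked with random numbers, and the linear style-offset scheme is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N_STATES, POSE_DIM = 10, 60

# Stand-ins for trained quantities: state transitions, a base pose per
# state, and a linear map from 2D style coordinates to per-state offsets.
trans = rng.dirichlet(np.ones(N_STATES), size=N_STATES)   # rows sum to 1
base_means = rng.normal(size=(N_STATES, POSE_DIM))
style_basis = rng.normal(scale=0.1, size=(2, N_STATES, POSE_DIM))

def synthesize(style_xy: np.ndarray, n_frames: int) -> np.ndarray:
    """Sample a state path, then emit styled poses along it."""
    means = base_means + np.tensordot(style_xy, style_basis, axes=1)
    frames, state = [], rng.integers(N_STATES)
    for _ in range(n_frames):
        frames.append(means[state] + rng.normal(scale=0.05, size=POSE_DIM))
        state = rng.choice(N_STATES, p=trans[state])
    return np.stack(frames)

motion = synthesize(np.array([0.5, -0.2]), n_frames=120)  # (120, 60) pose track
```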

One limitation of the style machine is its inability to enforce basic physical constraints. "The system doesn't know anything about the physics of bones, joints, gravity, and so forth," says Brand. "I'd like to combine the technology with inverse kinematics to eliminate little glitches like sliding feet, although this is complicated by the fact that dancers often like to slip and slide."
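The inverse-kinematics pass Brand mentions was not part of the system, but the first half of such a cleanup, deciding when a foot should be planted and pinning it, is simple to sketch. The velocity and height thresholds and the y-up data layout below are assumptions; a real pass would follow up with IK on the leg so the rest of the skeleton agrees with the pinned foot.

```python
import numpy as np

def pin_planted_feet(foot_pos: np.ndarray,
                     max_speed: float = 0.01,
                     max_height: float = 0.05) -> np.ndarray:
    """Pin a foot in place on frames where it is slow and near the floor.

    foot_pos: (frames, 3) world-space foot positions, y-up (assumed).
    Returns corrected positions; gaps between this track and the rest
    of the skeleton would then be closed by an IK solve on the leg.
    """
    out = foot_pos.copy()
    speed = np.linalg.norm(np.diff(foot_pos, axis=0), axis=1)
    planted = (speed < max_speed) & (foot_pos[1:, 1] < max_height)
    for f in range(1, len(out)):
        if planted[f - 1]:
            out[f] = out[f - 1]        # hold the last position: no sliding
    return out
```

A production tool would presumably make such pinning optional, given Brand's point that dancers sometimes slide on purpose.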

One of the keys to the style machine's ultimate success, says Brand, will be the availability of more and better training data. "If there were a repository of varied, high-quality mocap data, you'd see a wave of new tools coming to market," he says. "It's scandalous how many production houses have once-used mocap sequences archived away in their closets. Those skeletons could be learning new tricks in research labs."

Diana Phillips Mahoney is chief technology editor of Computer Graphics World.