Volume 23, Issue 9 (September 2000)

Reviews: FamousFaces 2.0

By George Maestri

Capturing the facial motion of a live performer and precisely matching his or her movements to those of a virtual character is an arduous task. People are experts at watching facial expressions, so they quickly notice when this type of animation is less than dead-on. Also, the face is a complex surface, and manipulating it realistically is difficult.

Famous Technologies' FamousFaces is one of several products that have been designed to simplify the facial motion-capture process. The system consists of two modules. The vTracker facial-tracking software measures the 2D movement of a performer's face by following blue and green markers placed on key parts of the face, such as the eyebrows and lips, in a video signal. The two colors help the software distinguish between neighboring areas of the face. vTracker then translates the video into mocap data that can be massaged by Animator, the second application.
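To make the tracking idea concrete, here is a minimal Python sketch of color-keyed 2D marker tracking. It is not Famous's proprietary algorithm; the search-window size, channel thresholds, and data layout are all assumptions for illustration:

```python
import numpy as np

def track_colored_markers(frame, prev_positions, window=15):
    """Follow colored face markers from one video frame to the next.

    frame: H x W x 3 RGB array; prev_positions: dict name -> (row, col).
    Hypothetical sketch -- vTracker's actual algorithm is proprietary.
    """
    positions = {}
    for name, (r, c) in prev_positions.items():
        # Search only a small window around the marker's last position.
        r0, c0 = max(r - window, 0), max(c - window, 0)
        patch = frame[r0:r + window, c0:c + window].astype(float)

        # Blue markers: blue channel dominates; green markers: green dominates.
        blue = (patch[..., 2] > patch[..., 1] + 40) & (patch[..., 2] > patch[..., 0] + 40)
        green = (patch[..., 1] > patch[..., 2] + 40) & (patch[..., 1] > patch[..., 0] + 40)
        ys, xs = np.nonzero(blue | green)

        if len(ys):  # centroid of the matching pixels becomes the new position
            positions[name] = (r0 + int(ys.mean()), c0 + int(xs.mean()))
        else:        # marker lost this frame; hold the previous position
            positions[name] = (r, c)
    return positions
```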

Animator works by defining "cluster" areas (regions of the face that move as a unit) on the 3D model and assigning them to the markers on the performer's face. As the face moves, the clusters on the virtual character move in sync, and the face animates. Besides working with vTracker, Animator can also create morph targets (the various model positions for the required ranges of facial motion) within Alias|Wavefront's Maya, NewTek's LightWave, Softimage, Discreet's 3D Studio MAX, and Kaydara's Filmbox. It can also be used with mocap systems from Oxford Metrics, Motion Analysis, Xist, and Phoenix.
The Animator module takes motion-capture data and applies it to a virtual character, where it can be rendered or exported to other packages.
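Morph targets, wherever they are built, amount to weighted blending of vertex offsets. A minimal sketch, assuming a simple additive blend and hypothetical array names (this doesn't reflect any particular package's implementation):

```python
import numpy as np

def blend_morphs(base, targets, weights):
    """Blend morph targets into a final pose. Illustrative sketch only.

    base: (V, 3) neutral-face vertices; targets: list of (V, 3) arrays,
    each a full copy of the model posed in one extreme expression;
    weights: per-target blend amounts, typically in [0, 1].
    """
    out = base.astype(float).copy()
    for target, w in zip(targets, weights):
        out += w * (target - base)  # add each target's offset from neutral
    return out
```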

To accurately capture a face with vTracker, you need a marker for each eye and eyelid, six for the mouth, and another five or six for the eyebrows. Typically, the more markers, the better.

To align the markers on the face with the virtual markers in vTracker, you position the virtual markers over those on the face in the first frame of video. After the first frame, the virtual markers automatically follow the real ones. Each virtual marker is named to correspond to a part of the face (right cheek or left brow, for example) and falls into one of two categories. Anchor markers track non-moving parts of the performer's head so that the software can eliminate unwanted motion or use it to move the entire head automatically. Dynamic markers track the motion of the face as it changes shape.
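The role of anchor markers can be sketched in a few lines. Assuming head motion is modeled as pure 2D translation (a real tracker would likely fit a fuller rigid transform), subtracting the average anchor drift isolates the facial deformation:

```python
import numpy as np

def stabilize(dynamic, anchors, anchors_ref):
    """Remove gross head motion using anchor markers. Minimal sketch.

    dynamic: (N, 2) dynamic marker positions this frame
    anchors: (M, 2) anchor positions this frame
    anchors_ref: (M, 2) anchor positions in the reference (first) frame
    """
    head_offset = (anchors - anchors_ref).mean(axis=0)  # average anchor drift
    return dynamic - head_offset  # what's left is facial deformation
```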

After the motion-capture data is created, it can be brought into Animator, where it's applied to the virtual character. The software supports NURBS and polygonal models, and the resulting motions can be exported into several leading packages. The software also has a live mode for real-time applications.

Before the captured motions are applied to the 3D character, the model must be set up properly. This involves creating clusters of vertices on the model and assigning them to the appropriate markers. Clusters can overlap and have varying degrees of influence to ensure smooth transitions between different facial areas. When animating, the user can set clusters to translate, as in the motion of the cheeks or brows, or to rotate, as in the motion of the eyes or lids. A third cluster type is the spline cluster, in which multiple markers define a closed loop used to animate the mouth.
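Conceptually, a translating cluster is a weighted offset applied to a group of vertices. A minimal sketch, with a data layout invented for the example (overlapping clusters simply sum their weighted contributions here; FamousFaces' actual blending rules aren't documented in this review):

```python
import numpy as np

def deform(vertices, clusters, marker_deltas):
    """Move model vertices by blending overlapping cluster influences.

    vertices: (V, 3); clusters: list of (marker_name, indices, weights),
    where weights are per-vertex influences in [0, 1]; marker_deltas maps
    marker_name -> 3D translation. Hypothetical structure for illustration.
    """
    out = vertices.astype(float).copy()
    for marker, idx, w in clusters:
        delta = np.asarray(marker_deltas[marker])
        # Overlapping clusters sum their weighted offsets, so a vertex
        # between the cheek and mouth blends both motions smoothly.
        out[idx] += w[:, None] * delta
    return out
```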

In both keyframe and mocap animation, the use of clusters as a facial animation method has some inherent problems. The precise shape of the face can be hard to control on any given frame, causing the resulting animation to look unnatural and rubbery. This isn't to say that good animation cannot be produced (Famous has some nice examples on its demo reel), but a user who employs clusters must be precise.

After the model is set up, the motion data can be applied so that the markers on the actor's face manipulate the clusters on the virtual character's face. The resulting animation can be edited in Animator's keyframe editor. Each marker has two separate animation tracks, one for the mocap data and one for "tuning," or tweaking the animation. The keyframing interface is basic, however. For instance, I found no way to zoom in on the timeline, making it difficult to set keyframes. Also, there's no undo function.
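The two-track scheme can be pictured as a sparse tuning curve layered over the dense mocap samples. The sketch below assumes an additive combination with linear interpolation between tuning keys; the review's description doesn't specify how Animator actually blends the two tracks:

```python
import bisect

def evaluate(frame, mocap_track, tuning_keys):
    """Layer a sparse 'tuning' track over dense mocap data for one channel.

    mocap_track: list of per-frame samples; tuning_keys: sorted list of
    (frame, offset) keyframes. Additive layering is an assumption here.
    """
    base = mocap_track[frame]
    if not tuning_keys:
        return base
    frames = [f for f, _ in tuning_keys]
    i = bisect.bisect_right(frames, frame)
    if i == 0:                      # before the first key: hold its value
        return base + tuning_keys[0][1]
    if i == len(tuning_keys):       # after the last key: hold its value
        return base + tuning_keys[-1][1]
    (f0, v0), (f1, v1) = tuning_keys[i - 1], tuning_keys[i]
    t = (frame - f0) / (f1 - f0)    # linear blend between neighboring keys
    return base + v0 + t * (v1 - v0)
```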

Overall, I found FamousFaces useful, but lacking in a few key areas. For instance, setting up a face is tricky, and the interface should be reworked to make things such as keyframing and aligning markers easier to accomplish. In spite of this, however, the software does its job well and would make a good addition to any mocap artist's toolbox.

George Maestri is a writer and animator living in Los Angeles.

Price: Animator, $4,990; vTracker, $4,990. Bundled price: $7,990.
Minimum system requirements: 133MHz Pentium; 128MB of RAM; OpenGL accelerator recommended for Animator; video capture board for vTracker
Famous Technologies
San Francisco
415-835-9445
www.Famoustech.com