Take One!
Volume 28, Issue 11 (November 2005)


Until recently, studios had no choice but to capture an actor's facial performance and full-body movements in separate mocap sessions using different setups, and hope the results matched up aesthetically. However, with advancements in motion-capture technology, it is now possible to simultaneously record both the intricate expressiveness of an actor's face and the person's extensive full-body movements. As a result, the expressions and actions elicited from the actor can be recorded accurately for a comprehensive performance that is more holistic and complete, and easier to direct.

Ideal for film and computer game animations, the technology has quickly found its way to the broadcast realm. Some of the first TV commercials to utilize simultaneous motion capture came from design collective Psyop, in conjunction with advertising agency DDB Chicago and House of Moves mocap studio, for McDonald’s Fruit ’N Walnut campaign. Psyop created the four spots, which blend equal portions of reality and animation, each rendered in cartoon-like colors and simple line drawings, yet featuring characters who move and speak in a realistic fashion.

“DDB initially came to us with several scripts,” says Justin Booth-Clibborn, executive producer at Psyop. “The group wanted lots of talkative description about the salad, but also wanted very stylized characters and environments. For the characters in particular, DDB wanted them animated in a realistic way, but not photorealistic in their appearance. The agency asked us to replicate the expressions and emotions of the characters while keeping them artistic.”


Using Vicon MX40 cameras, the crew at House of Moves acquired the facial and full-body motions of actresses during the same single mocap session. Later, 3D artists at Psyop applied those realistic movements to highly stylized digital characters.

And, according to Booth-Clibborn, that balance presented both a creative and technical challenge.

First, real actors taped their lines at a recording studio, and brought a copy of the recording to the mocap session for use as a reference. At House of Moves, the actors then lip-synched their parts as they performed the required actions. Within a 10x15-foot capture area, technicians recorded the movements of the actors’ faces, hands, and bodies, using a 32-camera Vicon MX40 optical motion-capture array.

Toward one end of the capture volume was a more facially oriented setup with additional cameras for higher data resolution.

Later, artists at Psyop applied the mocap information to CG character models. “In the past, everyone expected a person to make the right facial gestures based on a full-body performance the actor did the day before, and no one ever duplicates those actions exactly,” says Tom Tolles, House of Moves’ CEO. “In this instance, the actions matched because they were performed at the same time.”

For the McDonald’s campaign, the mocap crew captured the whole performance not only of one actor, but also of a second person whose digital double likewise appears in each scene. This meant that the pair’s interactions with each other were acquired at the same time as well, making the performances extremely accurate.

Yet, processing and delivering such a large amount of data is particularly challenging. To accomplish this task, the crew employed Vicon’s IQ software and House of Moves’ Diva program, and then delivered the motion files to Psyop.

“In addition to the directive that the spots be realistic yet stylistic, the scripts were fairly dialog-heavy,” explains Psyop’s Todd Akita, one of the spots’ technical directors. “So we had to think about how we could do so much facial work in such a short time frame. By using simultaneous full-body and facial motion capture, the director could watch the actors in real time and sculpt their performances on set, rather than waiting days for us to animate the characters by hand. With keyframing, sometimes you get the performance the director wants, and sometimes you don’t.”

Akita estimates that the group, overall, saved nearly a week’s time by using the new mocap process. However, he is quick to point out that the time savings is difficult to ascertain, because the artists may never know how close or how far away they might have been in meeting the director’s expectations. “But with motion capture, you are guaranteed that the overall structure of the performance, the broad gesturing and the emphatic points, is there,” he says. “Getting that basic idea is the most difficult part. From there, it’s a matter of adding keyframes to augment and finesse the performance.”


Simultaneous mocap let technicians record a total performance that accurately portrayed the interactions between the actors (top) so they could be applied to CG characters (bottom).

Although the animation data was extremely accurate and realistic, the design of the characters was quite the opposite, per the client’s request. According to Akita, the art team worked with a fashion illustrator to come up with the visually appealing, highly stylized design.

Then, the Psyop team translated those images into 3D, building the CG models in Softimage’s XSI and texturing them in Adobe’s Photoshop. Yet, because of their stylized appearance and the fact that they were initially designed in 2D, the models did not share the same physiology as the actors who were motion-captured. This was particularly true of the facial area. As a result, the Psyop artists had to develop a method for warping and sculpting the animation so it would conform to the dissimilar shapes.
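The article does not detail Psyop's retargeting method, but the general idea behind transferring captured motion to a differently proportioned model can be sketched simply: express each marker's motion as a displacement from the actor's neutral pose, then apply those displacements to corresponding points on the character's neutral pose. The function below is a hypothetical illustration of that approach, not Psyop's actual pipeline; the names and the uniform `scale` parameter are assumptions.

```python
import numpy as np

def retarget_offsets(source_frames, source_neutral, target_neutral, scale=1.0):
    """Transfer marker motion from an actor to a stylized character.

    source_frames  : (F, M, 3) captured marker positions, per frame
    source_neutral : (M, 3) the actor's neutral (rest) marker positions
    target_neutral : (M, 3) corresponding points on the stylized model
    scale          : exaggerate (>1) or damp (<1) the motion for the rig

    Returns (F, M, 3) retargeted positions on the character.
    """
    # Displacement of each marker away from the actor's rest pose.
    deltas = source_frames - source_neutral[None, :, :]
    # Reapply the (optionally scaled) displacements on the character,
    # whose proportions may differ entirely from the actor's.
    return target_neutral[None, :, :] + scale * deltas
```

In practice a production rig would also need per-region scaling and sculpted corrective shapes, which is presumably where the "warping and sculpting" the article mentions comes in.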

Later, the team rendered the final frames with XSI’s Toon Shader. However, the key to the spot’s distinctive design was in the compositing methodology, accomplished with Adobe’s After Effects and Autodesk Media and Entertainment’s Discreet Flame.

According to Akita, the group broke up the rendered image into subtle islands of color, similar to a lithographic process, whereby each island was individually adjusted to create a slight offset effect. “We did many passes to achieve the final look,” he notes.
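The article does not specify how those passes were built in Flame and After Effects, but the lithographic idea it describes, flattening the render into a few bands of color and nudging each band out of register, can be approximated in a few lines. The sketch below is a loose illustration of that effect, assuming an 8-bit RGB image as a NumPy array; the function name and parameters are invented for this example.

```python
import numpy as np

def offset_color_islands(image, levels=4, max_shift=2, seed=0):
    """Posterize an RGB image into flat 'islands' of color, then shift
    each island by a small random offset, mimicking the slight
    misregistration of a lithographic print."""
    rng = np.random.default_rng(seed)
    # Quantize each channel into a few flat bands: the color islands.
    step = 256 // levels
    posterized = (image // step) * step
    out = np.zeros_like(posterized)
    # Treat each distinct quantized color as one island and paste it
    # back with its own small (dy, dx) offset. Overlaps and uncovered
    # pixels are left crude on purpose; this is only a sketch.
    flat = posterized.reshape(-1, 3)
    for color in np.unique(flat, axis=0):
        mask = np.all(posterized == color, axis=-1)
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        shifted = np.roll(mask, (int(dy), int(dx)), axis=(0, 1))
        out[shifted] = color
    return out
```

A compositor would of course do this per render pass with hand-tuned offsets rather than random ones, but the structure, isolate flat regions and translate them independently, is the same.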

Without question, the work Psyop did on the McDonald’s campaign illustrates what can be achieved when characters, even those in TV commercials, are given a heavy dose of reality.

Karen Moltenbrey is an executive editor for Computer Graphics World.