Technique & Technology
Volume 27, Issue 8 (August 2004)

To create a trio of hybrid vampire brides for Van Helsing, characters with the heads and faces of actual actresses fused with computer-generated bodies, Industrial Light & Magic (ILM) developed a first-of-its-kind technique for acquiring motion-capture data during a live film shoot.

For the sequence, Dracula's brides metamorphose from sultry women into bloodthirsty vampires who use bat-like wings to swoop through a Transylvanian town and attack the villagers. Despite the transformation, the creatures still had to carry the same facial performances and look like the actresses, only creepier. The ILM group decided to accomplish this by applying makeup and prosthetics to the head and face of each actress, and then "marrying" each real head to a digital body, giving the team more control over the complex, acrobatic movements that animation director Daniel Jeannette was seeking.

ILM had done this type of work before using a more time-consuming procedure: manually tracking an actor through the shots and then replacing the person's body with a 3D version in postproduction, a process that may not capture all the nuances of the performance. This time, at Jeannette's prompting, the group devised a novel technique that let it use motion-capture cameras to acquire the movements of the actresses while the film crew simultaneously shot the backplates of the women's facial performances.

Previously, a simultaneous capture like this was out of the question because bright stage lights render optical motion-capture systems, which track light reflected from markers placed on an actor's body, useless. And while magnetic mocap systems are unaffected by lighting conditions, they can be thrown off by nearby metal, such as the film cameras and harnesses used in the scene.

To overcome these issues, ILM's principal engineer, Kevin Wooley, developed customized markers using high-powered infrared LEDs that could be tracked by the Vicon mocap cameras ILM was using. Because they worked outside of the visible light spectrum, the markers were invisible to the naked eye—and the film cameras.

A CG body of vampire Marishka (played by Josie Maran) is driven by motion-capture data acquired during a simultaneous mocap/film session.

The markers were fitted onto customized motion-capture suits worn by the actresses. A main cable fed power to 45 of the special markers through flexible, lightweight ribbon wires, enabling the actresses to move easily. On the set, the actresses—who were adorned with makeup, wigs, and special contact lenses, and completely covered in blue from the neck down to aid in the rotoscoping process—moved with the help of a harness and stunt puppeteer. "It was critical that the [mocap] information be exact when it came time to line up the computer-generated body with the real neck and head," maintains Doug Griffin, ILM's motion-capture supervisor.

The animation is blocked into the scene using data collected by the motion-capture cameras.

Acquiring precise tracking information on a live set, as opposed to in a typical controlled mocap environment, was especially difficult because of all the activity that inevitably bumped one of the 24 mocap cameras. In the past, this would have made those cameras unusable. It is no longer an issue, says Griffin, because Vicon's new IQ Camera Re-Section tool lets the group quickly recalibrate an affected camera rather than lose a vital camera view.
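Re-sectioning a bumped camera is, in essence, a pose-from-known-points solve: the markers' 3D positions are still known from the unbumped cameras, so the affected camera's new pose can be recovered from where it now sees those markers. The sketch below illustrates the general idea using OpenCV's solvePnP; the marker coordinates, pixel positions, and intrinsics are all hypothetical, and this is an approximation of the concept, not Vicon's actual tool.

```python
# Sketch: recalibrating ("re-sectioning") a bumped mocap camera by re-solving
# its pose from markers whose 3D positions are known from the other cameras.
# Hypothetical data; illustrative only. Requires numpy and opencv-python.
import numpy as np
import cv2

# 3D marker positions (meters) reconstructed by the unaffected cameras.
object_points = np.array([
    [0.00, 1.60, 0.00],   # head marker
    [0.20, 1.40, 0.10],   # right shoulder
    [-0.20, 1.40, 0.10],  # left shoulder
    [0.10, 1.00, 0.00],   # hip
    [0.00, 0.50, 0.10],   # knee
    [0.05, 0.10, 0.00],   # ankle
])

# Where the bumped camera now sees those same markers (pixels).
image_points = np.array([
    [512.0, 120.0], [580.0, 210.0], [440.0, 215.0],
    [530.0, 400.0], [515.0, 610.0], [520.0, 790.0],
])

# A bump moves the camera, not its lens: the intrinsics stay fixed.
K = np.array([[1200.0, 0.0, 512.0],
              [0.0, 1200.0, 384.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume lens distortion already calibrated out

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # recovered rotation matrix
    print("re-sectioned camera rotation:\n", R)
    print("re-sectioned camera translation:", tvec.ravel())
```
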
The hybrid live-action/CG character is added to the film plate in the final shot.

Aside from motion-capturing the movements of the actresses, ILM also mocapped the movements of the film cameras, a step typically reconstructed in postproduction. This sped up and improved the matchmoving process and helped align the live-action heads with the 3D bodies, which were created in Softimage|XSI.
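Because the film camera's motion was captured, the CG body could be viewed through the same moving lens that photographed the heads. As a rough illustration of that alignment step, this pinhole-projection sketch (with a made-up pose and intrinsics, not ILM's pipeline) projects the CG body's neck pivot into the plate:

```python
# Sketch: using a motion-captured film-camera pose to project the CG body's
# neck joint into the plate, where it must line up with the filmed head.
# Hypothetical data layout; plain pinhole projection with numpy.
import numpy as np

def project(point_world, R, t, K):
    """Project a 3D world point through a camera with rotation R,
    translation t, and intrinsic matrix K. Returns pixel coordinates."""
    p_cam = R @ point_world + t          # world -> camera space
    p_img = K @ p_cam                    # camera space -> image plane
    return p_img[:2] / p_img[2]          # perspective divide

# Per-frame camera pose captured on set (a single sample frame here).
R = np.eye(3)                            # captured camera rotation
t = np.array([0.0, -1.5, 4.0])           # captured camera translation (m)
K = np.array([[1800.0, 0.0, 1024.0],     # focal length / principal point
              [0.0, 1800.0, 576.0],
              [0.0, 0.0, 1.0]])

neck_joint_world = np.array([0.0, 1.5, 0.0])   # CG body's neck pivot
print("neck projects to pixel:", project(neck_joint_world, R, t, K))
```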

ILM's new mocap process also required a great deal of development behind the scenes. For instance, motion-capture lead Andy Buecker wrote custom tools within Softimage|XSI to process the motion data so it could be applied to the 3D model. And technical animator Jeff White devised a rig that let animators manipulate the brides within the scene, maintaining the relationship between the 3D geometry and the 2D image and accounting for lens distortion as the camera moved through the scene. Moreover, the system's flexibility allowed the director to choose different sections of the facial and motion-capture performances to be blended together.
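The article doesn't detail how those sections were blended, but crossfading a chosen frame window between two takes is one standard approach. This toy sketch slerps a single joint's rotation track from one take into another over an invented window:

```python
# Sketch: blending a chosen section of two motion-capture takes, in the spirit
# of letting the director mix performances. Toy data; numpy-only quaternion slerp.
import numpy as np

def slerp(q0, q1, u):
    """Spherical linear interpolation between unit quaternions q0 and q1."""
    d = np.clip(np.dot(q0, q1), -1.0, 1.0)
    if d < 0.0:                          # take the shorter arc
        q1, d = -q1, -d
    theta = np.arccos(d)
    if theta < 1e-6:
        return q0
    return (np.sin((1 - u) * theta) * q0 + np.sin(u * theta) * q1) / np.sin(theta)

def blend_section(take_a, take_b, start, end):
    """Crossfade joint rotations from take_a to take_b over frames [start, end)."""
    out = take_a.copy()
    for f in range(start, end):
        u = (f - start) / (end - start)  # 0 -> 1 across the blend window
        u = u * u * (3 - 2 * u)          # smoothstep easing
        out[f] = slerp(take_a[f], take_b[f], u)
    out[end:] = take_b[end:]             # hold take_b after the blend
    return out

# One joint's rotation track per take: (frames, 4) unit quaternions (w, x, y, z).
frames = 48
take_a = np.tile([1.0, 0.0, 0.0, 0.0], (frames, 1))                  # identity
take_b = np.tile([np.cos(0.4), 0.0, np.sin(0.4), 0.0], (frames, 1))  # ~46 deg about y
blended = blend_section(take_a, take_b, start=12, end=36)
```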

According to Griffin, the team also devised a setup so the animators could tweak and even alter the mocap data without disrupting the alignment of the 3D torso with the real neck. Using reverse constraints—built from the neck down, rather than the usual pelvis up—the animators could match the director's vision for the shot while the rig maintained the 2D-to-3D relationship, notes Griffin. As a result, the animators retained the facial and body performances of the actresses as they layered on the dynamic flight motion.
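Put differently, the kinematic chain is rooted at the neck rather than the pelvis, so nothing the animators do below the neck can move it. Here is a minimal forward-kinematics sketch of that idea, with hypothetical joints and 2D offsets for brevity:

```python
# Sketch of the "reverse constraint" idea: root the kinematic chain at the neck
# so edits below it never move the neck, preserving the 2D head / 3D body lineup.
# Hypothetical joints and offsets; 2D for brevity.
import numpy as np

def rot2d(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Chain runs neck -> spine -> pelvis (the reverse of a pelvis-up rig).
# Each entry: (joint name, offset from its parent in the parent's frame).
chain = [
    ("neck",   np.array([0.0,  0.0])),   # root: pinned to the filmed head
    ("spine",  np.array([0.0, -0.3])),
    ("pelvis", np.array([0.0, -0.4])),
]

def solve_world(neck_world, joint_angles):
    """Walk the chain from the neck down, accumulating rotations.
    The neck stays exactly where the plate says it is."""
    world = {"neck": neck_world}
    pos, rot = neck_world, rot2d(joint_angles["neck"])
    for name, offset in chain[1:]:
        pos = pos + rot @ offset
        rot = rot @ rot2d(joint_angles[name])
        world[name] = pos
    return world

# Animators can swing the spine and pelvis freely; the neck never moves.
angles = {"neck": 0.0, "spine": 0.35, "pelvis": -0.2}
print(solve_world(np.array([0.0, 1.6]), angles))
```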

"We didn't do motion control when we shot the mocap, so what we captured wasn't going to perfectly match the perspectives of the backgrounds shot in Prague," says Griffin. "So Jeff White developed techniques to distort the plates to match the backgrounds, without disrupting the crucial alignment between the 2D head and the 3D body."

In all, 45 shots were filmed with the new hybrid technique. According to Griffin, the motion capture saved almost two weeks of labor per performance compared with more traditional techniques.

ILM hopes to advance this technology for future projects, notes Griffin, resolving what he calls practical issues, such as the wires becoming caught in the stunt harnesses. "If we move forward, we'll probably develop wireless markers that do the same thing, but are easier to integrate on set," he says.

Key to the mocap breakthrough was the development of small, customized infrared LED markers that work outside the visible light spectrum so they're not affected by set lights.

For now, ILM couldn't have been more pleased with the results for the hybrid vampire brides. "You look at these shots and you can't tell what's real and what's fake," Griffin says. "I spent six days a week for months working with them, and I can't tell." —Karen Moltenbrey

Computer-user interfaces have taken many forms over the years, but few have made use of a person's entire body. Now a team of researchers at the University of British Columbia has developed a swimming simulation that enables a user to navigate a virtual environment while suspended in a hang-gliding harness. The swimmer is outfitted with a head-mounted display and eight sensors that track body movements and send motion data to a PC controlling the simulation.

The system, which will be demonstrated in the Emerging Technologies pavilion at SIGGRAPH 2004 in Los Angeles, simulates the sensation of floating in water. As the virtual swimmer travels through the water, the system tracks arm and leg movements and updates the display. A water-splashing algorithm adds to the realism.
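The researchers' actual control mapping isn't detailed here, but one plausible scheme converts backward hand motion into forward thrust and applies drag, as in this invented sketch:

```python
# Sketch: one plausible way a swimming interface might map tracked hand motion
# to travel through virtual water. Hypothetical sensor data and constants.
import numpy as np

THRUST_GAIN = 0.8    # how strongly a stroke propels the swimmer
DRAG = 0.5           # water resistance
DT = 1.0 / 60.0      # simulation step (s)

def step(velocity, hand_velocities, heading):
    """Advance the swimmer's velocity one frame. Hand motion against the
    heading produces forward thrust, as in a real stroke."""
    thrust = 0.0
    for hv in hand_velocities:                  # one entry per tracked hand
        pushback = -np.dot(hv, heading)         # hand moving against heading
        if pushback > 0.0:
            thrust += THRUST_GAIN * pushback
    accel = thrust * heading - DRAG * velocity  # propulsion minus drag
    return velocity + accel * DT

# Example frame: both hands sweeping backward while the swimmer faces +x.
heading = np.array([1.0, 0.0, 0.0])
hands = [np.array([-1.2, 0.0, 0.1]), np.array([-0.9, 0.0, -0.1])]
v = step(np.zeros(3), hands, heading)
print("swimmer velocity after one stroke frame:", v)
```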

The swimming interface could aid researchers in developing new ways of engaging the body to explore virtual spaces or knowledge bases. The researchers explain that body movement could take on new meaning when the setup is used to navigate various data spaces. For example, movements such as arm strokes, kicks, and dives could enable medical students to swim or fly through virtual humans to study anatomy. The device could also be implemented as a controller in a variety of water-sports simulations. —Phil LoPiccolo