Moving on up - Part II
Volume 30, Issue 11 (Nov. 2007)


Motion capture has moved to the forefront in CG films, animated television series, and games.
In Part II of a two-part series, we look at entertainment-related applications of mocap.
By Barbara Robertson
 

This year, two feature films that used computer graphics to create principal characters won Oscars—Happy Feet (Best Animated Feature Film) and Pirates of the Caribbean: Dead Man’s Chest (Best Achievement in Visual Effects). But, they had something else in common as well: For both films, the animators based the performances of the CG stars—the animated penguins in Happy Feet and the tentacled Davy Jones and his half-dead crew in Pirates—on state-of-the-art motion-capture techniques.

For Happy Feet, Giant Studios used a passive optical system to capture dancers who had studied penguin behavior. As the dancers performed, director George Miller could see their movements applied to the penguin characters. Then, animators at Animal Logic used that data as a basis for the dancing penguin movement (see “Happy Feat,” November 2006). For Pirates, Industrial Light & Magic used a proprietary system dubbed iMocap to capture actor Bill Nighy and others on location interacting with Johnny Depp and the other live actors during principal photography as director Gore Verbinski called the shots. The data captured on location gave animators a basis for performing the sea-life encrusted CG pirates in the film (see “Yo Ho Ho!,” July 2006).

In both cases, the motion-capture systems allowed the directors to work in traditional ways. That is, the system accommodated the directors; the directors didn’t change the way they worked to use the technology. And this show—the innovative use of motion capture for feature films—has just begun.
 

The active LEDs in the PhaseSpace system blink specific numbers that simplify and, therefore, speed the application of motion data to a CG skeleton. The system can control characters in real time using a driver built into a game engine.

Robert Zemeckis, one of the first to use motion capture to direct CG characters, recently directed Beowulf, an animated film with live-action performances. Sony Pictures Imageworks captured the performances—facial animation, body performance, and finger movements—using a Vicon-based passive optical system, applied the data to digital humans, finessed the body animation and facial expressions, and created the film’s virtual backgrounds. With Disney’s backing, Zemeckis has now formed a new studio, ImageMovers Digital, that is devoted entirely to creating films with motion capture. The state-of-the-art performance-capture studio is starting production on A Christmas Carol starring Jim Carrey, scheduled for release in 2009.

And, James Cameron, the Oscar-winning director of Titanic, one of the first films to use motion capture to animate background crowds, is pushing the technique further, as well. For the stereo 3D film Avatar, also scheduled for release in 2009, Cameron is shooting a real-time mix of live-action actors and CG characters animated with performances captured from actors on set. Weta Digital is handling the visual effects. Giant developed custom tools that facilitate the new capture process.

“Jim [Cameron] can use a camera on set and see the actors and CG characters in virtual backgrounds in real time,” says Matt Madden, Giant Studios’ production and development head. Giant previously provided mocap technology to Weta Digital on The Lord of the Rings trilogy and King Kong. “It’s useful for Jim to see CG elements composited into the live-action plate on the fly. When he can direct the CG character and the actor simultaneously, he has more freedom and creative control.”

Although filmmaking is not the top moneymaking market for most vendors and motion-capture studios, the demands of filmmakers push the technology hardest. “Filmmakers want to direct a CG film on set as if they were directing a live-action film,” says Gary Roberts, president of Vicon’s House of Moves motion-capture studio. “Games, which push volume, want more realism.” Roberts says the majority of House of Moves’ business in terms of revenue comes from game developers; films, commercials, TV shows, and previz account for the rest. Parent company Vicon recently introduced new open-architecture software, called Blade, to help customers in all those applications create their own tools.
 

Motion Analysis has demonstrated the capture of multiple actors and the application of their data to multiple characters in real time using the company’s passive optical system.
 
Television
Like Vicon, Motion Analysis, the other top mocap vendor, offers capture services in a Los Angeles (Hollywood)-based studio. Dave Blackburn, who manages that facility, has a similar estimate: Half the company’s work is for games, with the rest split evenly between television and film. Noting that all the film projects they’re involved with are under “tight NDA” (non-disclosure agreements), he cites two television projects as pushing the state of the art. The first is Super 78’s work on Cartoon Network’s made-for-television movie Ben 10, which combines live action and CG characters. “We composited the live motion-capture performance into live-action background plates in real time,” says Blackburn. “The director was able to previz the CG performance, as it would be composited later.” (For more on this project, see “Real-time Aliens,” pg. 9.)

A second example of state-of-the-art animation that employs motion capture is under way at the Jim Henson Company, which is gearing up to produce an animated children’s television series for PBS. At the Henson Digital Performance Studio, while puppeteers perform a digital character’s facial expressions, data captured from a live actor moves the character’s body thanks to Motion Analysis’s passive optical system (see “Real-time Digital Puppeteering,” pg. 10).

An animated children’s television show in the UK is also using motion capture to perform characters, albeit a different type of mocap technology: Animazoo’s cameraless inertial marker system takes measurements from the limbs rather than from markers placed on joints. As an actor moves, the gyroscope-based sensors record angles, velocities, accelerations, and impulses. That attracted the attention of animation company Blue Zoo, which recognized that it could have problems with occlusion when motion-capturing people hugging and touching to create the CG characters in The Baba House, a preschool television series.
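Schematically, an inertial system of this kind builds up each limb’s orientation by integrating the angular velocity its gyroscopes report over time. The short Python sketch below illustrates that single step under simplifying assumptions (one sensor, noise-free readings, a fixed time step); it is meant only to show the principle, not Animazoo’s algorithm.

import numpy as np

def quat_multiply(a, b):
    """Hamilton product of two quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_gyro(orientation, angular_velocity, dt):
    """Advance a limb's orientation by one gyro reading (rad/s, body frame)."""
    angle = np.linalg.norm(angular_velocity) * dt
    if angle < 1e-12:
        return orientation
    axis = angular_velocity / np.linalg.norm(angular_velocity)
    delta = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
    q = quat_multiply(orientation, delta)
    return q / np.linalg.norm(q)  # keep the quaternion normalized

# Example: a forearm rotating at 90 degrees per second for one second at 100 Hz.
q = np.array([1.0, 0.0, 0.0, 0.0])       # identity orientation
omega = np.array([0.0, 0.0, np.pi / 2])  # rad/s about the z axis
for _ in range(100):
    q = integrate_gyro(q, omega, dt=0.01)
print(q)  # roughly a 90-degree rotation about z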

 

Animators at Blue Zoo used Animazoo’s cameraless inertial marker system to capture actors hugging and touching without worrying about occlusion for the preschool television series The Baba House.

Thus, the company decided to use Animazoo’s sensor-based system rather than a camera-based system. Because the most recent Animazoo system provides data straight from the sensors, the crew could see the animation in progress during the capture session.

Games
Game developers are also using gyroscopic mocap. Per Slycke of Xsens, which makes the Moven system, says one reason is that the actors performing the motions can wear full gear. “People move differently and act differently if they’re wearing a heavy backpack,” he says. “Motion capture is about saving time compared to keyframe animation and about getting the subtle human motions that are hard to animate by hand.”

Also providing data direct from markers is PhaseSpace’s active LED-based system. “We’re working with several game developers to drive characters in real time inside the game engines,” says Tracy McSheery, president. “Because each marker blinks its own number, it simplifies the data acquisition tremendously. You can suit up and see a character in the game live, right there.”
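One way to picture the benefit of uniquely identified markers: because every LED arrives already labeled, the software can map markers to skeleton joints with a fixed lookup instead of re-solving marker correspondence every frame. The Python sketch below illustrates that idea; the marker IDs, joint names, and frame format are hypothetical and are not PhaseSpace’s actual API.

# Minimal sketch: applying labeled active-marker data to a skeleton.
# Marker IDs, joint names, and the frame format are hypothetical.

# Each active LED blinks a unique ID, so the mapping from marker to
# joint is fixed once and never has to be re-solved during capture.
MARKER_TO_JOINT = {
    101: "hips",
    102: "left_knee",
    103: "right_knee",
    104: "head",
}

def apply_frame(frame, skeleton):
    """frame: {marker_id: (x, y, z)}; skeleton: {joint_name: position}."""
    for marker_id, position in frame.items():
        joint = MARKER_TO_JOINT.get(marker_id)
        if joint is not None:
            # Labeled data can be applied directly, in real time,
            # without a separate marker-identification pass.
            skeleton[joint] = position
    return skeleton

# Example frame streamed from the capture system (hypothetical values).
skeleton = {}
frame = {101: (0.0, 1.0, 0.0), 104: (0.0, 1.7, 0.05)}
print(apply_frame(frame, skeleton))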

Captured libraries of motion have powered the bodies of monsters, athletes, warriors, and heroes in games for years. Now, for cinematics, developers working with the latest generation of game machines are beginning to ask for the same techniques as filmmakers. “We realized when we started working with the PlayStation 3 that we wanted a stylized reality with believable human characters,” says Amy Hennig, game director for Uncharted: Drake’s Fortune. “Having animators try to animate a scene from beginning to end with subtle human motion would have taken forever.” (For more on this application, see “Subtle Games,” pg. 14.)

Although the animators on Drake’s Fortune keyframed facial expressions and hand movements, many other game developers use facial-capture systems to give their characters expressions. Image Metrics senior research engineer Mike Rogers, for example, notes: “Our technology is fundamentally suited to computer games because of the high throughput.” However, he has noticed a shift in the company’s clientele toward movies since the company opened its Los Angeles studio in 2005.

Image Metrics can work with any video delivered to the company, but, ideally, the video comes from an actor wearing a head-mounted camera pointed at his or her face to catch expressions during a motion-capture session. The team at Image Metrics analyzes the video image and defines the relationship between the actor and the character. “We might take a few frames with extreme poses and set up the relationship between the CG character with its mouth wide open and the actor’s open mouth,” Rogers says. “Then our software fills in between.”
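One way to picture the retargeting Rogers describes is as a mapping built from a few matched extreme poses, with the software filling in the frames between them. Here is a deliberately simplified Python sketch of that idea for a single facial measurement; the parameter names and ranges are invented for illustration and do not represent Image Metrics’ software.

import numpy as np

# Matched "extreme pose" correspondences, set up once per actor/character pair:
# a measured mouth-open value on the actor maps to a rig value on the character.
actor_mouth_open = np.array([0.00, 0.35, 0.80])   # from tracked video frames
character_jaw_rig = np.array([0.00, 0.50, 1.00])  # hand-set character poses

def retarget(measurement):
    """Map a per-frame actor measurement onto the character rig,
    filling in between the matched extreme poses."""
    return np.interp(measurement, actor_mouth_open, character_jaw_rig)

# Per-frame tracked values from the head-mounted camera (hypothetical).
for value in [0.1, 0.35, 0.6]:
    print(round(float(retarget(value)), 3))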

The end result is either an animated character that looks like the actor, with the same wrinkles and expressions, or, more often, an animated animal, monster, or character that doesn’t resemble the performer at all. “We return animation data that’s keyed on every frame,” Rogers says.

Mova’s Contour system, which also captures facial expressions and textures, is being used to retarget performances from humans to characters, as well. “Rhythm & Hues did some ‘quick and dirty’ tests, as they put it, but they still showed subtle motions—for example, movement in the temples when the performer was chewing,” says Steve Perlman, president. Recently, some studios have been using Contour rather than scanning actors to create FACS (facial action coding system) poses that they apply to CG models for animators. “Before, the actors had to hold a position,” Perlman says. “The FACS poses captured with Contour during a performance have a more natural look.”
 
Next Moves
In addition to giving filmmakers the ability to direct CG characters, motion-capture technology helps them plan scenes and previsualize films. But, for games and sometimes for television and film, the quality of a previz is often good enough to land in the final product. “We’re transitioning into coproductions with writers and directors for games and animation direct to DVD,” says Giant’s Madden. “We’re not using previz for the ideas; we’re using it for the actual shot.”

And Motion Analysis’s Blackburn notes: “If you’re tracking a live performance and at the same time tracking a real camera or a prop acting as a real camera, you can bring the handheld camera aesthetic into CG productions and leverage the talents of cinematographers. Cinematics in games are ripe for that kind of production technique, but that doesn’t mean it can’t or isn’t being used for film.”

Vicon is playing with another way to track the camera. “We have special features on a greenscreen that are invisible to the film camera,” Roberts explains. “A special camera mounted on top of the film camera detects them, and from that, we can deduce position and orientation of the principal camera and generate a virtual camera.”
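The geometry behind that kind of tracking is a standard pose-from-known-points problem: given the surveyed 3D positions of the greenscreen features and where the witness camera sees them in its image, software can solve for the witness camera’s pose and then apply the fixed mounting offset to the film camera. The generic Python sketch below uses OpenCV’s solvePnP to show the idea; the coordinates, calibration values, and offset are placeholders, and this is not a description of Vicon’s implementation.

import numpy as np
import cv2

# Surveyed 3D positions of greenscreen fiducials (meters, all on the screen
# plane) and their detected 2D positions in the witness camera's image (pixels).
object_points = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]],
                         dtype=np.float64)
image_points = np.array([[320, 400], [620, 405], [615, 180], [325, 175]],
                        dtype=np.float64)

# Intrinsics of the witness camera (placeholder calibration).
camera_matrix = np.array([[800, 0, 480],
                          [0, 800, 270],
                          [0, 0, 1]], dtype=np.float64)
dist_coeffs = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)

# Witness camera position in world space.
witness_position = (-R.T @ tvec).ravel()

# The film camera is rigidly mounted relative to the witness camera;
# apply the measured offset (placeholder value) in camera coordinates.
offset_in_camera = np.array([0.0, -0.12, 0.0])
film_camera_position = witness_position + R.T @ offset_in_camera
print(film_camera_position)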

And, Judah Graham, creative director of Icarus Studios, which developed the massively multiplayer online game Fallen Earth, has another application for real-time motion capture. “Motion Analysis set up a motion-capture session in our small booth at the Virtual World Conference,” he says, “and streamed real-time motion capture from a live actor into our game engine.” Graham imagines capturing musicians for live performances within a virtual world on the Internet.

Similarly, Andrew Tschesnok, CEO and founder of Organic Motion, which makes a markerless optical system, imagines using his system to drop people into simulations, whether in virtual worlds on the Internet, in military simulations, or in public spaces. “Imagine a karaoke machine where you look like the star,” he says. “Or, a coin-op machine that puts you in the game.”

Moving On Down
In addition to improving technology, several companies and studios are working to bring down motion-capture costs. One of them, PhaseSpace, plans to open a motion-capture facility in Los Angeles. “We’re going to charge $5 a second for motion capture,” says McSheery. “There’s still a setup fee of around $5000, but $5 a second is the new price point for high-quality motion capture.”

NaturalPoint, an Oregon-based company that has made eye trackers for a decade, is releasing OptiTrack, which the company calls a “prosumer” motion-capture system. The system can capture one person in a 20x20-foot volume with a 100-frame-per-second camera. People who want to do motion capture can shop online for the OptiTrack system’s marker sets, cameras, suit, and so forth. A complete basic system with six cameras costs $7000; each camera can read 16 markers. “Once prosumers start using this tool, we’ll see tons of stuff we’ve never seen before,” predicts Jim Richardson, founder.

Also doing motion capture with an eye toward bringing costs down is Mobility Art, a motion-capture studio in Hyderabad, India, which has two 60x60-foot stages, 48 Vicon MX40 cameras, and a large pool of nonunion actors, dancers, fighters, and stunt people to draw from.

“I see us supporting the larger facilities in the US,” says Hans Van der Sluys, partner and executive director. “We can do the secondary characters, the smaller stunts.” The studio currently is working on creating an online library of moves that the facility plans to sell at competitive prices. “Also, we have a division of the company that offers data cleanup, solving, and retargeting onto 3D characters. Because of the time difference, if a studio sends us data from the US at the end of its day, we can have it back to them in the morning.”



Mobility Art in India can mocap Bollywood actors and dancers, as well as interesting animals, for secondary characters and creatures used by studios around the world.

Who would have thought there was enough demand for motion capture to support outsourcing? But, with established companies pushing the technology forward, and new companies nipping at their heels with innovative technology or lower prices, the market for motion capture is growing dramatically. “Last year was the biggest year at the House of Moves by a large amount,” says Roberts. “We more than doubled the year before, and I know other studios have been busy as well. And the upcoming year will be even better.” 
Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at BarbaraRR@comcast.net.
 

Stylized Digital Humans
 

Sony Pictures Imageworks animated the digital actors in Beowulf by capturing the performances of Anthony Hopkins, Angelina Jolie, Ray Winstone, Robin Wright Penn, John Malkovich, and other leading actors, sometimes in groups, and applying the data to CG characters. In addition, the studio captured stunt actors and others who played supporting roles in this all-CG feature film. Animation director Kenn McDonald estimates that animators spent 25 percent of their time perfecting motion captured from the actors’ bodies and 75 percent of the time turning facial-capture data into star-quality performances. (For an in-depth look at this cutting-edge application, see “Heroic Effects,” pg. 16.)  —BR
 

Empowering Animators
Lucasfilm’s Industrial Light & Magic has captured actors and animals for visual effects in films for years, ratcheting up the technology as projects demanded. “We’ve used motion capture on 30 films in the past 10 years,” says Mike Sanders, digital supervisor, who is in charge of data acquisition for the studio. “You’d be amazed by the movies that have digital doubles and digital crowd scenes you didn’t know about.”

Now, Sanders finds himself without a department. “Other companies are ramping up big motion-capture divisions,” he says. “We’ve gone the other direction. We can pipeline 90 percent of a performance through automatically. The last 10 percent can take a little time, especially for stunts, but we’re talking minutes and hours, not days and weeks. I, or someone else, tends to process the data. So, we don’t have a team. We’re very well tuned.”

ILM uses a Vicon-based system for passive optical capture on a 50x50x40-foot stage equipped with 40 four-megapixel cameras. The area is also a full-shooting soundstage with a permanent bluescreen installed, so they can film and composite live-action and CG characters in real time.

Although the studio often uses the system to capture principal actors and specific talent, Sanders encourages animators to drop in. “Animators have felt animosity towards motion capture,” he says. “But, by giving them freedom to use it without major cost incurrence, it becomes like a playground. Rather than videotaping themselves, they can capture motions in 3D and analyze them. They can come down to the stage, put on a suit, explore their character, and the performances will be on their desk by the time they get back there.”
 



(Top) ILM captured actors performing on location in various lighting conditions during principal photography. (Bottom) Data derived from video footage drove the CG Davy Jones and his crew.

The animators working on secondary characters in Pirates of the Caribbean: At World’s End and the robots in Transformers did just that. “Sometimes we audition actors,” Sanders says. “But the animators are already exploring the character; they’re closest to the performance.” On Transformers, animators used motion capture for reference and sometimes for key poses; on Pirates, some animators used their own performances for the secondary characters.

Animators using the mocap stage can wear any of the approximately 30 suits ILM has created over the years. Because many of them use motion capture repeatedly, the studio has skeletons already created for them. “They just click on their name, and they’re off and running,” says Sanders.

In addition to ILM’s visual effects work, LucasArts, Lucasfilm’s game division, uses the new facility to capture motion for real-time game cinematics, and directors are using it for real-time previsualization. “We’ve had just about every top-name director in here playing with the previz system,” says Sanders.

The previz system is the result of a mandate from George Lucas (see “Preconceived Motions,” October 2007). ILM first used motion capture, with an Ascension magnetic system, to previz Star Wars: Episode I—The Phantom Menace, released in 1999. A Vicon passive optical system captured data for hundreds of droids in that film.

For 2001’s The Mummy Returns, ILM’s motion-capture crew captured actor Arnold Vosloo and applied the data in real time to a simple shaded CG character that performed in matchmoved live-action plates shot earlier. In 2004, ILM acquired motion-capture data during a bluescreen shoot of the actors performing the vampire brides for Van Helsing by devising active markers, or LEDs, that didn’t interfere with the film camera. Then, for 2006’s Pirates of the Caribbean: Dead Man’s Chest, ILM became the first studio to capture the performances of multiple actors in leading roles on location during principal photography.

Now, the studio is pushing forward its research into facial capture. “We’ve been developing vision-based approaches for seven or eight years,” says Sanders. “For photoreal facial techniques, you need to acquire the performance, and I think you also need a deep layer of global math and character exploration. You need to have a core shape space for what your actor or creature is capable of in 3D. When you develop shapes regionally or specifically, like in a FACS-based approach, you can lose subtle deformation around the entire face.”
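The “core shape space” Sanders mentions can be pictured as a set of basis shapes spanning everything a face can do, with each captured frame expressed as a weighted combination of them rather than as a handful of regional poses. The Python sketch below shows that representation in its simplest linear form; the mesh, basis, and dimensions are made-up toy data, not ILM’s system.

import numpy as np

rng = np.random.default_rng(0)

num_vertices = 500                  # toy face mesh
mean_face = rng.normal(size=(num_vertices, 3))

# A small "shape space": basis shapes spanning what the face can do.
num_shapes = 8
basis = rng.normal(size=(num_shapes, num_vertices, 3))

def synthesize(weights):
    """Face = mean shape + weighted sum of basis shapes."""
    return mean_face + np.tensordot(weights, basis, axes=1)

def solve_weights(captured_face):
    """Project a captured frame into the shape space (least squares)."""
    A = basis.reshape(num_shapes, -1).T          # (3*V, S)
    b = (captured_face - mean_face).ravel()      # (3*V,)
    weights, *_ = np.linalg.lstsq(A, b, rcond=None)
    return weights

# Round trip: generate a frame from known weights, then recover them.
true_weights = rng.uniform(-1, 1, size=num_shapes)
frame = synthesize(true_weights)
print(np.allclose(solve_weights(frame), true_weights))  # True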

Sanders also keeps abreast of new technologies developed outside ILM—everything from stereo-based photography to any type of motion-capture system. “I like to look at how I can take advantage in another arena or create a hybrid,” he says. “To replicate a performance takes a host of technologies and solutions. The integration of the right tools will ultimately lead toward photoreal CG humans. That’s what I look forward to: believable emotional CG realism.”  —BR
 

Real-time Aliens
For the Cartoon Network’s original movie Ben 10: Race Against Time, the studio Super 78 used Motion Analysis’ Hollywood-based studio to capture real-time data for two CG characters. “It’s a live-action film, with 3D characters,” says Super 78’s Brent Young, “about a boy who finds a watch that gives him the ability to transform into 10 different alien forms.” Super 78 created the visual effects and animation for all the 3D characters. For two of the characters—Heatblast and Diamond Head—they used motion capture. For Heatblast, the animators found 90 percent of the data useful; for Diamond Head, they could directly apply about 30 percent.
 

Super 78 used a Motion Analysis system to capture motion for this CG character.

“Heatblast’s physical being is similar to the motion-capture actor,” Young says. “Diamond Head is a 9-foot-tall, 1000-pound chunk of rock, so the animators needed to give him a different weight. However, because the director gave the motion-capture actor specific directions on how the scene should play out, the captured data served as a good foundation.”

During the capture, the director and crew could watch the CG characters perform in a live-action plate shot earlier. “Our mocap actor drove a 3D model that was live in the plate,” Young says. “We could watch the character on a giant projection screen, probably 10x15 feet.”

Adds executive producer Dina Benadon, “The director could direct the action as if it were happening in real time. It was very effective. We got a lot of shots out in a short amount of time.” —BR
 


Real-time Digital Puppeteering
Puppeteers at the Jim Henson Company are using a variety of proprietary tools with motion-capture technology from Motion Analysis to control CG puppets in real time. In the Henson Digital Performance Studio (HDPS), data captured from a performer moves a CG character’s body while a puppeteer uses a set of hand controls to perform the face and other attributes: the character runs on motion-captured data but expresses emotion and moves its lips through the puppeteer’s controls. Often, the puppeteer also voices the character. For hand movement, the studio uses proprietary gloves that capture motion in real time with a mechanical-based system.
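At a high level, this amounts to merging two live data streams onto a single rig every frame: joint data from the capture volume drives the body while channel values from the puppeteer’s hand controls drive the face. The minimal Python sketch below shows that merge; the channel names and data layout are hypothetical and do not reflect the Henson or Motion Analysis software.

def merge_streams(body_frame, face_frame):
    """Combine one frame of mocap body data with one frame of
    puppeteered face data into a single set of rig values."""
    rig = {}
    # Body joints come from the performer in the capture volume.
    for joint, rotation in body_frame.items():
        rig[f"body/{joint}"] = rotation
    # Facial channels come from the puppeteer's hand controls.
    for channel, value in face_frame.items():
        rig[f"face/{channel}"] = value
    return rig

# One frame from each live source (hypothetical values).
body_frame = {"spine": (0.0, 5.0, 2.0), "left_arm": (40.0, 10.0, 0.0)}
face_frame = {"jaw_open": 0.6, "smile": 0.3, "blink": 0.0}
print(merge_streams(body_frame, face_frame))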
 

Puppeteers using the Henson Digital Performance Studio move a character’s hands, create its expressions, and perform other attributes, all in real time.

The company is currently concentrating on creating a children’s television series for PBS by performing the characters in real time using the HDPS. They’re building and rigging the characters in Autodesk’s Maya and plan to begin shooting in January.

“We have a full digital performance studio that’s similar to a real, live television broadcast studio,” says Kerry Shea, head of digital production. On stage, three camera controllers allow camera operators to move virtual cameras in real time in the CG environment. A switch director calls the camera during the shot.

“Everyone on stage can see the CGI environment,” says Shea. “It’s a great way to bring in a director from any discipline because, just like with live action, the director can call takes.” In addition to the mechanical camera heads, the switch director can also see the virtual environment through a markered steadicam camera.

“Imagine a TV show with someone calling out ‘Camera 1, Camera 2,’” Shea says. “It’s all rendered in hardware and fed into the viewer. Once shot, the hardware renders go into editorial. Our goal is to perform a perfectly lit, perfectly rendered character in real time with no post, and we’re collaborating with several software and hardware partners to get the finest quality performance we can in real time.”

A real-time system allows the directors to make creative decisions during the production process. “Anything that moves on the floor is decided at that time,” says Nicole Goldman, vice president of marketing. “If a door opens or closes, the decision is made at that time. It gives us spontaneous looks, and motion capture is part of that.” —BR
 

 
Gameplay
To create the players for such sports games as NBA 08 and other titles for the Sony PS3, Sony Computer Entertainment America (SCEA) created a dedicated studio in San Diego to capture performances using a Vicon-based passive optical system. The new building has laser-leveled floors and appropriate acoustics for voice capture.

“We’re running two zones—a full-body zone and an integrated zone,” says Brian Rausch, senior manager in the visual arts services group, which includes the cinematics and motion-capture department. “The full-body zone is a 50x30-foot area in which we capture the body, head, and hands, but no fingers or faces for in-game action. There, we use 12mm markers. In the integrated zone, we use 6mm markers to capture full body, face, fingers, and voice for dramatic sequences used in cinematics.”
 

Sony Computer Entertainment America’s dedicated motion-capture studio in San Diego uses a Vicon-based passive optical system for capturing athletes and actors for PS3 games.

All told, there are approximately 88 cameras, which can move to either zone. Normally, the group shoots with 56 to 60 cameras to capture the smaller markers in the integrated zone, and 56 cameras in the full-body zone. They’ve captured as many as 13 people in the full-body volume and four in the integrated volume. “Those aren’t hard limits,” Rausch says. “That’s all people have asked for. We could put more cameras in there and capture more.”

As the company ramped up for the PS3, the team was producing 13.5 character hours per month for “signed, sealed, and delivered cinematics in games,” and enhancements to the core animation library, according to Rausch. “It was ruthless,” he says.

Now that the PS3 is out, SCEA can take more time to produce titles. “We’re capturing performances, not just capturing movements,” Rausch says. “Within the last four years, we started putting actors in suits, not just anyone we could grab to do a walk cycle. I think part of the reason motion capture got a bad name was because of the talent put into the suits. I want to take what an actor can do and preserve that in digital characters.” —BR