It has been used to give lifelike performances to a wide range of characters. Gollum. Caesar the ape. Hulk. Tintin. Davy Jones. Beowulf. The Na’vi. The list is extensive and seemingly endless. And the technology? Motion capture.
One of the more popular films to break the animation barrier through extensive use of motion capture was Final Fantasy: The Spirits Within (2001). At the time, the industry had strong feelings about mocap, either embracing it despite its technological limitations or avoiding it altogether. But as the equipment and software became more robust and finely tuned, motion capture grew into a popular animation technique.
Today, motion capture encompasses so much more than replicating a walk or a jump. The rise of facial motion capture through the use of camera rigs or laser systems has given animators nuanced performances that would have been extremely difficult, if not impossible, to achieve manually via keyframing.
Facial mocap can be done with markers or with a markerless tracking system. With the former, a multitude of tiny markers are placed on the actor’s face, and their movement is tracked with high-resolution cameras (the practice used to replicate the facial expressions of actor Tom Hanks and apply them to the characters in 2004’s The Polar Express). Active LED marker technology has refined this process, enabling real-time feedback. Markerless facial tracking, meanwhile, recognizes facial features (nose, lips) and tracks them directly. This technology was used in Harry Potter and the Order of the Phoenix (2007).
Both marker and markerless facial systems must capture very high-resolution data in order to register the subtle raise of an eyebrow or a barely perceptible movement of the lips. The end result is often a particularly emotive performance from a CG character, one that is more relatable and believable to the audience.
Another recent trend has been capturing the movement of an actor’s hands and fingers. For this, various marker and markerless glove solutions are used. While hand capture has many applications within entertainment and beyond (motion analysis, 3D input, biometrics, puppeteering, and so forth), it is expected to play a big role in virtual and augmented reality as more applications in this realm emerge.
Facial capture, along with hand/finger capture, has led to the evolution of “performance capture,” whereby an actor’s body movements and facial expressions are acquired and translated onto a digital character. Robert Zemeckis’s now-defunct ImageMovers Digital built its moviemaking process around performance capture in CG films such as The Polar Express, Beowulf, and A Christmas Carol.
There are many examples of successful performance capture, but one person has perfected the art and helped propel it to a new level: actor Andy Serkis, whose movements and expressions have been applied to several memorable CG characters, including Gollum, Caesar, and King Kong – with Weta’s technological know-how, of course. In each case, Serkis’s acting was captured, translated, and applied to the CG character. The performances were so successful that many debated whether his turn as Gollum should have earned him a Best Actor nomination.
Moving Mocap Forward
Like facial capture, full-body motion capture has evolved by leaps and bounds. While mocap’s roots were as an analysis tool in the fields of science, sports, and education, it is well known for its early use in the video game Prince of Persia. Similarly, the film Sinbad: Beyond the Veil of Mists stakes its claim as one of the first movies made primarily with motion capture – an optical system in which the actor was covered with tiny reflective tracker balls whose movements were triangulated within the computer.
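The triangulation the article describes can be illustrated with a minimal sketch: given the same reflective marker seen by two calibrated cameras, its 3D position can be recovered with a standard linear (DLT) method. The function name and the simple camera setup here are illustrative assumptions, not details from any particular mocap product.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one marker from two camera views.

    P1, P2: 3x4 camera projection matrices (calibration is assumed known).
    x1, x2: 2D image observations of the same marker in each camera.
    Returns the estimated 3D marker position.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: x * (P[2] @ X) - (P[0] @ X) = 0, etc.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the null vector of A, found via SVD.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize to a 3D point
```

With dozens of cameras and hundreds of markers per frame, a production system solves this same small problem many thousands of times per second, which is why marker identification and camera calibration dominate the engineering effort.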
Initially expensive and time-consuming, motion capture began as a specialized tool, often getting a performance “80 to 90 percent there” – though that last 10 to 20 percent was extremely difficult to tackle. Those who know their CG history will recall that optical capture also was used in The Mummy, for the crowds in Gladiator, and for Jar Jar Binks, the first main, fully digital character, in Star Wars: Episode I – The Phantom Menace. Although Final Fantasy: The Spirits Within tanked at the box office, the film set a benchmark for the technology, with more than 1,300 live-action scenes turned into CG animation.
Yet, it was film legend Peter Jackson whose vision set mocap on its meteoric rise. When Gollum became a main character in The Lord of the Rings: The Two Towers (2002), he did so thanks to a new advancement: real-time performance capture achieved on set with the other actors, “in the moment.” Well, almost. First, Serkis performed the scenes on set with the other actors; then the same scenes were shot without him. Later, Serkis replicated his performance back at Weta while wearing a marker-covered mocap suit.
For The Two Towers, Gollum’s facial expressions were hand-animated, with animators using reference footage of Serkis as a guideline. Not so for Jackson and Weta’s King Kong, for which tiny markers were attached to Serkis’s face to track subtle muscle movement. This process was used for other well-known digital characters, including Davy Jones in Pirates of the Caribbean: Dead Man’s Chest, though in that instance, actor Bill Nighy’s full performance was filmed on set, not separately on a mocap stage.
Avatar, Happy Feet, and The Adventures of Tintin proved that mocap could successfully transform a digital character into one that is very humanlike, one that can dance, and one that can act. But Rise of the Planet of the Apes (2011) became a turning point for the technology, with Serkis transforming Caesar into a lead digital character like no other. Weta had experimented with on-location motion capture for The Lord of the Rings and had pushed the state of the art for the performances in Avatar, but what the studio faced with Rise went far beyond those requirements.
That is, until the group topped itself with the sequel Dawn of the Planet of the Apes (2014), using a new mocap system that let it gather performance-capture data in extreme locations and weather – crucial, since this marked the first time Weta filmed entirely on location, sometimes in conditions that were far from ideal. The studio captured the actors’ performances on location during filming using mocap cameras, helmet-mounted facial rigs, and spots of LEDs on the actors’ bodies. And because the new system was wireless, the cameras could be placed freely around the set.
Since then, with films like The Avengers and The Hobbit series, performance capture has continued to evolve: the equipment has become more durable, able to sustain the physicality of the actors on set, while the fidelity enabled by the hardware has reached new heights. And let’s not forget the software, which has simplified the process and made the technology attractive to just about everyone.
Without question, the applications and the equipment have evolved tremendously over the years. Early low-frequency magnetic systems tethered performers, thus limiting their range of motion. Mechanical systems, which measure joint angles, are also limiting in terms of motion but can be used in any environment. Optical systems use markers, which are usually tracked by infrared cameras. This continues to be a very popular method, though occlusion can be an issue.
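The occlusion problem mentioned above – a marker temporarily hidden from every camera – is commonly patched in post by interpolating across the gap. A minimal sketch of that idea, using NaN to mark occluded frames (the function name and data are illustrative):

```python
import numpy as np

def fill_gaps(track):
    """Fill occluded frames in one coordinate of a marker trajectory.

    track: per-frame values for a single marker coordinate, with NaN
    marking frames where the marker was occluded from all cameras.
    Returns the track with gaps filled by linear interpolation.
    """
    track = np.asarray(track, dtype=float)
    frames = np.arange(len(track))
    valid = ~np.isnan(track)          # frames where the marker was seen
    filled = track.copy()
    # Interpolate the missing frames from the surrounding valid samples.
    filled[~valid] = np.interp(frames[~valid], frames[valid], track[valid])
    return filled
```

Production pipelines use far more sophisticated gap-filling (spline fits, rigid-body constraints from neighboring markers), but the principle – reconstructing the hidden motion from the frames around it – is the same.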
Within the optical family, there are active (pulsed-LED) markers, passive (reflective) markers, and semi-passive markers. Beyond that, there are markerless systems, inertial systems, and magnetic systems. Each offers advantages and disadvantages, as does motion capture itself. There are expensive, high-fidelity systems and low-cost options. Many systems require form-fitting Lycra suits with embedded or attached markers, while others are marker-free.
Motion-capture options come in many flavors to suit a variety of needs. Today, motion capture – or the more encompassing performance capture – is a vital animation tool, whether for background characters, crowds, or lead digital characters, humanlike or not – thanks to the technologists and visionaries who developed the systems and daringly put them to use in new, unique ways.