For more than 35 years, facial capture technology has been a progressively more useful and sophisticated tool in the filmmaker’s repertoire. From the earliest days of facial laser scanning in the 1980s to recent advances in performance capture, the technology now enables 3D facial animation that is increasingly indistinguishable from real life.
In some cases, it’s clear to the viewer that a digital face is being used, such as when famous actors are scanned for fully CG animated movies or for video games. But increasingly, facial scanning powers post-production wizardry, delivering demanding visual effects without cluing audiences in to the magic behind the moment. And the technology is only getting better and more ubiquitous.
Facial capture in movies really started with Cyberware, which developed the ability to scan the face of a completely static actor in 3D and then use the data for computer-generated graphics. For example, 1986’s Star Trek IV: The Voyage Home scanned the likes of William Shatner and the late Leonard Nimoy, and used their digital likenesses in a dazzling dream sequence.
Writer Colin Urquhart's digital double capture
That opened the doors for visionary filmmakers to use Cyberware’s technology in ever more demanding ways. James Cameron helped lead the charge in The Abyss, which mapped captured facial animation onto a liquid pseudopod. He then took things to another level in 1991’s Terminator 2: Judgment Day with Robert Patrick’s stunning, shapeshifting T-1000 assassin, which eerily swaps between human and liquid metal forms.
As the decade went on and Hollywood produced more ambitious blockbusters, filmmakers needed more sophisticated facial-capture technology. In came the use of facial markers, which made it possible for creators to capture more nuanced facial movements and translate them onto CG characters.
We saw them in films like The Polar Express and Beowulf, both directed by Robert Zemeckis, as famed actors such as Tom Hanks and Angelina Jolie had their performances captured and translated onto the big screen as fully CG digital doubles. It was clear at this point that recognizable, big-name actors were driving demand for digi-doubles that looked and moved like their real-life counterparts.
In the case of The Matrix Reloaded, Hugo Weaving’s iconic Agent Smith was multiplied by the hundreds using digi-doubles that simultaneously attacked and tried to overwhelm the heroic Neo. And in King Kong, mocap legend Andy Serkis donned facial markers along with a bodysuit to create the captivating performance of the titular giant gorilla in Peter Jackson’s film.
Facial capture also began to be used in video games, such as Activision’s Apocalypse on the original PlayStation, which captured both the likeness of Bruce Willis and his performance using marker-based motion capture. As video games like Beyond: Two Souls and Detroit: Become Human became more advanced, they made more expansive use of the captured likenesses of human actors, which were dropped into their immersive 3D worlds.
Although rudimentary at first, head-mounted capture (HMC) rigs began to be used in the 2000s to allow simultaneous face and body motion capture of multiple actors. Early HMC solutions used sparse facial markers and captured the movements of an actor’s face with only relatively low fidelity. Later HMC systems, however, were able to surpass the fidelity of facial motion capture that was possible with traditional longer-distance fixed camera mocap systems.
Facial capture developed quickly towards the end of the 2000s, driven by the introduction of photogrammetry, a technique that computes the 3D position of a surface point from its location in multiple images. For example, rather than using a sparse set of markers, the Mova Contour system applied fluorescent speckled makeup to an actor’s face to capture dense 3D facial scans at video rates. One of the earliest examples of 4D facial capture, the Mova Contour system was used by Digital Domain to both age and de-age Brad Pitt in 2008’s The Curious Case of Benjamin Button, and was later used for movies such as Guardians of the Galaxy and the game Rise of the Tomb Raider.
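The geometric principle behind photogrammetry is triangulation: if the same surface point is seen by two calibrated cameras, its pixel coordinates in each image constrain its 3D position. The sketch below is a minimal, illustrative two-view triangulation in Python with NumPy; the camera matrices and point are hypothetical, and real systems such as Mova Contour or DI4D solve this densely for thousands of points per frame with far more sophistication.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its pixel
    coordinates in two calibrated cameras.
    P1, P2: 3x4 camera projection matrices.
    x1, x2: (u, v) coordinates of the same point in each image."""
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: x * (P[2] @ X) = P[0] @ X, etc.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the null vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Hypothetical setup: one camera at the origin, one shifted along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, 0.1, 5.0])        # a point on the "face"
h1 = P1 @ np.append(X_true, 1.0)          # project into each camera
h2 = P2 @ np.append(X_true, 1.0)
x1, x2 = h1[:2] / h1[2], h2[:2] / h2[2]

print(triangulate(P1, P2, x1, x2))        # recovers [0.2, 0.1, 5.0]
```

Repeating this for every matched point (or every speckle of makeup) across many camera pairs, frame after frame, is what turns 2D footage into the dense "4D" scans described above.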
Colossus in Marvel’s 'Deadpool 2'
Later, Depth Analysis created the MotionScan system to capture actors’ facial performances using photogrammetry without the need for markers or special makeup. This system was used extensively in the development of Rockstar Games’ 2011 title L.A. Noire, one of the first video games to rely on nuanced facial animation of digital doubles.
Over the last 10 years, DI4D has pioneered the use of photogrammetry-based 4D capture of facial performance data for the most demanding entertainment projects. The nine-camera DI4D Pro system can be used to capture the highest-fidelity 4D data for a single seated actor, while stereo-camera DI4D HMC systems can simultaneously capture 4D data for multiple dynamic actors. Most recently, DI4D launched the hybrid Pure4D solution, which combines the benefits of both approaches to meet next-gen game developers’ demand for high-fidelity facial animation of digital doubles at scale.
Such photogrammetry-based 4D capture systems have been used for a range of leading movie and video game projects in recent years, including Blade Runner 2049, to help re-create actress Sean Young’s performance as the replicant Rachael, and for the metal mutant Colossus in Marvel’s Deadpool 2. The same technology has also been used for highly lifelike digital human characters in Activision’s Call of Duty: Modern Warfare and EA Sports’ F1 2021.
As more and more movie and game fans watch and play on 4K screens, and as filmmakers and video-game developers aim to deliver ever more awe-inspiring and hyper-detailed content, facial capture is being used with increased frequency.
Fortunately, not only has the technology become more powerful over the years, but also more versatile and easier to use. Today’s facial-capture solutions help enable storytelling and bring creative visions to life like never before.
EA Sports’ 'F1 2021'
For video games, new technologies, including DI4D’s Pure4D, are driving a shift not only in character detail but also in how characters are created. Characters featured in the cut-scenes of the recent F1 2021 game, for example, were created in their entirety from acting performances, with motion capture, facial capture, and audio from an actor’s performance combining to drive their digital-double counterparts in the game.
Performance capture as a tool is nothing new to the entertainment industry and has been used for many years. The difference now is that body and facial-tracking technology is so accurate that digital characters can embody the life and feel of a real actor. We’re moving into an era in which the performance of the actor will be absolutely key to delivering the best possible results from a digital double, without the need for any subsequent artist intervention. It’s a shift from post-production to pre-production.
From their origins in blockbuster movies and video games, digital doubles are expanding their reach and utility. It is becoming more common for these characters to act as digital customer service reps in new verticals like retail and even healthcare. In tandem, we’re also seeing the fidelity of digital doubles evolve to the point where it is almost impossible to tell them apart from their real-life counterparts. This use across multiple industries is only likely to fuel the demand for even more realistic digital double character creation in the years to come.
Colin Urquhart is a 20-plus year industry veteran and innovator in the field of facial capture, and the founder of DI4D, which has worked on a host of entertainment projects, including 'Blade Runner 2049'; 'Call of Duty: Modern Warfare'; 'Love, Death + Robots'; and 'Quantum Break'.