Facial Animation
Colin Urquhart
Issue: January-February-March 2023


The evolution and challenges of capturing realistic facial performances.

Visionary filmmaker Robert Zemeckis has often used his movies to push the boundaries of visual effects. From the spectacular climactic clock-tower scene in Back to the Future (1985) to the seamless interaction of live-action actors and cartoon characters in Who Framed Roger Rabbit (1988), he has repeatedly harnessed cutting-edge technology to bring memorable characters and stories to the screen.

In particular, The Polar Express (2004) and Beowulf (2007) charted a revolutionary new direction for the future of the movie industry and beyond, as the first films where the CGI characters were recognizably based on real-life actors, including Tom Hanks and Angelina Jolie. They were also the first movies to use full performance capture (PCAP), where face and body motion capture and voice recording were all acquired simultaneously.



These techniques were further developed with the digital aging of Brad Pitt in The Curious Case of Benjamin Button (2008) and the pioneering performance capture process in James Cameron’s sci-fi epic Avatar (2009). Video games also began adopting similar techniques: Activision’s Apocalypse (1998) was one of the first to use laser scanning to capture Bruce Willis in 3D, and Rockstar Games’ L.A. Noire (2011) was among the first to rely on nuanced facial animation of digital doubles. Remedy Entertainment’s Quantum Break (2016) won awards for its in-game digital doubles of actors Shawn Ashmore and Aidan Gillen.

For over twenty years, DI4D has been bringing animation to life through facial performance capture across movies, television, and video games. We’ve pioneered the use of photogrammetry-based 4D capture of facial performance data for many demanding projects. From capturing actor Deep Roy for the Oompa Loompas in Charlie and the Chocolate Factory (2005) to providing over four hours of facial animation for Activision’s Call of Duty: Modern Warfare II, the best-selling video game of 2022, we’ve transformed the technology and pipeline for bringing digital doubles to life.



One of the main challenges has been to give actors the flexibility and freedom to perform naturally, while capturing their facial performances accurately. The use of 4D scanning normally requires the actor to keep their head quite still while they are captured. Our DI4D PRO system comprises nine synchronized 12-megapixel machine vision cameras, all zoomed in to capture the actor’s face with the highest resolution possible. 
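At its heart, photogrammetry-based capture works by triangulating the same facial feature as seen from multiple calibrated cameras, frame after frame. The short Python sketch below illustrates that underlying principle with a standard linear (DLT) triangulation; it is a generic illustration only, not DI4D’s production algorithm, and the camera matrices and image coordinates in the example are synthetic.

import numpy as np

def triangulate_point(projection_matrices, pixel_coords):
    """Recover one 3D point from its 2D observations in two or more
    calibrated cameras, using the standard linear (DLT) method."""
    rows = []
    for P, (u, v) in zip(projection_matrices, pixel_coords):
        # Each camera contributes two linear constraints on the homogeneous
        # 3D point X: u * (P[2] @ X) = P[0] @ X, and similarly for v.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # The solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # convert from homogeneous coordinates

# Example with two synthetic cameras observing the point (0, 0, 5):
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                   # camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])   # camera shifted by 1 unit
point = np.array([0.0, 0.0, 5.0, 1.0])
uv1 = P1 @ point; uv1 = uv1[:2] / uv1[2]
uv2 = P2 @ point; uv2 = uv2[:2] / uv2[2]
print(triangulate_point([P1, P2], [uv1, uv2]))  # ≈ [0. 0. 5.]

Repeating this for a dense set of correspondences across all nine camera views, on every frame, is what turns synchronized video into a moving 3D mesh, which is why camera count, resolution, and synchronization matter so much.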

While the resulting 4D facial performance data is incredibly accurate, remaining still during capture can inhibit an actor’s performance, especially for a dynamic action scene. Acting is about reacting, and the requirement to capture only one actor at a time can also constrain natural performance. It is a testament to the actors and directors we have worked with that so many memorable performances have been captured with the DI4D PRO system.



To free actors from this restriction on movement, we have spent the last eight years developing and refining the ability to acquire 4D facial performance data from a helmet-mounted camera (HMC) system worn by the actor. The result is our DI4D HMC, a wireless, helmet-mounted stereo camera system designed to obtain high-quality 4D facial performance data simultaneously from multiple actors during PCAP. Capturing face, body, and vocal performance simultaneously avoids the timing issues that arise when these elements are captured separately and “Frankenstein-ed” together in post-production. PCAP also allows actors to move freely and to play off one another, resulting in much more natural and engaging performances.
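Part of what makes simultaneous capture work is that the face, body, and audio streams are all recorded against a shared timecode, so aligning them is a simple offset calculation rather than a manual edit. The sketch below illustrates that idea; the Take structure and its fields are hypothetical, the example ignores drop-frame timecode, and it is not part of any DI4D tool.

from dataclasses import dataclass

@dataclass
class Take:
    # Hypothetical take metadata; real PCAP pipelines record far more.
    name: str
    start_timecode: str  # SMPTE "HH:MM:SS:FF" (drop-frame ignored here)
    frame_rate: float

def timecode_to_frames(tc: str, fps: float) -> int:
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return int(round((hh * 3600 + mm * 60 + ss) * fps)) + ff

def frame_offset(reference: Take, other: Take) -> int:
    """Signed frame difference between the start of `other` and `reference`."""
    assert reference.frame_rate == other.frame_rate, "streams must share a clock"
    return (timecode_to_frames(other.start_timecode, other.frame_rate)
            - timecode_to_frames(reference.start_timecode, reference.frame_rate))

# Example: a face HMC take that started 12 frames after the body mocap take,
# both locked to the same 60 fps timecode generator.
body = Take("body_mocap", "10:15:02:00", 60.0)
face = Take("face_hmc", "10:15:02:12", 60.0)
print(frame_offset(body, face))  # 12

When streams are recorded in separate sessions there is no shared clock to compute such an offset from, which is exactly what forces the error-prone manual alignment described above.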

The number, resolution, and quality of cameras that can be mounted on an HMC and worn comfortably are limited, usually to only two 3-megapixel cameras with fisheye lenses. This reduces the fidelity of the 4D data that can be captured. Our recently launched PURE4D pipeline addresses this by combining lower-fidelity 4D performance data captured with an HMC system and higher-fidelity 4D expression data captured with our DI4D PRO system. The result is the best of both worlds: the natural performance obtained during PCAP combined with the fidelity of DI4D PRO data, bringing us even closer to creating digital doubles that are indistinguishable from reality.
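One common way to combine a lower-fidelity performance with a higher-fidelity expression library is to solve per-frame weights against the low-resolution expression shapes and then apply those same weights to the corresponding high-resolution shapes. The sketch below illustrates that general idea; the function names and array shapes are assumptions for illustration, and it is not a description of the actual PURE4D pipeline.

import numpy as np

def solve_weights(frame_lo, neutral_lo, basis_lo):
    # frame_lo, neutral_lo: flattened (n_lo * 3,) vertex positions of one
    # lower-resolution captured frame and the lower-resolution neutral mesh.
    # basis_lo: (n_shapes, n_lo * 3) expression deltas relative to the neutral.
    delta = frame_lo - neutral_lo
    # Unconstrained least-squares fit; clipping afterwards is a crude stand-in
    # for a properly constrained solve.
    w, *_ = np.linalg.lstsq(basis_lo.T, delta, rcond=None)
    return np.clip(w, 0.0, 1.0)

def upres_frame(weights, neutral_hi, basis_hi):
    # Apply the same weights to the (n_shapes, n_hi * 3) high-resolution
    # expression deltas to produce a detailed version of the captured frame.
    return neutral_hi + basis_hi.T @ weights

The design choice this illustrates is that the timing and intent of the animation come from the performance captured on set, while the fine-scale surface detail comes from the high-resolution expression data captured separately.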



All of the main characters in Activision’s best-selling 2022 video game Call of Duty: Modern Warfare II are highly realistic digital doubles of the actors who play them. Developer Infinity Ward used PCAP extensively in production, capturing the actors’ facial performances with stereo HMCs and processing the results with PURE4D. This ensured that the highest possible fidelity and acting quality were both preserved and transferred faithfully onto the characters, achieving a level of realism previously seen only in movie visual effects and pre-rendered game cinematics.

With 4D facial capture, digital double characters will become increasingly indistinguishable from real life, delivering the most emotionally engaging storytelling experiences possible. As performance capture and facial animation technology continue to evolve, the early promise demonstrated by pioneering filmmakers like Robert Zemeckis and James Cameron will be realized increasingly across movies, television, video games, and even metaverse projects.

About the Author
Colin Urquhart, a thirty-year industry veteran and innovator in the field of facial capture and animation, is the CEO and co-founder of DI4D. The DI4D team has worked on a host of leading entertainment projects, including Blade Runner 2049, Call of Duty: Modern Warfare, Love, Death & Robots, and Quantum Break.
 
Subscribe to the free digital edition of CGW Magazine: https://bit.ly/3N8BmJx