Issue: Winter 2019


A digital fountain of youth poured onto the big screen this year, giving several leading actors the opportunity to play characters at much younger ages. In three films, The Irishman, Gemini Man, and Captain Marvel, lead actors Robert DeNiro, Al Pacino, Joe Pesci, Will Smith, and Samuel L. Jackson appeared at younger ages throughout their films.

Artists at three studios - Industrial Light & Magic, Weta Digital, and Lola - took the lead in "youthenizing" actors for these feature films. De-aging is a two-part process: capturing and applying the actor's performance and changing the physical appearance.

Each studio took a different approach.


The Irishman: Industrial Light & Magic

In this film, the aging former truck driver and self-confessed hit man Frank Sheeran reminisces about his life and relationship with Teamster boss Jimmy Hoffa and mobster boss Russell Bufalino.

Martin Scorsese directed the film adaptation of the book "I Heard You Paint Houses" by Charles Brandt. Netflix released The Irishman in November. It powered out of the gate with film festival awards and a 96 percent approval rating from critics compiled by Rotten Tomatoes.

Three septuagenarian megastars led the cast and played characters at various younger ages. Robert DeNiro, who was 76 during filming, plays Sheeran from ages 20 to 80. Al Pacino, 78, plays Jimmy Hoffa from age 37 to his disappearance in 1975; and Joe Pesci, 76, plays Bufalino from ages 47 to 72.

ILM created the youthful characters. Pablo Helman was visual effects supervisor, with Leandro Estebecorena, Nelson Sepulveda, and Ivan Busquets as associate visual effects supervisors. Artists in San Francisco and Vancouver worked on the show.

This is not the first time ILM has created a digital lead actor - the studio won an Oscar for turning Bill Nighy into a half-dead pirate for Pirates of the Caribbean: Dead Man's Chest in 2006. Ten years later, the studio brought actor Peter Cushing back to life through a digital character that plays Tarkin for a few shots in Rogue One: A Star Wars Story.

Now, for The Irishman, the crew created younger versions of the characters played by DeNiro, Pacino, and Pesci who appear throughout the entire film. To do so, the studio developed a new system called Flux. Stephane Grabli led the R&D Flux team.

"We could have captured them using head cameras and dots on their faces," Helman says. "But when I met with Martin, he said, 'No head cams. No volume. I want them to be on set with theatrical lighting. You figure it out.' "

They did just that. And more. No head cams. No volume. No special lighting. And, no keyframe animation.



Acting their Ages

The actors in The Irishman played characters many years younger than themselves throughout the film. Even so, they didn’t have body doubles or digital body doubles. Instead, the actors did yoga, and a body analyst was on set every day. “Marty [Director Martin Scorsese] had a specific idea about what the characters’ lives had been,” says Visual Effects Supervisor Pablo Helman. “Frank Sheeran (Robert DeNiro) had a really rough life. It was OK if he didn’t always walk as a younger person might. The majority of the movie is conversation. And, the whole movie is from Sheeran’s point of view, from the memory of this character. When there was action, they worked it out with the movement person.”

No Head Cams, No Markers

"The idea was to capture the most amount of information we could without markers," Helman says. "And, if there were no markers, the software we would develop would need to derive everything from the light and textures captured on set. So, we came up with a rig that used infrared cameras and didn't stop Marty [Scorsese] from doing anything. We worked closely with Director of Photography Rodrigo Prieto and Arri Los Angeles."

The rig has three cameras placed side by side on a 30-inch bar, narrow enough to fit through a door. In the center is the RGB camera, flanked on each side by an infrared film-grade Arri Alexa Mini.

"We needed to neutralize the light without changing the lighting on the set," Helman says, explaining the need for the infrared cameras. "In effect, rather than taking the actor into a controlled environment, we created a controlled environment on the set."

No other "witness" cameras were needed. The infrared light didn't interfere with the theatrical lighting on set and produced images without shadows. The actors and director didn't see it.

The actors could sit at a table in a crowded, busy restaurant and lean toward each other to talk. Scorsese could film them in close-ups, and as he did, ILM captured their facial expressions using the three cameras on that one rig.

Two camera operators controlled the cameras remotely. One operator managed the main camera. Another operator controlled the infrared cameras, which have a different depth of field.

No Keyframe Animation

Then, the magic happened. Helman describes the process used to create DeNiro's more youthful Sheeran from footage and data captured from the three cameras.

"Once I got the take, I brought the footage from the three cameras here to ILM," Helman says. "We also had the data gathered on set: HDRIs for light and density, and Lidar data to know where all the lights were. The footage went through layout to solve the camera [determine the camera view], and we did matchimation for the bodies and heads. All that data - the layout, roto, HDRI, Lidar - went into Flux with information from the three cameras. The software made a cocktail of it. Flux figures out where the actor is in 3D space and derives geometry from the three cameras to create a digital double of the actor."

Flux produces an albedo model showing a representation of light and textures and a plastic shaded render. The software then compares its digital double to a model built of the actor and deforms the model on a per-frame basis to match the actor's performance.
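The compare-and-deform step Helman describes can be reduced to a toy sketch: treat each frame of the captured performance as per-vertex displacements from the actor's neutral model, then apply those same displacements to the younger model. This is a loose illustration with invented four-vertex "meshes," not ILM's proprietary Flux pipeline, which works from dense geometry derived on set.

```python
import numpy as np

def deform_to_performance(neutral, captured):
    """Per-frame deformation: the displacement of each vertex of the
    neutral model toward the geometry derived from the capture."""
    return captured - neutral

def retarget(young_neutral, displacement):
    """Apply the captured performance (as displacements) to the
    younger version of the model -- no keyframe animation involved."""
    return young_neutral + displacement

# Toy example: 4-vertex "face" meshes (hypothetical data)
old_neutral   = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]])
frame_capture = old_neutral + np.array([[0., 0, .1], [0, 0, .1],
                                        [0, 0, 0], [0, 0, 0]])
young_neutral = old_neutral * 0.95  # slightly different facial structure

d = deform_to_performance(old_neutral, frame_capture)
young_frame = retarget(young_neutral, d)
```

The key property of this scheme is that the performance itself is never edited: only the base model changes.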

"Then, we retarget this performance to a younger version of the model and render it through lighting and texture," Helman says. "We had no keyframe animation in this project at all. We didn't want to change the performance."



Models and Textures

A team led by Digital Model Supervisor Paul Giacoppo sculpted contemporary and youthful models of each actor, changing the geometry in the chin and neck as needed.

"Each model started as an accurate scan of the actor at his current age using a combination of [Disney Research's] Medusa for likenesses and facial expressions and Otoy for facial detail," Giacoppo says. "Then, by looking at past films, we sculpted younger faces. We had a slider that could take us from current ages to previous ages."

The models provided the form and larger-scale bumps and pores. Texture artists led by supervisor Jean Bolte added the finer details, working from the Otoy scans and photographic reference.

"We de-aged them in stages," Bolte says. "Each stage had to have wrinkles and age spots painted out judiciously as we figured out how much to take away. I'd look pixel by pixel, zooming in to make sure of the integrity. We didn't want them to be too pretty. We wanted to keep things a makeup artist might have taken out. I was well aware that we could have ruined the movie if we didn't nail it."

She smiles and says, "I think we pretty much nailed it."

For what Helman calls a "sanity check," the crew spent two years gathering a library of performances for the three actors at the targeted ages from different movies. An AI-based program they've dubbed Face Finder found frames to match rendered frames in terms of age, expression, pose, camera angle, and lighting.

But, they weren't aiming to exactly match the actors at those ages.

"Martin said he didn't want us to take DeNiro from Taxi," Giacoppo says. "He had to be the younger self of the character he was playing in this movie. A young Frank Sheeran."

Adds Bolte, "We didn't have a clear goal. Not only do these actors appear different from one film to another, they're different even from one shot to another. It was a matter of, who is this character Frank Sheeran? We had to find that. I spent days studying images of DeNiro. He can change his expression with the raise of an eyebrow. He is a master at being a chameleon."

All told, the crew spent nine months of post-production on the film's 1,750 shots - but only after spending four years making that post-production possible.

"We knew the risk we were taking," Helman says. "So, we invested those four years of development to have something completely performance-driven. If you follow the natural progression of visual effects, you find that the next thing is markerless performance capture.

"I remember Ewan McGregor's reaction when he walked onto a bluescreen stage," Helman continues. "He said, 'What the [expletive] is this?' I thought an actor could imagine the set. But, after working with these actors on The Irishman, I realized what he meant. It's crucial for actors to be where they're meant to be."

Gemini Man: Weta Digital

In this film, 50-year-old actor Will Smith plays both the character Henry Brogan, an aging former Marine scout sniper now an assassin; and Junior, a clone of Henry at about age 25, who is also an assassin. In some scenes, the two are face to face.



Deep Shape

Animators at Weta Digital use a sophisticated system based on a massive number of shapes to create and refine a facial performance. For Gemini Man, Stuart Adcock, head of facial motion, added a technique now called “Deep Shapes,” which provides even more subtle control. “It’s a brilliant idea,” says Guy Williams, visual effects supervisor at Weta Digital for Gemini Man. “The shapes travel linearly based on skin depth. To me, it looks like inertia. The skin slides for a moment in the last direction it was moving, even though the muscles are already moving in another direction. It feels like an insanely high-resolution simulation, but it isn’t. The animators have control and can see the effect as they use the system.”

Directed by Ang Lee, the action thriller received mixed reviews - with Rotten Tomatoes' aggregate scores for critics a rotten 26 percent approval, but audience scores at a fresh 83 percent. Lee chose to shoot the film in 4K 3D at 120 frames per second, which bothered some critics and made life difficult for Weta Digital's rendering team. Bill Westenhofer, who had received an Oscar for Lee's Life of Pi (and another for The Golden Compass), was the overall visual effects supervisor; three-time Oscar nominee Guy Williams supervised the visual effects created at Weta Digital.

To create Will Smith's cloned character at age 25, the Weta Digital crew adopted what has become a traditional method of capturing performances, one honed especially through three award-winning Planet of the Apes films and this year's Alita: Battle Angel. To devise Junior's look, however, researchers at Weta Digital developed state-of-the-art technology for skin color and textures.

On set, when both characters appeared in a scene, Smith's stand-in, Victor Hugo, played Junior, knowing he would be replaced later. Then, Smith played Junior in the same shots wearing motion-capture "pajamas" and a head-capture rig. In the film, Junior is 100 percent digital.

"A performance doesn't exist only in the face," Williams says. "Everything you do moves you from your feet to your eyes, and all of this adds to how we recognize a person. It isn't solely about facial motion. So, we choreographed all the motion together. Otherwise, you end up with a bobble head."

The face carries most of the emotion, though, and Williams notes two challenges in creating and performing Junior's face: "First, it becomes easy to lose his likeness," he says. "And second, Will Smith hasn't aged much in 25 years. We had to get into the deep science of youth versus age to create enough of a difference. We knew the distance from lips to nose changes, and the jowls and cheeks sag. But, we also had to put youth in his pores, in the color around his eyes, in the moisture of his lips and in his eyes to make sure everything our brains perceive as youth is properly represented. Digital humans live or fail in insanely subtle nuances. Skin turned out to be a major component."



Poring Over Details

Weta Digital started by creating a digital model of Will Smith at his current age using photo shoots, photogrammetry scans, skin lighting capture at ICT, and two FACS sessions. Then, they modified the digital model of 50-year-old Smith to change his facial structure and appearance. For reference, they had Smith's early films and 23-year-old actor Chase Anthony, whose skin looked like Smith's. Initially, the crew considered relying on their standard approach in which they use a live cast for skin textures.

"But, one of our shader guys thought he could grow the pores," Williams says. "Early tests gave us hope, and in the end, he created a pore structure that was better than anything we'd have gotten from the live scans. It's not 100 percent accurate, but it's incredibly accurate. If Will Smith had 35 pores in an area, we might have 34."

The simulation is controlled with empirical maps that define how to grow the pores - deeper here, isotropic there, denser, sparser, and so forth.

"What happens is that we 'pelt' a number of points to distribute the points evenly across the surface, and then flow the points across the face," Williams explains. "From every point, the simulation draws lines to neighboring points without crossing another line. The software interprets the flow field and can take a bias from the flow. That creates anisotropy: The flow of pores in one direction might be more dominant than in another direction."

The simulated pore structure resulted in a mesh of nine million tetrahedra.

"The beauty of this is that we can move it," Williams says. "We can pipe the facial animation into the simulation software with the mesh. The mesh moves based on motion capture cleaned up by an animator. The way the face moves shapes the pores and changes the shape of highlights in an anistrophic way. We can get micro-wrinkling; the pores can collapse into fine wrinkles."

Thus, the simulated pore structure provided the model for skin texture. For color, the crew simulated melanin and blood flow. Rather than painting multiple color maps, they first created pale-pink skin using blood flow under the surface, and then layered in two types of melanin to color Junior's skin.

As a result, the color of Junior's face comes from a complex interaction of simulated melanin and blood flow with light. It doesn't depend on colored light bouncing off a textured surface.

More Than Skin Deep

"Melanin is a pigment layer with thickness," Williams says. "The density creates the color, and it's angle-dependent. At an angle, you see more of the thickness, so it looks darker than when you view it straight on. We would squeeze blood and melanin into parts of the face. As the face moved, the color flowed, and the overall compression of the skin affected the color. We ran that as a simulation, and software applied it to a shader. When I say that we put melanin in the skin, we actually measured it so it interacts correctly with light - our renderer is based on wavelengths of light, not RGB colors."

This level of detail extended into the eyes. Weta Digital artists used a volumetric sphere for the eye built for previous shows. This digital eye has a cornea with fluid inside, an iris, and layers of sclera.

"It's gorgeous as is," Williams says. "But, we added a couple more things." A conjunctiva surface with pigmentation and thickness that covers the sclera put color in the corner of the eye. A choroid layer beneath the sclera created a dark ring around the iris. Oil added to the thin film of water covering the eye created a proper meniscus effect, a curve in the upper surface of the liquid, and appropriately dimmed harsh reflections.

"We couldn't set a value of oil to water," Williams says. "The ratio changed from day to day. We had to modify it from shot to shot."

And as the number of simulations used for Junior's face grew, so, too, did render times.

"Our bakes are slow because there is so much simulation," Williams says. "But the thing that really slowed us down was 120 fps. And, Ang [Lee] likes to linger on a performance. We had many shots that were over a minute long, and two that were over two minutes. Our bakes could take two weeks."

Avengers: Endgame and Captain Marvel: Lola Visual Effects

Lola is famous in the industry for its artists' digital cosmetic enhancements to actors' filmed appearances. But in 2006, the studio also pioneered de-aging by creating a youthful Magneto (Ian McKellen) and Professor X (Patrick Stewart) for the film X-Men: The Last Stand. So, it's no wonder Marvel Studios turned to Lola to de-age (and age) characters in two major blockbuster films this year: Captain Marvel and Avengers: Endgame. (Christopher Townsend was overall VFX supervisor for Captain Marvel, with additional supervisory help from Janelle Croshaw; Dan DeLeeuw was overall VFX supervisor for Endgame.)

For Endgame, Lola artists worked on more than 200 de-aging and aging shots. Most of the Avengers needed to appear four to six years younger, and in the case of Captain America, young Cap had to appear with the current Cap.

"Those were, I guess, the easy ones," says Trent Claus, visual effects supervisor at Lola.

More difficult was removing 30 years from actor John Slattery's character Howard Stark, and changing 70-year-old Michael Douglas's character Hank Pym into a 25-year-old. (They also aged Captain America.)

"That was our largest age range," Claus says of Michael Douglas's character. "It wasn't just wrinkle removal and eye-bag lifting. It was almost a complete facial reconstruction because his facial structure and proportion changed so much in 45 years. It was a huge challenge."

For Captain Marvel, the artists changed actor Samuel L. Jackson throughout the length of the film to bring a 30-year-younger version of the character Nick Fury to the screen.

"It was a big step for us," Claus points out. "We worked to have the actor look consistent shot by shot. It's easy to get one shot approved, but to have all the other shots in a sequence and in the next sequence and through the whole film to be consistent is harder. We always wanted the character to look like Sam [Jackson] and always like Sam in 1995, no matter which angle, or lighting, or which artist worked on the shot."



No Head Cams, No Volumes

Actors in these films did not wear head cams or work in volumes.

"Our approach has always been to allow the most freedom possible for the actors on set. They're in costume and makeup, and act as they always have. We like to have tracking dots on their faces, but we don't always do that. The actual de-aging (or aging) is all done in 2D by compositors."

Most of the artists at Lola work with Autodesk's Flame software - 90 to 95 percent, Claus estimates. The studio also has Foundry's Nuke, and typical 3D software programs Autodesk Maya and SideFX Houdini.

"We do some work in 3D," Claus says, "but that's mostly for things like set extensions and vehicles. We get questions like, 'Can you guys do a greenscreen?' and the answer is, 'Yes, we can handle that.' But, when it comes to aging and de-aging, our goal has always been to make it as realistic as possible with the least impact on filmmakers on set and to maintain the actor's performance as much as possible."

While visual effects artists are often known as digital nomads as they move around the world from studio to studio and project to project, that's less the case at Lola.

"Our hero de-aging and aging artists have been here 10 to 12 years in many cases," Claus says. "They have a lot of experience and have learned what changes happen in anatomy over time."

On Set

If, as in the case of Samuel L. Jackson, the artists' task is an extreme age range, the crew usually has a double on set waiting in the wings while the hero actor performs. The double then mimics what the hero actor did as closely as possible to give Lola's artists reference for lighting and camera angles.

Once the crew has the footage - the plate - and reference shots with the double, they track the work in 3D. They also take a 3D scan of the actors when possible.

"We use the 3D scan for tracking," Claus says. "The high-resolution scan can be a guide to make sure registration points are accurate. And, the compositors use it as a surface to project onto, track onto, and apply elements to manipulate."

For example, they might project a wrinkle onto a 3D scan that is tracked to the plate, and the wrinkle sticks to the photographed face.
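The "sticking" behavior comes down to projection: an element lives at a fixed point on the tracked scan, and each frame that 3D point is projected into the plate through the solved camera, so when the track moves the head, the element follows. The sketch below uses a bare pinhole projection with made-up focal length and coordinates; it is not Lola's compositing pipeline, just the geometric step underneath it.

```python
import numpy as np

def project(point3d, focal, image_center):
    """Pinhole projection of a 3D point (in camera space, meters)
    to 2D pixel coordinates in the plate."""
    x, y, z = point3d
    return (focal * x / z + image_center[0],
            focal * y / z + image_center[1])

# A wrinkle element anchored to a point on the tracked scan
wrinkle_on_scan = np.array([0.02, -0.01, 1.2])
frame1 = project(wrinkle_on_scan, focal=1800, image_center=(960, 540))

# Next frame: the solved track moves the scan with the head turn,
# so the projected element moves with the photographed face.
head_move = np.array([0.015, 0.0, 0.05])
frame2 = project(wrinkle_on_scan + head_move, focal=1800,
                 image_center=(960, 540))
```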

"The scan is never visible," Claus says. "It's always there for elements to stick to, but as you're working on a shot, you don't see it. You're painting on the footage as a painter naturally would, manipulating light and shadow to fool the audience into thinking you've changed the person's face, and you're animating the soft tissue movement.

"It's relatively easy to age or de-age in one frame," he continues. "But as soon as the actor moves, we see all the changes that come with that. The skin flexes, a blink pulls some wrinkles away, a head turn obscures part of the face. It's very complex to get all those changes to stick in the right spot and not look like they're taped on. It needs to look like it's part of them, like it's their skin. But it's all 2D tricks."

And, a lot of artistry. When asked if the artists automate this work somewhat by using scripts, Claus answers, "Oh, if only we could. That would be great!"

The artists do, however, have some general guidelines for how to go about the work.

"We don't have templates," Claus says. "That would be too rigid. But we do have general guidelines for the sequencing of things - this first, then this, then that."

Also, supervising artists generally choose hero shots in a sequence, dial in the look, and then assign artists to the sequences based on their strengths.

"We want consistency amongst the artists," Claus says. "We try to keep them confined to similar shots. We don't want every shot they do to have the character at a different angle. They'd have to learn what to do all over again. If they have consistency in their shots, we get a higher quality in the end."

AI Synthesized Faces

So-called “deepfakes” use neural networks that “learn” how to superimpose one face in a video – look and movement – onto another. With software like FakeApp readily available, deepfake face swaps, from eyebrows to chin, have become ubiquitous on the Internet. Although mostly applied to embarrass celebrities and politicians so far, more insidious possibilities have prompted a scramble to create deepfake detectors. Could deepfakes help change actors’ faces for feature films? “We’ve been experimenting with deep-learning AI,” says Trent Claus, visual effects supervisor at Lola. “There are a lot of things it can do well, but at the size of a movie screen, imperfections become clear quickly. We’ve been curious to see whether it might be useful for intermediary steps. I find it very promising.”

The result is a high-quality effect that, at its best, is invisible to the audience - and one that begins with a nearly invisible presence by the crew on set. No head cams. No volume. No witness cameras.

"With our method, what you see 100 percent of the time is the actor who was on set," Claus says. "It's always them. Every nuance is there for the audience to see. We just manipulate their appearance as they perform. It's hard to re-create a person digitally, and it's even more difficult for that digital re-creation to maintain the tiny micro-expressions and subtle movement that an actor does on set to embody their character."

The feedback they receive from actors is positive.

"What we're doing is an intimate and delicate thing," Claus contends. "We're affecting their appearance - and their livelihood, actually. We got glowing reviews from Sam Jackson. At one point, he tweeted about how excited he was about the film. He said, 'They've got this Lola thing now.' And, on Jimmy Kimmel, he said how much he liked the Lola process."

"It was reassuring," Claus adds. "We saved that clip."

And More…

In addition to the feature films in which a youthenized character appears throughout the entire film, two other projects this year were significant. For Terminator: Dark Fate, artists at ILM, led by Visual Effects Supervisor Jeff White, created a young Sarah Connor (Linda Hamilton), John Connor (Aaron Kunitz), and T-800 (Arnold Schwarzenegger) for a sequence at the opening of the film.

"There was a lot of stunt work in the sequence, which necessitated using body doubles," White explains. "We weren't always able to have the actors on the show. So, our approach was to replace the heads."

And, remarkably, for the episodic television series The Righteous Gemstones, the studio Gradient Effects created a younger version of the character played by John Goodman for flashback sequences. The studio used technology developed by its sister technology company Secret Lab to extract facial muscle movement without tracking markers.


"We did 180 shots in six weeks," says Olcun Tan, Gradient Effects and Secret Lab president.

More information on the processes used to de-age these characters can be found at www.cgw.com.

One consistent theme runs through all these projects: The studios' assignment was not to create a digital human or a younger actor. It was to create a younger version of the character the actor plays.

In creating those digital fountains of youth, VFX artists gave actors the flexibility to play any age they want, and gave directors new, unencumbered choices. Visual effects at its best.

Barbara Robertson (BarbaraRR@comcast.net) is an award-winning writer and a contributing editor for CGW.