Picture Perfect
Issue: Volume 39, Issue 1 (Jan/Feb 2016)

More often than not, visual effects are big, bold, and in your face. However, not all VFX fall into that category, and yet they still elevate the story in dramatic ways.

Consider a handful of films released in the last quarter of 2015: In the Heart of the Sea, The Walk, Bridge of Spies, Everest, and The Martian, to name a few. All five movies feature digitally constructed environments that were vital to the story line. Four – In the Heart of the Sea, The Walk, Everest, and Bridge of Spies – required an accurate re-creation of a real-life period locale. The Cold War-era backdrop of Bridge of Spies. The harsh watery landscape battering the 1820s vessel “Essex” in In the Heart of the Sea. The frigid landscape of Mount Everest during the ill-fated May 1996 trek. And the heart-thumping void atop the World Trade Center towers in 1974. The Martian, meanwhile, provided a barren backdrop of desolation based on scientific data mixed with artistry.

These environments set the tone for the respective films, transporting audiences to unique locations where they, too, could experience the drama, fear, isolation, and anxiety of the main characters. 

Here, CGW reaches high and deep, looking at the amazing work that placed us on a thin wire atop the Twin Towers of the World Trade Center and on an 1820s whaling ship in the South Pacific. Online, CGW transports you behind the Berlin Wall during the Cold War, atop the highest mountain in the world, and to the bleak surface of Mars, to finish this digital journey.


Highwater Mark


Effects artists battle rough seas for the film In the Heart of the Sea

In the Heart of the Sea is a literal whale of a tale. 

The story is inspired by Herman Melville’s classic 1851 novel “Moby-Dick,” which recounts the obsessive quest of a sea captain hell-bent on revenge after losing his leg in a previous struggle with a huge white whale. The movie, though, is based on Nathaniel Philbrick’s 2000 non-fiction book In the Heart of the Sea: The Tragedy of the Whaleship Essex, about the sinking of the “Essex” by a vengeful mammoth white whale that turns the tables – the hunters become the hunted – and then stalks the survivors during their ill-fated voyage. This true life-and-death struggle, recounted by the survivors, formed the basis for Melville’s book decades later.

The movie’s drama, naturally, takes place on the high seas. A good deal of Sea was filmed practically, offshore in the Canary Islands and at Warner Bros. Studios Leavesden in the UK. According to Double Negative’s David Hyde, effects supervisor for Sea, filmmakers sailed a tall ship similar to “Essex” to the Canary Islands, enabling the visual effects department to shoot oceanscape plates from the deck en route that they later augmented with CG for panoramic seagoing scenes. 

At Leavesden, meanwhile, an exterior bluescreen was constructed for the harbor shots at Nantucket’s port, along with an “Essex” deck atop a hydraulic gimbal system used to mimic the pitch of the stormy seas. An interior water tank was constructed from shipping containers to house the smaller whaling boat rigs, also on a hydraulic puppetry system. 

The hydraulic rig was used to shoot close-ups of the so-called Nantucket sleigh rides – when the smaller whaling boats launched during a hunt were subsequently dragged across the water by a harpooned whale. 

“A substantial amount of the water was caught on film, which was very good for us,” says Hyde, noting that in some cases the practical water was then augmented with CG, while in others it was all-digital. 

“When we had a whale breaching or a dolphin breaking the surface, we would have to simulate that,” says Hyde. “And sometimes, particularly when using the interior tank, we would have to replace the water around the whale boats, to make them look like they were traveling at 17 to 20 knots while being dragged by a harpooned whale.”  

There were four main areas where CG water was required: for large seascapes that were not caught on film, for depicting boat and whale/dolphin interaction, for underwater effects, and for augmentation, such as whitewater to make the sea look choppier and water expelled through the blowholes of the whales – creatures that are also computer-generated (see “Whale of a Character,” below). 

Making Waves

Generating the various types of water was done using Side Effects Software’s Houdini as well as DNeg’s in-house water simulation solver, Dynamo. Both are so-called FLIP (fluid implicit particle) solvers, a hybrid between a particle-based and a volume-based fluid simulation. 
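Neither solver’s internals are public, but the FLIP approach itself is well documented: particles carry the fluid state, a background grid handles forces and the pressure solve, and only the grid’s change in velocity is transferred back to the particles, avoiding the smearing of purely grid-based methods. Below is a deliberately simplified, one-dimensional sketch of that transfer loop – illustrative only, with gravity standing in for the full pressure projection a production solver would perform:

```python
import numpy as np

def flip_step(positions, velocities, grid_res, dt, flip_ratio=0.95):
    """One highly simplified 1D FLIP step: particles -> grid -> particles.

    positions/velocities: float arrays, one entry per particle, with
    positions measured in grid-cell units. A real solver adds a 3D
    staggered grid, a pressure projection, and collision handling.
    """
    grid_v = np.zeros(grid_res)
    grid_w = np.zeros(grid_res)

    # 1. Splat particle velocities onto the grid (nearest-cell weights
    #    here; production solvers use trilinear or higher-order kernels).
    cells = np.clip(positions.astype(int), 0, grid_res - 1)
    np.add.at(grid_v, cells, velocities)
    np.add.at(grid_w, cells, 1.0)
    old_grid_v = np.divide(grid_v, grid_w,
                           out=np.zeros_like(grid_v), where=grid_w > 0)

    # 2. Apply grid-based forces (gravity stands in for the pressure solve).
    new_grid_v = old_grid_v - 9.8 * dt

    # 3. FLIP transfer: add the grid's *delta* back to each particle,
    #    blended with a little PIC (direct read) for stability.
    delta = (new_grid_v - old_grid_v)[cells]
    pic = new_grid_v[cells]
    velocities = flip_ratio * (velocities + delta) + (1.0 - flip_ratio) * pic

    # 4. Advect particles with their updated velocities.
    return positions + velocities * dt, velocities
```

The `flip_ratio` blend is the knob that makes the method a hybrid: at 1.0 the sim keeps all the lively particle detail, while lower values lean on the grid for a calmer, more stable surface.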

“We were fully aware from the start that the turnaround times for complex simulations would be a big hurdle. And final animation could not be approved until we could review how the animation and FX interacted with each other,” says Hyde. “Even with the talented FX crew we had, the fact of the matter is these simulations can take many hours. It’s a slow process.” 

Of the 30 or so effects artists on the feature, nearly three-quarters worked on the water creation.

The effects leads – Robert Pearson, Chris Ung, and Menno Dijkstra – were constantly developing tools and techniques throughout production to streamline the sim process. “If you can save a couple of seconds per frame of simulation, on a 200-frame shot where you do the sim a number of times and then across a production with many shots, that time-savings can be substantial,” says Hyde.
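Hyde’s back-of-the-envelope math holds up. With hypothetical but plausible stand-ins for the figures he leaves unstated (the iteration and shot counts below are assumptions, not production numbers):

```python
seconds_saved_per_frame = 2    # "a couple of seconds per frame"
frames_per_shot = 200          # the 200-frame shot in Hyde's example
sims_per_shot = 10             # assumed number of sim iterations per shot
shots = 300                    # assumed shot count, for illustration only

hours_saved = (seconds_saved_per_frame * frames_per_shot
               * sims_per_shot * shots) / 3600.0
print(f"~{hours_saved:.0f} hours of simulation time saved")  # ~333 hours
```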


The movie called for a range of water requirements, from close-up to wide shots.

The group used Sea to push the studio’s Dynamo in-house solver forward with new features. “We picked a number of shots we said we would push through the software, and it was quite successful. We would jump back to Houdini if we sensed we could get better results, and vice versa – there were certain features in Dynamo that were better for certain shots,” says Hyde. “This gave us more than one option and the freedom to try different things.”

Of course, having an in-house R&D team working on a piece of software has many advantages, and here those advantages translated into changes that delivered the fast turnarounds the production depended on. 

While Dynamo proved its mettle, Houdini was the main solver used. In fact, the decision was made early on to run the majority of the production’s pipeline through Houdini, since everything – from postvis, to effects, to lighting – was so tightly coupled. Rendering was done in Mantra for the most part, with the “Essex” destruction sequence rendered through Maya with Pixar’s RenderMan. “Keeping everything within one package was the ideal route because it meant everyone was working off the same platform, and it proved successful,” says Hyde. “We were able to develop a very streamlined pipeline by staying with Houdini.”

One of the main challenges recognized early in the process was how to integrate the live action, animation, and effects simulations. Hyde explains: “A whale breaching or interacting with the surface, for example, would need a representation of the CG water surface for animators to realistically animate to. Likewise, with the live-action footage shot on the hydraulics, the water surface has to behave so it looks like it initiated the movement of the boats.”

To this end, the group implemented a high-level postvisualization stage before proceeding to shot work. 

Whale of a Character

DNeg artists chase their own white whale for In the Heart of the Sea

For the feature film In the Heart of the Sea, the digital artists at Double Negative had to create a number of CG creatures: dolphins, remora fish, seagulls, eels, great white sharks, and a plethora of smaller fish. They also gave life to families of bull, cow, and calf sperm whales that appear in approximately 80 shots. Their “biggest” task of all, though, was to create the film’s antagonist: the vengeful behemoth white whale that appears in just over 60 shots.

The peculiar behavior of this whale seems almost unbelievable, yet the Heart of the Sea story line is rooted in real events, prompting the filmmakers and digital artists to research and analyze the behavior of sperm whales. 

The whales, including the antagonist, were brought to life via CGI by the visual effects team, led by VFX Producer Leslie Lerman and VFX Supervisor Jody Johnson. “It was particularly challenging, with a creature of such immense size and power, to push the envelope without going over the edge, since we didn’t want to pluck the audience out of this real world and take them into a fantasy realm,” says Johnson. “Each time we conceptualized an action sequence that involved the main whale or any of the secondary whales, we sent it off to our experts and we’d discuss how plausible it was and what other behaviors they might suggest. It gave us a great spectrum from which to work.” 

Processing the Whales

DNeg’s Tosh Elliott modeled all the sperm whales in the film. While there was an abundance of available photography, anatomical drawings were few and far between. To help maintain realism, whale expert Dr. Luke Rendell provided feedback during the process. “For instance, we discovered that whales do not have binocular vision, so both eyeballs would not be visible when directly facing a whale,” Elliott explains.

Elliott started with the sperm whales and built them to accurately reflect real whales. A great deal of concept art, however, was done for the main whale, which needed to be an older character, not a sea monster. According to Production Designer Mark Tildesley, it was important to ensure that this whale felt like a living presence in the film. 

The modelers shared topology among the adult secondary whales. Only the calf and dead bull were different enough to warrant their own unique structure. Extra details, such as wrinkles on the tails, were added to the characters in Autodesk’s Maya and Pixologic’s ZBrush. “We decided against a full-blown muscle system for the whales because the amount of blubber surrounding the muscles obscured them,” says Kieron Helsdon, CG supervisor at DNeg. 

The final bull and cow models were a realistic 52 feet and 36 feet long, respectively, while the behemoth was a whopping 95 feet long, weighing approximately 80 tons, with a tail spanning 20 feet – nearly double the size of the other whales in the film.

Laetitia Gabrielli textured the whale family in The Foundry’s Mari, using variants so the whales did not look the same. She also added the small remora fish attached to the whales, which provided scale to these ocean mammals.

Battle Scars

The large whale, though, required more attention. Initially, the group tried looks based on a few images of white whales, and while they appeared “fantastic,” the pure white color also gave the creature an ethereal, calm presence – certainly not the look needed for this vengeful beast. Research, however, revealed that many older whales start to lose their skin, so the artists made the whale darker, with visible white patches where the skin has flaked off. 

“He is also scarred from previous battles with humans and other predators, so his appearance conveys the harshness of his history,” adds Lerman. 

The texture artists reviewed reference of whale scarring and of how the animals heal after skin damage. They also considered the size of the creature relative to a human so the scars would have the correct scale for a harpoon injury. “While creating the scars, we found reference of how they shed their skin, and we ended up having cloth simulations of floating skin, to add to the damaged skin textures,” says Helsdon. “This also created some movement and interest underwater.”

Lookdev Artists Chris King and David Mucci then developed various looks for the creature, depending if it was below water or breaching above the surface and needed a wet look. “We found that with the underwater environments, after we tested with raytraced caustics generated with water simulations, we needed to develop a multi-layered caustic projection setup because we wanted art to direct the look of the caustics depending on the shot feedback, as opposed to using a totally physically correct approach,” Helsdon says.
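DNeg’s multi-layered caustic projection setup is proprietary, but the general idea Helsdon describes – replacing physically raytraced caustics with layers of projected, independently dialable patterns – can be sketched. The following is a hypothetical illustration, not the studio’s tool; the layer parameters and the interference pattern are stand-ins for the painted or simulated textures an artist would actually project:

```python
import numpy as np

def layered_caustics(points, time, layers):
    """Art-directable caustics: a sum of scrolling patterns projected
    straight down onto underwater geometry.

    points: (N, 3) world positions on the creature's surface (y up,
            negative underwater; an assumption for this sketch)
    layers: list of (scale, speed, gain) tuples, one per caustic layer,
            so each layer can be dialed independently per shot note
    """
    total = np.zeros(len(points))
    for scale, speed, gain in layers:
        u = points[:, 0] * scale + time * speed
        v = points[:, 2] * scale - time * speed * 0.7
        # Cheap caustic-like interference pattern (a stand-in for an
        # animated texture projected from the water surface).
        pattern = np.abs(np.sin(u) * np.sin(v))
        # Fade with depth so deeper parts of the whale receive less light.
        fade = np.exp(0.15 * points[:, 1])
        total += gain * pattern * fade
    return total
```

The payoff over a raytraced approach is exactly the one Helsdon names: each layer answers to shot feedback rather than to physics.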

According to Robyn Luckham, animation supervisor at DNeg, achieving the correct buoyancy and scale of the whales in animation was very difficult, as it had such a major effect on the water simulations. “The speed of a tail slap, for example, could create supersonic drops of water,” he says. To keep this in check, Luckham had to maintain a delicate balance between the emotional intent of the shot and the technical constraints.

Of course, each department faced its own white whale, so to speak. For animation, it was creating realistic behavior for the massive underwater and surface pods, behavior for which no documentation existed. For lighting, it was the close-ups of the huge whale. For effects, it was getting the detail and scale into the water simulations. 

The work on Sea was indeed a large undertaking, requiring a team effort among departments. This was especially important due to the tight integration of the sims with modeling, animation, and more. 

“Working with David Hyde, effects supervisor on the film, the VFX artists created height wedges with low-resolution water simulations, which could be fed back to the animators,” says Luckham. “This helped us select the right balance of being able to see the whales and the realism of their movement before committing to a simulation that could take several days to solve.”

And in the end, it was all about the realism in this movie. After all, In the Heart of the Sea is a real story, about a real event, a real creature, and a real struggle for survival. So re-creating that realism was paramount. “For [the great whale], it was about him being a character, not a monster. So we had a lot of discussions about how best to portray that with the limited options we had. After all, whales cannot smile,” says Helsdon.

Back in the main water pipeline, that postvis stage relied on cheap, controllable stand-ins for the ocean. “Here we generated procedural water surfaces rather than dynamically simulating them, and Artist Geoffrey Coppin matched the look to the various sea conditions caught on film for animation timing,” Hyde says. “So when the whale comes up to breathe or the boat dips a certain way, we used these surfaces for pumping the waves into our dynamic water surface simulation.”

Once timing was worked out on the procedural surfaces at this postvisualization stage, the team ran a low-resolution simulation, using the procedural surface to pump in the wave velocities, so the same type of waves flowed through the simulation dynamically. If all behaved accordingly and the broad movement was approved, the effects artists kicked off a high-resolution simulation. Depending on the size of the area, the simulation could take a few days to calculate and generate a couple of terabytes of data. 


Artists used Houdini and DNeg’s proprietary Dynamo solver for the CG water in the film.

“The turnaround times were quite lengthy when we got into the high-quality simulation settings,” says Hyde. 

Next, the effects artists took that dynamic patch of water the whale or boat would interact with and added it back into the original procedural surface. As a result, the dynamic simulation would only be simulating the area of interest around the object. “It was very efficient,” says Hyde. “We were not doing large-scale simulations that we didn’t really need when the procedural water representation was just as good.”
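DNeg’s actual setup lived inside its Houdini pipeline, but the compositing idea Hyde describes – simulate only the patch of water around the object, then feather it back into the procedural ocean – reduces to a weighted blend of two heightfields. A minimal, hypothetical sketch (the function and parameter names are illustrative, not DNeg’s):

```python
import numpy as np

def composite_water(procedural_h, dynamic_h, center, radius, cell):
    """Blend a localized dynamic sim patch over a procedural ocean.

    procedural_h: (H, W) heightfield from the procedural ocean surface
    dynamic_h:    (H, W) heightfield from the simulated patch, same layout
    center:       (x, y) world-space center of the region of interest
    radius:       world-space radius of the sim region around the object
    cell:         grid cell size in world units
    """
    h, w = procedural_h.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs * cell - center[0], ys * cell - center[1])
    # Smooth falloff: fully dynamic inside the sim region, fading out
    # toward its edge so the hand-off to procedural water is invisible.
    blend = np.clip(1.0 - (dist - 0.7 * radius) / (0.3 * radius), 0.0, 1.0)
    return blend * dynamic_h + (1.0 - blend) * procedural_h
```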

Difficulties at Sea

According to Hyde, there were four sequences involving “heavy” CG water simulations, and they all had different requirements – from wide, expansive shots to others extremely close-up, and some underwater. “That is something we hadn’t faced before, having to accommodate such a variation of ‘water’ environments, but we were able to adapt and meet those shot requirements through tool and pipeline development. And it worked across most of the shots the majority of the time.” 

One sequence involves the initial whale hunt, when the “Essex” first comes across the large pod of whales. “The idea being the whaling crew had been at sea for months and didn’t see anything, then suddenly they come across all these whales,” he says. “We had to simulate many whales for this epic look.” 

Early on in production, the animators built multiple animation cycles for the whales so that the effects artists could build up a library of generic sims for placement in some scenes later on, such as this one.

Another challenging sequence involved the second whale hunt, when the leviathan first fully appears. “Big, wide shots with the whale charging out of the water at the ‘Essex,’ ” says Hyde. “Here the challenge was having convincing scale to the water in simulation areas approximately 300 feet wide in-camera.” 

The third is the other extreme, occurring in the final sequence when First Mate Owen Chase (Chris Hemsworth), in one of the small whaling boats, confronts the great beast. The actors – and the water – appear in close-up. “We needed detail from one or two feet off the top of the water,” Hyde adds. Some of those sims ran over two or three days and generated 5TB to 10TB of simulation data each. 

The movie also contains a number of underwater effects that had to be approached differently than the surface water. Effects Lead Tamar Chatterjee devised an approach that enabled the group to do a localized fluid simulation around the underwater object, such as the whale or dolphin. That simulation mimicked the water currents that would flow around it in a real-world scenario. Next, the artists used the same approach for bubbles, krill, and general detritus, to make them appear as though they were interacting realistically within a body of water. 

“We took the opportunity to re-create the style of the cinematographer and what he created on the production side, placing our CG cameras onto the whales and using very wide fish-eye lenses,” explains Hyde. “That, along with the effects, helped create a claustrophobic underwater feel and put the audience right there with the whales.”

Indeed, water scale is always an issue, and it was here. However, it is the attention to detail that sells the realism. “You are constantly trying to get a realistic look with the scale of the water. And to a certain extent, you can push the simulation within the constraints of memory and time. But then you have to augment it with white water, foam, spray, mist, and so forth. That is generally what sells it at the end of the day,” Hyde contends. “If you don’t see those details, it does not sell the scale. We were constantly battling to get the scale and detail into the water.”

For both the surface and underwater shots, secondary sims were generated for the atmospherics, like mist and spray. Primary sims were done on the whales to get the initial cresting and breaching, and then secondary sims were run to get the “sheet water” running off their backs as they crested. Simulations also helped achieve whitecaps, to make the ocean appear stormier. 

Almost all those sims were accomplished in Houdini. 

Overall, Hyde estimates the effects team generated more than a petabyte in water simulation data for the film. Fortunately, this amount of data did not sink the effects crew.

For the feature In the Heart of the Sea, some of the “Essex” crew overcame seemingly insurmountable odds and conquered the elements, enabling them to eventually recount their harrowing journey. In Philbrick’s and Melville’s books, words paint a vivid picture of this struggle. For the film, though, that task fell to the effects crew, which used state-of-the-art visual technology to bring this rich, visceral story to cinematic life.


Hardwired


VFX experts re-create a high-wire artist’s thrilling act in The Walk

In 1974, high-wire artist Philippe Petit accomplished an incredible feat, crossing between the tops of the World Trade Center towers on a thin wire strung between the two buildings. His daring act is retold in the feature film The Walk from TriStar Pictures. 

Bringing the story to the big screen required tremendous determination, will, and dedication to their craft by the filmmakers, visual effects artists, cast, and crew. The end result: Audiences felt as though they were on the ledge with Petit (played by actor Joseph Gordon-Levitt) when he took that first step, and then on the wire as he traversed the void 110 stories above 1974 New York City.

Forward-thinker Robert Zemeckis directed The Walk, while Kevin Baillie of Atomic Fiction served as overall visual effects supervisor. Atomic Fiction completed the majority of the work involving the walk between the towers and replacing a stuntman’s face with Gordon-Levitt’s. UPP in Prague handled the nighttime rooftop antics, crafted a digital reproduction of Paris’ Notre Dame cathedral, and made Old Montreal look like 1950s/1960s Paris. Meanwhile, RodeoFX handled the ground-level work, such as the Trade Center lobby and plaza, and the shots of Petit on the Statue of Liberty. Legend 3D completed cleanup work and did the 3D conversion of the film.

As the film is based on a real-life experience, re-creating reality was vital. And not just any reality. The entire set had to reflect true-to-life New York City – the towers and surrounding areas – as they were on August 7, 1974, when Petit took his famous walk. No small feat considering the scope of the work and the emotions the structures evoke following their destruction on 9/11. 

“We felt a lot of responsibility toward doing it right. We wanted to do justice to the towers and represent them in a way that was respectful and accurate, and would portray them in a positive light. This movie is a love letter to the Twin Towers,” says Baillie. 

The Towers

The World Trade Center, which comprised seven buildings in Lower Manhattan, was constructed over two decades starting in the late 1960s. The centerpieces were the Twin Towers. The original One World Trade Center (the North Tower), when completed in 1972, stood at 1,368 feet, while its twin at Two World Trade Center (the South Tower), completed in 1973, was 1,362 feet high. They were the tallest buildings in the world at the time of their completion and boasted 110 floors. 


Digital artists had to reconstruct the Twin Towers of the World Trade Center in exacting detail for The Walk.

That height had a major impact on the film’s story line. “Sometimes movie reality is an exaggerated version of real reality,” says Baillie. “But what [Petit] did was so crazy in and of itself that we didn’t need to exaggerate the buildings in terms of their height.” 

According to Baillie, the tops of the towers in the movie were “exactly as tall as they were in real life, and what you get to experience up there on the wire [in the film] is what you would have experienced out there in 1974.” The VFX artists worked from the actual architectural blueprints to reconstruct the buildings digitally “in their most perfect sense.” 

To that end, the group spent weeks on the panels and windows, introducing slight imperfections. “It’s that personal touch of introducing human imperfections into this giant monolithic structure built by humans that made our CG structure look realistic,” explains Baillie.

So, the artists built the most realistic version of the towers they could possibly construct, though there were instances that required judgment calls, since every reference photo showed a different aspect of the buildings. “There were times that if we turned left, they would have looked harder-edged, more menacing, and if we turned right, they would have been softer, more inviting. When that happened, we tended to turn right, evoking more positive memories of the towers. And the void between the towers would be something that was calling to Philippe, rather than being menacing,” explains Baillie of the direction the artists took.

Zemeckis’s directive was that the towers had to look 100 percent real, but they also had to be a supporting character, not an antagonist – they beckon and call to Philippe. “The height is terrifying in and of itself, especially if you see the movie in 3D,” says Baillie, “yet it feels inviting in a way.”

When Gordon-Levitt steps out onto the wire, there is a beautiful orange fog that clears from below, and the lighting at the start of the walk is at magic hour, with pink and orange glints off the building. 

“It is a kind of magical moment that is meant to feel more like a ballet than anything threatening. But at the end of the walk, there is a sequence when [Gordon-Levitt] begins to have doubts and feels like the towers may have had enough and are telling him to get off his wire,” says Baillie. “It is at this point when we made the weather a little stormier and the lighting grayish. The towers have not turned on him yet, but they are getting a little impatient.”

Circa 1974

A great deal of research was done to achieve the desired level of realism. Baillie and the artists looked at reference photography in books and on Google Images, as many people had posted pictures of the tops of the towers from various time frames. “But it was not until right after we finished shooting that the full gravity of what we needed to do really hit me,” he says. That was when Baillie and others spent two days taking reference footage from a helicopter to see, for instance, how light changes within the city, how traffic moves at various times of the day, and so forth. 

As if re-creating New York City from scratch wasn’t daunting enough, the group also had to turn back the clock to 1974. “It looks totally different there now, obviously,” says Baillie. “We had hovered [in the helicopter] in the exact position where Philippe had been when he walked between the towers, and I remember looking down and becoming overwhelmed by this simultaneous feeling of awe and respect at what he had done, and also the sheer terror of how high we were.”

That was when Baillie realized the importance of integrating that emotional sensation of danger and awe into the film through the visual effects. “If a shot didn’t make me feel that, I knew something was wrong and we had to figure it out,” he adds. “It had to be technically accurate and emotionally accurate, too.”

Had the artists been re-creating modern-day New York City, they could have used film and photographs taken from the helicopter. However, the city today looks very different than it did four decades ago. Fortunately, the team had access to tens of thousands of period photos, yet most were of poor quality and grainy, as this was before digital photography. 

“A big creative challenge for the team was using a combination of old photos and blueprints from outside and inside the structures, to reconstruct the buildings, including every AC unit on the roof, the rain gutters, and the hot dog stand on the street. We even [digitally] added the newspaper stands on the ground and inserted newspapers from that time,” Baillie says. 

The artists constructed office interiors for the top 30 floors, including desks, bookshelves, blinds, and so on. “There is all kinds of stuff in there to give that extra level of depth,” Baillie says. “That’s the thing about looking at reference photography: Every photo will have this one little thing that you can unearth, if you pay close enough attention, that will make it one percent more real. And another photo will do the same. So by the end of the day, you will have uncovered 40 to 50 things that can make very subtle and almost imperceptible changes to the imagery you are creating, but it will make the difference between good CG and realism. We spent a lot of time getting that last 10 percent of realism, to make people feel like they were really there.”

Under Construction

The digital artists used Autodesk’s Maya for building the models and The Foundry’s Mari to texture them. Lighting was done in The Foundry’s Katana, which was also used for all the scene construction and building assembly. Rendering was done in Chaos Group’s V-Ray.

Filmmakers used various methods to portray the vertigo feeling on the wire. 

As Baillie explains, many people have different ideas of what the towers looked like – some thought they were white, others gray; some thought they had a blue tint. That’s because the structures were clad in anodized aluminum: viewed straight on, they appeared to have a matte, silvery look, like a MacBook; from the side, they looked more chrome, almost reflective. The effect came from thousands upon thousands of tiny light-scattering bumps across every square inch of the buildings.

“The buildings were like chameleons, taking on the look of the environment,” Baillie explains. “So on a sunny day, they looked white, but on a dreary day, they almost looked charcoal gray. We had to figure out how to reproduce that.”

The solution was Sergey Shlyaev’s GGX shader for V-Ray, a physically-based shader that uses a microfacet distribution to better match the measured response of light reflecting off real surfaces. In short, it simulated all those tiny bumps on every square inch of the building surface. 
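The GGX distribution at the heart of such a shader is compact. As a point of reference only – this is the published GGX/Trowbridge-Reitz formula, not Shlyaev’s actual V-Ray shader code – the normal distribution term that governs how the microfacets spread reflected light looks like this:

```python
import math

def ggx_ndf(cos_nh: float, roughness: float) -> float:
    """GGX (Trowbridge-Reitz) normal distribution function.

    cos_nh:    cosine of the angle between surface normal and half-vector
    roughness: perceptual roughness, squared to get alpha per the common
               PBR convention
    """
    alpha = roughness * roughness
    a2 = alpha * alpha
    denom = cos_nh * cos_nh * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

# A rougher surface spreads the highlight, which is why the towers could
# read matte head-on yet mirror-like at grazing angles once the full BRDF
# (distribution + Fresnel + shadowing terms) is evaluated.
print(ggx_ndf(1.0, 0.1))  # tight, intense peak
print(ggx_ndf(1.0, 0.6))  # broad, dim response
```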

“When we started using the GGX shading model, the towers transformed from good-looking VFX metal to something far more realistic,” says Baillie. 

This was important because the towers were taking on the environment around them. “Often with movies, you focus on what is in camera. Anything above or below us we don’t worry about,” Baillie explains. “With the towers, we had to worry about what was above, below, and behind us because it was all a set and reflected in the buildings. We had to build the world in 360 degrees. The towers themselves depended on us building a realistic world around them. We couldn’t think of the towers or New York on their own, but how they lived together in the same world.”

This resulted in a massive amount of rendering. “We are up on the towers for a long time, over 20 minutes of the film. So we had to figure out how to get all that rendering done,” Baillie notes.

The team opted for a cloud-based rendering solution, using a platform called Conductor to manage the workflow, which enabled the artists to access and interface via the cloud with up to 15,000 processors on demand. 

“This was the largest use of cloud computing ever on a movie,” says Baillie, noting the film required 9.1 million hours of rendering in the cloud – basically, more than 1,000 years using a single processor. “So you could say we spent a millennium in the cloud creating the visuals.”

According to Baillie, the film’s budget was far less than other VFX-heavy features, but by using the cloud, they were able to turn around the work quickly, saving an estimated 50 percent in rendering costs compared to using traditional methods. 

Also atypical was the use of matte paintings on the film. When Gordon-Levitt steps out onto the wire, it is sunrise, but as the walk progresses, the sun gets higher in the sky, and by the end, the environment is stormy. A pure, traditional matte-painting approach would have required close to 20 paintings, “which would have been completely overwhelming,” says Baillie. 

Instead, the artists followed a hybrid approach whereby the CG lighting and matte-painting departments worked in concert. Since every detail below the wire was geometry, they could blend the light bouncing off these very detailed, live-rendered CG assets with matte-painted augmentations, and change the CG light as the sequence progressed. The end result contained the beauty of a painting with the flexibility of CGI.

Vertigo

Baillie points out that The Walk is one of those great successes that happen when all the departments work together in perfect sync: production, set design, digital effects, camera, stereo 3D. To prepare for the walk, a 40-by-60-foot corner of a rooftop set – about a sixth of the actual size – was built for the actors to scamper across and for the camera crew to capture their viewpoint in the rooftop scenes. 

Facial Expression

Preparing Actor Joseph Gordon-Levitt for death-defying scenes

Digital artists at Atomic Fiction created a CG version of Actor Joseph Gordon-Levitt’s face for scenes that required a stunt performer. 


The scanning was done at Pixelgun Studio with a mobile photogrammetry rig that captures ultra-high-resolution facial expressions and full bodies. The solution utilizes an array of more than 100 cameras that surround and simultaneously photograph the subject. Proprietary software uses perspective differences among the cameras to construct a precise geometric representation of the face, and since the cameras themselves are the source, the texture aligns exactly to the mesh. 
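Pixelgun’s solver is proprietary, but recovering geometry from perspective differences rests on classic multi-view triangulation, which photogrammetry packages build on. A minimal sketch of the textbook direct linear transform (DLT) for one feature seen by two calibrated cameras – an illustration, not Pixelgun’s code:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices (known from calibration)
    x1, x2: (u, v) pixel positions of the same facial feature in each view
    Returns the feature's 3D position in world coordinates.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous point X; stack them and solve A @ X = 0.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)   # null vector = last row of V^T
    X = vt[-1]
    return X[:3] / X[3]
```

With more than 100 cameras, a production rig solves this jointly across every camera pair and every matched feature, which is where the density of the resulting meshes comes from.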

In a half hour, the group acquired 40 to 45 poses of Gordon-Levitt’s face. “We had him act out poses where we may have needed to re-create his face as he performed on the wire,” says Atomic Fiction’s Kevin Baillie, overall visual effects supervisor. “It tells us how, when he furrows his brow, blood flows from his forehead to his temples; what the skin and blood under the skin do. Just as reproducing the offices and the bumps on the surface of the towers elevated the reality of the environments, so did this information for the performer. Every single detail on the skin and under it increased the level of reality.”

The resulting mesh and textures provided the artists at Atomic Fiction with substantial data that was later refined artistically and with the use of Chaos Group’s V-Ray skin shader. “We’ve learned over the years that reality is not always the right answer; it’s what looks right,” Baillie says.

On the last day of filming, Gordon-Levitt donned a Faceware Technologies helmet camera for approximately 50 takes that replicated moves by the stunt performer that would likely require face replacements in the shots. “Joe walked across a tape mark on the ground with the head cam, and Faceware gave us very clean and lightweight data to use,” says Baillie. “It got us 60 to 70 percent there, as opposed to a detailed motion-capture session with thousands of markers, which would have been data overload and required more time for us to undo than to animate the shot.”


To illustrate the notion of height, artists added various effects, including clouds, using Houdini.

Zemeckis, Baillie, and Cinematographer Dariusz Wolski primarily relied on two specific types of camera moves to produce the dizzying vertigo effect. One is when we are looking at Gordon-Levitt on the wire and see the horizon, and the camera moves above him and starts looking downward. The other starts with the edge of the building in frame and moves over it, as if daring us to peer over the edge. Later, Jared Sandrew, 3D supervisor for Legend 3D, picked up on those cues and accentuated the vertigo feeling in the stereo conversion process.

“There were times in the movie when we didn’t want people to feel that way, so we made the 3D feel shallower. If Petit (Gordon-Levitt) was feeling serene and comfortable, we wanted audiences to feel that way too,” says Baillie.

According to Baillie, “there were a bazillion little things we added to sweeten the look,” such as clouds that would blow by in many of the shots to provide depth. Flocks of birds would fly past, stacks would release smoke in the distance, and farther down, ant-size people would walk the streets while steam vents released vapor. Many of these effects were generated in Side Effects’ Houdini, though the birds were done with The Foundry’s Nuke and its 3D particle system.

A method of creating haze in the city also helped sell the illusion of height. “If you look at a photograph, it’s not just the intensity of the haze that changes as you go off into the distance, but the color of the haze changes, too,” says Baillie. “It is a little warmer closer to the sun, cooler and bluer over the water than over the city, where it might be a little brown.” To this end, the group developed a system that combined Nuke’s 3D environment with the renders, so the artists could control the depth and tint of the haze with the same fine-grained control they apply to color.

“We blurred the line. Nuke for us is no longer a tool that is just used for 2D comp; it is used heavily in 3D to solve the illusion,” says Baillie.
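Atomic Fiction’s Nuke setup is proprietary, but the underlying idea – exponential fog whose tint, not just intensity, is graded by depth – can be sketched in a few lines. A hypothetical illustration (the function and its parameters are stand-ins, not the studio’s tool):

```python
import numpy as np

def apply_depth_haze(rgb, depth, haze_near, haze_far, density=0.002):
    """Blend a depth-graded haze color over a rendered image.

    rgb:       (H, W, 3) linear-light render of the city
    depth:     (H, W) per-pixel distance from the camera
    haze_near: RGB haze tint near the camera (e.g., warm toward the sun)
    haze_far:  RGB haze tint at the far plane (e.g., cool over the water)
    density:   scattering coefficient; higher values read as thicker air
    """
    # Classic exponential fog: how much of the scene survives the air.
    transmittance = np.exp(-density * depth)[..., None]
    # Grade the haze color itself by depth, per Baillie's observation
    # that both the intensity and the color of haze shift with distance.
    t = np.clip(depth / depth.max(), 0.0, 1.0)[..., None]
    haze = (1.0 - t) * np.asarray(haze_near) + t * np.asarray(haze_far)
    return rgb * transmittance + haze * (1.0 - transmittance)
```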

Set Pieces

Before filming began, Gordon-Levitt was given tightrope lessons by the best – Petit himself – enabling him to do limited walks 12 feet off the ground. For more complex work, a steel plank with a groove in the middle was situated under the wire, giving the actor more support and stability; later, the plank was digitally erased and the sides of the actor’s feet were rounded out, making it appear as if they were supported solely by the wire.

As for the towers, the artists re-created the entire structures with geometry, but the top 30 stories and bottom 30 – which are seen in the film – contained far greater detail. In short, if audiences could see inside the windows, an office was made inside.

Indeed, the towers – which no longer stand – were the biggest visceral aspect of this virtually constructed world. However, they were not the only CG sets. The movie was filmed in Montreal, which had to stand in for New York City and for Paris, where Petit had spent his earlier years. Many locales were all-digital, built by RodeoFX or UPP. 

According to Baillie, the work on The Walk was among the most satisfying he has done. “Lots of movies are fun, but they are more about creating an effect that is an obvious effect,” he says. “For this movie, the goal was to have the effects serve in a supporting role rather than stealing the show. They sit in the background and are there to transport the audience and make them feel like they are there with Petit [Gordon-Levitt], a place that is very special to many across the globe.”