Growing a Giant Fantasy
Issue: Volume 36, Issue 3 (Mar/Apr 2013)

Fairy tales brought to the screen have been a mixed blessing for the writers, directors, and studios trying to target just the right ticket-buying audience, but for visual effects studios, the fantasies provide an opportunity to amplify their ability to make the impossible real. That was the case with Warner Bros.’ Jack the Giant Slayer.

Directed by Bryan Singer, the film is a mash-up based on the English and Cornish fairy tales “Jack and the Beanstalk” and “Jack the Giant Killer.” Jack (actor Nicholas Hoult) and feisty princess Isabelle (Eleanor Tomlinson) climb a beanstalk, land on an island in the sky, and nearly fall prey to a gaggle of giants who consider humans a rare delicacy. A human villain played by Stanley Tucci controls the giants with a crown (there are no golden eggs) until the lead giant, a two-headed monster, captures the crown. Having regained control and their appetite, the giants climb down the beanstalks (many now) to the village below to find more humans to eat. There’s a battle at a castle. A chase through a cathedral. And a surprise ending that has Jack winning the crown in an unusual way to provide the fairy-tale ending.

Despite wicked business machinations grinding in the background, Digital Domain artists produced all the film’s fairy-tale giants – 15 unique characters, of which eight, including the two-headed monster, had starring roles with dialog. From those, the crew created a crowd that sometimes included as many as 100 giants and provided them with costumes, complete with cloth simulations. The Moving Picture Company (MPC) artists grew the beanstalk, a giant creature that rockets out of a farmhouse in one sequence and out of the belly of a giant in another. Each of these studios developed new technology and new techniques to accomplish the work. For Digital Domain, that meant an evolution of the virtual production system used on Real Steel, new skin shading techniques, and HDR imaging. For MPC, it meant tricks for rendering a behemoth beanstalk.

Pre-cap

At Digital Domain, Visual Effects Supervisor Stephen Rosenbaum brought his experience working on Avatar to the crews located in Venice, California, and Vancouver, Canada. On-set Supervisor Swen Gilberg brought his experience at Digital Domain on Real Steel to virtual production.

“I came on as principal photography started,” Gilberg says. “While Stephen ran things on the West Coast, I spent five months in England working with Hoyt Yeatman [overall visual effects supervisor] on set. The idea was similar to what we did for Real Steel, but more rushed. Do mocap first, put that into a blocking previs. Then, put that in a virtual space that matches where you would later shoot. We did the ‘pre-cap’ in two weeks before principal photography.”

To do the pre-cap, Giant Studios and Digital Domain shipped their equipment and crews to England, where principal photography would take place. The Giant Studios crew handled the body capture and virtual camera, while at the same time, the Digital Domain crew used a proprietary system dubbed Beatrice to capture facial expressions for the actors playing the CG giants: Bill Nighy as the main General Fallon head and John Kassir as General Fallon’s small head, Cornell John as Fee, Andrew Brooke as Fye, Ben Daniels as Fumm, and Philip Philmar as the giant cook.

“We used four cameras to capture the faces,” Rosenbaum says, “two cameras on either side, which gave us true 3D information. We could see the jaw line, see how the lips curled out with certain words. The thing I’m most proud of in this show is that the actors’ performances came through.”

The animation team, led by Jan Philip Cramer, worked with a new facial rigging system to create the digital performances. “We did a FACS session using OnLive’s Mova system,” Rosenbaum says. “Mova gave us a dynamic mesh during every expression so the modelers could see the face deformed in specific poses on a 3D model. The new facial rigging system had between 1,500 and 2,000 face shapes because we needed that level of control to get the subtleties in the actors’ performances. [Principal Engineer] Nick Apostoloff developed a new solving algorithm that greatly improved the accuracy of the data from the performances that drove the face shapes.”
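The article doesn’t detail the solver itself, but a common way to drive a large bank of face shapes from tracked 3D points is a bounded least-squares fit of blendshape weights. The Python sketch below illustrates only that general idea; the point count, shape count, and random stand-in data are invented, and this is not Digital Domain’s algorithm.

    # A sketch of one common approach to driving face shapes from capture data:
    # fit blendshape weights to tracked 3D points with bounded least squares.
    # Illustrative only -- not Digital Domain's solver; dimensions and "capture"
    # data are invented. The film's rig reportedly had 1,500-2,000 shapes.
    import numpy as np
    from scipy.optimize import lsq_linear

    num_points = 120      # tracked facial points for one frame (hypothetical)
    num_shapes = 300      # kept small so the example runs quickly

    rng = np.random.default_rng(0)
    neutral = rng.normal(size=3 * num_points)                     # rest-pose points
    shape_deltas = rng.normal(size=(3 * num_points, num_shapes))  # one column per shape

    # one captured frame: the neutral face plus a sparse mix of shape deltas
    true_weights = np.zeros(num_shapes)
    active = rng.choice(num_shapes, size=12, replace=False)
    true_weights[active] = rng.uniform(0.1, 0.6, size=12)
    captured = neutral + shape_deltas @ true_weights

    # solve  min || (neutral + D w) - captured ||  subject to  0 <= w <= 1
    fit = lsq_linear(shape_deltas, captured - neutral, bounds=(0.0, 1.0))
    print("active shapes recovered:", np.count_nonzero(fit.x > 1e-3))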

To perform the two heads on one body, Nighy and Kassir stood next to each other. “We would do real-time solves of the performances to see the characters through the virtual camera,” Rosenbaum says. “Not the faces. Those were static.”

During the capture, Singer could direct the actors and see simple CG versions of the giants mimicking their actions. “We had previs-style backgrounds that accurately matched the geometry of the location, so he could properly frame the shot,” Gilberg says. “Normally, we do a capture session, then a virtual camera session. This was rushed, so we did them together.”

The virtual camera was a tablet with characteristics that mimicked the cameras that the production crew would use later on set. “Bryan and Tom Sigel [Newton Thomas Sigel], the DP, could compose shots while the actors performed on the motion-capture stage,” Rosenbaum says. “At any time, we could capture up to 10 or 12 actors, depending on the action.”

The captured motion from the actors and the virtual camera went to Digital Domain and to The Third Floor. At The Third Floor, previs artists refined the camera and tightened the edit in preparation for principal photography. During filming, the CG giants appeared in camera as if they were on location.

On Set

This virtual production technique proved especially important for a sequence during which Fallon drags a mace (a ball on a chain) through the Norwich Cathedral. A camera on a crane follows the mace and then booms 24 feet up Fallon’s body and settles on his face as he turns into the camera for a close-up.

“We wouldn’t have been able to create this shot without SimulCam,” Rosenbaum says. “The odds of guessing where the CG character was in that space would be slim.”

Prior to filming in the cathedral, a Digital Domain crew had surveyed the location to produce an accurate previs environment. During the pre-cap motion-capture session, Nighy and Kassir, who played two-headed Fallon, had performed in that replicated environment, and data captured from them had been transferred onto the CG giant.

The CG character then moved into the SimulCam system for filming. “Giant Studios put motion-capture markers on the live-action camera and composited the virtual character into the environment being filmed in real time,” Rosenbaum explains.

Thus, the DP and camera operator could look through the eyepiece and see Fallon in the cathedral. “Our character was just like an actor,” Gilberg says. “A 24-foot actor. The camera operator could look at him or not; the giant stayed in his own world space.”
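The mechanics behind that description reduce to a familiar graphics problem: once the live-action camera is tracked, the CG character’s fixed world-space points can be projected through that camera and overlaid on the plate every frame. The sketch below shows the idea with a simple pinhole model; the camera intrinsics, giant positions, and identity orientation are invented for illustration, not values from the production.

    # A minimal sketch of the SimulCam idea: project the CG giant's world-space
    # points through the tracked live-action camera so he can be overlaid on the
    # plate in real time. All numbers here are invented for illustration.
    import numpy as np

    def project(points_world, cam_pos, cam_rot, focal_px, center_px):
        """Pinhole projection of world points into pixel coordinates."""
        cam_space = (np.asarray(points_world) - cam_pos) @ cam_rot.T
        x = focal_px * cam_space[:, 0] / cam_space[:, 2] + center_px[0]
        y = focal_px * cam_space[:, 1] / cam_space[:, 2] + center_px[1]
        return np.stack([x, y], axis=1)

    # the tracked camera for one frame (would come from markers or the encoded crane)
    cam_pos = np.array([0.0, 1.8, -10.0])
    cam_rot = np.eye(3)                      # looking straight down +Z for simplicity

    # a roughly 24-foot (7.3 m) giant's head and feet, fixed in world space
    giant_points = [[0.0, 7.3, 5.0], [0.0, 0.0, 5.0]]
    print(project(giant_points, cam_pos, cam_rot,
                  focal_px=1500.0, center_px=(960.0, 540.0)))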

The crews used the same type of system for sequences filmed outdoors. “They filmed the show in stereo using 3ality’s Epic rigs,” Gilberg says. “The stereo rig is so heavy it always ends up being on a crane, which means there isn’t as much freedom moving the camera. That one drawback, especially for virtual production, is also a plus. On the plus side, we had Giant Studios encode the crane, which we hadn’t done before. On Real Steel, we had active markers and a motion-capture volume for the cameras.”

Even without the hindrance of heavy cameras, the crew might have used the crane anyway. “It was too windy to put up a truss for motion-capture cameras,” Gilberg says. “So, we used old-school encoding with real-time playback through [Autodesk’s] MotionBuilder. The plus was that we ended up with a much smaller footprint than if we would have used a truss and 30 or 40 cameras.”

CG Giants

The audience first sees a giant during a chase sequence. The character Crawe (actor Eddie Marsan) has climbed the beanstalk to the land of the giants and discovered Fee. “Crawe hides behind a tree,” Rosenbaum says. “Fee comes up behind him and rips the tree out of the ground, and Crawe runs. Bryan [Singer] wanted the giant to look at Crawe as if he were an ant running away. He waits. One, two, three beats. Then smacks him to the ground. Because it is the introduction to the giants, the camera goes from far away right up into his face and focuses on an eye. It’s raining. He has wet hair. And water runs off his skin surface. We had to push our texture maps to 32k on close-ups and shift back dynamically so we could see pore detail and detail in the eye. We added hairs on the skin to add subtle details. And had water dripping off.”

It was one of many shots in which the camera moves close to a giant’s face. “Their faces are 40 feet across,” Rosenbaum says. “We had to be diligent about how we handled the skin shading and eye development.”

The conceit was that an evil king had formed the giants from earthen materials. In early designs, the giants looked like mud men. In later designs, they became more humanoid, but with dirt, straw, bits of grass, and other earthly materials embedded in their skin.

“One thing that made the show difficult was that each of the principal giants had a unique look,” Rosenbaum says. “We couldn’t steal or borrow from other characters. We could start with a base-level skin shader, but beyond that, we had to start each character from scratch. They had different textures and different earthen materials embedded in their skin. And, they’re 24 feet tall, so they had a lot of surface area. We had to get the shading on their skin and eyes right.”

Realizing the team would need to develop a new technique, Rosenbaum plunged into a research project. “I pulled out a bunch of the tech papers written over the last several years on skin shading,” he says. “We developed a new approach that handles multiple layers of translucent materials.”

Rosenbaum explains that usually subsurface scattering utilizes a single-layer approach. Light hits the surface, diffuses uniformly, and produces a homogeneous look across the skin. “That’s why skin often looks like silicone or honey or milk,” he says. “But skin is more complex. It has multiple layers. Dermis, epidermis, fat, muscle. All that affects light differently, so you need to build multiple layers of the scattering algorithm to account for the photoreal look.”
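One published way to capture that layered behavior, borrowed here purely for illustration, is to build the diffusion profile as a weighted sum of Gaussians, with tight falloffs standing in for shallow layers and broad ones for deeper scattering (in the spirit of d’Eon and Luebke’s approximation of Donner and Jensen’s multi-layer skin model). The variances and color weights in the sketch below are made up and are not the values Digital Domain used.

    # A sketch of a sum-of-Gaussians diffusion profile for layered skin
    # scattering. Not the film's shader; layer variances and RGB weights
    # are illustrative stand-ins.
    import numpy as np

    def gaussian(variance, r):
        """Normalized 2D Gaussian evaluated at radial distance r (in mm)."""
        return np.exp(-r * r / (2.0 * variance)) / (2.0 * np.pi * variance)

    # per-layer (variance in mm^2, RGB weight): tight Gaussians suggest the
    # epidermis, broad ones suggest deeper dermal scattering of red light
    layers = [
        (0.05, np.array([0.48, 0.42, 0.36])),
        (0.27, np.array([0.10, 0.34, 0.34])),
        (2.00, np.array([0.36, 0.01, 0.00])),
    ]

    def diffusion_profile(r):
        """RGB diffusion response at radius r: the sum of per-layer Gaussians."""
        return sum(w * gaussian(v, r) for v, w in layers)

    for r in (0.1, 0.5, 1.0, 2.0):
        print(f"r = {r} mm  ->  RGB {np.round(diffusion_profile(r), 4)}")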

For rendering, the crew settled on Arnold (Solid Angle) in combination with Katana (The Foundry) rather than RenderMan (Pixar). “RenderMan 16 wasn’t out yet,” Rosenbaum says. “The big advantage with Arnold was that it was a true global illumination solution with a real raytracer, so we could get precise lighting representations on the characters, including the eyes. The eyes were really key on this movie because they were four times the size of a human’s eye and often close up. Previously, we had to cheat the shape for shading purposes, but we didn’t have to cheat now. We reconfigured the eye to be anatomically correct.”

There were disadvantages, too. “The disadvantage is that raytracers are slow,” Rosenbaum says. “We had fewer iterations, but each render took longer. The other drawback was with displacement. There were quite a few instances where we needed to displace the surface, so we had to come up with a clever approach. In RenderMan, we would have gotten micropolygon displacement for free.”

On location, HDRIs and light probes helped the team match the light when the giants were in live-action scenes. “Most of the big scenes, though, were digitally created environments,” Rosenbaum says. “So we would take the HDRI from the previous scene, adjust it slightly for the time-of-day shift, and use that.”

Digital Effects Supervisor Paul Lambert spent time before principal photography making sure the HDRIs captured on set matched the Red Epic camera. “It was one of the first shows to use that camera,” Lambert says. “Everything was brand new at the time. So, we would shoot with the Epic camera and with our Canon 1Ds, which we used for the HDRIs, to make sure everything was calibrated.”

Lambert also discovered a way to make sure that when the camera pointed into the sun, they had the correct values. “When you go to all the trouble of getting a physically correct renderer, you want the correct values,” he says. “Up to now, when we captured an outside HDRI, the camera couldn’t go far enough to capture the exposure of the sun, so we knew it would be clipped and we would compensate for it. Paul Debevec came up with a technique to put a filter in the back of the lens to capture the sun, but on set we don’t have time to change lenses.”

So, the Digital Domain on-set crew used two cameras. One captured environment HDRIs. The other, fitted with a 150mm lens, was stopped down with a massive filter. “We would point it at the sun, get the exposure, take it into [The Foundry’s] Nuke to correct for characteristics we had worked out, and then put the sun value into the HDRI,” Lambert says. “It meant we had to move to 32-bit EXR – in a lot of shots, the sun value was 80,000 and 16-bit EXR only goes to 65,000 – so we had bigger files.”
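The arithmetic behind that switch is easy to verify: OpenEXR’s 16-bit “half” channel uses the same binary16 format as NumPy’s float16, whose largest finite value is 65,504, so a sun value of 80,000 simply overflows. A quick check, assuming nothing beyond NumPy:

    # Why the sun value forced 32-bit EXRs: a 16-bit half float tops out at
    # 65504, so 80,000 overflows to infinity, while a 32-bit float keeps it.
    import numpy as np

    sun_value = np.array([80000.0], dtype=np.float32)
    print(np.finfo(np.float16).max)        # 65504.0, the ceiling of a half float
    print(sun_value.astype(np.float16))    # [inf]    -- the sun value overflows
    print(sun_value)                       # [80000.] -- full 32-bit float keeps it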

To determine those characteristics, Lambert spent time on a rooftop with the filtered camera. “I took a picture of the sun every hour,” he says. “I got a good amount of data and a sunburn.” But, it was worth it: “Knowing we had the correct values meant we could tune shaders knowing they would be correct. You wipe out a lot of cycles of trying to interpret.”

And the Beanstalk

In several shots, Digital Domain’s giants interact with a giant beanstalk created at MPC. The giants’ weight affected the beanstalk, and the movement of the beanstalk affected their performance. “It took a tight collaboration,” Rosenbaum says. “What made it work was the relationship with Greg Butler [visual effects supervisor] and Matt Welford [on-set supervisor] in Vancouver.”

Modelers at MPC matched and extended a 30-foot-tall practical set that had 12 pieces on moveable bases. “One of the challenges was to design, build, animate, and render a model that shoots upward through the farmhouse into the sky,” Butler says. “On any given shot, it could be a mile high.”

The animators would need to move the pieces to grow the enormous plant and, later, to give it life. So, modelers and riggers worked together to design and build sections that could fit together and a rig that could connect the pieces, “like boxcars,” Butler says. “We could define a master curve and populate it with beanstalk assets, leaves, and connecting vines. We based the modeling tool kit on an animation rig. The modelers would pose curves, convert the curves to geometry, then freeze and lock them. At first, this seemed like overkill when it came to modeling, but the animators needed to move pieces around and the modelers could go into a scene and use the rig to tweak the model. We kept it live the whole time.”
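The “boxcar” idea can be pictured as sampling a master curve at regular intervals and handing each beanstalk section a position and an aim direction from it. The sketch below is a stand-in for that concept only, using a helix in place of an artist-posed curve; it is not MPC’s rig.

    # A minimal sketch of the "boxcar" placement idea: define a master curve,
    # then drop segments onto it at regular intervals so each piece inherits a
    # position and orientation. Illustrative only -- not MPC's system.
    import numpy as np

    def master_curve(t):
        """A simple helix standing in for the artist-posed beanstalk curve."""
        return np.array([np.cos(6 * np.pi * t), np.sin(6 * np.pi * t), 30.0 * t])

    def place_segments(num_segments):
        """Return (position, forward-direction) pairs for each segment."""
        ts = np.linspace(0.0, 1.0, num_segments + 1)
        placements = []
        for t0, t1 in zip(ts[:-1], ts[1:]):
            p0, p1 = master_curve(t0), master_curve(t1)
            forward = (p1 - p0) / np.linalg.norm(p1 - p0)   # segment's aim axis
            placements.append((p0, forward))
        return placements

    for i, (pos, fwd) in enumerate(place_segments(12)):     # 12 pieces, like the set
        print(f"segment {i:2d}  pos {np.round(pos, 2)}  aim {np.round(fwd, 2)}")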

Lead Rigger Devon Mussato designed the system. Head of Modeling Chris Uyede worked with Lead Modeler Ryan Lim and his team, and with Animation Lead Jeremy Mesana, to define the sections and determine how they would fit and how the rig would read them in. Texture Lead Erik Gronfeldt did look development. “And finally, when we got it up and running and found out how heavy it was, Mark Williams, our lead R&D technical director, wrote a system that broke it into pieces so animators could work with lightweight scenes,” Butler says. “We had outsmarted ourselves. As the beanstalk grew, our problems grew, and we had to find our way out. By the end of the movie, we had a good system. It was definitely a technical evolution.”

One way in which they had outsmarted themselves was by treating the beanstalk as a creature. Because the plant was organic, that approach seemed natural, but the team realized only later that some techniques used for hard-surface modeling might have been more efficient.

“We’re so used to doing creatures, where skinning involves hand painting,” Butler says. “The complexity of the beanstalk, though, came from design and shading. We realized too late that we could do all the skinning at runtime because it was just defining how the geometry follows a joint. With robots and hard-surface models, we have always done that in the renderer to save calculations. We had thought of the beanstalk as a flexible object with stretching and twisting, when so much of the time it was more like a hard-surface model.”
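The distinction Butler draws comes down to where the deformation is evaluated: bake a deformed mesh every frame (the creature workflow), or store which joint each piece of geometry follows and apply that joint’s rigid transform only when the geometry is needed, for example at render time. The sketch below illustrates the deferred version with invented joints and vertices; it is not MPC’s pipeline.

    # A minimal sketch of render-time rigid "skinning": instead of baking a
    # deformed mesh per frame, record which joint each vertex follows and apply
    # that joint's transform on demand. All data here is invented.
    import numpy as np

    vertices = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 2.0, 0.0]])
    joint_of_vertex = np.array([0, 0, 1])   # each vertex rigidly follows one joint

    def joint_matrices(frame):
        """Per-frame 4x4 joint transforms (a simple animated twist and lift)."""
        angle = 0.1 * frame
        rot = np.array([[np.cos(angle), -np.sin(angle), 0, 0],
                        [np.sin(angle),  np.cos(angle), 0, 0],
                        [0, 0, 1, 0],
                        [0, 0, 0, 1]])
        lift = np.eye(4)
        lift[2, 3] = 0.05 * frame
        return [rot, lift @ rot]

    def deform_at_render_time(frame):
        """Apply each vertex's joint transform on demand; nothing is baked."""
        mats = joint_matrices(frame)
        homogeneous = np.hstack([vertices, np.ones((len(vertices), 1))])
        deformed = [mats[j] @ v for j, v in zip(joint_of_vertex, homogeneous)]
        return np.array(deformed)[:, :3]

    print(deform_at_render_time(frame=24))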

However, the process the team went through to create the beanstalk led to an important discovery. “We realized we needed to work with R&D earlier, especially given the high complexity of projects,” Butler says. “R&D can abstract and find ways to accelerate parts of the process. The partnership we formed has led to an ongoing collaboration.”

Jack also marked the studio’s first use of a new volumetric tool kit developed during the past few years. “We used it for clouds,” Butler says, “particularly when the camera looks off into the cloud shape. We would model simple geometry to lay out the size and overall shape, and then we layered algorithms onto it to give it edges. Whether it was fat and puffy or small and stretched out, we could describe the clouds through procedural operations and have full volumetric clouds. We started off experimenting, and by the end of the show, we had the render times down. We have a great relationship with Pixar, and every couple months we get further into the new features they’re adding to RenderMan, integrating more of our effects technology into it.”
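The procedural recipe Butler describes, simple geometry for the overall shape with layered noise giving the edges character, can be sketched as a density function. The version below uses a sphere and a cheap sinusoid stand-in for fractal noise; every parameter is illustrative, and this is not MPC’s volumetric tool kit.

    # A minimal sketch of a procedural cloud density: a base shape (a sphere)
    # fixes the silhouette, and layered noise erodes its edge. Illustrative
    # only -- the noise and parameters are stand-ins.
    import numpy as np

    rng = np.random.default_rng(3)
    phases = rng.uniform(0, 2 * np.pi, size=(4, 3))   # a few pseudo-noise octaves

    def fractal_noise(p):
        """Cheap stand-in for fractal noise: summed sinusoids at rising frequency."""
        return sum(0.5 ** i * np.sin((2.0 ** i) * p + phases[i]).prod()
                   for i in range(4))

    def cloud_density(p, radius=1.0):
        """Base-shape falloff minus noise erosion, clamped to a valid density."""
        base = radius - np.linalg.norm(p)                 # > 0 inside the sphere
        eroded = base - 0.35 * abs(fractal_noise(p * 3.0))
        return max(eroded, 0.0)

    for point in ([0.0, 0.0, 0.0], [0.6, 0.2, 0.1], [0.95, 0.0, 0.0]):
        print(point, "->", round(cloud_density(np.array(point)), 3))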

Even though Digital Domain rendered with Arnold and MPC with RenderMan, deep compositing, developed at Weta Digital and implemented in Nuke, made sharing shots easier.

“We’d give them approved blocking and a simple beanstalk rig they could pull around,” Butler says. “Their giants could interact with and affect the beanstalk. They got approval on the beanstalk and giant animation, and locked that off. And then both sides added only secondary animation. And what saved us in the finishing stage was deep compositing.”

When MPC delivered renders to Digital Domain or vice versa, the renders came with deep passes. “We’d comp each other’s work, and it turned out to be as simple as that sounds,” Butler says. “Deep compositing wasn’t invented for that reason, but it’s been a boon for sharing between facilities.”
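Conceptually, the reason deep renders merge so cleanly is that each pixel carries depth-sorted samples, so two facilities’ renders can be interleaved by depth and flattened with a front-to-back “over” without holdout mattes. The sketch below shows that merge on made-up sample values; it is the general concept, not either studio’s compositing setup.

    # A minimal sketch of merging two deep pixels: interleave samples by depth
    # and flatten with a front-to-back "over". Sample values are invented.

    def deep_merge_and_flatten(samples_a, samples_b):
        """samples_*: lists of (depth, (r, g, b), alpha). Returns flat RGB, alpha."""
        merged = sorted(samples_a + samples_b, key=lambda s: s[0])  # nearest first
        out_rgb, out_a = [0.0, 0.0, 0.0], 0.0
        for _, rgb, alpha in merged:                  # front-to-back "over"
            weight = (1.0 - out_a) * alpha
            out_rgb = [c + weight * s for c, s in zip(out_rgb, rgb)]
            out_a += weight
        return out_rgb, out_a

    giant_samples = [(12.0, (0.4, 0.3, 0.2), 0.8)]             # one facility's character
    beanstalk_samples = [(9.0, (0.1, 0.5, 0.1), 0.5),          # the other's foliage
                         (15.0, (0.1, 0.4, 0.1), 0.9)]
    print(deep_merge_and_flatten(giant_samples, beanstalk_samples))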

Although it’s unlikely we’ll see another film with a giant beanstalk and earthy-skinned giants, the techniques and technologies that people in these studios created will likely find their way into future films. Life in visual effects may not be a fairy tale these days, but it is often fantastic.

Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at BarbaraRR@comcast.net.