Let Him Fly
Issue: Volume 37, Issue 3 (May/Jun 2014)


When audiences last saw the amazing Spider-Man, the young Peter Parker (actor Andrew Garfield) had begun to understand who he is and what he can do. In a moment of pure abandon, he (that is, the digital double of Spider-Man) glides over rooftops and down a long alley. Thanks to Director Marc Webb's vision and the artistry of the Sony Pictures Imageworks crew, it's a solid minute of animation within a continuous camera move.

Five years later in Peter Parker's life, the nascent superhero has mastered his powers. He's 20 years old and at the top of his game in Sony Pictures' latest blockbuster, The Amazing Spider-Man 2. "He's a virtuoso of all his talents now," says Visual Effects Supervisor Jerome Chen.

The same might be said for the filmmakers who came back to take Spider-Man on another journey. Webb, who reinvigorated the franchise with the 2012 The Amazing Spider-Man, returned for the second film, as did actors Garfield and Emma Stone (Gwen Stacy, Peter's girlfriend) and many artists on the visual effects crew. Imageworks' Chen returned as overall supervisor, as did Animation Supervisor David Schaub and Digital Effects Supervisor David Smith.

As is typical these days, the workload increased for this second film, and the time decreased. The crews on The Amazing Spider-Man had 50 weeks for postproduction. This time around, they had 34 weeks.

"I marvel at the amount of work artists can do in the amount of time they have now," Chen says, "and that quality can be maintained in these brutally short postproduction schedules. We started shooting in February 2013 and delivered the movie in March 2014. Post started in July. We had a year to do 1,650 shots. And the work was five times more difficult. Well, at least three times. On the first film, we had a single villain and some nighttime shots in New York City. This one has three villains, more use of daytime New York City, and a lot more swinging shots."

Superhero Physics

Schaub led a team of between 40 and 50 animators, mostly based in Vancouver, who moved the digital Spider-Man and the villains.

"Any time Andrew [Garfield] is in the suit performing as an actor, Spider-Man is Andrew," Schaub says. "Once he is in action and doing the physical swinging stunts, all those shots have a digital Spider-Man. There was very little wire work."

Although the crew rarely used footage of Garfield or a stunt double on wires, animators studied reference footage of Stunt Actor William Spencer swinging, jumping, and crawling up walls.

"Anything that could be done practically is just not extreme enough, but we use it to find cues," Schaub says. "When the web tightens, where does his weight go? What snaps? That helps inform the animators. We always referred to the comic books and made sure Spider-Man hit those iconic poses. But he wasn't just hitting those poses gratuitously. He doesn't just show off. There's a reason behind each pose, which is dictated by the action, and each naturally flows into the next."


ANIMATORS AT IMAGEWORKS gave the digital Spider-Man iconic poses with a purpose, even sculpting muscles specifically to catch highlights.

To help animate Spider-Man in an extreme, purposeful, and yet natural way, the team considered how physics in the real world would affect the superhero's moves.

"I call Dave Schaub 'Dr. Physics,'" Chen says. "Physics was very important in the way he directed his animators."

In fact, to bring animators up to speed on the principles, Schaub teaches a "Physics for Animators" class at Imageworks that is geared toward the challenges animators face. But how does physics apply to a superhero?

"We ask, 'OK, what is Spider-Man's superhero ability?'" Schaub says. "The answer is that he can swing through the city using his web without being torn to shreds by the g-forces. That's his superhero ability. But, it doesn't mean he can break every other law of nature. There are also specific rules that apply to the natural accelerations while swinging. If he releases the web, he isn't flying. He's falling, and gravity should impose its natural effect. When he plummets, he holds out until the last second - that's his finesse - and catches the web at a nice moment. Sometimes when he does that, he gets whiplash and it looks like it hurts."

As is typical for live-action films, Imageworks animators started with previs supplied by the production unit that described the basic action from a particular camera view for the shots.

"Sometimes Marc [Webb] was sold on the previs and wanted to use it as a gold standard," Schaub says. "In other cases, it was a starting point. We were always trying to find different methods for Spider-Man's swinging."

They labeled one new method the "overhand scramble." "The path of Spider-Man's swing is a pendulum," Schaub says. "His arc swings into a trough before he swings back up. So, to keep him continually moving upward, he goes hand over hand up the web. That gives him the ability to accelerate upward, which is important in a chase scene."
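
To make the overhand scramble concrete, here is a minimal sketch in Python - a generic illustration of a pendulum whose length shortens on the upswing, not Imageworks' animation tooling, and the numbers are invented. Pulling in hand over hand does work against the web tension, so the apex ends up higher than a plain swing would reach.

    import math

    G = 9.81  # gravity, m/s^2

    def swing_apex(climb_rate=0.0, length=20.0, start_angle=-1.2, dt=0.001):
        """Integrate a pendulum released from rest at start_angle (radians).
        climb_rate > 0 shortens the web (hand over hand) while Spider-Man
        is rising on the far side of the trough."""
        theta, omega, L = start_angle, 0.0, length
        while True:
            rising = theta > 0.0 and omega > 0.0
            dL = -climb_rate if rising else 0.0
            # variable-length pendulum:  L*theta'' + 2*L'*theta' + g*sin(theta) = 0
            alpha = -(G * math.sin(theta) + 2.0 * dL * omega) / L
            omega += alpha * dt
            theta += omega * dt
            L = max(L + dL * dt, 2.0)
            if theta > 0.0 and omega <= 0.0:              # apex on the far side
                return length - L * math.cos(theta)       # height gained above the trough

    print(round(swing_apex(0.0), 1), "m gained with a plain swing")
    print(round(swing_apex(4.0), 1), "m gained with an overhand scramble (4 m/s climb)")

Run as is, the climbing version tops out noticeably higher than the plain swing, which is the acceleration upward the chase scenes needed.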

To nail a good virtual camera angle for the all-CG shots, the crew often used a system similar to the one they devised for the animated feature Surf's Up. "Once the layout was approved, we used a [real] handheld camera with a camera operator behind it to frame the shots," Schaub explains. "It's like putting a camera operator into the CG scene."

The animators also borrowed a more casual filmmaking technique from the real world: They would sometimes strap a virtual camera to Spider-Man's torso. "It was like the GoPro shots you see on YouTube," Schaub says.

In-house tools helped the animators by checking the integrity of the animation according to the laws of physics. Technical Animation Supervisor Dan Sheerin explains: "Every time Spider-Man leaps from a rooftop or transitions from swing to swing, gravity takes over and we have to remember Spidey is falling, not flying. Our tools make sure his acceleration is accurate. They work out the forces that exist, like gravity, and calculate the motion that would result."

With the proprietary jump tool, for example, the height, duration, and path of a character traveling through the air change as an animator adjusts the anticipation phase. The tools even apply physically correct motion to the character's muscles.
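
The underlying math is ordinary projectile motion: once the takeoff velocity implied by the anticipation pose is fixed, the apex height, hang time, and arc follow directly. The short Python sketch below illustrates that relationship; it is a generic stand-in, not the proprietary jump tool Sheerin describes, and the names and values are invented.

    import math

    G = 9.81  # m/s^2

    def jump_arc(takeoff_speed, takeoff_angle_deg, samples=10):
        """Apex height, hang time (back to takeoff height), and root positions
        along a ballistic arc for a given takeoff velocity."""
        angle = math.radians(takeoff_angle_deg)
        vx = takeoff_speed * math.cos(angle)
        vy = takeoff_speed * math.sin(angle)
        apex = vy * vy / (2.0 * G)           # v^2 = 2*g*h at the top of the arc
        hang_time = 2.0 * vy / G             # time up equals time down
        path = [(vx * t, vy * t - 0.5 * G * t * t)
                for t in (hang_time * i / samples for i in range(samples + 1))]
        return apex, hang_time, path

    # A deeper anticipation (stronger push-off) means a faster takeoff:
    for speed in (8.0, 12.0):                # m/s, illustrative values
        apex, t, _ = jump_arc(speed, 60.0)
        print("%4.1f m/s takeoff -> apex %4.1f m, airtime %3.1f s" % (speed, apex, t))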

"Our muscle system reverse engineers the forces acting on a character's body mass given the overall animation performance and applies appropriate oscillations to those areas," states Sheerin.

Spider-Man's muscles became a key element in the animated performance.

Sculpting Muscles

"In comics, we see so much muscle definition," Schaub says, "especially in harsh evening light. We wanted to get that same muscle definition in the highlights in our renders. So we spent a lot of time animating muscles. We would sculpt the silhouette and topology detail."

The digital superhero had two layers of muscles. The first layer had controls animators could use to dial in a shape. "We call it a 'muscle cut,'" Schaub says. "It's almost like a vacuform layer of muscle. We sculpt that volume to make the layer bigger and can get the muscle 75 percent of the way. With the next layer on top, we can fine-tune the sculpt."


Imageworks modelers built actor Jamie Foxx’s digital double; pulses of light illuminate structural elements inside; simulations sent lightning bolts through the model.

Each day artists would light the animated performance, and the shots would render overnight. "We'd look at the render to make sure we saw muscle definition where we needed it," Schaub says. "If we need to accentuate certain areas, we'd do that in shot finaling - literally sculpting as a modeler would. In some cases, we might ship it back to the modeling department."

The lighting artists would start with a blocking pass of animation, then rough animation, and so on through the postproduction process, always picking up the latest version of the animation. By the time they received the final animation, their work was nearly done.

Seeing the results of these lighting passes along the way was a new step for the animators. "It led to more dialog between animation and lighting," Schaub says. "Sometimes, if we knew how the lighting would be, we might adjust the animation. Sometimes we might ask for more light. It allowed for a more interactive process."

Fantastic Authenticity

Schaub's insistence on physics-based movement had an impact beyond animation. "He was so hell-bent on making sure everything moves with physics in mind that I wanted to make sure the rest of the visual effects had that same language," Chen says. "When Spider-Man does something fantastic, we needed to see wind in his suit. The environment needed to have smoke and birds. We needed to pile on as much detail as possible to make the shots authentic and comfortable so people aren't taken out of the moment."

In the previous film, Spider-Man (and Garfield) wore a skintight suit. On the digital models, the wrinkles were sculpted shapes built into the deformation: if Spider-Man bent his elbow, the modeled wrinkle shapes would kick into action. On this film, Garfield wears a looser suit, which meant the digital double did, too.

"We had to simulate that extra fabric," says Smith. "When Spider-Man lets go of his web, we can see the suit flutter as he moves through the wind. Even when he turns his head, it had to wrinkle and bunch properly. I think seeing the suit fold and wrinkle naturally added a touch of realism."

For hair and cloth simulations, the crew relied on Maya's nCloth, supported heavily by custom, proprietary plug-ins. For effects, such as the villain Electro's energy, and destruction wrought by fights with the villains, they turned to Side Effects' Houdini and other physics-based simulators.

"[The visual effects work] centers around the three villains and their environments," Chen says. "For Electro, we augmented his makeup effect. For the Goblin, we created a digital double that I think is among our best. And, for Rhino, we built a mechanized suit from surplus '80s military hardware that was a vehicle more than a costume. We upgraded textures and the look dev for daytime New York City. We built a CG Times Square, a power plant, a clock tower, and other big environments. And, we have electrical explosions with lots of destruction."

Villain 1: Electro

Actor Jamie Foxx, who plays Electro, underwent extensive makeup effects: artists at KNB sculpted protruding veins patterned like lightning bolts and applied them to his face, which the digital artists then augmented with lighting effects.

"As we started doing conceptual work in 3D, though, we came across footage of a lightning storm inside clouds taken by a passenger on an airplane and were intrigued by the energy happening inside the volume," Chen says. "So we decided to put energy inside Electro. Having the lighting deep in his skin compounded the work tenfold. We had to create more CG layers, and it required very tight tracking of his facial movements."

A team of animators dedicated to this task tracked Foxx's performance. "The hard track, that is, the joint rotations, was done by a team in India," Schaub says. "Our team of soft trackers in Vancouver and Culver City did the soft tissue on his face to make sure we picked up all that subtlety in animation."

Then, it was up to the crew to devise a look for the electrical effects and implement them. "It was challenging to make [the effect] look original or feel different," Chen says. "We pulled research that went back to when lightning was first rotoscoped for Forbidden Planet. We looked at more recent films. Electro doesn't just have energy inside. He throws electrical energy outward. In early tests, we had elaborate Tesla arcs that were interesting, but they looked like everything we've seen before. We wanted it to feel more like a combination of what you find in aquatic forms and outer space nebulas. And, we added colors, mixing oranges and reds into the blue in ways that logically didn't make sense. That's the arc of exploration. You start with real and end up with what's cool-looking."


THE GREEN GOBLIN digital double was one of several CG actors created at Imageworks for the film.

The aquatic forms made particular sense because Electro becomes his villainous self after landing in a tank of electric eels, a sequence created at MPC (see "Electric Eels," page 8).

To create the light show inside Electro, the Imageworks modelers built a digital double of Foxx, and then the simulation artists fired off pulses of light to illuminate volumes inside the digital model. "We built structural elements inside - veins, nerves, bones, so we had more than a uniform volume of air," Smith says. "We needed a digital double for other shots when he's flying around, so we had that rigged model anyway."

In Houdini, the artists ran simulations through the geometry, causing lightning bolts to branch out along his limbs, inside his chest, down his neck, and within his cranium. "We came up with nine default patterns that ran at different speeds or on different networks to give us high- and low-frequency responses," Smith says. "So, we could art direct the effect. Depending on Electro's mood, we could fire them faster or slower. And if one area wasn't strong enough, we could choose a different pattern and get more activity."

When Electro throws electrical energy outward, animators control the timing as the bolts zip from one location to another; then a particle simulation takes over. "We have a core bolt with plasma burn-offs," Smith says. "We called them 'wift-ees.' A bolt strikes. Then there are branches like you'd see in lightning. And off that, we create the plasma burn-off. He could shoot himself across the room, leaving a trail of ionized gas as he moved."
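
The core-bolt-plus-branches structure is the classic recursive recipe for CG lightning: displace the midpoint of a segment, recurse on both halves, and occasionally spawn a dimmer side branch. The Python sketch below is a generic version of that recipe, not the Houdini setup Imageworks built; every parameter is illustrative.

    import random

    def lightning(p0, p1, jitter=0.25, branch_chance=0.2, depth=6, intensity=1.0):
        """Return (start, end, intensity) segments forming a jagged bolt from
        p0 to p1, with dimmer side branches. Points are (x, y) tuples."""
        if depth == 0:
            return [(p0, p1, intensity)]
        mid = ((p0[0] + p1[0]) / 2 + random.uniform(-jitter, jitter),
               (p0[1] + p1[1]) / 2 + random.uniform(-jitter, jitter))
        segments = (lightning(p0, mid, jitter * 0.5, branch_chance, depth - 1, intensity) +
                    lightning(mid, p1, jitter * 0.5, branch_chance, depth - 1, intensity))
        if random.random() < branch_chance:
            # a side branch carries on past the midpoint, dimmer than the core bolt
            dx, dy = mid[0] - p0[0], mid[1] - p0[1]
            tip = (mid[0] + dx * 0.7, mid[1] + dy * 0.7)
            segments += lightning(mid, tip, jitter * 0.5, branch_chance,
                                  depth - 1, intensity * 0.4)
        return segments

    bolt = lightning((0.0, 0.0), (0.0, -10.0))     # a bolt dropping 10 units
    print(len(bolt), "segments, core intensity", max(s[2] for s in bolt))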

Villain 2: Goblin (and the Digital Doubles)

Dane DeHaan, the actor who plays the Green Goblin, otherwise known as Harry Osborn, underwent hours of makeup by Weta Workshop artists to transform his face, and in many shots, that's who the audience sees. But not all.

"I'm really happy with the way our digital doubles turned out," Chen says. "We took everything we've learned from all our other movies and used it to create doubles for Goblin, Electro, Spider-Man, and Gwen. You usually try to be very careful in shot design, and everyone can tell, for the most part, when you make the switch from actor to digital double. But Entertainment Weekly printed stills a few weeks ago as part of a marketing campaign, and one of them was our full-CG shot of the Goblin. I immediately called Dave Smith to tell him."

Weta Workshop also built a portion of the Goblin's glider and the boots DeHaan wore on set. "They put him in this big rig that could float him up and down for close-ups," Smith says. "When we took over digitally, we extended his boots into the wings so they could respond better to his movements. For his skin, we had nice reference from the practical work Weta Workshop had done to give him a diseased look."

The digital double of Gwen, which needed to replicate actor Emma Stone, proved more technically challenging. "We already had a digital Gwen that we had used for tiny shots in the first movie, but it's been two years and she's changed," Smith says. "Also, we needed to step up the digital double for a dynamic scene at the end when she becomes more involved with the villains than a superhero's girlfriend should."

Although a team of shader writers led by John Monos had worked with Paul Debevec at ICT on one of the earliest applications of the Light Stage technology to create Alfred Molina's digital double in the 2004 Spider-Man 2, this time they decided to use a more traditional method.

"We did a plaster cast of her head to get skin surface details you can't otherwise get without high-tech acquisition," Smith says. "I felt the old-school technique would accurately represent the surface textures."

To help give the digital Gwen realistic hair, the crew used four times the amount of hair they had used on any character or creature before. One shot in particular taxed the previous hair simulator. In that shot, Gwen falls a great distance, and the crew filmed Emma Stone in a rig; to move her hair, they used fans.

"She looked fine," Smith says. "But her hair didn't move accurately. So we replaced her hair. Dustin Wicke, our simulation lead, rewrote the hair simulator to handle the amount of hair we put in."

Chen points to a shot in the film in which Spider-Man and Gwen are falling toward the top of a clock tower. "Spidey catches Gwen and falls through the glass," he says. "They look like two stunt people, but they are digital doubles."

Villain 3: Rhino

"Rhino" describes the suit that the villain, played by Paul Giamatti, sits inside. "In the comic book, it was a costume, so this is a departure," Chen says. "This is a mechanized armored suit that Paul's character controls. It went through a long design phase at Weta Workshop and then moved to Blur and Imageworks for design. I used Blur's creative design group a lot. They also designed the 'Vulture' suit for a sequence they worked on that takes place in a secret lab."

So that other actors would know how high Rhino's eyeline would be, Giamatti rode in an elevated device that Imageworks artists later replaced with the final machine. "We cut his head out of the footage and put him in our animation rig," Schaub says. "But that was only the starting point. He had to maneuver as a biped and, when he charges like a rhino, like a quadruped. So, we had mechanical engineering challenges. We did animation tests of walk cycles and runs and transitions."


RHINO CAN BE a biped or quadruped, which created mechanical engineering challenges for the animation team.

As the animators delivered movement studies, the design changed accordingly. "At first, he had stubby little legs and a big body like a rhino, but that didn't look very imposing when he stood on his hind legs," Schaub says. "His proportions were adjusted so that he looked equally menacing on two legs, as well as on all fours. We finally settled on a design for a simple model that worked with the animation, and then added details, like internal hydraulics, later."

Rhino fights Spider-Man in an end sequence that takes place in the daylight on Park Avenue in New York City. "We see the MetLife Building in the background," Chen says. "But, a few months before the end of the movie, [the director] wanted different choreography, so we created a full-CG version of the city. We transition between live action and five or six blocks of CG. We can do that now - do unplanned transitions between CG and live action in the daylight."

Fluid Shot Design

The director could also make unplanned changes to the shot designs. In the previs, to heighten the action, a shot might have very few frames - even in the case of Spider-Man, as few as nine frames. And live-action filming often produces footage with many cuts per sequence. "In our world, we can make all those cuts flow together as one," Schaub says. "That happened many times. It was tricky. We might have had five or six animators working on shots that matched the previs, and later Marc [Webb] would decide to string them together. In one case, we combined 12 shots in previs into one."

Often, seaming together bits of live-action footage into an unplanned yet seamless sequence meant creating CG environments and digital doubles. "We had to make sure our doubles held up," Chen says.

Chen attributes the success of the digital doubles and the crew's ability to give the director shot design flexibility, in part, to technology improvements. "My theory is that the machines are now fast enough to give the artists the number of iterations they need to complete their illusions," he says. "You used to have to wait a day to see a version you created before you could do the next attempt. You might know what you wanted to change, but if it took two days and the deadline was tomorrow, you missed your chance. Now, it takes a couple hours to render a shot and get it back. You can see mistakes. Get notes. Make the change. The more iterations an artist can do, the better the work."

Times Square

In addition to digital environments built to smooth transitions between cuts, the Imageworks crew built three major digital environments to accommodate battles in the film. The largest and most complex was Times Square.

"Times Square is the first major action piece in the movie, and it was the hardest to create," Chen says. "It takes place at night. We have Electro with his energy effects. And, Times Square has these giant Jumbotron video screens. It's so bright there it looks like daylight."

During the sequence, there's a moment when a news crew projects Electro's image on a Jumbotron. And then Spider-Man's image replaces Electro's image. "That aggravates the villain more," Chen says. "It was important for us to come up with the right look."

On a stage in Long Island, the production crew built the center of Times Square - one square block with one-story storefronts - and surrounded it with 50-foot-high greenscreens. To extend the set to eight square blocks, the visual effects crew worked from Lidar scans, thousands of still photos of the real Times Square, and footage of the Jumbotrons that they shot with Sony F65 digital cinema cameras.

"The sequence was approximately 300 shots," Chen points out. "We worked on it for a solid year."

The amount of detail is enormous, the destruction dramatic. "We had every bump in every sign," Smith says. "The Disney store, McDonald's, chasing neon lights on signs, people in cars, traffic, the panels with LEDs that make up the Jumbotron displays. We had to be prepared to go fully digital."

The crew took still photos of the storefronts every 10 feet for texture reference, had three survey scanners running for a week to capture geometric detail, and mounted three F65 cameras on custom rigs to capture moving footage of the Jumbotrons. "About the time we got the footage back from the F65s, we realized it changes constantly and needed to be licensed," Smith says.

To handle the massive amount of geometry, the crew used a new back end to the studio's Alembic pipeline. "We call it 'Ogawa,'" Smith says. "It makes it efficient and fast to translate geometry from the layout stage, where we set up scenes, to lighting. Our lighters only bring in as much cached geometry as needed."
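
Ogawa is also the name of the binary back end introduced with Alembic 1.5; the practical win Smith describes is that heavy geometry is only read when a lighter actually asks for it. The sketch below illustrates that deferred-loading pattern in plain Python, without the real Alembic API; the class, bounds test, and file names are placeholders.

    class LazyCache:
        """Stand-in for a heavy geometry cache: the bounding box is cheap to read
        up front, while the full mesh is only pulled the first time it is needed."""
        def __init__(self, path, bounds):
            self.path = path
            self.bounds = bounds            # ((min x, y, z), (max x, y, z))
            self._mesh = None

        def mesh(self):
            if self._mesh is None:
                print("loading", self.path)     # the expensive read happens here, once
                self._mesh = object()           # stands in for real vertex/face data
            return self._mesh

    def overlaps(bounds, region):
        """Crude test of whether an object's bounds touch the region being lit."""
        (lo, hi), (rlo, rhi) = bounds, region
        return all(lo[i] <= rhi[i] and hi[i] >= rlo[i] for i in range(3))

    # Only the city blocks whose bounds touch the lit region ever get loaded:
    blocks = [LazyCache("block_%02d.abc" % i, ((i * 10.0, 0, 0), (i * 10.0 + 10, 50, 10)))
              for i in range(8)]
    lit_region = ((0, 0, 0), (25, 60, 20))
    loaded = [b.mesh() for b in blocks if overlaps(b.bounds, lit_region)]
    print(len(loaded), "of", len(blocks), "blocks loaded")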


TOP: Modelers at Imageworks built eight square blocks surrounding Times Square. BOTTOM: New shaders allowed artists to treat geometry as a light source.

For lighting, the crew used The Foundry's Katana; for rendering, Solid Angle's Arnold. In addition, the shading team created shaders that allowed the lighting artists to treat geometry as a light source. When the Jumbotrons were flat planes, the artists used mapped area lights, but for the more dimensional custom signs, like the 3D version of the Coca-Cola swish logo, they turned the geometry into light.

 "Any piece of geometry could have any attribute we wanted," Smith says. "We used it for shaped signage and for Electro's bolts. For example, we extruded the Disney logo into 3D geometry and turned it on by making it a light source. That way it properly cast light into the scene. We also had something we called Mesh Lights. We could take any geometry, turn it into a light, and map the light with any detail whether color or pattern."

When Electro throws a bolt of energy, something is destroyed. In Times Square, the Jumbotrons fell apart. "They are built from one-foot by one-foot plastic panels put together into six- by six-foot pieces, and those panels are wired together," Smith says. "When we dropped them, they fell apart accurately - our solvers broke them apart."
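
In broad strokes, that kind of breakup is a network of glue bonds: each panel is bonded to its neighbors, each bond has a strength, and when the force the solver reports on a bond exceeds that strength, the bond snaps and the remaining connected clusters carry on as separate chunks. The sketch below is a toy version of that bookkeeping in Python, not the production setup; the force values are invented.

    from collections import defaultdict

    def break_and_cluster(panels, bonds, bond_forces, strength=50.0):
        """panels: panel ids. bonds: set of (a, b) pairs gluing neighbors.
        bond_forces: bond -> force reported by the solver this frame.
        Returns the connected clusters that remain after weak bonds snap."""
        intact = {b for b in bonds if bond_forces.get(b, 0.0) <= strength}
        graph = defaultdict(set)
        for a, b in intact:
            graph[a].add(b)
            graph[b].add(a)
        seen, clusters = set(), []
        for p in panels:
            if p in seen:
                continue
            stack, cluster = [p], []
            while stack:
                n = stack.pop()
                if n in seen:
                    continue
                seen.add(n)
                cluster.append(n)
                stack.extend(graph[n] - seen)
            clusters.append(cluster)
        return clusters

    # A strip of four panels hit hard between panels 1 and 2 splits into two chunks:
    print(break_and_cluster([0, 1, 2, 3], {(0, 1), (1, 2), (2, 3)}, {(1, 2): 120.0}))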

Electro also wreaks havoc in other digital environments as Spider-Man battles the villains, notably at a power plant and a clock tower. "We knew we had to upgrade our destruction pipeline," Smith says. "We do a lot of simulations in Houdini using proprietary plug-ins, and we used DMM [Digital Molecular Matter from Pixelux]. We brought those solvers into our pipeline to produce a more natural look."

In addition, modelers built internal structures for buildings that would be destroyed. "We didn't want the things that came crashing down to look like hollow, plastic toys," Smith says. "We even added hidden dirt. Practical set pieces have hidden dirt, and I took a lesson from that. In Times Square, I took close pictures at high resolution from a crane of the old buildings behind the signs and saw that they're really dirty. So every time something fell, we had a lot of dirt and dust."

Chen notes that although advances in graphics cards, machine memory, and CPU performance have made environments such as those created for Spider-Man possible, the demands have grown greater, too.

"There is a trend," Chen says. "Digital environments are becoming gigantic. They are quicker to build now, but the scope of the data, the geometry and textures, and the vastness of these worlds are becoming bigger and bigger."

Indeed. Ten years ago, for Director Sam Raimi's 2004 film Spider-Man 2, a crew at Imageworks created 150 digital buildings and went to great lengths to create Alfred Molina's digital double for a handful of shots (see "Another Big Leap," July 2004).

For this year's The Amazing Spider-Man 2, the Imageworks visual effects virtuosos built three massive digital environments, including one that surrounded a Times Square set with eight square blocks of detailed CG buildings, vehicles, and people. "Seventy percent of the people in Times Square are CG," Chen says. "We have crowds watching and running. We have people looking out windows."

These highly detailed environments, combined with photorealistic digital doubles of the films' stars, gave Director Marc Webb the freedom to swing a believable digital Spider-Man through New York City as he battles three digitally enhanced villains, and to invent "live-action" shots that he hadn't filmed.

"I can't imagine what it will be like in another 10 years," Chen says.

Barbara Robertson is an award-winning writer and a contributing editor for CGW. She can be reached at BarbaraRR@comcast.net.