Volume 27, Issue 7 (July 2004)

Another Big Leap


The challenge for visual effects studios that work on sequels is always the same: Do better. And when the original film is as successful as Columbia Pictures' Spider-Man, which earned $404 million in the US and $807 million worldwide, that pressure increases, as it has for Spider-Man 2. The sequel—which brings back director Sam Raimi along with stars Tobey Maguire as Spider-Man and Kirsten Dunst as Mary Jane Watson—opened June 30.

"You can't be in this industry, even with a successful franchise, and say, 'let's do it all again,'" says Lydia Bottegoni, visual effects producer at Sony Pictures Imageworks for Spider-Man 2. "You have to improve and get more efficient so you can put more on the screen."

The first film's success provided the means. "We had a 20-percent bigger budget than the first film, and the percentage of the total budget, 25 percent, is higher," says Bottegoni, adding that the Imageworks crew nearly doubled. "We had more shots than the first movie, and the CG characters were more difficult, but we had the same amount of time." All told, Imageworks handled 836 visual effects shots for this film, approximately 300 more than for the first film. The crew created the shots primarily in Alias's Maya and Side Effects Software's Houdini running on IBM workstations equipped with Nvidia graphics cards. Compositing was handled by the studio's own Bonsai software, with Discreet's Flame and Inferno playing supporting roles; rendering was accomplished by Pixar's RenderMan with an assist from Mental Images' Mental Ray. The in-house visual effects editors worked on Avid systems.






In comic-book superhero movies, putting more on the screen usually translates to more villainous villains and more dangerous stunts. And, indeed, Spider-Man's new nemesis, the multi-tentacled Dr. Octopus (actor Alfred Molina), is particularly treacherous. In the first Spider-Man, the villain was masked and rode a hoverboard. In Spider-Man 2, Molina's character maneuvers and is maneuvered by four mechanical tentacles attached to his back, and he doesn't wear a mask.

"Our villain truly is amazing, and there's no paradigm for him," says Imageworks' John Dykstra, who returned as visual effects designer. "An enormous amount of energy went into designing how Doc Ock's tentacles moved, how they carried him around, and how they interacted with the environment."

In some shots, the tentacles are puppets created at Edge FX, but in many shots, they're CG. "They tried to use the puppets on the set as much as possible, but it involved a lot of setup and resulted in a lot of puppeteers to paint out," says Scott Stokdyk, visual effects supervisor, noting that managing four tentacles required 16 puppeteers. Moreover, because the tentacles could stretch out, sections of practical appendages in various lengths had to be swapped during a shot. "Our CG tentacles were useful when there was close action and Doc Ock had to reach around with a longer arm, as well as in the dynamic fighting shots," Stokdyk says. "They could spool out from his back."

The CG tentacles were made of vertebrae strung on a cord like beads on a string. "We built them like a big Lego set," says CG supervisor Peter Nofz. Cables that could contract and expand held them in place and caused them to vibrate. Each lower tentacle ended in a claw-foot; each upper one, in a "death flower" with a camera tucked inside. "We built them from the get-go as subdivision surfaces," says Nofz. "There was no way we could have carried them through the pipeline as NURBS because they were so complicated." Sometimes the tentacles were attached to the actor, and sometimes Doc Ock was fully CG.
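A rig like the one Nofz describes can be pictured as rigid vertebrae distributed along a guide curve, with a "spool" parameter controlling how much of the chain has been paid out of Doc Ock's back. Below is a minimal Python sketch of that idea; the function names, the piecewise-linear guide curve, and the even spacing are illustrative assumptions, not Imageworks' actual rig.

```python
import numpy as np

def place_vertebrae(curve_points, n_vertebrae, spool_length):
    """Distribute rigid 'vertebrae' along a guide curve, like beads
    on a string. spool_length (0..1) controls how much of the
    tentacle has been paid out of the villain's back.
    Hypothetical sketch -- not the production rig."""
    # Arc-length parameterize the guide curve.
    deltas = np.diff(curve_points, axis=0)
    seg_len = np.linalg.norm(deltas, axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg_len)])
    total = arc[-1] * spool_length          # usable length of the curve
    # Space the vertebrae evenly along the spooled-out portion.
    targets = np.linspace(0.0, total, n_vertebrae)
    positions = np.empty((n_vertebrae, 3))
    for i, t in enumerate(targets):
        j = np.searchsorted(arc, t, side="right") - 1
        j = min(j, len(seg_len) - 1)
        f = (t - arc[j]) / seg_len[j] if seg_len[j] > 0 else 0.0
        positions[i] = curve_points[j] + f * deltas[j]
    return positions

# Example: a 10-point guide curve, 8 vertebrae, half spooled out.
curve = np.column_stack([np.linspace(0, 5, 10),
                         np.sin(np.linspace(0, 3, 10)),
                         np.zeros(10)])
print(place_vertebrae(curve, 8, 0.5))
```

Keeping the vertebrae as beads on a parameterized curve makes the spooling behavior a single animatable value.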

Animating digital Doc Ock's clothing was difficult as well. "He had layers and layers of clothes that we had to simulate simultaneously," says Nofz. "We tried to simulate separate layers, but they were interacting with each other too much."

The team decided against the system that had worked for Stuart Little, in which clothes were made from panels and patterns (like real clothes), modeled separately, seamed together, and simulated. Instead, they used "object" cloth, which, because it starts from a cyberscanned model, already includes wrinkles and draping. "The theory is that it's faster because you don't have to go through iterations to see how the cloth drapes," says Stokdyk. "You get accurate draping from the cyberscan, and that held true in many cases."
When Doc Ock's tentacles spool out from his back, animators found that CG tentacles were more practical to use than puppets.




Because object-based cloth simulation wasn't part of Maya, Imageworks worked with Alias to put it into their pipeline. Even so, the final result required hand tweaking. "Once the simulation looked good, we needed to separate the layers when we'd see interpenetrations," Nofz says.
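Once simulated, the layer fixes Nofz mentions amount to detecting where an outer garment has poked through an inner one and pushing it back outside. The toy Python sketch below illustrates one simple post-simulation cleanup of that kind; the nearest-point query, the fixed offset, and all names are assumptions for illustration, not the tools Imageworks used.

```python
import numpy as np

def separate_layers(outer_verts, inner_points, inner_normals, offset=0.2):
    """Push outer-cloth vertices back outside an inner cloth layer.
    For each outer vertex, find the nearest inner-surface sample; if
    the vertex sits behind it (or closer than `offset`), project it
    back out along the inner surface's normal. A toy post-simulation
    cleanup step -- production cloth solvers are far more involved."""
    fixed = outer_verts.astype(float).copy()
    for i, v in enumerate(outer_verts):
        d2 = np.sum((inner_points - v) ** 2, axis=1)
        j = int(np.argmin(d2))                     # nearest inner sample
        n = inner_normals[j] / np.linalg.norm(inner_normals[j])
        signed = float(np.dot(v - inner_points[j], n))
        if signed < offset:                        # interpenetration
            fixed[i] = v + (offset - signed) * n
    return fixed

# Flat inner layer at y=0, one outer vertex poking through at y=-0.1.
inner_pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
inner_nrm = np.array([[0.0, 1.0, 0.0], [0.0, 1.0, 0.0]])
outer = np.array([[0.5, -0.1, 0.0]])
print(separate_layers(outer, inner_pts, inner_nrm))  # y pushed to 0.2
```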

For hair, however, the studio simply upgraded its existing in-house tools to take advantage of faster hardware and such improvements in RenderMan as deep shadows.

"Skin was the big thing," says Stokdyk. I knew from the start that our CG Doc Ock would be our biggest challenge because a lot of the shots were very close to him. We looked at all the skin that had been done, and I was most impressed by Paul Debevec's work with his Light Stage that was in the SIGGRAPH 2000 technical papers (see November 2002, pg. 16.). We hired Mark Sagar who also worked on the paper."

The Light Stage system allowed the crew to create Doc Ock's skin from photographs rather than calculate CG flesh using algorithms to simulate, for example, subsurface scattering in skin. "In Spider-Man, there's a scene where Spidey climbs a wall and we see Tobey's face," says Nofz. "We worked for weeks and weeks and hit the wall. When you have a shader with a zillion controls, there are too many controls and variations to dial in. With this [Light Stage-based] system, the question of whether the skin looked right or not went away. This is his skin; it looks right, right away."

Nofz estimates that it took the R&D team more than six months to integrate the new system into their pipeline. "It's a completely different technology," he says. "There is no such thing as a texture. Instead, there are hundreds of textures." Rather than calculating the result mathematically, the system derives an image by rapidly looking up and selecting skin textures from a huge table of photographic textures and puts that image onto the model. (See "Making Faces," pg. 48.)

For the digital doubles' facial animation, the studio used a motion capture system. "John Dykstra directed Tobey, and I directed Alfred doing performances that we later imported into CG models of their faces," says Anthony LaMolinara, director of animation. "An animator could try to copy the performances, but what's the point when you want an exact replica, not a caricature?"

To capture enough data to create a believable performance, each actor wore around 150 tiny reflective markers on his face. "We tried to get as much data as possible," says LaMolinara, "especially around their mouths, foreheads, brows, cheeks, and lips." Their bodies, however, were hand animated.

"When we needed to attach the tentacles to Alfred [Molina], we had guys with wires all over him to simulate the effect of those legs walking," says Anthony LaMolinara, director of animation. When the villain was fully CG... "It was like animating five characters," says LaMolinara. "There's no reference for that kind of movement. I would say that we hit our stride in showing how much power they had and how much weight they can pick up in a hospital sequence and also when he's climbing a building."

For Spider-Man, Imageworks used the same model and rig created for the first film by Koji Morihiro. "If we were to do a third film, we'd probably use the same setup again," says Stokdyk. "He [Morihiro] sat for nine months watching video of the stunt guy trying new poses and new positions and new actions until he hand sculpted a model that matched."

Two things changed about Spider-Man's hand-animated performance. "Doc Ock and his tentacles caused Spider-Man to perform differently," says LaMolinara. "We had martial arts stunt coordinators that tried to take the fights in one direction, and I had my own ideas. The push and pull between the two resulted in a unique movement. Also, there was a little more wildness on this film, a more energetic camera."

"This movie starts where the first one left off," Dykstra says. "When Spider-Man took that final swing at the end of the first film, you felt you were that character. You sensed his joy and excitement. That's where the visual effects had to pick up for the second movie."

Dykstra used a combination of real and virtual camera moves to follow Spider-Man or look through his eyes as he swings through real and virtual environments. Sometimes Spider-Man is actor Tobey Maguire, sometimes a stunt double, sometimes a digital double. "I think the audience will feel what it's like to actually be in those upper climes in New York City in freefall and in flight," says Dykstra. "I want to give people a real sense of vertigo, that feeling when you're up really high and you approach the edge and your human instincts tell you not to trust your own balance."

Using a computer-controlled cable rig called, fittingly, a SpyderCam, Dykstra sent a camera speeding up Wall Street for nearly three-quarters of a mile, zooming 25 stories high, sliding to the ground and climbing back up. "Then, Spider-Man would be added," Dykstra says. "Sometimes we used his point of view, sometimes we put him into the shot, and sometimes we used a virtual city. The magic is figuring out how to capture the 3D world in a 2D medium in a way that gives viewers a sense of reality."
CG tentacles attached to Alfred Molina, who plays the villainous Doc Ock, attack a live-action Spider-Man.




Effects lead Dan Abrams worked on that final shot of Spider-Man swinging on a flagpole for the first film. "I left the show with new ideas about how to make the buildings more photoreal yet allow Sam [Raimi] to make changes at the last minute," he says. "On the first show, we did a lot of instancing of buildings [see "Nitty Gritty Spider," June 2002, pg. 34]. On this show, it was more like Lego city. I treated the buildings more like characters, individualized and fleshed out, and put them through the pipeline."

As a result, Spider-Man and Doc Ock could get closer to the buildings. In fact, to make one building easier to destroy during a fight sequence, every brick was modeled. "One of the main differences between the first and second film was in how the characters interact with the environments," says Francisco de Jesus, digital effects supervisor. "In the first film, the villain was on a hoverboard. In this film, the villain is terrestrial, so they fight on the street, on the side of a building, on a train. He's like a bull in a china shop; as he climbs the sides of buildings, he breaks bricks and windows."

Most of the work, however, went into buildings seen in passing, which presented particular problems. "We learned there are huge issues with legibility when Spider-Man streaks past at 100 miles per hour," says Abrams. "If the buildings don't have interesting details, they look plastic. We had to learn how to use textures, geometry, and shader controls to make them legible."

Using Maya models and photographs of skyscrapers, painters working in Adobe's Photoshop and Alias's StudioPaint added paint and grunge to the buildings. "It isn't as easy as it sounds," says Abrams. "We weren't just painting bricks; we had graffiti on the backs of buildings."

All told, the team created 30 buildings that, when duplicated and rearranged in various ways, produced streets and city blocks with as many as 150 buildings. "Doing the layout was like working with one of those plastic puzzles with one piece missing," Abrams says, describing how layout artists would slide buildings to create street scenes.

For interiors—the little scenes you see inside a window—Abrams improved on the shaders used for the first film. "I took fisheye photos of various properties under construction, apartments and so on," he says, "and from those photos created images for walls, floors, and ceilings. The shader constructs a 3D room in shader space and textures it. It's never built as geometry, only projected at render time." The technique allowed the team to change the lights in the interior to match the exterior lighting.
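What Abrams describes resembles the technique now commonly known as interior mapping: rather than modeling rooms, the shader intersects the eye ray with the planes of an imaginary box behind the window and samples a wall, floor, or ceiling texture at the hit point. Here is a schematic Python version; the unit-cube room, the face names, and the function itself are illustrative assumptions, not the production shader.

```python
import numpy as np

def shade_interior(ray_origin, ray_dir, room_min, room_max):
    """Find which face of an imaginary room box the eye ray exits
    through; the wall/floor/ceiling texture for that face would be
    sampled at the hit point. The room is never real geometry -- it
    exists only in shader space. Sketch of the idea, not the
    production shader."""
    ray_dir = np.asarray(ray_dir, float)
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    inv = 1.0 / np.where(ray_dir == 0.0, 1e-12, ray_dir)
    t_lo = (room_min - ray_origin) * inv
    t_hi = (room_max - ray_origin) * inv
    t_exit = np.maximum(t_lo, t_hi)        # per-axis exit distance
    axis = int(np.argmin(t_exit))          # nearest face wins
    hit = ray_origin + t_exit[axis] * ray_dir
    faces = ("side wall", "floor/ceiling", "back wall")
    return faces[axis], hit

# Eye ray entering a unit room through its front "window" plane.
face, hit = shade_interior(np.array([0.5, 0.5, 0.0]),
                           np.array([0.2, -0.1, 1.0]),
                           np.zeros(3), np.ones(3))
print(face, hit)   # back wall, hit near (0.7, 0.4, 1.0)
```

Because the room exists only at render time, relighting the interior to match the exterior is a matter of adjusting how those projected textures are shaded, not rebuilding geometry.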

For some shots, the instanced foreground buildings from the first film were reused in the middle ground of Spider-Man 2, but often the camera moves made that impossible. "There's a sequence with an out-of-control elevated train moving at 100 miles per hour," says Abrams. "So even in a 100-frame shot, the buildings in the distance are in the foreground in seconds. We didn't have a single lock-off shot. Spider-Man moves so fast that everything you see in the distance at the beginning of a shot is in front at the end."

In addition to high-res skyscrapers with mix-and-match props for building rooftops, the team also built street scenes in 3D. "We created one city block with streets and intersections that we could duplicate and lay out as a city," Abrams says. Painted textures that were changed procedurally provided variety as did street props—bicycle racks, tables, a magazine stand, and so forth, all built in 3D.
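One common way to get that kind of procedural variety, sketched below in Python, is to hash each instance's ID into stable per-building texture tweaks so a duplicated block never reads as an exact copy. The parameter names and ranges are hypothetical; the article doesn't detail Imageworks' actual controls.

```python
import hashlib

def variation_params(instance_id):
    """Derive stable, repeatable texture tweaks from a building or prop
    instance ID so duplicated city blocks never read as exact copies.
    A hypothetical sketch of 'procedurally changed' painted textures;
    not Imageworks' actual system."""
    h = hashlib.md5(str(instance_id).encode()).digest()
    unit = lambda b: b / 255.0                    # byte -> 0..1
    return {
        "hue_shift":  (unit(h[0]) - 0.5) * 0.1,   # subtle color drift
        "grime":       unit(h[1]) * 0.6,          # dirt-overlay amount
        "brightness":  0.9 + unit(h[2]) * 0.2,    # +/- 10 percent value
    }

for bid in ("block7_bldg3", "block7_bldg4"):
    print(bid, variation_params(bid))
```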

"We had a whole system of 3D cars and motion-captured 3D people," says Abrams. "In one sequence with Doc Ock and Spidey on the side of a building, Sam [Raimi] realized how clearly he could see the digital people and wanted to direct them. Much to our chagrin, they became a story point. In one shot, they're looking up and pointing; in another, the crowd moves to the side of a street. We went through hell rendering all of them."

The buildings were modeled using one Maya unit per centimeter, a scale used throughout the film. "The shaders we came up with for the city used ambient occlusion, and Mental Ray helped reduce the render time for that from 40 hours to 15 minutes," says Abrams. "We created cubes and projected textures onto some to substitute for the detailed buildings, but they were too simple to use unless they were far away." To render these complex scenes, the team sent between 100 and 150 files containing 30 x 30 tiles to different processors, and then RenderMan stitched the final image together.
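That bucketed approach, in which the frame is split into tiles, farmed out to different processors, and stitched back together, can be sketched in a few lines of Python. Everything here (the pixel-unit tile size, the fake per-tile "renderer," the process pool) is an illustrative stand-in for the RenderMan-based pipeline the article describes.

```python
import numpy as np
from multiprocessing import Pool

TILE = 30   # "30 x 30 tiles," per the article; pixel units assumed

def render_tile(job):
    """Stand-in for one processor rendering one tile of the frame."""
    x0, y0, w, h = job
    # A real renderer would trace this region; we fake a gradient.
    tile = np.fromfunction(lambda y, x: (x0 + x + y0 + y) % 256, (h, w))
    return x0, y0, tile

def render_frame(width, height):
    jobs = [(x, y, min(TILE, width - x), min(TILE, height - y))
            for y in range(0, height, TILE)
            for x in range(0, width, TILE)]
    frame = np.zeros((height, width))
    with Pool() as pool:
        for x0, y0, tile in pool.map(render_tile, jobs):
            # Stitch each finished tile back into the final image.
            frame[y0:y0 + tile.shape[0], x0:x0 + tile.shape[1]] = tile
    return frame

if __name__ == "__main__":
    print(render_frame(120, 90).shape)   # (90, 120)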
Spider-Man's model remained largely unchanged, but the buildings received a major facelift from individually detailed and rendered geometry.




"We wouldn't have attempted to do many exterior shots of New York City in the daytime as recently as the beginning of the last movie," says Dykstra. "When we began that film, we didn't know if the last shot would work or not. We were successful, and doing that provided the foundation for this movie."

It's all part of making those great Spider-Man swings through the city look and, more importantly, feel real. "I think the audience will have its appetite sated for being up high and moving fast," says Dykstra, "and have that vertiginous sense of falling. Those aspects of sensory awareness are what you want to find when you're doing visual effects."

Thus, for Dykstra and the crew at Imageworks, "doing better" has meant creating more villainous villains and more stunning stunts—the kind of eye candy you would expect in a comic-book superhero film. But they've also tried to go beyond that to generate, through this 2D medium, a fully three-dimensional visceral response from the audience. It's a big leap. "If it was easy, they'd hire the relatives," says Dykstra.

Barbara Robertson is a contributing editor of Computer Graphics World and a freelance journalist specializing in computer graphics, visual effects, and animation. She can be reached at BarbaraRR@comcast.net.




Making Faces

Using Paul Debevec's Light Stage at the Institute for Creative Technologies, Imageworks took photographs of actors Alfred Molina and Tobey Maguire that would provide textures and lighting for the faces of their digital doubles. Four film cameras shooting at 60 frames per second surrounded each actor, who was seated in a chair. Above him, an armature with strobe lights down its length rotated around the chair in eight seconds, its lights firing 60 times per second. At the end of one rotation (eight seconds at 60 frames per second), each camera had produced 480 images.

"If the actor held still, we had the head in the same position but with a different lighting condition on every frame," explains Scott Stokdyk, visual effects supervisor at Imageworks. "Because light is additive, we could combine those images based on our CG light to recreate the face. We don't apply a texture to a model; we derive textures as a combination of images, color, and intensity, all calculated by a shader."

Because the CG character needed to have the same color and brightness of light used in the live-action shots, the crew took fisheye images of the set bracketed for high and low exposures to get high-dynamic-range, 180-degree images. Combining two images gave them 360-degree environment maps, which, in effect, were giant spheres that could surround the CG character. "We used those images to calculate the color and intensity of light from the environment that would be shining on the CG character's head," says Stokdyk. For example, if the set was dark on one side, the shader would not access images from that side; it would instead use a subset of the images to derive the textures.
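Building those high-dynamic-range fisheye images from bracketed exposures is itself a standard recipe: estimate each pixel's radiance from whichever exposures didn't clip, trusting mid-range values most. A toy two-exposure merge in Python follows; the hat-shaped weighting is a common textbook choice assumed here, since the article doesn't specify the exact method.

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Merge bracketed fisheye exposures (pixel values 0..1) into one
    HDR radiance image. Each pixel's radiance is estimated from the
    exposures that didn't clip, weighted toward mid-range values.
    Standard textbook recipe, shown for illustration; not necessarily
    Imageworks' exact merge."""
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight, peaks at 0.5
        num += w * (img / t)                # per-exposure radiance guess
        den += w
    return num / np.maximum(den, 1e-6)

# Short and long brackets of the same scene (toy values).
short = np.array([[0.02, 0.40], [0.90, 1.00]])
long_ = np.clip(short * 8.0, 0.0, 1.0)      # 8x the exposure time
hdr = merge_hdr([short, long_], [1.0, 8.0])
```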

"At the end, a RenderMan shader accesses the series of textures, the environment map, and any added CG lights and combines them to output an image," Stokdyk says. "We remove the specular component from the Light Stage information and add it back in based on the CG camera and light, but everything else you'd normally do with layers such as subsurface scattering, veins, and so forth, is captured in the textures based on the Light Stage photographs. The only painted maps we did were to clean up those photographs."

There was one problem: the actors changed their appearance. "We got Tobey with bright red hair from Seabiscuit and Alfred with sideburns that were removed for Spider-Man," says Peter Nofz, CG supervisor. "That meant we had areas in the face with erroneous data, so we had to come up with techniques to create skin from adjacent skin. Also, our cameras didn't capture information inside the nostrils or ears, so that detail had to be created artificially by copying and pasting from adjacent areas." Although some of this process was automated, much was handled with hand painting, a huge task.

"We had 480 images from each of four cameras and all the images had to line up," says Stokdyk. "It was the paint fix from hell."

Despite the advantages of creating faces from photographs, Nofz predicts that ultimately, math will win over photography. "This system is expensive to set up," he says. "You need access to the actor, put him into the contraption, color correct each image, stabilize it, paint missing areas—there's a lot of labor involved before you can say here is my first version of the character. We learned a lot in the process so the next round would probably take less time, but I think this is a little cheat that will help until the subsurface scattering approaches get better."

"If we ever do this again," Nofz adds, "we'll make sure we get the character exactly as it will appear in the movie."


Adobe www.adobe.com
Alias www.alias.com
Avid www.avid.com
Discreet www.discreet.com
IBM www.ibm.com
Mental Images www.mentalimages.com
Nvidia www.nvidia.com
Pixar www.pixar.com
Side Effects Software www.sidefx.com