Volume 25, Issue 6 (June 2002)

Nitty Gritty Spider



By Barbara Robertson

The Imageworks effects team modeled and rendered around 40 "hero buildings" to give a digital Spider-Man a virtual environment filled with realistic skyscrapers in which to perform his superhero stunts.

When Spider-Man used his web to swing between buildings in the film that bears his name, the real superheroes were the men and women at Sony Pictures Imageworks who worked on making the illusion believable. The Imageworks crew created 475 effects shots for Columbia Pictures' Spider-Man, released May 3, nearly all of which involved computer graphics and many of which required new CG techniques.

Throughout Spider-Man, the star (played by actor Tobey Maguire) mutates from an everyday kind of guy into a superhero with super-spider capabilities, which meant the effects crew had to help transport viewers into and out of an illusory world. "We had to help the viewers cross the line between reality and fantasy in a way that doesn't jar them out of the story," says visual effects designer John Dykstra.

To do this, the team fabricated three major visual effects: realistic CG environments that would simulate parts of New York City, digital stunt doubles for Spider-Man and his nemesis the Green Goblin, and Spider-Man's web.

Building New York
When Spider-Man runs across the front of a building and leaps off in another direction, the camera must stay with him. For shots such as these, effects teams often insert digital characters into live-action plates, animating the characters to match the moves of the live-action camera. For this film, it was impossible to use a real camera to film the action director Sam Raimi wanted. "Even before 9/11, we couldn't fly a helicopter down the streets of New York," says Dykstra. "So, we needed to build a world for Spider-Man and his camera. Our version of New York City had to be indistinguishable from the real city."

No small task: "The most difficult thing with computer graphics images is not the creation of physical objects or even the creation of the physics the object has to react to," Dykstra explains. "It's the little things. It's the dirt, the noise, the random events that give us cues about what's real and what's hyper-real. The specular reflections on a crosswalk, chalk lines on a sidewalk, discolored grout, bird droppings, distorted reflections in imperfect glass, the patina on a gargoyle. Those are the kinds of things that can be numbingly difficult to achieve using computer graphics."

The goal, then, for Imageworks was to create buildings detailed enough for daylight close-ups, yet numerous enough to convey the feeling of a mega-city as Spider-Man glides from building to building. The team started its construction project by sending a group of photographers to New York. Then, using the "thousands and thousands" of pictures the photographers brought back, the team began building its digital city.
The virtual buildings were modeled in pieces with Maya, assembled in Houdini, textured with paintings created in StudioPaint, lit in Houdini, and rendered in RenderMan, with window reflections handled by Mental Ray.

At first, they tried image-based modeling, that is, deriving 3D models from the photographs. It didn't work. The team had difficulty changing the perspective on the resulting models; they couldn't get enough resolution from a single photo to cover an entire building; and it was impractical to build one large building from many small photographs.

Instead, using 3D computer graphics, the team built some 40 "hero" buildings with relatively high detail, plus one building in the Times Square sequence with what senior technical director Steve LaVietes calls incredible detail. "That building had statues on it," LaVietes says. "It had more detail than Spider-Man or the Green Goblin."

Rather than creating these highly detailed buildings as entire units, the team developed a complex building creation system. They used Alias|Wavefront's Maya to build parts and StudioPaint to create textures. The parts were assigned textures and assembled into buildings and then into streets using Side Effects Software's Houdini. Finally, the environments were rendered, on Dell computers running Linux and on SGI machines, using Pixar's PhotoRealistic RenderMan with an assist from Avid/Softimage's Mental Ray for reflections in windows. "We had an erector set mentality," says Scott Stokdyk, visual effects supervisor. "There was so much geometry, so many surfaces, and so many assemblies of different buildings, that Steve [LaVietes] had to build a system in Houdini just to manage the building pipeline."

Instance Browsing
LaVietes' goal was twofold and seemingly contradictory: greatly reduce the amount of geometry that needed to be rendered, yet give the lighting department the ability to change the lighting on any piece of any building.

To reduce the amount of geometry, the team decided to use instancing in RenderMan, a method by which multiple copies of an object can be generated from a single piece of geometry at render time. First, modelers working in Maya built "floor components," basic building blocks such as windowsills. With the help of custom software, those components were translated into Houdini's ASCII geometry files so that the team could take advantage of Houdini's built-in RenderMan interface.
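
In a RenderMan RIB stream, this kind of instancing can be expressed with retained objects: the geometry is declared once, then stamped out with only a transform per copy. The sketch below shows the idea in Python; the ObjectBegin/ObjectInstance calls are standard RIB, but the windowsill stand-in, its dimensions, and the file layout are invented for illustration, not Imageworks' actual assets or pipeline.

```python
# Minimal sketch of RenderMan-style instancing: a "floor component"
# (here, a windowsill stand-in) is declared once as a retained object,
# then instanced many times with only a transform per copy.

def write_instanced_rib(path, floors=40):
    with open(path, "w") as rib:
        # Declare the shared geometry once (RIB retained-object syntax).
        rib.write('ObjectBegin 1\n')
        rib.write('  Polygon "P" [0 0 0  4 0 0  4 0.3 0  0 0.3 0]\n')
        rib.write('ObjectEnd\n')
        # Instance it per floor: one transform each, no duplicated geometry.
        for floor in range(floors):
            rib.write('AttributeBegin\n')
            rib.write('  Translate 0 %g 0\n' % (floor * 3.5))
            rib.write('  ObjectInstance 1\n')
            rib.write('AttributeEnd\n')

write_instanced_rib("building_sills.rib")  # hypothetical output file
```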

Second, the modelers created what LaVietes calls "assemblies," collections of floor components, many of them duplicates, arranged as buildings. "Once the modelers created an assembly, assuming they followed the rigid guidelines for naming, the building got translated to a description not of geometry, but of what components were used and how they were positioned within the assembly," LaVietes says. "We had a tool running within Maya that would analyze the assembly of repeated instances and write out a Houdini geometry file. Houdini could then build a scene to match the Maya scene using only the lowest common denominator of parts."
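
The mechanics of that translation might look something like the following sketch: collapse the placed copies down to one entry per component type plus a transform per instance, the "lowest common denominator of parts." The data layout, names, and output format here are hypothetical; the production tool ran inside Maya and wrote Houdini's own geometry format.

```python
# Hypothetical sketch of the assembly-translation idea: placed
# components, named by rigid convention, are reduced to unique
# component types plus per-copy transforms.

from collections import defaultdict

def summarize_assembly(placements):
    """placements: list of (instance_name, component_type, matrix4x4)."""
    by_type = defaultdict(list)
    for name, ctype, xform in placements:
        by_type[ctype].append((name, xform))
    return by_type

def write_assembly_description(path, placements):
    by_type = summarize_assembly(placements)
    with open(path, "w") as out:
        for ctype, copies in sorted(by_type.items()):
            # One geometry reference per component type...
            out.write("component %s uses %d copies\n" % (ctype, len(copies)))
            # ...then just a transform per placed instance.
            for name, xform in copies:
                flat = " ".join("%g" % v for row in xform for v in row)
                out.write("  instance %s xform %s\n" % (name, flat))
```
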

Times Square was created from photographs using a pan-and-tile system; billboards in the original photographs were replaced as advertising space was sold. The balloons, the Green Goblin, and his glider are CG.

Building details were added primarily with texture maps created by painters using photographs as templates and source materials. "This had to be done with photographic texture maps," Dykstra says. "No matter how careful you are, if you try to create this kind of detail with imaging techniques, it's still recognizable as computer graphics." Custom software associated the texture maps with comparably named NURBS patches, then built the textures into the Houdini geometry files.
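
Such name-based binding could work along the lines of this sketch, assuming a convention where each NURBS patch name matches a texture file of the same stem; the convention, the patch names, and the .tex extension are my assumptions, not the studio's.

```python
# Hedged sketch of name-based texture binding: painted maps are
# matched to NURBS patches that share a naming stem, so assembly
# into the geometry files can be automated.

import os

def bind_textures(patch_names, texture_dir):
    """Map each patch (e.g. 'bldg12_sill_03') to 'bldg12_sill_03.tex'
    if such a file exists; report patches left unbound."""
    bindings, missing = {}, []
    available = set(os.listdir(texture_dir))
    for patch in patch_names:
        tex = patch + ".tex"
        if tex in available:
            bindings[patch] = os.path.join(texture_dir, tex)
        else:
            missing.append(patch)
    return bindings, missing
```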

These techniques helped meet the first part of the goal: reducing the amount of geometry to be rendered. To meet the second part, giving the lighting department interactive control, the team took advantage of Houdini's particle operators (POPs). "The Maya scenes were very heavy," LaVietes says. "To open up a single building would take 20 minutes." The Houdini scenes, on the other hand, displayed less geometry but didn't have enough detail. "It would have been more efficient to have the buildings always represented abstractly, but what the lighting supervisor wants doesn't always line up exactly with the most efficient process for us," LaVietes says. To solve the problem, the team wrote software that made the greatly simplified Houdini scenes useful for the lighters. This "instance browsing" software, named LayEd for layer editor, was a separate application written in Perl that plugged into Maya and Houdini.
End points for the web were positioned by hand, animated procedurally using Houdini, rendered with RenderMan in several passes, and composited with Sony's proprietary Bonsai software.

"One of Houdini's means of instancing in RIB [RenderMan Interface Bytestream] files is through particles," LaVietes says. With this in mind, the team flagged particles with geometry and also assigned attributes such as orientation and scale. Then, using LayEd, they could display buildings using simple rectangles in Houdini and yet, upon request, could produce detailed geometry for any of the repeated elements. This meant that a lighting supervisor could pick out a specific windowsill and make it a little brighter. "We called the process unrolling," LaVietes says. "The geometry would be removed from the instanced particle system and turned into regular geometry."

For the buildings and the street scenes, the subliminal cues that made them seem real were the visual details. For the digital characters in this movie, the team had to find a way to make the impossible stunts that real actors and stuntmen couldn't do look believable. "We can launch or land a real person, but the part in the middle is tough because gravity comes into play," says Dykstra. "So, in many places we used CG characters."

Superhero Stunts
Although animators used keyframe animation for the digital stunt doubles, the team decided to use two different techniques to build the bodies. "You can see Spider-Man's musculature," says Stokdyk. "It's pretty unforgiving, so we had Koji Morihiro, CG character animator, hand-sculpt how he should look." Morihiro spent six months studying videotapes of Maguire's stunt double, Chris Daniels, going through various motions, and lined up body curvatures and musculature to match. "And then we accounted for him having 50 percent more range of motion than that and took our best guess while still trying to keep within the laws of physics," Stokdyk says.

For the Green Goblin, who has an exoskeleton, Alberto Menache, senior CG supervisor, built a procedural muscle system to drive the behavior of the skin. "I call it a volume deformer," Menache says. "It uses layers to deform different objects in different ways." Usually, the movements of a character's skeleton directly affect how its skin moves, with various bones having more or less influence on particular areas. Instead, Menache's system uses muscle primitives: cylinders with pre-programmed behaviors. He attached small grids called filaments to the muscles with volume deformers and bound them to the bones. As a result, when the bones move, the filaments expand or contract, causing the muscles to bulge or flatten, which makes the skin move. In addition, layers representing fat and membranes, also driven by volume deformers, sit between the skin and muscles to help the skin slide correctly. The same system that drove the muscles also moved the Green Goblin's exoskeleton procedurally.
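
As a toy illustration of why a cylinder-based muscle primitive makes skin bulge, consider volume preservation: if flexing shortens the muscle, its radius must grow to keep pi * r^2 * L constant, and that growth is what pushes the surrounding layers outward. This is a minimal sketch of that principle, not Menache's actual system; the 30 percent contraction figure and the dimensions are my assumptions.

```python
# Toy, volume-preserving muscle primitive: as a joint flexes, the
# muscle shortens, and its radius grows to keep cylinder volume
# constant, which is what pushes the skin out.

import math

def muscle_radius(rest_length, rest_radius, flex):
    """flex in [0, 1]: 0 = relaxed, 1 = fully contracted.
    Shorten up to 30% (assumed) and solve pi*r^2*L = const for r."""
    length = rest_length * (1.0 - 0.3 * flex)
    volume = math.pi * rest_radius ** 2 * rest_length
    return math.sqrt(volume / (math.pi * length))

# e.g. a 30cm muscle of radius 4cm at full flex:
print(round(muscle_radius(30.0, 4.0, 1.0), 2))  # -> 4.78
```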

Spinning a Web
The third necessary visual effect was Spider-Man's web. To create the translucent strands of the web, senior effects animator Theo Vandernoot and his team used CG fur: little pieces of hair, each with four points, or vertices, strung end to end. "Fur is a ribbon, like fettuccini," Vandernoot says. "It's easier to define single points on a line than multiple points on a true tube, and because there's less data, it renders faster."

But for the web, they needed a tube, like rigatoni. "To create rigatoni, we used a shader in RenderMan to fold the ribbon," he says. Then, to make the web dynamic, the team used Houdini's POPs, soft-body dynamics, and vector expressions (VEX). "It's as if we put a control lattice made with tinker toys connected with springs around the entire web," Vandernoot explains. "The tinker toys control the pieces of fur, and that causes the web to spring around." With VEX, the team created procedural animations to control helical webs that twist around a main web, to make a web hit something and then cinch up tightly, and to create other dynamic operations that would have been impossible to animate by hand. Finally, recognizing that the supervisors and director would want to make changes, the team wrote a system that rendered the web in 10 passes to separate such elements as specular lights and shadows. "The compositors could put it together in real time in front of the effects supervisors and give them the power to dial in each shot the way they want," Vandernoot says.
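
That "tinker toy" lattice is essentially a mass-spring system. Below is a minimal sketch of one integration step, with pinned indices standing in for the hand-placed end points; the constants, the semi-implicit Euler integrator, and the data layout are illustrative assumptions, not the production Houdini/VEX setup.

```python
# Minimal mass-spring sketch of the web's control lattice: lattice
# points connected by springs are integrated each frame, and the
# web's fur curves would follow them.

def step_lattice(points, velocities, springs, pinned=frozenset(),
                 dt=1.0 / 24.0, stiffness=40.0, damping=0.9, gravity=-9.8):
    """points, velocities: lists of [x, y, z]; springs: (i, j, rest).
    Semi-implicit Euler; pinned indices stay where they were placed."""
    forces = [[0.0, 0.0, gravity] for _ in points]
    for i, j, rest in springs:
        d = [points[j][k] - points[i][k] for k in range(3)]
        dist = max(1e-6, sum(c * c for c in d) ** 0.5)
        f = stiffness * (dist - rest)              # Hooke's law
        for k in range(3):
            forces[i][k] += f * d[k] / dist
            forces[j][k] -= f * d[k] / dist
    for i in range(len(points)):
        if i in pinned:
            continue                               # hand-placed end point
        for k in range(3):
            velocities[i][k] = damping * (velocities[i][k] + forces[i][k] * dt)
            points[i][k] += velocities[i][k] * dt
    return points, velocities
```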

Practiced Deception
"The big challenges," says Greg Derochie, senior compositor, "were that over a bright background the web would disappear, and each web shot needed to have a traveling highlight." Using the studio's proprietary Bonsai software, the compositing team created a matte of the web to darken the brightest parts of the background under the web and added the highlight with a little "traveling matte" that slid light down the edge of the web. They also added depth blurring. "The webs never became automated," Derochie says. "Every shot had unique demands." Similarly, they tweaked the web on Spider-Man's costume.

For the buildings, the compositors adjusted "that last 10 percent" of the lighting and rendering and also developed a pan and tile system to create backgrounds for the Times Square sequence. The tiles, which were photographic elements, were arranged in Maya and then blended in Bonsai. (Interestingly, because the production company sold advertising space on the Times Square billboards, the compositing team did several billboard replacements, some after a shot was otherwise final.)

"When you begin a film, it's a struggle to imagine what will become mature technology while you're making it," says Dykstra. "If you start out using only the technology that exists, it will be obsolete by the time you're finished. You have to take risks. To create New York City in daylight, fly around it, and make it look real and not stylized was the question, and I think we were successful with that."

Virtual sets have been used before, but not to reproduce as familiar a location in as monumental a way as in this film. Spider-Man's tall CG buildings could inspire more use of realistic virtual worlds. But Dykstra offers a cautionary note: "As we begin to create worlds with virtual environments, we need to remember that just as film is not an accurate representation of reality, virtual is not even as accurate as film. You have to bring all that with you. If you've only ever made a house in a computer, you don't know about splinters. You have to get out and interface with the real world. Hear the ring of the metal you tapped on. That's the soul of an image."

Barbara Robertson is Senior Editor, West Coast for Computer Graphics World.