Chilling effect
Volume 27, Issue 6 (June 2004)


The threat of global warming was put in chilling terms on January 9 when David King, the UK government's chief scientific advisor, wrote in Science magazine that climate change is a greater threat to the world than international terrorism. In February, Canadian Environment Minister David Anderson echoed King's words, and later that month, the Pentagon released a report on the dire effect climate change would have on national security. Even so, there was no outpouring of public outrage. That could change this summer.

As we've all learned, pictures often have a more dramatic impact than words. And when it comes to drama, few pictures can match a Hollywood spectacular. Thus, the image of the effect of global warming that will now be in many people's minds is not the one painted by scientists but, rather, the storms and the frozen New York City in 20th Century Fox's The Day After Tomorrow.
ILM used a combination of 3D graphics, matte paintings, and miniatures for this shot of New York City at the dawn of a new Ice Age. The huge ship at left was a miniature tanker dressed with geometry to provide extra detail made necessary by the softening




And that seems to suit many scientists and environmentalists just fine. Although they note that the film is not scientifically accurate, the science community is aware of its potential impact. A May 12 BBC news story reported that King said he hoped many Americans would see the film. Speaking in London, he described The Day After Tomorrow as "a spectacular action film which portrayed the switching off of the Gulf Stream and the Northern Hemisphere's subsequent plunge into a new Ice Age."

Directed by Roland Emmerich (Independence Day) and starring Dennis Quaid, the film rolled into theaters Memorial Day weekend, giving moviegoers a view of nature never seen before, a vision of the future that was an obvious candidate for digital effects.

Visual effects supervisor Karen Goulekas engaged several studios to help realize Emmerich's vision, including Digital Domain, Hydraulx, The Orphanage, ILM, Tweak Films, Yu+Co, Dreamscape Imagery, and Zoic, with Schematic providing weather graphics for various monitors. It was a massive undertaking that often resulted, by the end of production, in several studios working on individual shots. "I believed that work had to be done at large facilities," she says. "And that was true until a few years ago because software wasn't as good. Now we can go to studios based on their strengths, not their size."

For example: "Digital Domain created elements for the storm tide shots—the water, people, and buildings seen from the ground angle—and Hydraulx composited them," says Goulekas, "but the aerial storm tide shots went to Tweak."

Goulekas began working on the project in May 2002, the same time as the storyboard artists. "I hired eight previz artists and one [Apple] Final Cut Pro editor," she says. "I paid people one salary if they brought their own computer and another if we brought it."

By August, the main sequences were developed in Alias's Maya. When Goulekas went to Canada for principal photography in October, she took previz artists with her. "When we got back, we took the bluescreen plates from the Avid and added the previz environment so [editor] David Brenner had something to cut the film with," Goulekas says, and laughs, "so, it was like 'post' visualization." (Goulekas is the author of Visual Effects in a Digital World, a glossary of 7000 visual effects terms.)

The film begins with the ominous cracking of the ice shelf in Antarctica and ends with the starkly beautiful image of an icy-white New York City as seen from the shoreline, the harbor completely replaced by drifts of snow on ice, through which the top of the Statue of Liberty protrudes. Three studios worked on these shots—Hydraulx on the ice shelf and a frozen interior in New York, and ILM and The Orphanage on the long end sequences.

Describing the opening shot, Goulekas says, "We have a flyover of the icebergs that I think is the longest fully digital shot in a [live-action] film. It's beautiful. The ice cracks, crevasses form, chunks of ice fall off."

The six-and-a-half-minute sequence was created at Hydraulx, a 40-person studio in Santa Monica. The team started by creating the ice shelf in foam. The sculpture was then scanned with a Polhemus scanner, converted from point-cloud data into geometry with Headus's CySlice, and imported into Maya. Hydraulx also used two Maya plug-ins: Syflex's cloth simulator for water dynamics and Kolektiv's Stroika for particles.

With the scanned data, the team created a finely detailed 3D model in Maya and adjusted it to work with live-action plates of an ice base filmed on a bluescreen set; 2d3's Boujou handled the camera tracking. The model detail was important because of a rendering issue: The studio wanted to output separate rendering passes that would be composited on Discreet's Inferno systems. So, for efficiency, they wanted to quickly render matte passes in Maya and, separately, "beauty" passes in Mental Ray.

Unfortunately, the two software programs calculated displacement maps differently. "Everything else looked the same—the camera, the objects, the animation all lined up," says Greg Strause. Thus, rather than render everything in Mental Images' Mental Ray, they eliminated the displacement maps. "We physically modeled every little detail except where we could use bump maps," he says. "We had 10 million polygons per file." Even so, rendering the 10 to 15 layers, which included subsurface scattering to make the ice translucent, took 2-1/2 to 3 hours per frame on dual-processor Intel Xeons with 6GB of RAM. The Linux-based render farm was assembled by Dell, as were the Nvidia-based workstations.
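For readers unfamiliar with pass-based compositing, the separately rendered matte and beauty passes described above come together in the composite with a standard "over" operation. The following is a minimal sketch of that idea in Python with NumPy; the image sizes, layer names, and values are hypothetical, and the actual Inferno pipeline of course offered far more per-layer control.

```python
import numpy as np

def comp_over(fg_rgb, fg_alpha, bg_rgb):
    """Standard "over" operation: layer a premultiplied foreground beauty pass
    onto a background, using a separately rendered matte (alpha) pass as the holdout."""
    return fg_rgb + bg_rgb * (1.0 - fg_alpha[..., np.newaxis])

# Hypothetical passes, each a float image: beauty layers as (height, width, 3)
# arrays, mattes as (height, width) arrays.
h, w = 270, 480
sky = np.ones((h, w, 3)) * np.array([0.55, 0.65, 0.80])    # background layer
ice_rgb = np.zeros((h, w, 3)); ice_rgb[h // 2:] = 0.9       # premultiplied ice beauty pass
ice_alpha = np.zeros((h, w)); ice_alpha[h // 2:] = 1.0      # matte pass for the ice

frame = comp_over(ice_rgb, ice_alpha, sky)   # 10 to 15 such layers went into each final frame
```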
Hydraulx used 3D geometry to create the ice shelf and global illumination to bounce light into the 15-foot-wide, curving, 300-foot-deep crevasse.




To animate the falling chunks of ice, rather than rely on a rigid-body simulation, Hydraulx assigned character animators to the task. "The character animators could pound through massive scenes with 500 to 600 chunks keyframed and do a shot in two days," Strause says. "To get a random simulation to look right might have taken two weeks."

The sky was painted. "We created huge matte paintings in Adobe's Photoshop of the cloudy sky from 11-megapixel still pictures shot on a Canon 1DS digital still camera that we assembled into 360-degree sky tiles," says Strause. Adds Colin Strause, the other half of the partnership, "Once we assembled the tiles, the sky-painting files were over 2GB in size."
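One common way to use stitched 360-degree sky tiles is to treat them as a latitude-longitude (equirectangular) environment image and look them up by view direction. The sketch below shows only that general mapping, under the assumption of an equirectangular layout; it is not a description of Hydraulx's actual setup.

```python
import math

def panorama_uv(direction):
    """Map a normalized 3D view direction to (u, v) lookup coordinates in an
    equirectangular (latitude-longitude) sky panorama."""
    x, y, z = direction
    u = 0.5 + math.atan2(x, z) / (2.0 * math.pi)              # longitude mapped to 0..1
    v = 0.5 - math.asin(max(-1.0, min(1.0, y))) / math.pi     # latitude mapped to 0..1
    return u, v

# A ray looking toward the horizon along +Z samples the center of the panorama.
print(panorama_uv((0.0, 0.0, 1.0)))   # (0.5, 0.5)
```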

Once compositing was completed, rather than editing with Final Cut Pro as usual, Hydraulx created edits of the sequence with sound on the Inferno. "Every Inferno has 2TB of storage on it, so we had real-time playback of the shots," says Strause. "Roland could sit with us and look at uncompressed, 2K-resolution shots on the Inferno and on HD monitors. He could sit with one artist, and composite and color correct shots in real time at film res and drop them back into the edit."

In the film, the melting ice sends frigid water into the usually warm Gulf Stream and, with a compression of time designed to shock and awe moviegoers, triggers an Ice Age within minutes. Tornadoes rip through Los Angeles, taking the Hollywood sign with them (thanks to Digital Domain); hail the size of baseballs batters Tokyo (thanks to Hydraulx); people in the US race to Mexico; a storm tidal wave heads for Wall Street; and the wolves, which sensed the change coming and escaped from the zoo, begin hunting for food. One person goes against the tide: paleoclimatologist Jack Hall (Quaid), who tries to save the world and rescue his son Sam (Jake Gyllenhaal), who was in New York City when the world began to freeze. Now the wolves hunt him.

"Roland had real wolves on set, but they were very, very shy and sensitive to noise," says Gregor Lakner, CG supervisor. "They quickly don't do what you want them to do." Because the script called for the wolves to be hungry hunters, the real animals lost their chance for stardom. ILM brought the wolves into its studio to study, photograph, and measure, and then created digital doubles (see "Making Wolves," pg. 27).
When real wolves were brought on set, they proved to be too shy to act like hunters. So ILM created hungrier digital doubles.




To animate the wolves, ILM motion-captured two dog performers. "We had a detailed set of animatics created by Digital Domain," says associate animation supervisor Scott Benza, "so we knew exactly what we wanted."

The dogs wore form-fitting Lycra suits into which motion-capture targets were sewn. "The blue suit extended to their ankle area, and a strap went between their toes," says Benza. "We also had targets on their heads. They looked pretty unhappy."

The only parts of the dogs' bodies not tracked were their tails, which were animated procedurally using parameters based on reference videos, and their ears, which were keyframed. "We captured 90 percent of the motion for the shots, but we altered the data in almost every shot," says Benza. Two tools written for the film by James Tooley helped make the motion-capture sessions efficient and the animation task easier: one tool could mirror the mocap data to change a right turn into a left turn, for example, and the other could blend data from different takes so that a dog captured making a left turn on level ground could transition into running up stairs.
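Tooley's actual tools are proprietary, but the two operations described, mirroring and blending, can be sketched in simplified form. The example below works on marker positions only (real mocap data also carries joint rotations, which need quaternion handling) and uses hypothetical marker names.

```python
def mirror_mocap(frames, swap_pairs):
    """Mirror motion-capture data across the X=0 (left/right) plane so that,
    for example, a captured right turn becomes a left turn.

    frames     -- list of dicts mapping marker name -> (x, y, z) position
    swap_pairs -- pairs of marker names to exchange, e.g. ("L_paw", "R_paw")
    """
    mirrored = []
    for frame in frames:
        out = {name: (-x, y, z) for name, (x, y, z) in frame.items()}  # reflect across the sagittal plane
        for left, right in swap_pairs:                                  # left markers become right, and vice versa
            out[left], out[right] = out[right], out[left]
        mirrored.append(out)
    return mirrored

def blend_takes(take_a, take_b, blend_frames):
    """Cross-fade the last blend_frames of one take into the start of another,
    roughly the idea behind blending a level-ground run into a stair climb."""
    head, tail, mix = take_a[:-blend_frames], take_b[blend_frames:], []
    for i in range(blend_frames):
        t = (i + 1) / float(blend_frames)
        a, b = take_a[len(take_a) - blend_frames + i], take_b[i]
        mix.append({k: tuple((1 - t) * av + t * bv for av, bv in zip(a[k], b[k])) for k in a})
    return head + mix + tail
```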

Because the German Shepherds being captured were smaller than the wolves would be, the crew built the set at 75-percent scale to more easily place the digital wolves' feet on the live-action plates. In one of the shots, however, the wolves were composited into a synthetic, snowy background created by The Orphanage.

The Orphanage, under the supervision of Remo Balcells, delivered the storm that freezes New York City. "It's like a hurricane but larger," says Jonathan Rothbart, Orphanage co-founder. "Because we didn't have much time and were working with 300-frame-long shots, we decided to start with a matte painting and layer volumetrics on top as wispies rather than create an entirely volumetric effect."

Using Photoshop CS to take advantage of the 16-bit logarithmic color space, the team painted a huge storm base. On top of that, they layered voxel "marshmallows" created in Maya, with particles driving the movement of the volumes. Then they warped the painting with Adobe's After Effects. The storm was rendered in Sitex Graphics' Air, a RenderMan-compliant renderer. "The movement was cool and slow and had scale," Rothbart says. "Sometimes the most artistic solution may not be the most technical."

When the storm reaches New York, ice happens, both inside the buildings, thanks to Hydraulx, and outside, thanks to The Orphanage and ILM. To create the frozen city, The Orphanage again used a combination of 2D and 3D techniques, this time projecting painted maps onto 3D geometry to create 3D matte paintings using Photoshop and Discreet's 3ds max rather than rendering complex 3D geometry for the entire city of New York. "Roland [Emmerich] and I have a bad knee-jerk reaction to matte paintings," says Goulekas, "and The Orphanage used a lot of matte paintings as projection maps. I had never seen them used that extensively. I thought it was brilliant." But rather than calling the result a matte painting, they referred to the process as "projection onto 3D surfaces."
The Orphanage developed projection maps on 3D geometry to create a frozen Manhattan. Matte paintings, volumes, and particles were used to create the big storm.




The 3D surfaces, simple representations of skyscrapers, were created in 3ds max. "We'd texture-map the buildings and render them from angles that could be seen based on the animatic," Rothbart says. "The painters would paint on top of the texture maps and then we'd project those detailed paintings onto the 3D geometry seen in each frame." Because the parts of the buildings that could be seen were always highly detailed, the process made it look as if the camera were moving through a fully rendered 3D environment. "Creating the city took tons of projections and a lot of shot planning," says Rothbart. The studio uses an in-house system called Donkey Base to manage the digital assets; editing is accomplished with Final Cut Pro. Throughout the sequence, blowing snow created in Maya was added in layers and composited in After Effects. For rendering, the studio used Splutterfish's Brazil.
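The core of this camera-projection technique is that a painted image is looked up by reprojecting each surface point through the camera that produced the paint source. A rough sketch of that mapping, assuming standard view and projection matrices, is shown below; it is a generic illustration, not The Orphanage's pipeline.

```python
import numpy as np

def projection_uvs(vertices, view_matrix, proj_matrix):
    """Compute texture coordinates for camera projection mapping: each 3D point
    is transformed into the projecting camera's screen space, and its normalized
    screen position becomes the UV used to sample the detailed matte painting."""
    n = len(vertices)
    pts = np.hstack([np.asarray(vertices, dtype=float), np.ones((n, 1))])  # homogeneous coordinates
    clip = pts @ view_matrix.T @ proj_matrix.T                             # world -> camera clip space
    ndc = clip[:, :3] / clip[:, 3:4]                                       # perspective divide
    return 0.5 * (ndc[:, :2] + 1.0)                                        # [-1, 1] -> [0, 1] painting UVs
```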

In one shot, a 3D building becomes frozen from the top down. "We used a procedural texture map to have the freeze travel down the building, and then painted a frozen building that is revealed through procedural frost," explains Rothbart.
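A procedural freeze like the one Rothbart describes can be driven by a height-based mask that moves down the building over time and blends in the painted frozen facade. The sketch below is a generic illustration of that idea with made-up parameters, not the production setup.

```python
import random

def freeze_mask(world_y, top_y, bottom_y, time, duration, edge=2.0, jitter=0.5):
    """Return 0..1 frost coverage for a point at height world_y on the building.
    A freeze front travels from top_y down to bottom_y over 'duration' seconds;
    the mask blends a painted frozen facade over the unfrozen one."""
    progress = min(max(time / duration, 0.0), 1.0)
    front = top_y - (top_y - bottom_y) * progress      # current height of the freeze line
    front += random.uniform(-jitter, jitter)           # roughen the frost edge a little
    t = min(max((world_y - front) / edge + 0.5, 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)                     # smoothstep across the edge

# Halfway through the effect, a point near the top of a 100-unit building is
# fully frozen, while a point near street level is not.
print(freeze_mask(world_y=90.0, top_y=100.0, bottom_y=0.0, time=5.0, duration=10.0))
print(freeze_mask(world_y=10.0, top_y=100.0, bottom_y=0.0, time=5.0, duration=10.0))
```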
The Orphanage sent procedural texture maps down the side of a building and revealed a painted frozen building through frost for the big freeze.




The same semi-transparent look and procedural ice were also applied to a digital double of a helicopter pilot in a downed chopper that had flown into the cloud and crashed. The helicopter and the cloud were CG: The Orphanage created the cloud volumes in Sitni Sati's AfterBurn, a 3ds max plug-in available through Afterworks, and the particles in Maya. The particles were rendered with in-house software, the helicopter with Brazil.

To work in extended-range linear color space, The Orphanage's Stu Maschwitz wrote a software tool called eLin that works with After Effects 6.5. Available from Red Giant Software, the tool helped especially with the helicopter's rotor blades, which are difficult to render with correct motion blur. "Extended linear color pulls color saturation into the blade to get the true motion blur," says Rothbart. For compositing, the studio primarily uses After Effects but also Apple's Shake and Eyeon Software's Digital Fusion.
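The reason a linear working space matters for motion blur is that blur is an average of light over the shutter interval, so the averaging has to happen on linear values rather than on display-encoded ones. The sketch below illustrates the principle using the standard sRGB transfer curve as a stand-in; eLin itself works with extended-range film data, and these functions are not part of its API.

```python
def srgb_to_linear(c):
    """Display-encoded value (0..1) -> linear light (standard sRGB curve)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# Blurring a 50/50 mix of a bright highlight and a dark background:
hi, lo = 1.0, 0.05
wrong = 0.5 * hi + 0.5 * lo                                              # averaged in display space
right = linear_to_srgb(0.5 * srgb_to_linear(hi) + 0.5 * srgb_to_linear(lo))
print(round(wrong, 3), round(right, 3))   # the linear-space blur keeps the highlight hot, as real blur does
```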

ILM adopted a similar strategy for creating the complex details of a totally frozen city. "To have created the city entirely in 3D would have required a horrendous amount of data. It was more economical to paint," says Lakner. Thus, when they could, the crew relied on ILM's Zenviro software, with which painters working in Photoshop paint textures on simple geometry seen from one camera angle; the textures are automatically applied to 3D surfaces. "We had to create 20 miles of New Jersey that vanishes to a single point, so we needed information all the way to the horizon," Lakner says. "We had a 3D team start from the camera, and a Zenviro team start from the horizon. Zenviro was faster."

"When we heard Roland [Emmerich] didn't want us to use matte paintings, we were in shock for a while," says Lakner. "But we had a meeting and decided rather than talking about 'matte paintings,' we would talk about 'procedurally generated textures.'"

The snow on the ground was sometimes baking soda on a miniature, sometimes a particle simulation. Helicopters were CG, as was the Statue of Liberty, and the tanker enmeshed in ice was a miniature. "We took still photos of the miniature and applied those to the CG geometry, and added 3D snow," says Lakner. Although most of ILM's shots were composited in the studio's proprietary Comptime software, the sky for this shot was fabricated from photos and composited in Shake. For digital asset management, the studio uses an in-house system, and for editing, an Avid system.
Some snow is baking soda, some is CG. The buildings in the foreground are 3D, and those farther back are 3D matte paintings created and composited at ILM.




"I've done a 180-degree turn on so many of my beliefs," says Goulekas. "It's a changing climate."

As David King said in that BBC news story, "The film brings events together into a highly unlikely or even impossible scenario. It's very difficult to explain the physics of it. But...[it] gets the basic message across."

Barbara Robertson is a contributing editor of Computer Graphics World and a freelance journalist specializing in computer graphics, visual effects, and animation. She can be reached at BarbaraRR@comcast.net.


2d3 www.2d3.com
Adobe www.adobe.com
Afterworks www.afterworks.com
Alias www.alias.com
Apple www.apple.com
Avid www.avid.com
Canon www.canon.com
Dell www.dell.com
Discreet www.discreet.com
Eyeon Software www.eyeonline.com
Headus www.headus.com.au
Kolektiv www.kolektiv.com
Mental Images www.mentalimages.com
Nvidia www.nvidia.com
Polhemus www.polhemus.com
Red Giant Software www.redgiantsoftware.com
Sitex Graphics www.sitexgraphics.com
SGI www.sgi.com
Splutterfish www.splutterfish.com
Syflex www.syflex.biz


To create a pack of four digital wolves, ILM modelers started with anatomy books and three real wolves brought into the studio from a refuge near Sacramento, California, that they observed and photographed. The photographs, taken of a wolf on a rotating pedestal by two synchronized cameras—one above and one at wolf level—were used for reference and, later, to check the models. In addition, the crew measured the wolves' bodies and coats. "We measured every bit of fur with a ruler," says Gregor Lakner, CG supervisor. "We were told not to make any unexpected moves or unexpected noises."

The three wolves brought into the studio—a black wolf (Titan), a nearly black wolf (Thor), and a yellow wolf (Jasper)—were modeled in Maya and ILM's ISculpt and rendered with RenderMan. A fourth, gray wolf (Big Gray), was cloned from Jasper.

After the digital models were skinned, painted texture maps were applied. "Without that, we would have seen through the hair," Lakner says. "It would have been too expensive to render as much fur as a wolf really has."





To create the fur, CG supervisor Christopher Townsend's team placed guide hairs that would later be interpolated to form the animals' coats, creating longer hairs on the animals' backs so that the digital wolves could raise their hackles. By placing transparent models of a digital wolf over photos of the real wolf, they were able to place guide hairs to match. "We tried to mimic the length of the hair, the placement, and the curl," says Lakner. "Otherwise the dynamics wouldn't look right." A rendering of the guide hairs alone gave the team an idea of what the final simulation would look like; testing the dynamics on the guide hairs gave the team a sense of how the simulated coat would move.
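The general guide-hair approach, groom a few hundred guides and interpolate the full coat from them, can be sketched as a distance-weighted blend of nearby guide curves. The example below is a generic illustration with made-up data, not ILM's fur system.

```python
import numpy as np

def interpolate_hair(root, guide_roots, guide_curves, k=3):
    """Build one interpolated hair at 'root' by distance-weighted blending of the
    k nearest guide curves (each guide is an array of points along the hair).
    Groom a few hundred guides by hand; interpolate the dense coat from them."""
    d = np.linalg.norm(guide_roots - root, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-6)
    w /= w.sum()
    # blend the guides' offsets from their own roots, then re-root at 'root'
    offsets = [guide_curves[i] - guide_roots[i] for i in nearest]
    return root + sum(wi * off for wi, off in zip(w, offsets))

# Hypothetical groom: four guide hairs of five points each on a small skin patch.
rng = np.random.default_rng(0)
guide_roots = rng.uniform(0.0, 1.0, (4, 3))
guide_curves = guide_roots[:, None, :] + np.linspace(0, 1, 5)[None, :, None] * np.array([0.0, 0.2, 0.0])
hair = interpolate_hair(np.array([0.5, 0.5, 0.5]), guide_roots, guide_curves)
```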

The yellow wolf, it turned out, had to be treated differently from the black wolf. "None of his hair was straight," says Lakner. Also, the black wolf's coat was shinier. "Black hair is oilier," explains Lakner. "It absorbs and refracts light differently than yellow or blonde hair does, so we had to simulate that."

The different light components—diffuse, specular, and ambient—were rendered separately. "On one hand, hair is very complex," says Lakner, "but it's also forgiving. You don't have to place the light exactly as it is in nature because it's scattered so much." —BR
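Keeping the lighting components as separate passes means the balance between them can be adjusted in the composite without re-rendering. A minimal sketch of that recombination step, with hypothetical gain controls, follows.

```python
import numpy as np

def combine_light_passes(diffuse, specular, ambient,
                         diffuse_gain=1.0, specular_gain=1.0, ambient_gain=1.0):
    """Recombine separately rendered lighting components. Keeping diffuse,
    specular, and ambient as individual passes lets a compositor rebalance the
    fur's look (say, dialing down the shine on the black wolf) without going
    back to the renderer."""
    return (diffuse_gain * np.asarray(diffuse) +
            specular_gain * np.asarray(specular) +
            ambient_gain * np.asarray(ambient))
```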


At top, head shots of digital Jasper. At bottom, Jasper's guide hairs had a slight curl (at left, bottom). Titan's guide hairs were straight, as this rendering shows (at left, top). Digital hair density is checked against a photo of the real Titan.




The Orphanage used a total of 6TB of online disk space. The studio's workstations are connected via Gigabit Ethernet to a cluster of SGI and Linux servers that deliver data from multiple SGI RAID-5 disk arrays connected via Fibre Channel. The studio also uses Nexsan ATAboy disk arrays for HD-resolution, real-time throughput in editing applications.

Hydraulx used 4TB of direct-attached Fibre Channel RAID storage for its 3D files, and its 3D workstations were connected via Gigabit Ethernet to Linux file servers. The studio's storage vendors included Discreet and Xyratex.