Sphere of Destruction
Volume 32, Issue 11 (Nov. 2009)



Uncharted Territory, Digital Domain, Double Negative, Imageworks, Scanline, and 11 other studios destroyed the world for director Roland Emmerich's film 2012.


At their best, visual effects support a director’s vision. Whether the director uses invisible effects in the background or more visible digital characters, the goal is always to support the story. In director Roland Emmerich’s film 2012, the story is about love, family, and the survival of the human species. Survival, that is, in the face of the entire planet’s destruction. The end of the world. So, in this film, the actors play people at the mercy of the effects.


Digital Domain lifted this huge chunk of Los Angeles and slid it into the ocean following a 10.5 earthquake for this all-CG shot in 2012.
Actor John Cusack plays Jackson Curtis, a sci-fi author and limousine driver. Amanda Peet is Kate, Jackson’s ex-wife. But, the real stars of Sony Pictures Entertainment’s mega-disaster film are the visual effects that rip apart Los Angeles, erupt Yellowstone National Park, flood the Himalayas, and cause massive amounts of mayhem around the globe.
            
Co-producers Volker Engel and Marc Weigart supervised the effects. Both had worked with Emmerich on Independence Day, for which Engel won an Oscar for best visual effects. As is their practice, the two men expanded their production company, Uncharted Territory, to include postproduction capability for the duration of 2012, hiring a crew of approximately 100, buying hardware and software, and setting up shop on Sony’s lot upstairs from Emmerich’s editorial suite. The Uncharted Territory artists created 422 of the 1315 shots in the film, and Engel and Weigart contracted with another 15 studios to wrestle the rest, each chosen at the outset for their particular skills (see chart, below).

“Half this movie is virtual,” Engel says. “The effects are about set extensions, particle work, and destruction.”

But, that hardly tells the story. The sets were monumental; the particle, fluid, and rigid-body simulations that created the destruction were complicated. “People have done photoreal environments before,” Weigart says. “But when the entire environment reacts to a physical event, it makes everything more than tenfold as complex.”


(Below, bottom) Uncharted Territory geared up to destroy LA buildings by helping fund Cebas’ development of a volume breaker for its ThinkingParticles software. (Below, top) A combination of simulations—rigid body for buildings and cars, soft body for trees, and cloth for grass—made the all-digital environment move realistically.


Listen to the visual effects supervisors at the studios that created the most complex sequences:
“We didn’t have a huge number of shots, but they were really hard shots with a specific type of destruction and specific problems to solve,” says Mohen Leo, who supervised Digital Domain’s 97 shots, which included the characters’ terrifying flight through and over earthquake-riven Los Angeles.

 Alex Wuttke, visual effects supervisor at Double Negative, which destroyed Yellowstone National Park and St. Peter’s Basilica, also notes that the shot count was deceptive. “We had 200 shots, of which 130 were particularly heavy 3D with ridiculously complicated layers,” he says. “We had creative challenges and technical challenges, and sometimes we just plain ran out of disk space.”

“I don’t think we’ve ever pushed as much geometry through a raytracer as we did for this film,” says Peter Nofz, VFX supervisor at Imageworks, where artists built mammoth arks inside a digital cave in the digital Himalayan Mountains.

Scanline floated those giant arks on waves that surged over mountains and, in another sequence, sent water rushing through Washington, DC. “Usually when you do effects, you have a range of shots,” says Stephan Trojansky, visual effects supervisor. “We had huge tidal wave shots in dimensions no one had ever seen before, and hundreds of miles of floodwaters. It was really tough.”

As for Uncharted Territory, Engel believes they saved some of the most complex sequences for themselves, especially the shots in which the characters drive through a Los Angeles earthquake in progress. “Every three seconds there’s a new, big event,” he says. “It was one of the most complicated sequences in the movie in terms of all the destruction happening.”
    
Uncharted Territory: Breaking LA and Las Vegas, and the ‘Hub’
Early in the film, limousine driver Jackson Curtis arrives at his ex-wife Kate’s house to rescue her and their children. Kate’s boyfriend is there, too, so Curtis piles the whole group into the limo and makes a mad dash for the airport as a 10.5 earthquake collapses Los Angeles around him. Because the script didn’t provide details beyond that, Uncharted Territory created a rough previs for the path Curtis would take, the events, and the camera angles along the way. Once approved, Pixomondo, which, according to Weigart, provided previs for 90 percent of the show (and which the postproduction houses highly praised), created the final previs for the three-minute, 93-shot sequence.

Weigart describes the action: “The family barely makes it out of the house. As they drive to the airport, the street ripples, buildings break, high-rises fall down, the big doughnut from Randy’s Donuts rolls down the street. A cement truck slides off the freeway, right in front of the limo, and crashes into a gas station that explodes. To the right is a parking garage that collapses and spews out cars parked there. They drive under the freeway as it breaks apart. And, at the end of the sequence, two glass high-rises tip over as the limo drives between. The high-rise on the left crashes into the building on the right, and as it crumbles, the limo escapes out on the other side.”

To create the sequence, the postproduction team started with live-action elements of the limousine shot in Vancouver, British Columbia, on an 800-foot-long asphalt road surrounded by bluescreen. For this film, Emmerich used a Panavision Genesis HD camera, which allowed Weigart to develop, with Sony and Imageworks, a 100 percent digital pipeline that Uncharted Territory used for its own workflow and as the hub for shots from other studios moving through the process.

“We had a 400tb server at Sony and a fiber-optics line,” Weigart says. “As soon as the editors finished, we had software that converted the EDL [edit decision list], renamed the files, moved them onto our server in a correct structure, and notified the artists that they could start work. Doing this with 35mm film used to take two days. This system took two minutes.”
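To make the idea concrete, here is a minimal sketch of that kind of EDL-driven ingest step. The folder layout, file naming, and notification hook are illustrative assumptions, not Uncharted Territory’s actual software:

```python
# Minimal sketch of an EDL-driven ingest step (illustrative only; the folder
# layout, naming scheme, and notification hook are assumptions).
import re
import shutil
from pathlib import Path

EDL_DIR = Path("/server/editorial/edls")       # hypothetical EDL drop folder
SOURCE_DIR = Path("/server/genesis_plates")    # hypothetical camera originals
SHOT_ROOT = Path("/server/shots")              # hypothetical per-shot structure

CLIP_RE = re.compile(r"\*\s*FROM CLIP NAME:\s*(\S+)")  # CMX3600-style comment line

def ingest(edl_path: Path) -> list[str]:
    """Parse an EDL, copy each referenced plate into a per-shot folder,
    and return the list of shots ready for artists."""
    ready = []
    for line in edl_path.read_text().splitlines():
        match = CLIP_RE.search(line)
        if not match:
            continue
        clip = match.group(1)                      # e.g. a camera-roll clip name
        shot_dir = SHOT_ROOT / clip / "plates"
        shot_dir.mkdir(parents=True, exist_ok=True)
        for frame in sorted(SOURCE_DIR.glob(f"{clip}*.dpx")):
            shutil.copy2(frame, shot_dir / frame.name)
        ready.append(clip)
    return ready

if __name__ == "__main__":
    for edl in EDL_DIR.glob("*.edl"):
        for shot in ingest(edl):
            print(f"notify artists: {shot} is ready")  # stand-in for the real notification
```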

A project management system that Weigart developed helped speed the approval process and was equally fast moving finals back to editorial.

“We’d take submissions from everyone, put them into the frame cycles, and add notes and comments. Roland would add notes, and we’d send them back to the vendors,” Weigart says. “When we set the workflow to ‘final,’ the system would move the latest version back to the server for editorial, and they could cut the shot in for DI.” cineSync helped with communication between the supervisors and the studios in Europe. Most of the work, though, happened in LA, by design. Engel and Weigart wanted to work interactively, in person, with the local studios.


 Digital Domain developed a proprietary volume breaker, called Drop, to set up the buildings for destruction using a rigid-body solver based on Bullet, an open-source engine.
  
At Uncharted Territory, artists used a pipeline based on Autodesk’s Maya and XSI for modeling, and 3ds Max with various plug-ins for the effects. Compositing happened through The Foundry’s Nuke and Eyeon’s Fusion, and rendering through Cebas’s FinalRender. Cebas’s ThinkingParticles, a node-based, rule-driven particle system for 3ds Max, broke the road and buildings, and handled the dynamics.

“We discovered Cebas was developing something called a volume breaker for ThinkingParticles, so we financed part of that development to help them hire additional programmers,” Engel says. The volume breaker split CG objects into pieces, yet kept objects in their original state until the physical simulation took over.

“Without the volume breaker, a CG modeler would have to draw every split and crack,” Weigart says. “With the volume breaker, we could hit a button and break something into 2000 pieces, with 500 of them small ones, in various shapes.”
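The pre-fracture idea is simple to sketch. The toy Python below scatters seed points through an object and assigns each vertex to its nearest seed, which is roughly how a Voronoi-style breaker defines its 2000 pieces; it is only an illustration of the concept, not Cebas’s algorithm:

```python
# Conceptual sketch of volume-breaker-style pre-fracture: scatter seeds through
# the object's bounding box and assign each vertex to its nearest seed (a crude
# Voronoi partition). Fragments keep their original transforms until the
# physical simulation takes over, so the unbroken object renders as modeled.
import numpy as np

rng = np.random.default_rng(2012)

def prefracture(vertices, n_pieces=2000, small_fraction=0.25):
    """Return a fragment id per vertex and a flag marking 'small' fragments."""
    lo, hi = vertices.min(axis=0), vertices.max(axis=0)
    seeds = rng.uniform(lo, hi, size=(n_pieces, 3))          # fracture centers
    dists = np.linalg.norm(vertices[:, None, :] - seeds[None, :, :], axis=2)
    fragment_id = dists.argmin(axis=1)                        # nearest-seed assignment
    # The quote above mentions roughly 500 of 2000 pieces being small; here we
    # simply flag that fraction at random for illustration.
    small = np.zeros(n_pieces, dtype=bool)
    small[rng.choice(n_pieces, int(n_pieces * small_fraction), replace=False)] = True
    return fragment_id, small

verts = rng.uniform(0, 10, size=(3000, 3))   # stand-in for building geometry
frag_id, is_small = prefracture(verts)
print(f"{len(np.unique(frag_id))} fragments used, {is_small.sum()} flagged small")
```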

Modelers created the architecture using photogrammetry (image-based modeling), then fed the buildings into ThinkingParticles to break them apart and glue them back together, before sending them on to effects artists who ran the simulations. “You have to think about what materials are in each building so you can treat each material differently,” Weigart says. “Things that don’t break apart, like the palm trees, still had to be rigged.” The trees would swing and sway using soft-body simulation; the grass used cloth simulation.

“The sequence has 114 shots, and we ran around 400 different simulations,” Weigart says. The total render time for the sequence—which had 7056 frames that took nearly 20 hours per frame to render—was approximately 141,120 hours. Engel points out that had the sequence been rendered on a single machine, it would have taken 16 years.
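The arithmetic checks out:

```python
# Back-of-the-envelope check of the render figures quoted above.
frames = 7056
hours_per_frame = 20
total_hours = frames * hours_per_frame            # 141,120 hours
years_single_machine = total_hours / (24 * 365)   # assuming 24/7 rendering
print(total_hours)                     # 141120
print(round(years_single_machine, 1))  # ~16.1 years on one machine
```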

Although Emmerich famously prefers 3D environments, Weigart and Engel decided to use projection maps to create 2.5D matte paintings in Nuke for the Las Vegas sequence. “Whenever he saw something he didn’t like, he’d say, ‘I wonder. It’s probably not real 3D,’” Engel says. “And we’d say, ‘We’ll investigate.’ Usually whatever it was had nothing to do with 2D or 3D, so we tried to use projection-map techniques in Las Vegas as much as possible.”

Digital Domain: Breaking Los Angeles in Half

Jackson Curtis and his extended family reach the airport and make it onto a plane, but as the plane takes off, a major crack opens in the earth and breaks LA in half. The ground drops from under them, and they find themselves flying through a crack widening to the size of the Grand Canyon, with the entire city tumbling into the crack. When the plane pulls up and out of the crack, it flies through the toppling buildings in downtown LA. Outside the window, the family sees Los Angeles slide into the ocean.

“All we had to work with were the bluescreen shots of a plane cockpit on a gimbal,” says Leo. “For the arrival at the airport, they also had a floor as big as a basketball court on giant pistons so it could buckle and shake violently. Beyond that, everything was computer-generated.”

Once Digital Domain had Emmerich’s approval on which buildings on which streets he wanted to see destroyed, modelers built 350 individual assets to populate the city—Santa Monica bungalows, Art Deco buildings on Wilshire Boulevard, office buildings downtown, fire hydrants, traffic lights, newspaper boxes, and so forth.

“These weren’t just shells,” Leo says. “We gave the modeling department criteria for building the interior structures.”

To break the buildings, the effects department used a proprietary volume-breaking system, called Drop, that procedurally cut the objects and held the pieces in place with constraints. Then, the effects artists sent the objects to the simulation department. For simulation, Digital Domain used Bullet, an open-source rigid-body dynamics solver, as the core technology. The crew implemented both Drop and the solver within Side Effects Software’s Houdini.

“We could give [Drop] a building, and it would cut it into small shapes,” says Marten Larsson, CG effects animation lead, “sometimes as many as 90,000 per building.” A constraining tool automatically linked neighboring objects. Each constraint had its own data, carrying information about how the objects could move and the force needed to break the link. Other parameters gave the material particular characteristics so it might, for example, crumble like concrete, shatter like glass, or splinter like wood. Artists could paint weaknesses into the buildings and add other variations using three-dimensional noise.
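Conceptually, each constraint looks something like the sketch below: a break threshold derived from a material preset, scaled down by any painted weakness and varied with noise. The numbers and names are invented for illustration; Drop itself is proprietary:

```python
# Illustrative sketch of breakable links between fragments. Each link stores a
# break force derived from a material preset, an artist-painted weakness, and noise.
from dataclasses import dataclass
import random

MATERIAL_BREAK_FORCE = {      # made-up baseline strengths, newtons
    "concrete": 5.0e5,
    "glass": 4.0e4,
    "wood": 1.5e5,
}

@dataclass
class Constraint:
    piece_a: int
    piece_b: int
    break_force: float
    broken: bool = False

def make_constraint(a, b, material, painted_weakness=0.0, noise_amp=0.2):
    """painted_weakness in [0,1]: 1 means an artist fully weakened this spot."""
    base = MATERIAL_BREAK_FORCE[material]
    noise = 1.0 + noise_amp * (random.random() * 2.0 - 1.0)   # +/- noise_amp variation
    return Constraint(a, b, base * (1.0 - painted_weakness) * noise)

def step(constraints, applied_force):
    """During the rigid-body solve, any link whose force exceeds its threshold breaks."""
    for c in constraints:
        if not c.broken and applied_force(c) > c.break_force:
            c.broken = True      # neighboring pieces are now free to separate

# A concrete link next to a painted weakness breaks under far less load.
links = [make_constraint(0, 1, "concrete"),
         make_constraint(1, 2, "concrete", painted_weakness=0.8)]
step(links, applied_force=lambda c: 2.0e5)
print([c.broken for c in links])    # [False, True]
```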
 
“Drop was really beautiful and powerful,” Leo says. “It allowed us to push through an order of magnitude more complex geometry. During the shot, the plane flies in this bottomless crack, with sheer rock walls hundreds of feet tall. Layers of the walls continuously break off, with whole sections of the city falling in. If the previs called for a giant piece to move in a particular way relative to the camera, we’d animate that. But for the majority of the wall, we’d break layers procedurally and simulate the pieces falling down.”

To create the length of the crumbling wall, the artists broke one 300-foot-long wall into sections with Drop, setting a variety of parameters to create 20 individual simulations so it would crumble in different ways. Then, using these generic sections, they formed a continuous wall long enough for the flight path. Once they had approvals for the wall, the effects artists added buildings and destroyed them using Drop and the rigid-body solver, then a layer of cars, trees, and smaller objects, and lastly dust and pebbles.

“Having our own rigid solver and Drop made this doable,” Leo says. “There wasn’t any off-the-shelf software that could handle simulations for whole city blocks.”

In addition to splitting LA in half, Digital Domain slid the city into the ocean, building it from aerial photographs with photogrammetry; covered Washington, DC, in ash using 2.5D matte paintings; destroyed a camp of refugees animated with Massive software; and toppled the Washington Monument using Drop and the rigid-body simulator.


Double Negative used a proprietary volume shatterer and rigid-body solver called Dynamite to demolish St. Peter’s Basilica. NaturalMotion’s Endorphin sent nuns and priests scurrying in the foreground of the scene below, while sprite people panicked in the background.

“People began getting comfortable with using computer graphics for water simulations a couple years ago,” Leo says. “That’s happening now with complex rigid-body dynamics. Rather than building miniatures and blowing them up in one take, we’re building entire cities and crumbling and breaking them with creative control.”

Double Negative: Exploding Yellowstone, Destroying St. Peter’s Basilica

In the film, John Cusack’s character finds his old friend Charlie on top of the appropriately named Charlie’s Peak, a high point in Yellowstone Park. Down below, pressure builds in the caldera beneath the park. An area 12 miles wide swells, and as it rises, the surrounding mountains crumble. Then, a series of giant spherical explosions rip through the ground and consume the bubble. A huge pyroclastic ash cloud filled with hot, fast-moving gas and rock shoots lava bombs at Cusack as he races for safety in a cumbersome RV.  
 
Emmerich shot the confrontation between Charlie and Jackson Curtis on location and on a soundstage. For the soundstage shots, Double Negative wrapped the environment around them with a 2.5D matte painting. The effects artists used the live-action plates shot in Kamloops, British Columbia, for camera moves and for blocking the explosions. That was the easy part. The hard part bubbled up next.

First, the crew assembled a 3D environment in Maya. They moved that geometry into Houdini to create test deformations for the swelling earth that they sent to Engel and Weigart. “We tried to keep the deformations as procedural as possible by creating deformation networks in Houdini,” Wuttke says. “It was the fastest route to the finish line. Also, the swelling happens gradually, and because we were doing the deformation through Houdini, we could layer secondary effects, like the mountain ridge that crumbles, over the top, and then export everything as a single piece of geometry through Maya for rendering.”
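A toy version of that layering idea, written with NumPy rather than Houdini networks, shows why it was the fastest route: a broad, animatable swell and a secondary crumble displacement stack cleanly on one piece of terrain geometry. Everything here is an illustrative assumption:

```python
# Toy version of layered procedural deformation: a broad dome rises with time t,
# and a secondary "crumble" displacement is layered on top of its steep rim.
import numpy as np

def deform_terrain(heights, t, swell_radius=6.0, swell_height=3.0, crumble_amp=0.4):
    """heights: 2D array of terrain heights; t in [0,1] drives the swell over time."""
    n, m = heights.shape
    y, x = np.mgrid[0:n, 0:m]
    cx, cy = m / 2.0, n / 2.0
    r = np.hypot(x - cx, y - cy)

    # Layer 1: smooth dome that rises gradually with t (the miles-wide bulge).
    swell = t * swell_height * np.exp(-(r / swell_radius) ** 2)

    # Layer 2: secondary crumbling faked with noise weighted by the swell's slope,
    # so the detail concentrates where the terrain deforms hardest.
    slope = np.hypot(*np.gradient(swell))
    crumble = crumble_amp * slope * np.random.default_rng(0).standard_normal(heights.shape)

    # Exported as a single piece of deformed geometry for rendering.
    return heights + swell + crumble

base = np.zeros((64, 64))
print(deform_terrain(base, t=0.5).max())   # dome plus crumble detail
```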

For rendering, DNeg used Pixar’s RenderMan for solid surfaces and its own volumetric renderer, DNB, for dust, dirt, and other volumetrics.

The explosions that rip through the swollen surface were the trickiest parts of the sequence, creatively and technically. “We had the big bubble, the deformed terrain,” Wuttke explains. “Just as it blows, we simulated huge explosion elements with Squirt, our in-house fluid simulator, that break through the surface.”


Double Negative used proxies representing simulation groups stored in libraries to keyframe the action during its explosive sequences in Yellowstone National Park.

To control the timing for the explosions through 10 shots, the artists created libraries of simulations that they placed in a 3D layout. In Maya, they’d see proxy simulation elements as hard surfaces that expanded to represent the explosions.
 
 “We had 30 or 40 of these simulation elements,” says Gavin Graham, CG supervisor. “In any one shot, we’d instance 100 or 200 simulation elements at different distances. The volumetric renders took a huge amount of time, so we knew we had to have the right assets in place.” At render time, DNB rendered the explosions based on attributes, such as temperature, from the simulation, and scattered light from each simulation onto all the other elements. 
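A rough sketch of that library-and-layout approach: each shot is just a list of lightweight records pointing at cached simulations, expanded at render time. The field names below are assumptions, not DNeg’s pipeline:

```python
# Sketch of the library-and-instancing idea: a few dozen cached explosion
# simulations are treated as reusable assets, and a shot stores only
# (which cache, where, when, how big) records expanded at render time.
from dataclasses import dataclass

@dataclass
class SimInstance:
    cache: str          # hypothetical name of a cached sim in the library
    position: tuple     # world-space placement in the 3D layout
    scale: float
    frame_offset: int   # retime so explosions don't all pop at once
    temperature: float  # attribute that drives emission/shading at render time

shot_layout = [
    SimInstance("explosion_core_v07", (120.0, 0.0, -450.0), 3.0, 0, 1800.0),
    SimInstance("explosion_core_v03", (-60.0, 5.0, -900.0), 5.5, 12, 1500.0),
    SimInstance("dust_skirt_v02", (0.0, 0.0, -300.0), 2.0, 4, 600.0),
    # ...in production, 100 to 200 of these per shot at different distances
]

def render_order(instances, camera_z=0.0):
    """Sort far instances first so nearer volumes composite over them
    (one common convention; purely illustrative here)."""
    return sorted(instances, key=lambda i: abs(i.position[2] - camera_z), reverse=True)

for inst in render_order(shot_layout):
    print(f"render {inst.cache} at {inst.position}, offset {inst.frame_offset}")
```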

The crew used a similar technique to create the huge ash cloud that chases after Curtis. “We built small simulation groupings that worked together, and re-used them from shot to shot,” Wuttke says. “We wanted to give the ash clouds attributes, like a character in the movie. The cloud is born of simulation, but the placement of the groups strongly motivated the movement. In effect, we used the simulation pieces as keyframe poses and piped it through Maya for rendering.”

In one shot, for example, the ash cloud starts to surround Curtis like pincer claws. “It was important to keyframe that motion,” Wuttke says. “We could imbue it with the right sculptural qualities.”

Thus, even though the crew used the simulation elements as keyframe poses to place, control, and sculpt the cloud, because real physics drove the underlying simulation, it looked realistic. “When you press ‘render’ and see it on the move, it looks like a menacing, expanding, realistic cloud,” Graham says.

Libraries of simulation groups also helped the artists shoot lava bombs out of the ash cloud as it chased after Curtis, and target the landings using sometimes as many as 50 or 60 bombs per shot. “We created the lava bombs in three sizes, three speeds, and with three angles,” Graham says. “Animators could block out the shots in Maya, and then we’d export that data into Houdini for simulation.”

To have the speed and trajectory of each bomb affect what happened when it hit the ground, velocity fields exported from Houdini into Squirt generated dust, smoke, debris, ripped turf, and so forth. The effects artists added this data to the bomb’s simulation group and sent it all back to Maya for rendering. In addition, using tiled sections of simulated smoke spinning at different speeds, the artists placed smoke trails behind the bombs.
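The workflow reads like a small library lookup plus a ballistic arc. The sketch below, with invented values, shows how a three-by-three-by-three bomb library and an impact velocity handed to the effects sim might be blocked out:

```python
# Illustrative lava-bomb blocking: pick a variant from a 3 x 3 x 3 library
# (size, speed, launch angle), compute a simple ballistic arc for layout, and
# hand the impact velocity to the effects sim so a fast, shallow hit kicks up
# more debris than a slow, steep one. Values are invented for illustration.
import math

SIZES = {"small": 0.5, "medium": 1.5, "large": 3.0}        # radius, meters
SPEEDS = {"slow": 40.0, "medium": 70.0, "fast": 110.0}     # launch speed, m/s
ANGLES = {"low": 25.0, "mid": 45.0, "high": 65.0}          # launch angle, degrees
G = 9.81

def block_bomb(size, speed, angle, frames_per_second=24):
    v = SPEEDS[speed]
    a = math.radians(ANGLES[angle])
    vx, vy = v * math.cos(a), v * math.sin(a)
    flight_time = 2.0 * vy / G                 # time until it returns to launch height
    n_frames = int(flight_time * frames_per_second)
    path = [(vx * t, vy * t - 0.5 * G * t * t)
            for t in (f / frames_per_second for f in range(n_frames + 1))]
    impact_velocity = (vx, vy - G * flight_time)   # scales dust/debris on landing
    return SIZES[size], path, impact_velocity

radius, arc, v_impact = block_bomb("large", "fast", "low")
print(f"{len(arc)} frames of flight, impact velocity {v_impact}")
```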

On the ground, as Curtis nears the plane, a giant crack breaks the runway, and he jumps the crack in his RV. To create that effect, the artists drew curves across a 3D landscape generated from aerial shots of the runway, then peeled open the earth using a volumetric shattering plug-in within Maya. Dynamite, DNeg’s rigid-body solver, which is based on the open-source dynamics engine, ODE, handled the falling rock.

For DNeg’s second sequence, the destruction of St. Peter’s Basilica, the crew used a hierarchy of simulations, rather than simulation groups. The shots take place during a montage of destruction around the planet. In Rome, during a midnight mass in St. Peter’s, a crack on the ceiling of the Sistine Chapel splits God from Adam. When the camera moves outside, we see the cathedral lift into the air and crash down. The dome detaches and rolls across the ground, crushing the priests and nuns beneath.

DNeg built the Basilica with a volumetric shattering tool set, held it together with glue of varying strengths, and then crunched it within its Dynamite software using hierarchies of simulations, starting with big pieces and moving down to dust. Sprite crowds panicked in the background, and digital people powered by NaturalMotion’s Endorphin ran, collided with other people, fought, and fell in the foreground.
 
“The big complication with this sequence was the complexity of the passes,” Graham says. “We had crowds, breaking buildings, bricks hurtling, and smoke from the collapsing wall, so layering all these elements was really complex.”

Sony Pictures Imageworks: Building the Arks

In the film, the plan for saving some of the people centered on building huge ships inside the Himalayan Mountains that roll out of their construction caves onto tall, extremely tall, Y-shaped stanchions. As the dignitaries arrive by the busload, we see that two arks are still under construction. The people panic and run to get on the finished ships. Other than footage of the stars and some passengers filmed on set, the environment is all-digital, built at Imageworks, as are most of the people in the buses and out.
 
“It’s a huge, huge environment,” Nofz says. “The arks are 800 meters, almost a kilometer long, so the people are dots.”




(Top) Sony Pictures Imageworks modeled the detailed, 800-meter-long CG arks. (Middle) A matte painting projected onto 3D geometry formed the environment surrounding the modeled and shaded CG construction zone. (Bottom) The all-CG ships in this final shot move toward a loading dock above a Himalayan valley.

Because the studio needed to send the arks on to Scanline, the crew modeled one ship in detail, using geometry rather than any proprietary displacement maps, and then replicated it. “We added details because we didn’t know if the other studios would need it, but we rendered it only in areas where we could see it,” Nofz says. Similarly, inside the construction cave, which they lit with thousands of digital incandescent lights, the artists added pipes and other objects as needed.
 
“The space is too big to even fathom how big it is,” Nofz says. “I felt like we built the biggest James Bond cave ever.” To create the mountains, matte painters projected textures onto simple geometry. Modelers worked in Maya. The studio’s version of Arnold handled the rendering. And compositors used Shake. Massive software and keyframe animation moved the digital people racing to get onto the ships.

“Your life expectancy on this film as a digital person is not very high,” Nofz laughs.

Scanline: Floating the Boats, Crashing into the White House
Because the script called for tidal waves to flood the Himalayas, Engel contacted Stephan Trojansky at Scanline, a studio known for creating massive simulations, and asked him to help determine what might be possible. “About four weeks into production, Roland [Emmerich] wanted to do a teaser,” Trojansky says. “And, he remembered the tests we did, so he created the teaser around the look dev clips. We had six weeks to produce the final quality.”

From there, they began working on the actual sequences in which a giant tidal wave sweeps away the ships created at Imageworks, along with Air Force One, which Scanline built and which the wave sends crashing into the ark. And then the sequence grew. “About four months before deadline, Roland said, ‘I have a great idea,’” Trojansky recalls. “‘What about crashing the ark into Mount Everest and creating an avalanche?’ So, we have a lot of water interaction and wake simulation, and at the end of the sequence, the ark crashes into Mount Everest and triggers an avalanche, so snow and mud fall down onto the ship and into the water.”
 

Shot Count

Uncharted Territory   422
Double Negative       203
SPI                   154
Digital Domain         97
Pixomondo              93
Scanline               90
Hydraulx               60
Gradient FX            46
Evil Eye               42
Factory FX             32
UPP                    25
The Post Office        23
Crazy Horse            15
Alex Lemke FX           7
Café FX                 5
Picture Mill            1

To create the sequence, Scanline artists first animated the hero objects using keyframe animation in 3ds Max. Then, they ran the simulations. The major challenge was art-directing the simulation all the way through the approval process. 

“The water needed to look realistic, but it didn’t behave realistically,” Trojansky says. “Having a solver was only 20 percent of the solution. Controlling the solver was 80 percent.”
 
As a result, the team decided to use a polygonal character rig to drive the simulation, a technique they had developed to create the river god for Narnia. The polygonal understructure emits the water and acts as a kind of controlling force. “It’s like having a magic hand inside the Navier-Stokes flow,” Trojansky says.

Because the tidal wave was so huge, the R&D team devised techniques to compute one frame on as many networked machines in the renderfarm as the group wanted, and thereby achieve massively parallel simulations. They also invested time into improving the infrastructure. “A single frame might use 50gb of data, and a single shot, 20tb,” Trojansky says. “So we created our own server system with a throughput of 4gb/sec and 1.2 petabytes of disk storage. That meant we could re-run a shot in one to two days rather than one to two weeks.”
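The throughput figures explain the turnaround: with the numbers Trojansky quotes, moving a full shot’s data once takes well under two hours.

```python
# Quick arithmetic with the figures quoted above.
shot_tb = 20
throughput_gb_per_sec = 4
seconds = shot_tb * 1000 / throughput_gb_per_sec          # ~5000 seconds
print(round(seconds / 3600, 1), "hours just to move the data once")  # ~1.4 hours
```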
          
In addition, the R&D team developed technology for retiming the simulation. “On many occasions, even after we had approval on the postvis, we’d hear, ‘We love it, love it, love it. But can’t you do it in 12 frames per second?’” Trojansky says. “Or, ‘18 frames per second?’ We wanted to print T-shirts that said, ‘18 fps looks better.’”


Scanline VFX created and simulated the interaction of the digital water with the mountains, arks, Air Force One, debris, snow, and mudslides in the climactic all-CG shots.


The problem was that time steps drive the simulation, so changing the time steps changes the behavior—that is, the look of the simulation. An easy way to visualize this is to imagine throwing a cup of water out the window of a car. If the car isn’t moving, the water streams down. If the car is moving fast, you create a trail of droplets and mist. “The worst thing was to hear, ‘I liked the look of the previous version and the speed of the current version,’” Trojansky says. “So we created a way to compensate for the new timing in the solver, and the slower simulations looked the same.”
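The simplest way to see the compensation is on a single ballistic droplet: slow the playback by 1.5x, and unless velocity and gravity are rescaled together, the arc changes shape. The sketch below shows the principle only; it is not Scanline’s solver:

```python
# Timing compensation on the simplest possible case: to play the same ballistic
# arc 1.5x slower without changing its shape, scale velocities by 1/1.5 and
# gravity by 1/1.5^2. Without compensation, the slower version traces a
# different, flatter arc.
G = 9.81

def arc(v0x, v0y, g, duration, fps=24):
    return [(v0x * t, v0y * t - 0.5 * g * t * t)
            for t in (f / fps for f in range(int(duration * fps) + 1))]

slow = 1.5                                   # "can't you do it slower?"
original = arc(10.0, 12.0, G, duration=2.0)
compensated = arc(10.0 / slow, 12.0 / slow, G / slow**2, duration=2.0 * slow)

# Every point of the original arc appears in the compensated one; it just
# takes 1.5x as many frames to get there.
print(original[24])        # position after 1 second of the original
print(compensated[36])     # same position, reached after 1.5 seconds
```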

To speed the approval process, the R&D team also developed a better way to refine rough simulations. “In traditional simulation systems, you do a rough version for quick feedback and then re-run it in higher resolution,” Trojansky says. “But each time you run it, the simulation looks different because you can’t scale the detail without affecting the timing, which is what the client wants most. We developed an iterative approach. Our first version drives the next version, which drives the next version, and the main characteristics of the movement stay the same. We can go finer and finer until we’re at a droplet level, until the mist blows off the wake.”

In addition to the Himalayan simulations, Scanline built environments and animated crowds for the sequence in which a tidal wave sends an aircraft carrier crashing into the White House. “We not only had water flowing through the city, but also a crowd of refugees on the ground, trees bending and interacting, the aircraft carrier reflecting and refracting, airplanes falling into the water, and burning houses from the previous earthquake. We even had to sweep away the American flag on the roof of the White House.”

The rules were the same as in the real world: the stronger object wins. In other words, the flag wouldn’t affect the tidal wave, but parts of the building would. To animate the crowds, Scanline used Massive software, as they did to create a fugitive trail of 10,000 people in India fleeing the tidal wave rushing toward the Himalayas. “I think that’s where you see the most digital people killed at one time,” Trojansky says.

“This show pushed us on every level,” Trojansky adds. “With some visual effects work, you can just hire more animators and modelers. In this highly specialized complex work, it’s so much about the hardware and software technology, and the people who can drive it. It was a big step for our team to reach this scale.”

During a panel session at Nvidia’s recent GPU conference, Thad Beier, CTO at Digital Domain, described 2012 as one of the best visual effects demo reels ever. As Digital Domain’s Leo pointed out, the film proves that computer graphics can replace miniatures to create this kind of destruction. And, it shows that with the help of faster hardware, clever software, and adept artists, directors can now think about using natural phenomena as characters in ways they might not have imagined only a few years ago.

At the end of the film, Engel and Weigart disbanded Uncharted Territory, as they always do at the end of their projects, but this time, they didn’t sell the machines. “We tried to talk Sony into selling them,” they say. “We told them, ‘This works because we always buy the newest machines and there will be something newer that’s twice as fast for the next movie.’ They said, ‘We don’t want you to touch a thing. Leave it all exactly as it is.’ So, all the machines are still there in this empty room. No people. But, of course, it’s all about the people.”