The Big and the Sméagol
Volume 27, Issue 1 (January 2004)

It would belittle Jackson's grand accomplishment to call these sagas "effects films." Yet without visual effects, both practical and digital, we wouldn't have seen, in the first two films, 10,000 Orcs attacking Helm's Deep, Gandalf fighting a fiery Balrog, Gollum leading Frodo and Sam to Mount Doom, or an army of Ents high-stepping out of a forest. Nor, in The Return of the King, would we watch 250,000 Orcs, thousands of horsemen, and mammoth Mumakils trumpet their way into battle, Gandalf fly on the wings of an eagle, and Frodo, Sam, and Gollum wrestle near the lava of Mount Doom.

In terms of effects, it turns out that the first films were a dress rehearsal for the third, which has more effects than the first two combined. "It probably has around 1800 visual effects shots," says Jim Rygiel, visual effects supervisor. "I figure we're pushing an hour and a half to two hours of effects." Weta Digital in Wellington, New Zealand, created 1500 of those shots, twice the number the studio created for the second film (750), which was nearly twice the number of the first (400).

Among the 1500 shots are scenes with Gollum, creatures new and old, digital environments, and procedural simulations. For each, Weta developed innovative and efficient techniques—some involving major changes, others more a matter of fine-tuning.

"We knew the volume of shots we'd have to do," says Joe Letteri, visual effects supervisor at Weta. "At the end of film two, when Peter [Jackson] decided to flood Isengard, we had to fly by the seat of our pants. This time, we had a system in place, so if we needed to stage something that didn't exist, we'd be ready. Also, we hammered on the pipeline and got all the things we needed working to the point where the technical side was transparent and people could just use it. By setting up the pipeline with a lot of standards in mind, we could pull people in from all over the world." Indeed, the studio grew from 150 people on the first film to around 420 at the height of production for the third, with 20 compositors brought in during the final 10 weeks.

Weta's pipeline is based on Alias Systems' Maya for modeling and animation, Pixar Animation Studios' PhotoRealistic RenderMan (PRMan) for rendering, and Apple's Shake for compositing. In addition, Massive Software's Massive handled procedural animation for crowds, Jon Allitt's Grunt rendered the crowds, and Giant Studios' Giant software was used for motion capture. The programs ran on Linux-based Intel machines with a total of 3500 processors at last count.

Although Weta created digital environments and set extensions for The Return of the King—notably Sauron's tower, a digital structure that breaks apart and collapses with the help of software developed in conjunction with Tweak Films—most of Weta's work centered on creatures and characters. "A big chunk of the film is the Pelennor Fields battle, where we have digital Mumakils attacked by digital riders on digital horses who are attacked by digital Orcs coexisting with practical horses, riders, and Orcs," says Rygiel. "Peter [Jackson] brought back all the digital characters except the Balrog and the Watcher." Gollum, of course, plays a major role.
Weta adjusted the lighting for Gollum's eyes, changed his hair simulation, and slightly altered his model for this film (below). For the battle above, the studio procedurally animated armies of digital characters.

As before, Gollum's performance is based on that of actor Andy Serkis, with animators at Weta again using a mixture of motion capture, roto-motion (matching animation to a live-action plate), and keyframe animation.

For this film, the motion-capture team developed an on-set capture system, arguably the first use of motion capture during principal photography and an innovation that will likely have a major effect on filmmaking. "It was particularly useful when Andy [Serkis] had to tumble down the hillside as he's fighting with Frodo [Elijah Wood]," Rygiel says. "As they're tumbling down, we're motion-capturing Gollum in real time."

Previously, the crew would film a scene with Serkis wearing a blue suit so that he could be easily replaced with Gollum later. Then, they'd bring Serkis onto a motion-capture stage and try to match the earlier performance. Lastly, once Serkis was painted out of the shot, the motion-editing team, matchmovers, and animators would fit Gollum into the scene. But, because Serkis's performance in the motion-capture studio never quite matched his performance on the set, the process was arduous.

By contrast, with on-set motion capture, the markers were on Serkis's blue suit when Jackson shot principal photography. "When Gollum is fighting with Frodo and Sam on Mount Doom, you believe the character is there, pulling their shirts, hitting, punching, choking," says motion-capture supervisor David Bawel, "because he was." Bawel led a 10-person motion-capture team (which grew to 15-plus when capturing Serkis) and a 15-person motion-edit team that together produced animation for Massive agents (characters used in the procedural crowd simulations), horses, and digital doubles, as well as for Gollum.
Gollum's facial expressions were hand-animated, but for his performance, animators often used motion-capture data from actor Andy Serkis, who also provided Gollum's voice.

Once available, the on-set motion-capture system found other uses. "We put markers on Elijah and Sean [Astin, who played Sam] so when they had to push Gollum or pick up objects that were not real, we knew where in space these points were relative to the camera," Bawel says. "We also tracked cameras and objects that we'd normally have to track off the film plates. It's a step forward that I'm confident will continue in the filmmaking process."

Although Weta used Giant's system on its large motion-capture stages for all three films, new cameras and infrared light sources developed by Motion Analysis made on-set motion capture possible for the Mount Doom sequence. The cameras' onboard computers and single-cable connections simplified setting up a system on a sound stage; the new light sources were invisible to the film cameras, yet strong enough that the motion-capture cameras could be kept at a distance and out of the way.

The team's portable system held 16 cameras in an array on a truss above the sound stage, positioned so that the cameras could track any object within the set. The 1.3-megapixel cameras captured the action at 60 and 120 frames per second, although they could have run at a faster rate, according to Bawel.

"Because Gollum and Andy are built differently, the motion capture required some adjustment," says Randall William Cook, animation supervisor. "But much more of Andy's movement is guiding Gollum in this film than in The Two Towers."

For Gollum's face, though, as in The Two Towers, animators used Serkis's performance only as reference, creating facial expressions with keyframe animation through a Maya-based facial control system and a library of expressions. "In shots where Gollum had no dialogue, the animators would write out precisely what Gollum was thinking on every frame, and then act out his internal dialogue without moving his lips," says Cook. "We had to fool the lie detector of the movie camera into believing Gollum was really thinking, not pulling faces."
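
The article doesn't detail Weta's facial control system, but the library-of-expressions approach it describes is, at heart, weighted blend shapes: each stored expression is a set of per-vertex offsets from the neutral face, and a pose is a weighted mix of those offsets. Here is a minimal Python sketch of that general idea; the expression names, offsets, and weights are invented for illustration, not taken from Weta's rig.

```python
import numpy as np

# Neutral face: one 3D position per vertex (tiny 4-vertex stand-in mesh).
neutral = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [1.0, 1.0, 0.0]])

# Expression library: per-vertex offsets from neutral (values are illustrative).
library = {
    "brow_furrow": np.array([[0.0, -0.05, 0.02]] * 4),
    "sneer":       np.array([[0.03, 0.0, 0.0]] * 4),
    "eyes_wide":   np.array([[0.0, 0.08, 0.0]] * 4),
}

def pose_face(weights):
    """Blend library expressions onto the neutral face.

    weights: dict mapping expression name -> weight in [0, 1].
    Returns the deformed vertex positions.
    """
    result = neutral.copy()
    for name, w in weights.items():
        result += w * library[name]
    return result

# A keyframed pose: mostly furrowed brow with a hint of sneer.
frame_pose = pose_face({"brow_furrow": 0.8, "sneer": 0.3})
print(frame_pose)
```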

As for Gollum's physical self, the crew made subtle improvements to the now well-known character and also created a second, emaciated version of his model for some shots. For The Two Towers, to make Gollum's skin look like flesh, Weta wizards assembled a witch's brew of techniques, including a subsurface scattering shader written by technical director Ken McGaugh, texture maps for surface details, specularity, and shadow maps for ambient occlusion. "We stuck with what we were doing, although Ken added a little raytracing to increase the level of detail in the ambient occlusion," says Greg Butler, sequence supervisor for Gollum. "Also, after film two ended, Steve Upstill began working on Gollum's eye shaders. The lights were tuned to Gollum's skin, and when we threw lights on his eyes, we didn't get back what we expected. Steve gave us more control."

To gain greater command over Gollum's hair, the team switched from Maya's cloth and fur tools to Syflex plug-ins for Maya. "Now, we can make individual adjustments to every hair," says Butler.

As for basic structure, Gollum's model, which had been switched from NURBS to a subdivision surface during the second film, was left largely unchanged apart from modified knee and elbow shapes and reshaped calves.

The same can't be said for the other digital characters. All but Treebeard were remodeled and updated. (They left the giant tree's 2200 texture-mapped NURBS patches alone.)

"Some creatures were never meant to get close to the camera or perform motions as complex as needed for this film," says Letteri. "But the biggest change was that we applied everything we learned from Gollum to the creatures."

The major modeling change adopted from Gollum was the switch from NURBS to subdivision surfaces. "The success of subdivision surfaces was so significant, we rebuilt all our creatures and digital doubles from scratch," says Matt Aitken, digital models supervisor. "It was like starting from a clean slate."

Also, the effects teams added lighting and rendering techniques honed on Gollum to all the creatures. "Everything has subsurface scattering now," says Guy Williams, CG supervisor, "even the trolls." In addition, the dragon-like Fell Beasts and elephant-like Mumakils received special attention, and the eagle Gwaihir had its feathers unruffled.

In one scene, nine of the dark Fell Beasts, each with a digital Ringwraith on its back, black cape flowing in the wind, fly out of the clouds and over the white city of Minas Tirith. Although the Fell Beast was first created in preproduction, before The Fellowship of the Ring, the camera moves in close in this film, so the team improved its look. "Most of the focus on the Fell Beast was on writing new shaders, simulating the effect of wind on the wing membranes, and adding more details to the textures," says Jean-Colas Prunier, lead technical director. "Its skin was covered with scales that needed to be iridescent." To light the beast, which measured around 98 feet from wingtip to wingtip, the crew used a combination of shadow maps and shaders that used the new ray tracing features in RenderMan 11 to create subsurface scattering, ambient occlusion, and, for part of its wings, a particularly diffuse translucency.
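
The article doesn't publish Weta's shaders, but the ray-traced ambient occlusion that RenderMan 11 made practical comes down to sampling the hemisphere above each shading point and measuring how many of those directions escape the surrounding geometry. The Python sketch below illustrates the idea with a single hard-coded sphere standing in for a real ray-versus-mesh intersection test; everything in it is illustrative rather than production code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hemisphere(normal, n):
    """Uniformly sample n directions on the hemisphere around `normal`."""
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    # Flip any sample that points into the surface.
    flip = (v @ normal) < 0.0
    v[flip] *= -1.0
    return v

def occluded(origin, direction):
    """Stand-in intersection test: a unit sphere at (0, 2, 0) blocks rays."""
    center = np.array([0.0, 2.0, 0.0])
    oc = origin - center
    b = 2.0 * direction @ oc
    c = oc @ oc - 1.0
    disc = b * b - 4.0 * c
    return disc >= 0.0 and (-b - np.sqrt(max(disc, 0.0))) / 2.0 > 0.0

def ambient_occlusion(point, normal, samples=256):
    """Fraction of hemisphere directions that reach open sky (1 = fully open)."""
    dirs = sample_hemisphere(normal, samples)
    hits = sum(occluded(point, d) for d in dirs)
    return 1.0 - hits / samples

print(ambient_occlusion(np.array([0.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])))
```
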
The Fell Beasts at left and below were switched from NURBS to a subdivision surface model, as were all the creatures except Treebeard. In addition, new shaders were written for this creature's close-up.

"Working on this creature was like doing archeology," Prunier says. "One texture layer was four years old; another was painted by someone who worked here two years ago. The history of the production on these three films was in this creature."

To add fine geometric detail to the creatures' surfaces without weighing down the geometry, the modeling team shipped maquettes to XYZ RGB for scanning with a 3D scanner from Arius3D. Modelers then built a subdivision mesh on top of the scan and extracted the difference between the heavy scanned mesh and the lighter subdivision mesh as a displacement map, which was applied to the subdivision surface at render time. "It's a great way to preserve geometric detail without clogging up the 3D pipeline," says Aitken.
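
The scan-to-displacement step can be pictured as measuring, at each sample point on the light subdivision surface, how far the dense scan sits along the surface normal and storing that signed distance in a map. The simplified Python sketch below uses a nearest-point query as a stand-in for a proper ray cast against the scan mesh; the data and map resolution are toy values for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def extract_displacement(low_points, low_normals, scan_points, map_res=(4, 4)):
    """Approximate a displacement value per sample point on the light surface.

    low_points / low_normals: (N, 3) sample positions and unit normals on the
    light subdivision surface (e.g., one sample per texel of the map).
    scan_points: (M, 3) vertices of the dense scanned mesh.
    Returns a map_res grid of signed distances along the normal.
    """
    tree = cKDTree(scan_points)
    _, idx = tree.query(low_points)                # nearest scan vertex per sample
    offsets = scan_points[idx] - low_points        # vector from surface to scan
    signed = np.einsum("ij,ij->i", offsets, low_normals)  # project onto normal
    return signed.reshape(map_res)

# Tiny illustrative data: a flat 4x4 patch with a bumpy "scan" floating above it.
u, v = np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4))
low_pts = np.stack([u.ravel(), v.ravel(), np.zeros(16)], axis=1)
low_nrm = np.tile([0.0, 0.0, 1.0], (16, 1))
scan_pts = low_pts + np.stack([np.zeros(16), np.zeros(16),
                               0.05 * np.sin(6 * low_pts[:, 0])], axis=1)

disp_map = extract_displacement(low_pts, low_nrm, scan_pts)
print(disp_map)   # applied to the subdivision surface at render time
```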

The team also began using a software-based method for adding geometric detail. "We've been working with the developers of ZBrush [Pixologic] to develop a pipeline so we can paint detail rather than using scans," notes Aitken. "We were using it routinely on models that didn't start with scans—painting muscle shapes onto the surface of the horse, adding detail to props, such as weapons."

Close-up shots of the eagle Gwaihir, on the other hand, presented a different problem. "We needed a new approach for digital feathers," says Aitken. "We got to the point where we could build feathers and keep them from interpenetrating, but after each of the feathers moved away from the next one, the bird looked like it had had a terrible fright." Finally, Aitken decided to turn the puffy bird into a sleek eagle by ignoring interpenetration. "I remembered an old algorithm called the painter's algorithm, which deals with hidden surfaces by rendering everything in the order of distance from the camera," he says. "It paints over surfaces already rendered. It had been taken out of RenderMan because no one was using it, so we had to trick RenderMan to do it." Thus, the eagle was rendered from tail feathers to head, feather after feather on top of (and hiding) the previous one. "It works because the feathers have a fundamental order," says Aitken.
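
The painter's algorithm Aitken describes resolves visibility purely by draw order: primitives are sorted far to near, and each one simply paints over whatever was drawn before it, with no depth test, so nearer feathers hide the ones behind them. The toy Python sketch below renders rectangular "feathers" into a small framebuffer that way; the feather data is invented for illustration.

```python
import numpy as np

# Each "feather": (distance_from_camera, x, y, width, height, gray_value)
feathers = [
    (9.0, 0, 0, 6, 4, 60),    # tail feather, farthest from camera
    (7.5, 2, 1, 6, 4, 120),
    (6.0, 4, 2, 6, 4, 180),
    (4.5, 6, 3, 6, 4, 240),   # head feather, nearest
]

def painters_render(prims, width=16, height=10):
    """Render far-to-near with no z-buffer: later primitives overwrite earlier ones."""
    frame = np.zeros((height, width), dtype=np.uint8)
    for dist, x, y, w, h, value in sorted(prims, key=lambda p: -p[0]):
        frame[y:y + h, x:x + w] = value     # paint over anything already drawn
    return frame

print(painters_render(feathers))
```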

Among the new characters in film three were "great beasts"—created during the last month of production to pull a battering ram during the battle—and, more important, Shelob, who attacks Frodo and Sam in two sequences that total about seven minutes. "She is a spider, but not like any spider you've seen," says Williams. "She's truly evil. Every part of her looks repulsive or painful to touch."
The evil, spider-like Shelob, one of the few new creatures in the film, had three kinds of skin plus bristly hair. The shader used to render it was 2000 lines long and had 20 illumination models.

Working from an animatronic built in the Weta Workshop as reference, the digital effects team built the creature with three kinds of skin: a shiny, purple plastic surface designed to scatter highlights, a soft flesh-like skin for its mouth and belly, and leathery skin. Using techniques learned from creating Gollum, the team layered materials such as dust, dirt, and slime inside the shader. "The shader was probably about 2000 lines long," Williams says, "with 20 illumination models inside it." Shelob's complex, eight-legged rig included controls for every joint and for the back so that animators could squeeze the body through holes.
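
The 2000-line shader itself isn't reproduced in the article, but the layering Williams describes can be pictured as a handful of simple illumination models (a tight plastic highlight, a soft wrapped diffuse for the fleshy areas, a dull leathery sheen) blended by per-point masks, with grime layers such as dust and slime composited on top. The Python sketch below is an invented miniature of that structure, not Weta's shader.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def shade(n, l, view, masks, grime):
    """Blend a few illumination models by per-point masks, then layer grime on top.

    n, l, view: unit normal, light, and view vectors.
    masks: dict with 'plastic', 'flesh', 'leather' weights summing to ~1.
    grime: dict with 'dust', 'slime' coverage values in [0, 1].
    """
    h = normalize(l + view)                                  # half vector
    lambert = max(n @ l, 0.0)

    plastic = 0.2 * lambert + 0.9 * max(n @ h, 0.0) ** 80    # tight, shiny highlight
    flesh   = 0.8 * max((n @ l + 0.4) / 1.4, 0.0)            # soft wrapped diffuse
    leather = 0.6 * lambert + 0.1 * max(n @ h, 0.0) ** 8     # broad, dull sheen

    base = (masks["plastic"] * plastic +
            masks["flesh"]   * flesh +
            masks["leather"] * leather)

    # Grime layers dull or coat whatever is underneath.
    base = base * (1.0 - 0.5 * grime["dust"])
    base = base * (1.0 - grime["slime"]) + 0.7 * grime["slime"] * lambert
    return base

n = normalize(np.array([0.0, 1.0, 0.2]))
l = normalize(np.array([0.5, 1.0, 0.5]))
view = normalize(np.array([0.0, 0.3, 1.0]))
print(shade(n, l, view,
            masks={"plastic": 0.5, "flesh": 0.2, "leather": 0.3},
            grime={"dust": 0.3, "slime": 0.1}))
```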

Of all the creatures, though, the Mumakil was perhaps the most complex. Twenty-one of these 40-foot-tall, tusked, elephant-like beasts, with wicker towers on their backs holding 50 archers, show up during the third phase of the big battle. Trained to stomp, maim, and mangle their opponents, the creatures are so immense that in some shots we see only a foot or a leg. "It was a nightmare modeling scenario, building the Mumakil and the tower with rope, cloth, canvas, and skins and then rigging the tower for dynamics so that as the creature walked along, the tower swayed and creaked," says Aitken.

The battle of Pelennor Fields takes place on a digital canvas created, in part, from photographs taken of New Zealand's South Island by set director Alan Lee. "We had to follow the main characters through the battlefield, and it was such a fluid mixture, we had to previz pretty much the whole battle in a low-res form," says Rygiel. "Peter would sit in a virtual set and direct Mumakils and horses."

The battle begins when 250,000 Orcs and a pack of dog-like Wargs lay siege to Minas Tirith. The Orcs are attacked by 6000 Rohans on horseback who are, in turn, attacked by the Mumakils.

The Massive software program developed by Stephen Regelous and used for the previous films handled the Orcs and other warriors, the horses and horsemen, the people riding the Mumakils, civilians around Minas Tirith, and an army of the dead. "The software has moved so far since film one," says Eric Saindon, sequence supervisor for Pelennor Fields. "It has support for an arbitrary number of cameras so we can run one big simulation, and then use it for several shots from different angles. And, it has full cloth solves, so we get a lot more motion from hair, skirts, and horse manes and tails."

In addition, Massive agents could maneuver on animated terrain, which helped the archers in the Mumakil towers adapt to the moving platform beneath, and later in the film, helped Orcs run on ground opening up beneath their feet.

While the archers in the Mumakil towers were sometimes Massive agents, sometimes live-action actors composited in, and sometimes digital doubles, the Mumakil itself was animated by hand. "It was a huge undertaking," says Cook. "We used a number of tricks to make its skin slide, and shot sculpting to make the muscles, fat, and bones all behave appropriately on this big thing."

For the riders and horses, Massive supervisor Jon Allitt relied on one Massive agent brain for both horse and rider. "We had a motion tree for each kind of rider, as we did for any agent, with maybe 200 actions, plus another motion tree for the horse," explains Allitt. "The horse motion tree controlled where it went on the battlefield, and the rider would react to the horse, leaning appropriately as it went downhill or uphill or turned. Because it was all in the same brain, the horse could tell what the rider was doing, and the rider knew what the horse was doing."
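
Massive's brain format isn't shown in the article, but Allitt's single-brain arrangement, in which the horse's motion tree is evaluated first and the rider's motion tree reacts to it, can be sketched as one update function that makes both choices from shared state. In the Python sketch below, the action names, slope thresholds, and steering inputs are all invented for illustration.

```python
# Hypothetical horse-and-rider "brain": both motion trees live in one update,
# so each half can read what the other is doing.
HORSE_ACTIONS = {"gallop_flat", "gallop_uphill", "gallop_downhill", "turn_left", "turn_right"}
RIDER_ACTIONS = {"ride_neutral", "lean_forward", "lean_back", "lean_left", "lean_right"}

def update_agent(slope, turn_rate):
    """Pick one horse action and the rider's reaction for this frame.

    slope: ground slope under the horse (+ uphill, - downhill).
    turn_rate: signed steering input (+ right, - left).
    """
    # Horse motion tree: terrain and steering decide the clip.
    if abs(turn_rate) > 0.5:
        horse = "turn_right" if turn_rate > 0 else "turn_left"
    elif slope > 0.15:
        horse = "gallop_uphill"
    elif slope < -0.15:
        horse = "gallop_downhill"
    else:
        horse = "gallop_flat"

    # Rider motion tree: reacts to the horse's choice, not to the terrain directly.
    rider = {
        "gallop_uphill":   "lean_forward",
        "gallop_downhill": "lean_back",
        "turn_left":       "lean_left",
        "turn_right":      "lean_right",
    }.get(horse, "ride_neutral")

    assert horse in HORSE_ACTIONS and rider in RIDER_ACTIONS
    return horse, rider

print(update_agent(slope=-0.3, turn_rate=0.1))   # ('gallop_downhill', 'lean_back')
```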

To move on the battlefield, the horse would "listen" to other agents on the ground. "For example, when the Rohan are charging a line of Mumakil, they have to react to the Mumakil's feet and to the arrows from the Harad in the towers. So, we parented agents that emitted a sound to each foot, which helped the horses determine how close they were to the enormous feet." To render the thousands of characters that moved through the Massive pipeline, the crew used Allitt's Grunt, written specifically to handle crowds of animated characters.
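
The foot "sounds" Allitt mentions act as proximity cues: an emitter parented to each Mumakil foot is heard more loudly as a horse gets closer, and the horse steers away once the loudest emitter crosses a threshold. The rough Python sketch below uses an inverse-square falloff and invented positions and thresholds; it illustrates the idea rather than Massive's actual mechanism.

```python
import math

# Emitters parented to Mumakil feet: (x, z) positions on the battlefield.
foot_emitters = [(10.0, 4.0), (12.0, 4.5), (10.5, 7.0), (12.5, 7.5)]

def loudness(horse_pos, emitter_pos):
    """Inverse-square falloff of a foot emitter as heard by a horse agent."""
    dx = horse_pos[0] - emitter_pos[0]
    dz = horse_pos[1] - emitter_pos[1]
    return 1.0 / max(dx * dx + dz * dz, 1e-6)

def steer(horse_pos, heading, threshold=0.02, avoid_turn=math.radians(35)):
    """Turn away from the loudest foot emitter once it gets too loud."""
    levels = [(loudness(horse_pos, e), e) for e in foot_emitters]
    level, nearest = max(levels)
    if level < threshold:
        return heading                       # nothing close enough to worry about
    # Angle from horse to the loud foot; nudge the heading away from it.
    to_foot = math.atan2(nearest[1] - horse_pos[1], nearest[0] - horse_pos[0])
    away = to_foot + math.pi
    diff = (away - heading + math.pi) % (2 * math.pi) - math.pi
    return heading + max(-avoid_turn, min(avoid_turn, diff))

print(steer(horse_pos=(8.0, 5.0), heading=0.0))
```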

Most of the 6000 horses were digital with animation often based on data from real horses that were motion-captured using the Giant system on a 60-by-20-foot stage. "We had horses running, cantering, and being attacked by creatures and characters with weapons," says Bawel. "Some of the data was for specific shots, but most was used for Massive crowd animation."
Digital Mumakils (at right) are attacked by digital riders on digital horses who are attacked by digital Orcs coexisting with real horses, riders, and Orcs in the great battle (above).

About 200 of the horses were real. Some were filmed in front of a 600-by-60-foot bluescreen; 20 were filmed by Rygiel with a handheld camera while riding in a car. "We spent a month figuring out how to track those cameras," says Saindon. "And then we did lots and lots of rotoscoping."

The digital horses are intermixed with live-action horses; Orcs and Mumakils are mixed in with all of them; and they all interact with each other. "You really feel like you're in the middle of this battle with giant elephants around you," says Rygiel.

The most spectacular shots in the battle might be the scenes of Legolas (Orlando Bloom) jumping on a Mumakil. For this, the modeling team ordered special high-res scans of the Mumakil maquette. Bloom was filmed on an "elephant rig" on a bluescreen set, and Cook's team augmented that motion with an animated digital double.

To help push such complicated battle shots as these through production, Saindon's team would pick key shots for various sequences and finish those first. "It was a little scary," he says. "With three weeks left in the schedule, we had more shots to do than we had total shots for the first movie."

Among the final shots were those of Mumakils crashing together in the battle, the Fell Beasts with reflections of fiery explosions in their scales, shots at the Black Gate where the earth opens up under the Orcs, and Gollum's death scene.

Although many people on the team will stay to work on the DVD, and some have signed on to work on new films, for others, the end of production on The Return of the King means the end of their own "fellowship." "Many people thought this was a mad quest at the beginning," says Cook, who started working with Jackson on previsualizations for the first film. "They thought such an effects feat couldn't be achieved anywhere, much less in a little backwater part of the globe. It's taken a huge collaboration of talents to achieve this impossible dream."

Adds Rygiel: "People were really into working on the show, not just their little part of the show. We settled in for the long haul. The first time I saw the scene at the Grey Havens at the end of the film where the hobbits have to say good-bye to the people they love, I got tears in my eyes because I realized this was the end of the film for me, too. We've reached the end of all three films now. It's pretty amazing. There's no more of this."

Barbara Robertson, a contributing editor for Computer Graphics World, writes about computer graphics, animation, and visual effects. She can be reached at BarbaraRR@comcast.net.

All images ©2003. Courtesy New Line Productions.

Alias Systems www.alias.com
Apple Computer www.apple.com
Arius3D www.arius3D.com
Giant Studios www.giantstudios.com
Intel www.intel.com
Massive Software www.massivesoftware.com
Motion Analysis www.motionanalysis.com
Pixar Animation Studios www.pixar.com
Pixologic www.pixologic.com
Syflex www.syflex.biz