Shaping Middle-earth
Volume 36, Issue 2 (Jan/Feb 2013)


Artists at Weta Digital used a variety of visual effects techniques to create Middle-earth. At top, digital eagles soar over a CG landscape. Middle, actors perform on a small set extended digitally. Bottom, the actors’ size disparity is an illusion made possible with motion-control cameras and digital compositing.

The hobbits are diminutive and the dwarves short, but there is nothing small in Director Peter Jackson’s cinematic vision of J.R.R. Tolkien’s Middle-earth, as we saw a decade ago in The Lord of the Rings trilogy and see again now in The Hobbit: An Unexpected Journey. Nor is there anything small in the approach of Jackson’s studio, Weta Digital, to helping the director realize his vision.

Eight hundred and fifty artists at Weta Digital worked on The Hobbit, many for more than three years. It is the first film in a planned three-part Hobbit series; three hours of what will become a nine-hour trilogy. Within a month of its debut, the Warner Bros release had earned more than $886 million at the box office worldwide, received Oscar and BAFTA nominations for best visual effects, and garnered numerous VES nominations.

Four-time Oscar winner and Weta Digital Director Joe Letteri was the senior visual effects supervisor for the show. Working with him were six visual effects supervisors: Eric Saindon, R. Christopher White, Matt Aitken, Kevin Smith, Mark Gee, and Jeff Capogreco. Simon Clutterbuck was head of creatures. Dave Clayton and Eric Reynolds were animation supervisors, and Marco Revelant was the modeling supervisor. Letteri, Saindon, Clayton, and White received the BAFTA and Oscar nominations.

To create the visual effects for this film, the crew developed state-of-the-art techniques utilizing proprietary technology and off-the-shelf software to build entirely digital environments, extend sets and locations with computer graphics, enhance scenes with digital effects, and place CG characters, creatures, and digital doubles in leading roles.

In addition, Jackson provided a new challenge: He filmed the Hobbit at 48 frames per second (fps) in stereo 3D. Double the frame rate. Double the number of frames. Petabytes of data. Stereo also meant the studio needed to be precise in 3D space, which added to the workload. And matching scenes shot at 48 fps meant artists had to put more detail in the digital images. But, 48 fps turned out to have some advantages.

The crew could not use forced perspective to create the illusion that Bilbo (Martin Freeman) is much smaller than Gandalf (Ian McKellen) because that trick didn’t work with stereo 3D. Instead, they filmed the actors simultaneously on two stages and scaled the images so that while filming, Jackson could see one image composited from both cameras.

“It was actually a great thing,” Saindon says. “Obviously there were issues. There was twice as much roto. Twice as much paint. So, those departments were hit harder. But, when we have more data, it’s easier to get a nice, clean plate. And, when you have more information, you have less motion blur, so roto and paint were easier. Animators found 48 to be great because instead of doing subtle motion in 24 frames, they had 48.”

For most characters and creatures, animators started with performance data captured at 60 fps. Working in 48 fps meant the animators could use more of that motion-captured data. “You can see that added subtlety in Gollum’s face,” Saindon says. “You see the same lines and wrinkles that were in Andy’s [Serkis] performance.”

A Step Up

Weta Digital celebrates its 20th anniversary this year, but when Jackson began planning the Lord of the Rings trilogy, it was a small, young studio with work on a handful of films to its credit—Jackson’s The Frighteners, Robert Zemeckis’ Contact (with other studios), and a few others. Even so, the artists at Weta Digital brought home visual effects Oscars for all three Rings blockbusters, and went on to win two more visual effects Oscars (King Kong, Avatar), and receive three additional visual effects Oscar nominations (I, Robot, Rise of the Planet of the Apes, District 9). In addition, the studio created Director Steven Spielberg’s animated film The Adventures of Tintin, which received BAFTA nominations for best visual effects and best animated film, and a Golden Globe for best animated feature film.

They brought everything they learned, developed, and refined for those films and others to this one. And then, pushed the technology further to inch computer graphics ever closer to realism. “Across the board, this film represents a step up to the next level in terms of the quality of work we’re producing,” says Aitken, who has been with Weta since 1995.

The Journey

The film begins not with the unexpected journey, but with the reason for the journey. In a prologue to the film, we see glimpses of the dragon Smaug laying waste, long ago, to the kingdom the dwarves want to regain: their Lonely Mountain home and, not least, the mountain of gold inside. We meet the aging Bilbo Baggins (Ian Holm), who describes how Smaug destroyed Erebor, causing the dwarves to abandon their home—and their gold.

“There are classic visual effects shots in the sequence, with sets on greenscreen stages and wider environments that are all-digital,” Aitken says. “It was an elaborate piece of work because of the environments. We have a transition every shot or two into a new part of Erebor. The mines. The forges. The hammer room. The jewelry workshop. The throne room.”

As for Smaug, even though the crew built, textured, and rendered a complete creature, Jackson teased the audiences with peekaboo views, as he had done with Gollum in the first film of the Rings trilogy. “The prologue takes place a decade before, so in the second film, he’ll be older,” Aitken says. “We’ll tweak the model, but most of the work will be at the texturing and shading level. We’ll work some wrinkles into our displacement maps and make him more lived-in.”

After the prologue, we meet the youthful Bilbo (Martin Freeman), and the story of this hobbit’s unexpected journey begins. Thirteen uninvited, rowdy dwarves show up one by one at the fussy but generally good-natured young hobbit’s hobbit hole and throw a party. When the wizard Gandalf (Ian McKellen) arrives, the dwarves’ reason for disturbing Bilbo’s quiet life becomes clear. Gandalf invited them. Gandalf persuades Bilbo to accompany the dwarves on a quest to rescue their home (and the gold), and convinces the dwarves’ leader, the exiled king and warrior Thorin Oakenshield (Richard Armitage) that, as a burglar (which he is not), Bilbo would be a valuable addition. Bilbo could help open the secret door in the Lonely Mountain.

Real-time Lighting

“Obviously, hardware rendering isn’t new, but we approached it differently,” says Wayne Stables, a visual effects supervisor at Weta Digital, who has spent the past year or so working on new technology. “Rather than starting at the beginning with hardware rendering, we started at the end—with our renderers now. We decided to follow the same physically based lighting and shading models as we do in [Pixar’s] RenderMan. We have the equivalent custom surface materials. Our image-based lights work the same in [our] Gazebo and in RenderMan.”

As a result, a lighting artist can open a Maya scene, create an area light while working in Gazebo, watch the scene render in real time, send it to RenderMan, and see a frame that looks nearly identical. It’s a new tool, one that Stables expects will change the way artists work.

“It helps us hugely in two different ways,” Stables says. “The most obvious way is in character lighting. But, the interesting thing is with environments. With our large CG environments, iterations have become painful to do.” That’s true even though Weta Digital has close to 30,000 processors in its renderfarm.

“Now, we can experiment with different lighting ideas without waiting days for frames to come back from the render wall,” Stables says, “and that’s huge for us. I would like it to become the tool that lighting technical directors use throughout the day for their creative work. Any time you move a light and have to wait to see the effect, you lose your train of thought. Gazebo gives instant feedback.”

Gazebo runs on GPUs and CPUs. “Gazebo itself is a renderer that runs as much as it can on a GPU,” Stables points out. “It has an API layer that goes to the renderer. But, all our creatures have a ‘provider’ that understands animated meshes. It runs on the CPU and reformats data into a big stream that makes Gazebo happy.”

Although Gazebo can get close to a final render, limitations on graphics memory prevent it from achieving absolute perfection. “We have the shadows and highlights you’d expect to see,” Stables says. “We get the lighting angles and lighting color. But, in RenderMan we render subsurface models and texture maps with pore detail. We don’t push Gazebo to that level.”

Level-of-detail solutions helped with the memory limitation. “We are lighting mammoth CG environments by using lower-resolution geometry,” Stables says. And with the same idea in mind, previs artists have used Gazebo, as well. “I want to light shots in previs and feel confident that those creative decisions will flow through to the end,” Stables says. “If we make our lighting accurate in previs, it will go all the way through.”
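The level-of-detail idea Stables describes can be sketched as picking a geometry resolution from how large an object appears on screen. This is an illustrative sketch with made-up thresholds and function names, not Weta's actual system:

```python
def pick_lod(bounding_radius, distance, thresholds=(0.5, 0.1, 0.02)):
    """Choose a level of detail (0 = full resolution) from the rough
    angular size an object covers on screen. Objects that fill little
    of the frame get lighter geometry, freeing graphics memory when
    lighting mammoth environments.
    """
    angular_size = bounding_radius / max(distance, 1e-6)
    for level, threshold in enumerate(thresholds):
        if angular_size >= threshold:
            return level
    return len(thresholds)  # coarsest proxy geometry
```

A hero rock one meter away renders at level 0, while the same rock 200 meters into the cavern drops to the coarsest proxy.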

In fact, Stables imagines that artists all along the pipeline might use Gazebo. “When you’re building a model and doing layouts, how you choose to light the environment is crucial,” he says. “You want to put lights inside to see what works and what doesn’t. Gazebo will let us do that. And lighting is crucial in animation. In the same way that an actor works with a DP [director of photography], an animator might wonder what a particular pose would look like if he scooted the key light around.

“It becomes a holistic creative process,” Stables adds. “We’ve always done things in a more linear fashion. Moving lighting from the end to the front makes so much sense. If we can consider lighting as we’re blocking animation and laying out environments…that would be fantastic. It’s a bit of a brand-new world for us. I’m extremely excited about it.” — Barbara Robertson

Thus, Bilbo and the dwarves, with occasional and timely help from Gandalf, begin a journey that takes them through a series of encounters with Orcs, elves, trolls, wolves, stone giants, and other creatures in a variety of environments—open spaces, underground cities, mountains, forests, waterfalls, and a cave where Bilbo meets Gollum (Andy Serkis).

Filming took place on set for a year and a half and on location for two and a half months. “On most films, you’re on set for a couple weeks, or, if you’re doing the whole show, six or eight weeks,” says Smith, who was the on-set visual effects supervisor for the second unit. “We had 195 shooting days.”

Capturing Reality

As he had done for The Lord of the Rings trilogy, Jackson took advantage of New Zealand’s natural landscape by taking the actors and the crew into every corner of the country, as Saindon put it. “We traveled from the top of the North Island to the bottom of the South Island in a huge convoy,” he says. “When you lined up all the main-unit trucks in a row, we had a kilometer’s worth of vehicles.”

Back at home, in a departure from how they shot the previous trilogy, the filmmakers decided not to use miniatures for The Hobbit. “Our technology got to the point around the time of Avatar where we could dispense with miniatures entirely,” Aitken says. “We could achieve all the complexity of a natural environment in CG.”

Thus, when they weren’t on location, the director, actors, and crew filmed in huge sets built by Jackson’s Weta Workshop and conceived with help from the two concept designers who had worked with Jackson on the Rings trilogy: John Howe and Alan Lee, who had illustrated the 1997 edition of “The Hobbit” and the 1991 edition of “The Lord of the Rings.” Weta Digital artists matched and extended sets with computer graphics, and built entirely digital environments.

“There were lots of sets, and some were huge,” Saindon says. “And, we needed to have everything correct in stereo 3D. On Rings, if something was wrong, we could stick a tussock in front of it and no one would see it. Here, we didn’t have that ability because of the 3D.”

In the past, the on-set crew would survey sets and place markers that layout artists and modelers could later use to place a virtual camera in 3D layouts and build set extensions. For this film, the crew scanned and photographed each set at night after everyone had left, to collect accurate dimensions and take photographs for textures and reference.

“We had huge environmental builds with no hard lines,” Saindon says. “There were no right angles in the Shire. There was a lot of detail in the Rivendell sets. And, thousands and thousands of pieces of artwork in the goblin caverns, all with organic shapes. So, we did a lot of scanning to capture all the angles. We put the scanned data into JPG files and created QuickTime VRs. We could spin the QuickTime VRs around the pivot point from where we created the scan. So, the camera guys could click on any pixel and get world-space coordinates. They could find the markers and get the information without having to open up the whole scan.”
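The pixel-to-world lookup Saindon describes can be sketched for a spherical scan panorama: a clicked pixel maps to a direction from the scanner's pivot, and the range the scanner measured for that pixel places the point in world space. The equirectangular projection and the function names here are assumptions for illustration:

```python
import math

def pixel_to_world(px, py, width, height, depth, scan_origin=(0.0, 0.0, 0.0)):
    """Map a pixel in an equirectangular scan panorama to a world-space
    point, assuming the image covers 360 degrees horizontally and 180
    vertically, centered on the scanner's pivot; `depth` is the range
    the scanner measured for that pixel.
    """
    # Pixel -> spherical angles
    azimuth = (px / width) * 2.0 * math.pi         # 0 .. 2*pi around the pivot
    elevation = (0.5 - py / height) * math.pi      # +pi/2 (up) .. -pi/2 (down)
    # Spherical angles -> unit direction vector
    x = math.cos(elevation) * math.cos(azimuth)
    y = math.cos(elevation) * math.sin(azimuth)
    z = math.sin(elevation)
    # Push out along the ray by the measured depth
    ox, oy, oz = scan_origin
    return (ox + depth * x, oy + depth * y, oz + depth * z)
```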

New Perspectives

Filming in stereo 3D created a second challenge for the crew: They couldn’t use forced perspective, a traditional camera trick, to create the illusion that Gandalf and the elves were bigger than Bilbo and the dwarves, even though the actors were of similar sizes.

“We tried, but it didn’t work,” Saindon says. Instead, they built two sets. To create the illusion, actors playing Bilbo and the dwarves performed their scenes on a detailed set that was 30 percent larger than normal. Ian McKellen and the actors playing the elves, on the other hand, worked on a nearby greenscreen set. Jackson operated the camera on the detailed set. The camera on the greenscreen set was slaved to the first.

“We scaled the camera on the second set by 30 percent,” Saindon says. “When Peter moved his camera, he could see the images from both cameras composited together. He could see Ian [McKellen] in with the dwarves at the proper scale. We went through a few hoops to get the [stereo] convergence right; we controlled that separately to have Gandalf also in the right 3D depth. But [motion-control supervisor] Alex Funke got it working. We did some great shots with Ian taking a cup from a dwarf who is running around Ian as Ian moves around the set. Obviously, we had to do the roto later. But, we shot it as one pass, rather than doing a greenscreen shoot later.”
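The slaved-camera relationship is simple to state: because the dwarves' set was built 30 percent oversize, the camera on the greenscreen stage mirrors every move of the master camera with its translation scaled by the same factor, so the two images composite at matching scale. A minimal sketch with hypothetical names (real motion control also handles lens, interocular, and convergence, as Saindon notes):

```python
def slave_camera_transform(master_pos, master_rot, scale=1.3):
    """Compute the slaved camera's transform on the second stage.

    Translation is multiplied by the set-scale factor so the actor on
    the normal-sized stage reads as larger next to performers on the
    oversized set; orientation is copied unchanged, since rotation is
    scale-invariant.
    """
    scaled_pos = tuple(c * scale for c in master_pos)
    return scaled_pos, master_rot
```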

Because the two sets were so large, all the crew wore earpieces to communicate. “Peter had a speaker so he could talk to anyone at any time. If he had to give a quick note, he could do that over the intercom. It was like the voice of God,” Saindon says.

The crew employed a similar process for shots with the “Great Goblin,” and added performance capture to the system to use actor Barry Humphries’ movements as the basis for the CG character.

“Barry needed to be three times the size of the dwarves,” Saindon says. “So, instead of putting Barry on the throne with the dwarves in the same set, it was better to make a smaller set next to it and capture him in his own volume. Then, we used the same sort of simulcam thing that we had for Avatar, but on a bigger scale. We captured everything; more capture than we had attempted for Avatar. It allowed Peter to control the live-action cameras at the same time as the motion capture and get the framing.”

To complete the picture, the visual effects crews surrounded the digital goblin with a CG environment as they would for other creatures and characters throughout the journey.

Stereo 3D and the increased detail in sequences shot at 48 fps meant Weta Digital artists had to match sets perfectly to extend them digitally. Modelers added walkways, bridges, and hundreds of individual elements by matching and expanding Lidar scans of the set.

New Tech

Digital environments typically started with sketches by Lee and Howe in the art department. “They’d draw the environments behind the sets,” White explains. “What Peter wanted was a sense of travel, of going through different zones.” Those zones ranged from the goblins’ cities beneath the ground to treetops on top of a mountain.

Among the largest digital environments, though, were the set extensions for the goblin caverns. “The goblins built cities by scavenging things, so it has a shantytown look,” White says. “It is a disgusting environment. There’s blood on things.”

An environment team built the overall shape of the interior, the cavern, and the rocks. Modelers added walkways, bridges, and individual elements by matching and then expanding Lidar scans of the set.

“The difficult thing was keeping the same mood,” says Revelant. “The same design. John [Howe] and Alan [Lee] were fantastic. They’d give us drawings from different views and say, ‘Try this.’ But, there was a lot of stuff and it all had to work together. There were no safe areas. We might have one CG shot and the next was live action. You’d be surprised by how much more detail you can see with 48 frames per second. In 24 fps, you had your friend motion blur helping you. Now, everything is very sharp. And, the eye of the viewer always had the live-action set to compare. Textures were especially hit by this.”

To give the sordid underground environment the required organic look, texture painters worked with scanned and photographed set pieces, artwork, and a distinctive pumice rock near Jackson’s house that he particularly liked. “Instead of treating individual assets, the artists textured the environment as a whole using triplanar textures,” White says. That is, rather than mapping a texture in one direction on a model, which can cause the texture to stretch as it curves around a shape, they projected textures from three sides (front, side, and above) and used normals to determine which texture contributed most to a particular vertex. “Say we wanted a marbly texture,” White says. “We could have a base color that we repeat over a large surface, then another offset layer, and dirt, grime, and moss layers that we repeat and blend. It gives us a lot of freedom. If we had a fixed rock in a cavern, we could adjust the scale in high-resolution textures.”
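The triplanar projection White describes can be sketched as three axis-aligned texture samples blended by how squarely the surface normal faces each axis. A minimal illustration (production shaders typically sharpen the weights and evaluate per pixel, not per vertex):

```python
def triplanar_weights(normal):
    """Blend weights for the three axis-aligned texture projections.

    The more directly a surface faces an axis, the more that axis's
    projection contributes; weights are normalized to sum to 1.
    """
    ax, ay, az = (abs(c) for c in normal)
    total = ax + ay + az
    return (ax / total, ay / total, az / total)

def triplanar_sample(tex_x, tex_y, tex_z, point, normal):
    """Sample three projected textures and blend by the surface normal.

    tex_x/y/z are callables taking 2D (u, v) coordinates; each
    projection uses the point's other two coordinates as its UVs,
    so nothing stretches as the surface curves away from one axis.
    """
    x, y, z = point
    wx, wy, wz = triplanar_weights(normal)
    return (wx * tex_x(y, z)    # projection along X uses (y, z)
            + wy * tex_y(x, z)  # projection along Y uses (x, z)
            + wz * tex_z(x, y)) # projection along Z uses (x, y)
```

A surface facing straight up takes everything from the top projection; a 45-degree slope blends two projections evenly.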

A far sparer cavern, more a cave really, with sheer walls and blue-gray stone, provides the location for Bilbo’s first meeting with Gollum. On set were actors Andy Serkis [Gollum] in a full performance-capture suit with facial capture rig, and Martin Freeman [Bilbo].

“We did the captures down the street in a massive warehouse,” Capogreco says. “Gollum’s rock, where he dragged the goblin out to eat it, was completely CG. Anything beyond that finger-like rock was 100 percent CG. The walls and the main area where Bilbo and Gollum riddle, and the rock you see Gollum come over, were built on set, but everything behind a meter or two in any direction was digital. Some shots were all-CG because it was easier.”

In postproduction, the artists removed Serkis from the plates, rebuilt the environment behind him, and inserted the digital Gollum. “Each of the 200 shots needed a level of paint work,” Capogreco says. “We had clean plates we could use to help the paint artists. They also projected stills onto Lidar scans. But, we’re talking 48 fps and stereo. We had to paint, and paint in depth.”

At top, the Rivendell set on the greenscreen stage behind actors Cate Blanchett and Ian McKellen has a tiny waterfall, but most of the waterfalls were digital. At bottom, Weta Digital artists replaced the greenscreen with a digital environment and extended the set. The crew did not film any miniatures.


Above ground, the action takes place in rolling hills, stony mountains, and the beautiful Rivendell, which was a major location in the Rings trilogy. “We did all the wide shots and set extensions in CG, but we wanted Rivendell to be recognizable as the same location in The Lord of the Rings,” says Aitken. “We looked at the photographs of the miniatures shot for Rings. And, we had Alan Lee and John Howe back. As soon as they wrapped up on the shoot, they moved over to [Weta Digital] and became our in-house art department. We relied on their in-depth knowledge of what Middle-earth looks like.”

On set, Gandalf, Bilbo, and the elves moved through elegant Rivendell interiors built on set. But, the most dramatic element in Rivendell is the water flowing through the city and streaming down in waterfalls. “We built libraries of simulated waterfalls,” White says. “We realized we could use the city waterfalls in the surrounding environment and the compositors could place them in the backgrounds.”

To create the water, the crew used the studio’s in-house simulation tool called Synapse. “We did a lot of work on Synapse for Tintin and then refined it for The Hobbit,” Aitken says. “The water was pretty much all-CG. When the company first comes into Rivendell on their horses, the waterfalls in the distance and the water flowing around them and under the bridges are all-CG. The [effects team] did multiple simulations for the body of the water, and often several simulations generated separately worked together to create these water events. It had to look beautiful. This is where Bilbo will want to spend the last years of his life.”

Water much more chaotic than the calm waters of Rivendell also played a part in the Misty Mountains where the dwarves found themselves in the middle of a battle between stone giants. “It was a big dynamics effects sequence,” White says. “With lots of rigid-body dynamics, rain, lightning, volumetrics, atmospherics, and digital doubles of the dwarves riding on the legs of stone giants. It was at night, and Peter wanted dark, strange shapes that looked like creatures of the mountains silhouetted against the sky.”

Effects crews sent fluid simulations of rain, mist in the air, and mist flying off digital rocks. Those rocks, tossed by CG stone giants, fell, hit other rocks, and broke apart with motion created through rigid-body dynamics.

At top, CG stone giants separate from digital mountains amidst simulated atmosphere and toss digital rocks that break apart using simulated physics. At bottom, Weta Digital created a new tree-growing program to match and extend the artificial trees on sets and to grow forests and foliage in all-digital environments.

Weta’s Synapse also played a part in a sequence in which three trolls tied several dwarves to a spit in an attempt to roast them over a fire. The trolls were CG with animation based on performance capture. The dwarves were real. The fire was CG. “We wrote a new effects plug-in for [Side Effects’] Houdini for the fire, which is handled within the Synapse framework,” Capogreco says.

Growing a Forest

After Bilbo and the dwarves leave the trolls, they encounter the wizard Radagast, who helps them escape from Orcs and the wolf-like Wargs (see “Of Gollum and Wargs and Goblins, Oh My!” on page 14) by leading the attackers on a merry chase in his rabbit-drawn sleigh. Radagast is real. The sleigh and rabbits are keyframed.

“We had a sled towed by an all-terrain vehicle,” Aitken explains. “But as we extended the forest and replaced the greenscreen set with a wider forest, it became easier to replace the trees on set with digital versions.”

To create the trees, modelers used an in-house tree-building program called Lumberjack developed specifically for this film. “We had two problems on Hobbit,” Revelant says. “One was to create nice-looking trees quickly. The other, more complex problem was to extend the trees already built on set. We couldn’t create a fully procedural system because it wouldn’t allow us to exactly match what they had built.”

Instead, they developed a procedural software program that artists could control with an artist-driven modeling tool for drawing curves and generating geometry, and a growth-based procedural tool for adding branches and leaves. All the trees grow with dynamics built in.

“We scanned all the trees built on set to have a good base, and from there extended the branches by drawing lines in [Autodesk’s] Maya,” Revelant says. “And, we added branches. Then, on top of that, we developed a procedural system based on Dr. P’s [Dr. Przemyslaw Prusinkiewicz] research to grow branches and leaves that compete for resources.”

The artists could define settings for an element, like an oak tree, and watch it grow based on parameters that might include age, amount of shade, and so forth. But, the procedural system is reality-based, and the artificial trees were an artist’s version of reality.
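The parameters described (age, amount of shade) can be illustrated with a toy resource-competition model: each bud collects light, shaded buds collect less, and only buds that gather enough light extend their branch. This is a made-up sketch in the spirit of the cited research, not Weta's Lumberjack:

```python
def grow(buds, years, shade):
    """Toy resource-competition growth.

    `buds` holds each bud's available light; each simulated year a bud
    receives that light reduced by the shade factor. Buds that collect
    enough light extend the branch and split into two dimmer buds;
    shaded buds stay dormant. Purely illustrative.
    """
    segments = 0
    for _ in range(years):
        new_buds = []
        for light in buds:
            received = light * (1.0 - shade)
            if received > 0.5:
                # Enough light: grow a branch segment, tip splits in two
                segments += 1
                new_buds.extend([received * 0.8, received * 0.8])
            else:
                # Too shaded: bud stays dormant this year
                new_buds.append(light)
        buds = new_buds
    return segments, buds
```

Two years of full sun turn one bud into three branch segments; under heavy shade the same bud never grows at all.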

“The trees on set might not grow like a real tree,” Revelant says. “You might be modeling a tree and see the main branch crash into another tree. So, we spent time making sure the artists could use the curves to move a branch without destroying the secondary branches.”

For background and non-specific trees, the modelers used an iPad application called TreeSketch by Steven Longay, a PhD student at the University of Calgary who studies under Professor Prusinkiewicz. TreeSketch exports 3D models to FBX files. “The general idea is the same,” Revelant says. “You draw a curve, define a radius of resources the plant has, and grow it procedurally.”

Radagast’s forest had oddly shaped trees, which hint at the evil that will visit Greenwood Forest in the second and third films.

The trees the dwarves and Bilbo climb during a sequence near the end to escape the Wargs and Orcs are more normal-looking pine trees.

“We had done all this work on the trees and realized we hadn’t done pine needles,” Revelant says. “And we needed to have pine needles very close to camera. They couldn’t be the same as leaves based on geometry instancing. We needed curves. So, we had a coder on our team spend a lot of time trying to produce a lot of pine needles in a way that wouldn’t kill the scene. They had to render properly, and we needed to be able to change the density without going back to modeling. Lumberjack transports all the information connected with the tree, including the effects all the way to rendering.”

During that sequence, giant eagles rescue Bilbo, Gandalf, and the dwarves from the Orcs and Wargs by plucking them out of the pine trees to which they’re clinging. The Orcs and Wargs are digital, as are the eagles, and sometimes digital doubles replace the actors. Even so, Kevin Smith, who supervised the sequence, didn’t consider the digital characters the most challenging part of the work.

“We had the eagles pretty well under control,” Smith says. “And we shot the characters on stage. The hardest part was the environment. Ideally, you want to build up depth by having a near-ground, super-near-ground, mid-ground, action, deep background, far background. But they were on a set, which was supposed to be on a mountaintop in a big valley. So, we were stuck with foreground and deep background. We didn’t want to get into having big clouds because that would have turned shots with giant eagles into a huge volumetric raytraced thing.”

Moreover, the story evolved during postproduction. “It was hard for us to get into a groove,” Smith says. “We had shots where we put Wargs into a plate. Shots where all we got was a blank tile that said, ‘Yazneg [Orc warrior] advances on Warg’ so it was all-CG. Everything was a one-off. It was a mishmash of about any combination you’d want to pick, and the edit evolved. It turned out to be a bit of a different story than originally shot. But, we’re used to that. Pete [Jackson] knows that if he wants to change something and it means we do all-CG shots, it’s not a problem. That’s what we’ll do.”

Indeed, The Hobbit is, as was The Lord of the Rings trilogy, a testament to the 850 artists who perfected their art and craft well enough to turn a director’s vision into a believable world on screen. For anyone born after 1990, the world that Weta Workshop and Weta Digital created is Middle-earth. It’s hard to imagine another.