Efficiency Experts
Volume 31, Issue 12 (Dec. 2008)

Many a producer, director, or artist dreams of one day creating his or her own feature film, yet few—if any—ever attempt such an ambitious feat, let alone accomplish it. But this month, a group of determined digital artists from Atlanta-based Fathom Studios will see the result of their long, difficult journey play out on the big screen when the independently created 3D animated feature film Delgo opens in theaters nationwide.

Delgo is a story about love and betrayal in a magical world that is torn apart by the mutual prejudices of two races who inhabit it: the winged Nohrin, masters of the skies, and the terrestrial Lockni, who harness the mystical powers of the land. Here, a young boy finds forbidden love with someone outside his own kind, and together they expose evildoers and unite a kingdom.

The concept for Delgo, from germination to realization, unfolded over a 12-year period. “When I joined Fathom in 1997, Marc had already been talking about doing a movie,” says Warren Grubb, animation director/VFX supervisor, about the ambitious desire of Marc Adler, Fathom president/executive producer. “Together, we toyed with the idea for a few years, drawing pictures and thinking of story concepts. Then, in 2000, we made the decision to do it.” By 2003, a star-studded cast had been selected, and production soon followed.

“We were flying by the seat of our pants. We had never done anything like this before, and we had to deal with limitations in terms of bandwidth and available technology because we’re an independent facility,” says Adler (see “Independent Minded,” pg. 4). “However, as we later discovered, ‘independent’ does not mean ‘inexpensive.’” Nevertheless, Fathom was embarking on this journey with far less money in its pockets than the typical Hollywood studio.

Ready, Set, Delgo
With a small budget for a project of this size, Fathom had to be fiscally conservative and resourceful in terms of the software and hardware chosen for the film’s production. In the end, the studio expanded its existing Autodesk Maya pipeline, which had been used for short-form television projects (such as station tags) and interactive Web content. “Warren and some of the team devised a number of technologies and methods that allowed us to be very efficient,” says Adler. “That is one of the main reasons we were able to do a movie of this quality and scope in the time that we did and with the resources we had.”

In fact, Maya was the main tool for just about every aspect of the content creation, and the team took advantage of MEL to write hundreds of custom scripts for tasks ranging from the simple to the complex.


Artists modeled and animated two separate races of Delgo’s humanlike characters in Autodesk’s Maya, giving each a distinctive look.

Nevertheless, the group’s early attempts at modeling the film’s characters entirely on the computer (from illustrations) failed to produce a cohesive look in a timely manner. Instead, an artist sculpted the main character models in clay. Then, using a Cyberware scanner, the crew digitized the maquettes and turned the point data into usable meshes. “There is software for extrapolating NURBS, but it is expensive. So we just wrote scripts in Maya to help with the process,” says Grubb. In the end, though, the group used polygons rather than NURBS to facilitate rendering and deformation.

A morphing system, created in-house, enabled the team to generate crowds, while MEL scripts provided the necessary AI to direct them.
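Fathom’s actual crowd scripts are proprietary MEL, but the kind of lightweight, script-driven crowd direction the article mentions can be sketched in a few lines: each agent seeks a shared goal while pushing away from neighbors that crowd it. The function name and parameters below are illustrative assumptions, not the studio’s code.

```python
# Hypothetical sketch of script-driven crowd AI: each agent steps toward a
# shared goal while keeping a minimum separation from nearby agents.
# All names and parameter values are invented for illustration.

def step_crowd(positions, goal, speed=0.5, min_sep=1.0):
    """One simulation step in 2D: each agent moves `speed` units toward
    `goal`, plus a repulsion push away from any neighbor closer than
    `min_sep`. Positions are (x, y) tuples."""
    new_positions = []
    for i, (x, y) in enumerate(positions):
        # Seek: unit vector toward the goal, scaled by speed.
        dx, dy = goal[0] - x, goal[1] - y
        dist = (dx * dx + dy * dy) ** 0.5 or 1.0  # avoid division by zero
        vx, vy = speed * dx / dist, speed * dy / dist
        # Separation: push away from neighbors inside the comfort radius.
        for j, (ox, oy) in enumerate(positions):
            if i == j:
                continue
            sx, sy = x - ox, y - oy
            d = (sx * sx + sy * sy) ** 0.5
            if 0 < d < min_sep:
                push = (min_sep - d) / d
                vx += sx * push
                vy += sy * push
        new_positions.append((x + vx, y + vy))
    return new_positions
```

Running this per frame over a few hundred morph-varied character instances is one simple way a small scripting team can direct a crowd without a dedicated crowd package.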

The film’s humanlike characters were animated by hand. “We knew we had to keep the characters as flexible as possible so they could be rigged quickly and changed easily,” explains Grubb. “We chose smooth-skinning almost exclusively, avoiding other deformers, such as blendshapes, clusters, and wire deformers, so we could use scripts to easily copy binding information.” Instead, the artists used influence objects and extra joints to achieve the necessary deformation.

“We were able to drive joints, influence curves, and influence meshes with set-driven keys and expressions to get everything from muscle bulging to subtle facial wrinkling,” adds Grubb. “Keeping all the meshes within smooth-skin clusters made it easy to, for instance, change surface topology extensively on a character and transfer all the binding information with a button click.” By using a master rig and adjusting the base bones to fit a particular character, the animators could rig a character in a single day rather than in weeks.
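The “button click” transfer of binding information can be illustrated with a nearest-vertex weight copy: when a character’s topology changes, each new vertex simply inherits the joint weights of the closest vertex on the old mesh. This is a minimal sketch of the general technique, not Fathom’s unpublished MEL tools; the joint names and sample points are invented.

```python
# Hypothetical sketch: transferring smooth-skin weights between two mesh
# versions by nearest-vertex matching, in the spirit of the script-driven
# binding transfer described in the article.

def nearest_index(point, points):
    """Index of the entry in `points` closest to `point` (squared distance)."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(range(len(points)), key=lambda i: sq_dist(point, points[i]))

def transfer_weights(src_verts, src_weights, dst_verts):
    """Copy per-vertex joint weights from a source mesh to a retopologized
    mesh: each destination vertex inherits the weight dictionary of its
    nearest source vertex."""
    return [src_weights[nearest_index(v, src_verts)] for v in dst_verts]

# Tiny example: a three-vertex source strip bound to two joints.
src_verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
src_weights = [{"hip": 1.0}, {"hip": 0.5, "knee": 0.5}, {"knee": 1.0}]
dst_verts = [(0.1, 0.0, 0.0), (1.9, 0.0, 0.0)]
new_weights = transfer_weights(src_verts, src_weights, dst_verts)
```

Production tools typically refine this with surface-space or UV-space matching, but nearest-vertex lookup is the core idea that makes rebinding a topology change cheap.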

Many of the characters can fly, so the wings and motion blur were big considerations in the production. The wings had to be large enough to look capable of lifting the character, yet appear light enough that the character did not seem weighed down by the massive appendages. To strike that balance, the artists added veins with large, translucent cells. “We experimented with a lot of settings to get the best blur, and ended up rendering on ones or twos, depending on the scene, to get a good insect-like speed of flapping,” explains Grubb.
 
One model that especially challenged the group was not a character, but a palace throne room, which had to be destroyed in a final sequence. “It’s one thing to model a structure for animation, but when it came time to break parts of it off, we realized there were a lot of subtle things that had to go into making it believable,” Grubb says.

First, the team cut out pieces of geometry to make the “parts” seem as if they were broken out, but this produced a “cheap, toy appearance,” Grubb recalls. The artists then created subsequent versions by adding more substructure—such as different types of stone and mortar, reinforcement bars, bricks, and even bits of broken and cracked plaster and stone around the edges.

As for texturing, the artists began the process by creating a library of images that formed the basis for the color palettes and textures that would be representative of the two major character races. For characters, they wrote MEL scripts to help with the UV layout process, and used Adobe’s Photoshop to paint the surfaces, adding effects and filters to give them natural variety and a painterly hue and tone.

Environmental Impact
The epic-size environments and sweeping vistas are lush, beautiful, and colorful, with a soft fantasy-like appearance. Although they may look like matte paintings, most of the backgrounds began as 3D models and were later intensified with painted detail. “The way we lit the backgrounds made them look more painterly. We took things like a sky and clouds on mattes, and placed them onto a dome inside Maya, then color-corrected the layer. This gave us one massive painting that we tweaked for various scenes,” says Adler. “Again, we used what we had and modified it as much as possible to take things as far as we could.”

The environments often are teeming with foliage, grasses, and moss, all grown with Joe Alter’s Shave and a Haircut. Procedural textures were used exclusively for these elements, making them resolution-independent for both close and long shots.


The majority of the environments are 3D models enhanced with painted detail. Lighting intensified the aesthetic.
 
The film also contains a good amount of clouds and smoke, created with Maya Fluids. As Grubb explains, the characters had to move through each—the flying Razorwings through the clouds, the ground dwellers through the smoke and fire. To avoid the problems inherent in rendering meshes within Maya containers, the artists used depth and normal renders of the characters and set elements to place them within the volumes during compositing, which occurred within Adobe’s After Effects and Apple’s Shake. This also gave them more control while manipulating the density and placement of those effects.
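The depth-render trick described above amounts to a per-pixel comparison at comp time: wherever the character’s depth sample is nearer than the volume’s, the character shows through the cloud; elsewhere the volume occludes it by its alpha. The sketch below assumes flat pixel lists and invented values; it illustrates the general technique, not the studio’s actual Shake or After Effects setup.

```python
# Hypothetical sketch of depth-based compositing: deciding per pixel whether
# a character sits in front of or behind a rendered cloud/smoke volume.

def composite_with_depth(char_rgb, char_depth, vol_rgb, vol_depth, vol_alpha):
    """Per-pixel composite: where the character is nearer than the volume
    sample, the character shows through; otherwise the volume covers it,
    blended by the volume's alpha. All inputs are flat lists of equal length;
    colors are (r, g, b) tuples, depths and alphas are floats."""
    out = []
    for c, cd, v, vd, a in zip(char_rgb, char_depth, vol_rgb, vol_depth, vol_alpha):
        if cd < vd:  # character in front of the volume sample
            out.append(c)
        else:        # volume in front: blend volume over character by alpha
            out.append(tuple(a * vc + (1 - a) * cc for vc, cc in zip(v, c)))
    return out

# Two-pixel example: a red character against a gray, half-opaque cloud.
char_rgb = [(1.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
char_depth = [1.0, 5.0]            # first pixel is in front of the volume
vol_rgb = [(0.8, 0.8, 0.8)] * 2
vol_depth = [3.0, 3.0]
vol_alpha = [0.5, 0.5]
result = composite_with_depth(char_rgb, char_depth, vol_rgb, vol_depth, vol_alpha)
```

Because the decision happens in the compositor, the density and placement of the cloud can be adjusted without re-rendering the fluid simulation, which is exactly the control the article describes.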

The group also used a variety of procedural techniques, particularly in one scene that proved to be deceptively difficult. “We started out just modeling tubes of caverns, but they looked too simple and bland; to make them more believable, we added lots of rock and cracks, but the sets were getting incredibly dense,” says Grubb. “So we ended up using a procedural particle replacement to add a variety of stones to the walls of the caverns at render time, so the models were light for animation and base lighting, and would render quickly with the instances added in the final frames.”
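The render-time particle replacement Grubb describes keeps the animation scene light by storing only sample points, then deterministically instancing heavy stone geometry onto them when frames are rendered. A seeded random generator makes the scatter identical every frame and every render pass. The variant names and ranges below are illustrative assumptions, not Fathom’s setup.

```python
import random

# Hypothetical sketch of render-time particle replacement: scatter stone
# instances over cavern-wall sample points. A fixed seed makes the scatter
# repeatable, so the heavy geometry exists only at render time.

STONE_VARIANTS = ["stone_a", "stone_b", "stone_c"]  # illustrative names

def scatter_stones(wall_points, per_point, seed=42):
    """For each (x, y, z) sample point on the wall, emit `per_point`
    instance records: (variant name, jittered position, uniform scale).
    The same seed always produces the same scatter."""
    rng = random.Random(seed)
    instances = []
    for px, py, pz in wall_points:
        for _ in range(per_point):
            variant = rng.choice(STONE_VARIANTS)
            jx, jy, jz = (rng.uniform(-0.1, 0.1) for _ in range(3))
            scale = rng.uniform(0.5, 1.5)
            instances.append((variant, (px + jx, py + jy, pz + jz), scale))
    return instances
```

Animators and base-lighting artists work with the bare wall; only the final renderer expands the instance list into full geometry.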

Taking Control
Due to financial considerations, the group used the renderer inside Maya. “It’s free,” notes Adler. “But we did have to be more creative in getting the overall look we wanted out of that renderer.” So they leaned heavily on the lighting team to come up with the desired look for the movie.

Because the tool did not offer a lot of flexibility out of the box, the group rendered in layers, using a good deal of RGB and normal maps to control the images in comp. “This gave us a lot of control over the color and effects, and it let us achieve a uniform, painterly look via Shake macros, which let us dictate the light bleeding, blooming, and other effects,” adds Adler.
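The RGB-map control Adler mentions typically works by packing three element mattes into the red, green, and blue channels of a utility render, so the compositor can grade each element independently from a single extra pass. The sketch below assumes that convention with invented gain values; it shows the general technique, not the studio’s Shake macros.

```python
# Hypothetical sketch of RGB-matte grading in comp: a utility render packs
# three element mattes into the R, G, and B channels, and each element gets
# its own color gain, weighted by its matte coverage at every pixel.

def grade_with_rgb_matte(beauty, matte, gains):
    """beauty, matte: flat lists of (r, g, b) pixels; gains: three (r, g, b)
    gain triples, one per matte channel. Each pixel is scaled by each
    element's gain in proportion to that element's matte coverage."""
    out = []
    for pix, m in zip(beauty, matte):
        gained = list(pix)
        for channel, gain in enumerate(gains):
            coverage = m[channel]
            # Blend between untouched (coverage 0) and fully gained (coverage 1).
            gained = [v * (1 - coverage + coverage * g)
                      for v, g in zip(gained, gain)]
        out.append(tuple(gained))
    return out

# One-pixel example: the red matte channel fully covers this pixel, and that
# element's red gain is doubled; the other two elements are left untouched.
graded = grade_with_rgb_matte(
    [(1.0, 1.0, 1.0)],                 # beauty pixel
    [(1.0, 0.0, 0.0)],                 # matte pixel: element 1 only
    [(2.0, 1.0, 1.0), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0)])
```

One utility pass thus buys three independent color controls in comp, without re-rendering the beauty layer.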

The open-source DrQueue software automated and managed the rendering process, which ran on a Linux-based render farm of 1000 CPUs, some donated by AMD.

Perhaps the biggest technical investment the group made was in the creation of a proprietary Web-based asset management and collaboration tool, dubbed Storyline, that tracked the film’s 1500 shots from layout, through animation and compositing, to final output. The visual system enabled any artist, regardless of location, to browse a series of thumbnail images and access the desired sequence. By clicking on a shot in that sequence, an artist was presented with the shot along with a range of information about it.

“We could track the shots through the different steps in the pipeline, and with a button click, we could look at a shot and determine where it was in the overall process,” explains Grubb.
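At its core, a tracker like Storyline maps each shot to its position in an ordered pipeline, so a single lookup answers “where is this shot?” The real Storyline was a proprietary web application; the stage names and class below are invented to illustrate the idea.

```python
# Hypothetical sketch of a Storyline-style shot tracker: each shot advances
# through an ordered pipeline, and status queries are a single lookup.
# Stage names and shot IDs are illustrative, not Fathom's actual data.

PIPELINE = ["layout", "animation", "lighting", "compositing", "final"]

class ShotTracker:
    def __init__(self):
        self.shots = {}  # shot id -> index of current stage in PIPELINE

    def add_shot(self, shot_id):
        self.shots[shot_id] = 0  # every shot starts in layout

    def advance(self, shot_id):
        """Move a shot to the next pipeline stage, stopping at 'final'."""
        self.shots[shot_id] = min(self.shots[shot_id] + 1, len(PIPELINE) - 1)

    def status(self, shot_id):
        """Name of the stage the shot is currently in."""
        return PIPELINE[self.shots[shot_id]]

    def shots_in_stage(self, stage):
        """Sorted list of all shots currently sitting in `stage`."""
        idx = PIPELINE.index(stage)
        return sorted(s for s, i in self.shots.items() if i == idx)
```

Wrapping this state behind a web front end with per-shot thumbnails is what turns a simple table like this into the “button click” overview Grubb describes.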

By focusing on the end result rather than on a particular tool, the group at Fathom Studios was able to turn a dream into reality. 

Karen Moltenbrey is the chief editor of Computer Graphics World.