Epic Q&A Series: Materials
Barbara Robertson
May 30, 2013

Blue Sky Studios’ Materials Supervisor Brian Hill has been in the thick of the action on the animation studio’s last five feature films, where he has led teams working on – in his words – “two and a half” movies. Epic, the studio’s latest animated feature, shrinks a teenager and places her in a natural environment filled with trees, grass, flowers, and magical creatures. Thus, the materials department surfaced geometry both for a human-sized world and for that same world as seen through the eyes of two-inch-tall characters, then sent the assets to CGI Studio, Blue Sky’s proprietary, physics-based rendering software.

Here, Hill discusses his latest work on Epic with CGW Contributing Editor Barbara Robertson. (Also read a Q&A with Chris Wedge in the May/June 2013 issue of CGW, and another with CTO Carl Ludwig online and accessible via the May/June 2013 issue box on cgw.com. And, look for a fourth with co-producer Michael Travers, coming soon.)

When did you and the materials department begin working on Epic?

Chris [Wedge] directed Robots, and we started on Epic, which was titled Leafman at that time, just as Robots was wrapping. We worked on a test for Epic and began developing the technology to do it. We ended up choosing to make other movies first, but Chris kept it flowing with a small team. It's been a long journey. From a materials standpoint, it is a very, very complex, ambitious movie for us - one of the biggest things we've done since Robots.

Robots came out in 2005. Could you have made Epic that long ago?

Technically, we could have made the movie then. But, the movie we made now is far better visually than we could have done then.

Why could it be visually better now than before?

We've learned how to deal with complex scenes, with high memory loads in our renderer, and with lots of detail and geometry. We know how to build trees more efficiently than before. All those things we figured out for the pictures that came in between, so when Epic hit, we said, 'All right. Now we can make it look better.' And, purely from a horsepower standpoint, the renderfarm we have now doesn't compare to what we had then. It's massively more powerful, so we could up the detail.

You introduced procedural shaders for Robots. Did you use them for this film, too?

We dabbled a little with procedural shaders on the first Ice Age, but Robots' shaders were almost entirely procedural. It was a massive paradigm shift. Since then, a high percentage of what you see on screen is procedural. But, with Epic, we re-introduced texture maps mixed with procedural shaders, depending on what we needed to get the job done for the shot, the leaf, the piece of bark, the rock, the twig. If it was more efficient to write a procedure, we would do that. If it was more efficient to use a texture map to generate a shape, such as a vein pattern, we would use the map, bring it into our procedural tool kit, and use it as one of the many layers that make up a leaf.
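(CGI Studio's shader toolkit isn't public, so the Python sketch below is purely illustrative, with invented function names and constants. It shows the idea Hill describes: a map-driven vein pattern dropped into an otherwise procedural layer stack for a leaf.)

```python
import math

def value_noise(x, y, seed=0):
    """Cheap hash-based value noise in [0, 1]; a stand-in for a
    production noise primitive."""
    def corner(ix, iy):
        h = (ix * 374761393 + iy * 668265263 + seed * 974711) & 0xFFFFFFFF
        h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
        return (h & 0xFFFF) / 65535.0
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    sx, sy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)  # smoothstep weights
    top = corner(ix, iy) * (1 - sx) + corner(ix + 1, iy) * sx
    bottom = corner(ix, iy + 1) * (1 - sx) + corner(ix + 1, iy + 1) * sx
    return top * (1 - sy) + bottom * sy

def vein_mask(u, v):
    """Stand-in for a painted texture-map lookup; production code would
    sample the artist's map here instead of an analytic pattern."""
    return 1.0 if abs(math.sin(u * 40.0 + math.sin(v * 7.0))) > 0.95 else 0.0

def leaf_color(u, v):
    """One channel of a layered leaf material: procedural mottling as the
    base, with the map-driven vein pattern mixed in as just another layer."""
    base = 0.25 + 0.2 * value_noise(u * 8.0, v * 8.0)  # procedural layer
    vein = vein_mask(u, v)                             # map-driven layer
    return base * (1.0 - vein) + 0.6 * vein            # veins lighten the base

print(leaf_color(0.31, 0.62))
```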

How did you determine which was more efficient, maps or procedural shaders?

Some of it came down to timing - production schedules, artist schedules. We have a team with diverse skill sets. Some are really, really good with procedural textures. Others are well versed in 3D paint packages as well as procedural shaders, and if they were available and the schedule was tight on something that demanded a high level of detail close to camera, we'd have them do a blend of 3D paint and procedural. This was our first film using 3D paint, so we evaluated several packages. The studio already had licenses for [Autodesk] Mudbox for our lighting team, and for what we needed, it worked well. We also used 3D-Coat [from Andrew Shpagin] a little bit.

The thing that's limiting for us when we go down that road is that it becomes a single-use material. You make a painting, a series of texture maps, that works for that asset in that moment, which is why we steered away from a texture-mapping pipeline. If the texture gets too close to camera, you have to repaint. If it's too far away, you might have heavier maps than you need. The ideal situation is when we can come up with a procedural solution that can be multi-purpose, used for something else. But, that takes more time. So when the schedule was tight, we might opt for the 3D paint route. When the schedule was more open, we'd dig into the procedural side and add more materials to the library.
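(The texel-density tradeoff Hill describes can be made concrete with a little invented arithmetic: a painted map only holds up while it supplies roughly one texel per screen pixel, so its required resolution tracks how large the asset appears on screen, while a procedural material resolves at any distance.)

```python
# Rough texel-density check; illustrative numbers, not Blue Sky's.
def map_resolution_needed(pixels_covered, texels_per_pixel=1.0):
    """A painted map needs about one texel per screen pixel to hold up."""
    return int(pixels_covered * texels_per_pixel)

print(map_resolution_needed(200))   # leaf in a wide shot: a 256px map is plenty
print(map_resolution_needed(2048))  # same leaf filling the frame: repaint at ~2K
```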

You have a library of procedural shaders?

As each show comes along, we build a library of procedural materials that we can move from one asset and one shot to another. The library has grown over the past several shows. We relied on it heavily and fed it with new materials that we developed. Going forward, we can use those materials again. It's just code. It isn't tied to topology or geometry.

The way our materials work is that we attach them to an asset. A large oak tree, for example. We build the material once and attach it, then wherever the tree goes, it gets the material. The forest in Epic is made of thousands of assets. We have probably 20 or 30 generic rocks. A lot of trees. And different permutations of types of trees - oaks, maple trees, pine trees, large, medium, small. And we have a kit of foliage.

Usually what we do is browse through the library and pick a few materials that might be a good place to start. Then we bring them into a scene with the asset, apply them quickly, see which work and which don't. Sometimes we mix two together. Or, maybe take one, duplicate it, and adjust to make a new one.
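(A hypothetical sketch of that workflow, since Blue Sky's actual library is proprietary: because the materials are parameterized code rather than painted maps, browsing, mixing two together, or duplicating and adjusting one are all cheap operations. Names and parameters below are invented.)

```python
class MaterialLibrary:
    """Toy stand-in for a show-level procedural material library.
    Materials are named parameter sets driving shared shader code,
    so they aren't tied to any topology or geometry."""

    def __init__(self):
        self._materials = {}

    def add(self, name, params):
        self._materials[name] = dict(params)

    def get(self, name):
        # Return a copy so per-asset tweaks don't leak back into the library.
        return dict(self._materials[name])

    def mix(self, name_a, name_b, weight=0.5):
        # Blend the numeric parameters the two materials share.
        a, b = self._materials[name_a], self._materials[name_b]
        return {k: a[k] * (1.0 - weight) + b[k] * weight
                for k in a.keys() & b.keys()}

lib = MaterialLibrary()
lib.add("oak_bark",   {"roughness": 0.9, "ridge_scale": 4.0})
lib.add("birch_bark", {"roughness": 0.6, "ridge_scale": 9.0})

hero = lib.get("oak_bark")      # duplicate...
hero["ridge_scale"] = 5.5       # ...and adjust for a hero tree

print(lib.mix("oak_bark", "birch_bark", weight=0.3))  # rough in a new material
```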

Did you create all the materials for the assets in Epic early in production?

The first time we get a sequence, if it's one of the first in the film, we have a lot of work to do, a lot of vegetation we haven't seen before. As we work through the film, we have fewer and fewer new materials to do because the assets have their materials built and attached to them. It becomes a matter of tweaking for a particular sequence. In Epic, we have characters up in the trees jumping from leaf to leaf, and in the branches running from tree to tree. So, the materials attached to that tree may have worked well on a mid-ground or background tree. But, our two-inch-tall Leafman might be standing on a leaf. So we might have to go into the materials and make sure they look good.

Can the artists creating the materials see the result in the scene while they work?

They're working in our interactive renderer. The way our TDs work is to have a text editor, containing the material and the code they are building it with, sitting next to our interactive renderer, CGI Studio, which we've been building for 20-some years. We have a quick render tool. It's a multi-threaded, multi-processor version of Studio that can go to the renderfarm and grab a single machine with a lot of cores or grab a lot of different machines - as many as needed. They can put the library material on an asset, render it, see how it looks, and tune it using handles and knobs. Then, they can settle in and really detail it up and make adjustments.

How did you decide how much detail to put into the materials?

That was the hardest part of my job.

For the first few sequences when nothing was built, we put materials with a certain level of detail on everything in the sequence very quickly. Then, it was iterations. I'd meet with my team and the art director, Mike Knapp. We'd talk with Mike and Chris [Wedge]. Chris might say, 'This set of leaves is really close, and I care about them because in this shot a character will slice a leaf in half, make a sail from it, and soar to the ground. So, it has to look good.' Or, someone might say another leaf would be blurred by depth of field, so we shouldn't spend as much time refining it. We'd have discussions like that. It took a few sequences before we figured out our groove with level of detail. But, once we had the recipe down, it took less and less time to iterate. You find the point where you hit a sweet spot - enough detail to make it look like our world. Once you find it, you know that's what our movie is, and then you have to make the rest look like that.

I think we put in the appropriate amount of detail. There's always room to add more, but one thing that worked well was the progressive refinement approach. We could spend years detailing the film beyond what you could ever see, so we really tried to focus our efforts. We're not a gigantic studio with unlimited time and budget, so we focused on materials we knew would not be motion-blurred or depth-of-fielded away.

Did you add details for the world seen by the two-inch characters?

There were really two different worlds. Bomba, the father, is about six feet tall, so when we're looking at the world through his eyes, we put the appropriate amount of detail in the world he sees. We tried to use the same assets and the same everything for the world seen through the eyes of a two-inch person, but sometimes we needed more detail when we see the world through the small characters' eyes. Sometimes the detail worked in both worlds. We have variations for materials that could be close with high detail or less complex for backgrounds. We do that all the time for materials.

Other systems in our pipeline handle other aspects of level of detail - the geometry and the way we render it. We render SVOs, sparse voxel octrees. We create these voxel bodies of trees, leaves, grass, and other elements in the landscape that tend to be farther from camera because they are more efficient to render in our renderer than the actual geometry. There are means within our pipeline to switch, depending on the speed of the camera, how far away it is, and other criteria. We learned a lot about level-of-detail management on Rio, where we had a lot of trees and hundreds of thousands of people in the stands, and brought that into Epic and improved on it.
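(As a rough illustration of that kind of switching logic, with invented thresholds rather than Blue Sky's actual criteria, and screen coverage standing in for distance: small-on-screen or fast-moving vegetation falls back to the SVO proxy, while hero elements keep their full geometry.)

```python
def choose_representation(screen_coverage, camera_speed,
                          svo_cutoff=0.02, fast_pan_boost=2.0):
    """Invented LOD rule in the spirit of the pipeline described above:
    small-on-screen or fast-moving vegetation renders as a sparse voxel
    octree (SVO) proxy; everything else keeps its full geometry."""
    # Fast camera moves blur detail away, so the SVO cutoff can be looser.
    cutoff = svo_cutoff * (fast_pan_boost if camera_speed > 1.0 else 1.0)
    return "svo" if screen_coverage < cutoff else "geometry"

# A background tree covering 1% of the frame during a fast pan:
print(choose_representation(screen_coverage=0.01, camera_speed=2.5))  # svo
# The same tree filling a quarter of a locked-off frame:
print(choose_representation(screen_coverage=0.25, camera_speed=0.0))  # geometry
```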

The human characters in Epic must have presented challenges in the normal and small worlds.

They were really, really hard on this show. Certainly the skin was the hardest material and took the longest. For Bomba [the human-sized father], it was easier to do some of the subtle wrinkling with 3D paint rather than procedural shaders to hit the look Mike [Knapp] and Chris [Wedge] wanted. We had a couple shots that were brutally close, so we used maps and procedures. But some of the other characters had completely procedural shaders. Rio was our first stepping-stone into doing human skin; we hadn't done much before that. We learned a lot about translucency of skin, subsurface scattering.

The trick on Epic was trying to figure out artistically what we do with Leafmen and other characters that are in a significant part of the film and are only two or three inches tall. We had to work out how their skin reacts differently from big characters'. Their skin is thin, and a lot of light filters through.

We spent a lot of time iterating with Mike [Knapp] and Chris [Wedge], finding physically correct lighting for something that small that was still visually pleasing. We didn't want to look at our humanesque Leafmen and be taken out of the movie because they were too thin and glowing. But, we didn't want them to be as dense as they would be if they were human-sized. It was a delicate balance.

How did you keep the little ones from glowing?

It was all in the material definition. The renderer will try to do the right thing. If you follow it numerically and put a skin material on something that small, the skin will almost look self-illuminated. We dialed that down by increasing the density of the transmittance, which is what we call translucency. We made the material definition more dense than would be physically accurate at that size.
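(The physics behind that choice can be sketched with the Beer-Lambert law: transmittance falls off exponentially with thickness, so a character at roughly 1/36 human scale transmits far more light through the same tissue, and the fix is exactly what Hill describes - scaling the extinction density up past physical accuracy. The coefficients below are illustrative, not production values.)

```python
import math

def transmittance(sigma_t, thickness_cm):
    """Beer-Lambert attenuation: the fraction of light that makes it
    straight through tissue of the given thickness."""
    return math.exp(-sigma_t * thickness_cm)

sigma_t = 8.0                    # illustrative extinction density, per cm
human_ear = 0.5                  # cm
leafman_ear = human_ear / 36.0   # a 2-inch character at roughly 1/36 scale

print(transmittance(sigma_t, human_ear))         # ~0.02: reads as flesh
print(transmittance(sigma_t, leafman_ear))       # ~0.90: self-illuminated glow
print(transmittance(sigma_t * 20, leafman_ear))  # ~0.11: density dialed up past
                                                 # physical accuracy, glow gone
```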

Did your team develop any new technology for Epic?

I'm really excited that Jamie Macdougall has a paper accepted at SIGGRAPH that highlights a technique he used to create tree bark. Doing 3D procedural materials on branching structures like trees is a big challenge. You can always stretch a noise in one direction, but how do you get it to stretch in all the directions? It needs to flow down all the branches of the tree correctly. You can stretch the noise vertically and it looks great until a branch angles off. You could rotate the noise, but how do you resolve the intersection? He came up with some cool tricks that meant we didn't have to resolve those intersections by hand. His technique allowed us to have the noise stretch vertically and stretch in the correct direction along every branch, regardless of how it is connected and oriented. It dealt with the transition and would seamlessly blend between the two directions. When he hit on the answer, it was an Aha! moment. After a couple of weeks, it became really cool. We used it a lot.
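(A hypothetical Python sketch of the general idea, not Macdougall's actual technique: evaluate the noise stretched along the trunk direction and again along the branch direction, then smoothly cross-fade between the two through the junction. All names and parameters here are invented.)

```python
import math

def cheap_noise(x, y):
    """Tiny stand-in for a production noise primitive."""
    return (math.sin(x * 12.9898 + y * 78.233) * 43758.5453) % 1.0

def stretched_noise(p, direction, stretch=6.0):
    """Anisotropic noise: compress the lookup along `direction` so the
    pattern elongates that way, like bark striations."""
    length = math.hypot(direction[0], direction[1])
    ux, uy = direction[0] / length, direction[1] / length
    along = (p[0] * ux + p[1] * uy) / stretch   # squashed along the axis
    across = -p[0] * uy + p[1] * ux             # unchanged across it
    return cheap_noise(along, across)

def branch_bark_noise(p, trunk_dir, branch_dir, blend):
    """Evaluate the noise stretched along the trunk and along the branch,
    then cross-fade through the junction so the pattern flows around it."""
    t = blend * blend * (3.0 - 2.0 * blend)     # smoothstep for a seamless blend
    return (stretched_noise(p, trunk_dir) * (1.0 - t)
            + stretched_noise(p, branch_dir) * t)

# Halfway through a junction between a vertical trunk and a 45-degree branch:
print(branch_bark_noise((1.3, 4.2), trunk_dir=(0.0, 1.0),
                        branch_dir=(0.7, 0.7), blend=0.5))
```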

Did you work on any unusual materials?

A few, and they are big story points. Definitely organic stuff related to decay in the forest. It was challenging to kill the forest in a way that looked right and still looked the way the art director and director wanted it to look. Also having something dynamically dying was challenging. We wanted to show that on screen, but we didn't want it to be so disgusting little kids would run out of the theater screaming. And, we wanted it to look good. So, it took quite a few iterations to nail. I guess if some kids run out of the theater screaming, we know we did a good job.

Read part I of this series of Q&As on Epic.