Everything You Ever Saw
Volume 32, Issue 2 (Feb. 2009)

Twenty years ago, Pixar Animation Studios released a shading language, an interface, and a specification called RenderMan, a collection of “tools and systems that would let thousands of people create pictures of whatever they chose to design,” as Pixar’s Ed Catmull put it then. And indeed, studios around the world have used RenderMan to create hundreds of films, many of which have won Oscars for visual effects and for animation.

In 1993, Catmull, Loren Carpenter, Rob Cook, Thomas Porter, Pat Hanrahan, Anthony A. Apodaca, and Darwyn Peachey received a scientific and engineering award from the Academy for the development of RenderMan software. In 2001, Catmull, Cook, and Carpenter received an Academy Award of Merit for “significant advancements to the field of motion-picture rendering as exemplified in Pixar’s RenderMan.” Catmull has also received Academy Awards for subdivision surfaces and digital image compositing.

This month, at the Scientific and Technical Awards presentations, the Academy is bestowing its Gordon E. Sawyer Award, an Oscar statuette, on “Ed Catmull, a computer scientist, co-founder of Pixar Animation Studios, and president of Walt Disney and Pixar Animation Studios, for his lifetime of technical contributions and leadership in the field of computer graphics for the motion-picture industry.” Below, Catmull discusses the evolution of RenderMan and its impact on the CG industry.

What was the original goal for RenderMan?
We wanted to do something that was going to last for many years, so we had, let’s say, three main goals. One of them was to think about extreme complexity. We set a goal of being able to manage 80 million polygons, which at that time was extremely ridiculous, but we were trying to think about the problem in a different way. That forced us to redo the way we thought about the whole pipeline, which has led to an architecture that has lasted for years. Second, we believed we had to find a solution to motion blur. And the third goal was to come up with control over shading and lighting so it didn’t have to be done by programmers.

How did you solve motion blur?
We were trying different things. I had an analytic solution for the problem, trying to come up with an exact solution. But, Rodney Stock suggested that we look at dithering. That triggered Rob Cook to experiment. And he came up with stochastic sampling. The stochastic sampling not only solved the motion blur, but it also opened up solutions to depth of field. So it was a very beautiful and elegant solution to the problem. And then Rob came up with the notion of shade trees for solving the problem of complex shading.
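
Stochastic sampling replaces a regular sampling pattern with jittered samples, in time for motion blur and across the lens for depth of field, turning aliasing into noise the eye forgives. The following is a minimal Python sketch of the temporal case only, with an invented moving-bar scene standing in for a real renderer; none of these names come from PRMan:

# Minimal sketch of stochastic (jittered) sampling for motion blur.
# Hypothetical scene: a bright vertical bar sweeping across the image
# during the shutter interval. All names here are illustrative.
import random

WIDTH, HEIGHT = 64, 6
SAMPLES_PER_PIXEL = 16   # samples per pixel, each at a jittered time

def bar_position(t):
    """Center of the moving bar at shutter time t in [0, 1]."""
    return 16 + 32 * t   # moves from x=16 to x=48 during the exposure

def shade(x, t):
    """1.0 where the bar covers column x at time t, else 0.0."""
    return 1.0 if abs(x - bar_position(t)) < 2 else 0.0

image = []
for y in range(HEIGHT):
    row = []
    for x in range(WIDTH):
        # Jitter: one sample per uniform time stratum, randomly offset.
        # Averaging turns temporal aliasing (strobing) into noise,
        # which reads as smooth motion blur.
        total = 0.0
        for s in range(SAMPLES_PER_PIXEL):
            t = (s + random.random()) / SAMPLES_PER_PIXEL
            total += shade(x + random.random() - 0.5, t)
        row.append(total / SAMPLES_PER_PIXEL)
    image.append(row)

# Crude ASCII view: a smeared bar instead of a hard-edged one.
for row in image:
    print("".join(" .:-=+*#@"[min(8, int(v * 8))] for v in row))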

You were at Lucasfilm then?
Yes. The architecture was originally called REYES, which Loren [Carpenter] named; it stands for Renders Everything You Ever Saw. When we spun out and became Pixar, the technology, of course, followed us. I think it was [Silicon Graphics CEO] Jim Clark who suggested coming up with a standard way to talk to rendering. It was at that time we initiated an effort to come up with an interface, and the interface was what we called RenderMan.

How did the interface come about?
Pat Hanrahan was an employee of Pixar at the time, and he took the lead. There were 19 companies involved in this, five very active. But, we didn’t have a design by committee. The way we set this up, and I felt very strongly about this, was that Pat had the final call about what went in. He was the architect. And to this day, that’s how we make our movies. We have the leading conceptual designer on a movie or, in that case, the interface, and nobody can override that person. But Pat listened to everybody, which is what made him good.
 
Pixar rendered Cars using raytracing in PRMan, the studio’s commercial version of RenderMan. It was Pixar’s first use of raytracing throughout a film; the cars demanded it for the reflections. Cars received an Oscar nomination in 2007 for Best Animated Feature.

I’m going to get a bit technical here. With sophisticated rendering, raytracing for example, you have to have all the information before you can start to do anything. You don’t actually care about the order of things because you can’t start until everything’s there. But with a real-time machine, you have to be able to process information as soon as it shows up, so you care about the order. Basically, the original interface was designed so that the order mattered. If you were doing real time, you could process it when it arrived, but if you were doing sophisticated rendering, you could wait until it all got there so you could do raytracing.

The trick was, if you thought about both, you could get the order right. If you only thought about one, then you would screw up the other one. So getting that right was part of the trick of the original interface.
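
To make the ordering point concrete, here is a toy Python sketch of an order-sensitive scene interface. It is not the actual RenderMan API, and every name in it is invented; it only illustrates why guaranteeing that attribute calls precede the geometry they affect lets the same call stream serve both kinds of consumer:

# Toy sketch, not the real RenderMan interface. Because attributes
# arrive before the geometry they apply to, a streaming consumer can
# act on each call immediately, while a batch consumer can buffer the
# whole stream and raytrace afterward.
class StreamingConsumer:
    """Processes each call as it arrives, real-time style."""
    def __init__(self):
        self.current_color = (1.0, 1.0, 1.0)

    def set_color(self, rgb):
        self.current_color = rgb          # state is known before use

    def sphere(self, radius):
        print(f"draw sphere r={radius} color={self.current_color}")

class BatchConsumer:
    """Buffers the whole stream, then renders (e.g., raytracing)."""
    def __init__(self):
        self.calls = []

    def set_color(self, rgb):
        self.calls.append(("color", rgb))

    def sphere(self, radius):
        self.calls.append(("sphere", radius))

    def render(self):
        color = (1.0, 1.0, 1.0)
        for kind, arg in self.calls:      # replay in the same order
            if kind == "color":
                color = arg
            else:
                print(f"trace sphere r={arg} color={color}")

def emit_scene(consumer):
    """The same call sequence drives either consumer."""
    consumer.set_color((1.0, 0.0, 0.0))
    consumer.sphere(1.0)
    consumer.set_color((0.0, 0.0, 1.0))
    consumer.sphere(2.0)

emit_scene(StreamingConsumer())
batch = BatchConsumer()
emit_scene(batch)
batch.render()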

So it would work either way?
That’s right. If you pay attention to the order to begin with, you can have it both ways. If you don’t pay attention, then you make mistakes. I believe the late Jeff Mock said we could do all this in software, and someone, I forget who, suggested calling it RenderMan. [In the foreword to Steve Upstill’s The RenderMan Companion, published May 1989, Catmull credits Pat Hanrahan. David A. Price, in his book The Pixar Touch, quotes Hanrahan sharing the credit with Jaron Lanier.]

The shading language started with Rob Cook’s shade trees, which generalized the shading formulas, and then Pat and Jim Lawson generalized this—with everybody thinking about it—into the shading language that is part of RenderMan.
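
A shade tree treats shading as a small graph of nodes composed and evaluated at each surface point, and that compositional idea is what the shading language generalized. Here is a toy Python sketch; the node names are illustrative inventions, not Cook’s original operators or RenderMan Shading Language built-ins:

# Toy shade tree: shading as composable nodes evaluated per point.
def constant(value):
    return lambda point: value

def checker(scale, a, b):
    """Alternate between sub-trees a and b on a grid."""
    def eval_node(point):
        x, y = point
        branch = a if (int(x * scale) + int(y * scale)) % 2 == 0 else b
        return branch(point)
    return eval_node

def scale_color(factor, child):
    """Dim or brighten the result of a sub-tree."""
    return lambda point: tuple(factor * c for c in child(point))

# Compose a small tree: a dimmed red/white checkerboard.
tree = scale_color(0.8, checker(4,
                                constant((1.0, 0.0, 0.0)),
                                constant((1.0, 1.0, 1.0))))

print(tree((0.1, 0.1)))   # one grid cell: (0.8, 0.0, 0.0)
print(tree((0.4, 0.1)))   # the neighboring cell: (0.8, 0.8, 0.8)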

Do you remember which films outside Pixar first used RenderMan?
I’d say the milestone picture was The Abyss. Well, it was more like a stepping-stone. The first milestone, when people first noticed it, was Terminator. And the big milestone, when the rest of the industry [other than ILM] switched over was in 1993 with Jurassic Park.

Does Pixar use exactly the same commercial version of RenderMan as everyone else?
At one time, the Pixar version was different from the industry version because Pixar’s needs drove the development in a different direction. But, after a while, we realized it was a big pain in the neck for everybody. So it evolved into one version that Tony Apodaca’s group controlled. One time, we delayed a release of important technology, the hair stuff, and we realized that was a mistake. We’ve never done that since.

That was deep shadows?
Yes. That was when we deviated from the pattern. But the studio believes now—very, very firmly—that we always want it to be the same for everybody. When I say the studio, I include all the technical people. It’s easier to think about it as a product; it’s clearer, and, from a reliability point of view, it’s a better place to be.

Why did you move the RenderMan group to Seattle?
Tony went into production, and Dana Batali took over the group. He’s so good that when he had to move back to Seattle for family reasons, we moved the group with him. It’s better because the group doesn’t get sucked into production crises.

Is the RenderMan group a profitable unit?
Oh yes. A lot of people, you know, would like rendering to be free, but I think the industry now gets that the free renderers don’t advance as much. If the companies don’t make money on them, they don’t want to invest in them. We have a price structure that everybody knows supports the RenderMan group, and the group uses all its resources to make the product better for the film industry. So while the industry has to pay for it, it’s not getting something targeted at CAD/CAM or games.
 
(Left) Pixar’s technical crew wrote sophisticated subsurface scattering shaders in PRMan to create the green aliens’ volumetric, translucent material for “Lifted.” (Right) Each character in “One Man Band” required several hundred shaders. Both shorts received Oscar nominations.

Does that mean the RenderMan group doesn’t care about games? 
We care. But, if a game company buys RenderMan and uses it, they’re doing it because they want feature-film quality for their games. CAD/CAM people use it, and universities have it. We listen to anyone who finds it useful. But clearly our direction is dominated by the people who want the very best in quality, performance, and reliability.

Reliability is a big one. These films cost so much money, the one thing studios don’t want is to find that it chokes. They want their film to go through, they don’t want to wait long for it, and they do not want any surprises. It’s difficult. We’re talking about super-complex things.

Are the multicore machines helping?
There are two kinds of forces. One is in the interactive loop, where the studios want pictures as fast as possible. The other is when they want to render the whole movie, and in this case, it’s the total efficiency that counts. If you have to render 100 frames, the most efficient thing is to render one frame per processor, so each processor is going full out. So for the 100 frames, that’s the most efficient way. But, if you’re doing one frame, rendering with 100 processors isn’t 100 times faster. You wish it were. So for that reason, and because you’re sharing disks and so forth, the efficiencies for a single frame start to go down. I’m not giving you solid answers. It’s complex, and dealing with the complexity and tuning it for the rendering world is one of the things the RenderMan group is doing to take advantage of the architectures. You’ve got to follow the mainstream.
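
A back-of-the-envelope model in Python makes the trade-off concrete. The 10 percent serial fraction below is purely an illustrative assumption, not a measured figure:

# Two regimes for 100 frames on 100 processors.
SERIAL_FRACTION = 0.10     # part of a frame that won't parallelize (assumed)
FRAME_COST = 1.0           # time for one frame on one processor
FRAMES, PROCS = 100, 100

# Batch regime: one frame per processor. Every processor runs flat
# out, so 100 frames finish in the time of one frame.
batch_time = FRAME_COST * FRAMES / PROCS

# Interactive regime: all processors on a single frame. Amdahl's law
# caps the speedup, so 100 processors are nowhere near 100x faster.
speedup = 1.0 / (SERIAL_FRACTION + (1 - SERIAL_FRACTION) / PROCS)

print(f"batch: {batch_time:.2f} frame-times, efficiency 100%")
print(f"one frame on {PROCS} procs: speedup {speedup:.1f}x, "
      f"efficiency {speedup / PROCS:.0%}")   # roughly 9x, about 9%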

When you think of what’s driving RenderMan, do you still think in terms of photorealism?
Well, when we started, we were using animation to drive it for ourselves because we wanted to make animated films. But, we used reality for a number of years as a driver. At the time we started, the distance from what we were doing to reality was pretty far.

We said that if we can match reality, then we will have control over the lighting, even though, as an animation company, we’re not trying to do reality, we’re trying to do something different from reality. That trash planet WALL-E is walking around on, excuse me, gliding around on, it’s not reality, but on the other hand, it’s very real. It’s a different real. It’s very hard to describe. But we use the difficulty of reality to drive us forward.

I remember for years and years I’d get up and give talks, and say the difference between the best computer graphics and what you see in front of you is a huge gap. I could show the best stuff from anybody, and then say, ‘Look in front of you.’ And the difference was marked.

But I haven’t been able to say that for a long time. It’s kind of embarrassing that the credit frequently goes back to those of us in the early days. The bulk of the work and the heavy lifting was done by teams led by Tony for a few years and now by Dana for many years. They’ve done extraordinary things. Dana is doing a phenomenal job of being responsive to what people need by pushing in new, important directions while being very practical. And, RenderMan has gotten very sophisticated in terms of the lighting model, the shading support, the multicore work, the shadowing, the deep shadowing. There’s been just an incredible amount of work, going beyond what we thought was possible.

Beyond what you thought was possible?
Oh yeah. I’m blown away. I see stuff now and think, ‘You guys have really pushed this.’ We used to talk about millions of polygons. We thought we were being adventuresome by picking this number. Now it’s millions of hairs with a sophisticated lighting model on each one. It’s just mind-boggling.

We look back on it now and say, ‘Wow, that was just the beginning.’ The really cool stuff was yet to come. But, it’s never been about predicting, which you can’t really do anyway. It has been about creating an environment in which people can do cool things. If we had set out to solve all the things we actually did solve, we might have gotten frozen.

Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at BarbaraRR@comcast.net.