Visual Effects, One Frame at a Time
Issue: Volume 35, Issue 3, April/May 2012

Based on the book by Gideon Defoe, The Pirates! Band of Misfits is a comedy-adventure film animated with stop-motion techniques and enhanced with CG visual effects at Aardman Animations. Directed by Aardman co-founder Peter Lord, Pirates! is the studio’s first stop-motion feature since 2005’s Wallace & Gromit: The Curse of the Were-Rabbit, and the first to use Aardman’s new in-house visual effects department. The cast of 60 animated characters includes the Pirate Captain (voiced by Hugh Grant), Cutlass Liz (Salma Hayek), Black Bellamy (Jeremy Piven), and a Pirate with Gout (Brendan Gleeson). Led by Andrew Morley, the visual effects department created CG backgrounds, boats, water, and other effects, and applied digital makeup as needed to touch up the tiny stop-motion models.
   
Andrew Morley has moved around the world of computer graphics and visual effects for films, literally and figuratively. His career has taken him from Industrial Light & Magic, where he was a technical director on Jurassic Park III, to the Moving Picture Company, where he was a technical supervisor on Harry Potter and the Chamber of Secrets and visual effects supervisor on Batman Begins. Then he moved to Singapore to supervise Lucasfilm’s digital artist group, which produced visual effects for console games and such feature films as Transformers. From there, he moved back to London, where he supervised effects on Hellboy II for LipSync Post and was a CG supervisor on Avatar at Framestore. Along the way, he also worked on high-end commercials. In 2009, he formed his own boutique studio, FuzzyLogic FX, in London’s Soho district, and began working as visual effects supervisor on Aardman Animations’ stop-motion feature The Pirates! Band of Misfits.

Contributing editor Barbara Robertson caught up with him shortly before the film’s release.

At what point in the process of making Pirates! did you come on board at Aardman?

They were shooting test shots. Ben Lock, the visual effects producer, had been there a year earlier and started the process of planning the visual effects in terms of number of people, time, and all the standard production things, and some of the visual effects people had started on a pipeline. When I started, Ben and I, between us, built the [visual effects] department. Apart from some proprietary tools we developed, it was a fairly standard department technically. The challenge was setting up a visual effects department at the same time we were doing a film. Another challenge was finding people and hiring staff to work in Bristol, which is about 160 miles from London, where there are lots of people and lots of companies. It was particularly tricky toward the end of the project, when we needed to get freelancers for a short period of time.

When did you see your first shots?

We spent a long time planning. I didn’t get my first shot off the floor until I had been there nine months. It was exciting.

Did you feel like you were working in slow motion?

Quite the opposite. There was a lot to do for a brand-new visual effects department. Every shot went through visual effects, every shot had a little bit of cleanup. So, we had over 1500 shots to work with. I was running dailies on a [FilmLight] Baselight machine, and I’d do bits of grade between shots. There was no slow-motion feeling at all. It was a very busy film. We had 40 units shooting, so we had lots of shots coming from the floor every day.

Was working on this film like working on a live-action film or an animated film?

I treated it as a live-action film in many ways because there was a large amount of greenscreen work and the lighting was real. To get the high quality we wanted, we needed to think live action. All the lights on the shots were computer-controlled, and the ‘tower’ had information about lighting levels, so shots with similar lighting used the same levels. The lighting was fabulous. If we had fully digital shots, we could wait for plates to turn up and match to shots around them.

Did you do previs?

We did, to help tell the story and to work out lots of things. We’d start with storyboards and then do proper previs in [Autodesk] Maya: high-quality animatics for the entire film. The previs gave us continuity.

What are the major differences between creating visual effects for stop-motion and for live-action films?

The world, the scale of the world, is a big difference. The characters are smaller and they’re puppets. And, normally, on live-action films when you do a visual effects shot, you start work knowing the shots are locked. But, at Aardman, editors could re-animate the shot. We use digital stills; it’s stop frame. Animators move a character and take a photograph. The images come off the floor using a proprietary piece of software that sends them to editorial, and editors could add frames and pull frames within a shot. So, if the animator shot frames 1, 2, 3, 4, and 5, because the camera was locked off, the editor could change that to 1, 1, 3, 4, 5. That meant the official take wouldn’t come immediately off the floor unless it was a sweeping camera. Getting a shot lock was especially important for us when we had animated water. It might be worth mentioning that they shot the film in stereo, so there were two pictures for every animation frame: one for the left and one for the right eye.
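To make that retime concrete, here is a minimal Python sketch of the idea, not Aardman’s proprietary software: a shot is just an ordered set of stills, and an editorial remap such as [1, 1, 3, 4, 5] duplicates frame 1 and drops frame 2 without reshooting anything. The file names and the retime function are hypothetical.

    # Hypothetical sketch of the kind of frame remap editorial could apply.
    def retime(frames, remap):
        """Return a new frame sequence following an editorial remap.

        frames: stills keyed by capture number
        remap:  list of capture numbers in their new playback order
        """
        return [frames[n] for n in remap]

    # For a stereo shoot there is one still per eye, so the same remap
    # must be applied to both eyes to keep the pair in sync.
    left_eye  = {1: "L_0001.tif", 2: "L_0002.tif", 3: "L_0003.tif",
                 4: "L_0004.tif", 5: "L_0005.tif"}
    right_eye = {n: name.replace("L_", "R_") for n, name in left_eye.items()}

    remap = [1, 1, 3, 4, 5]          # the example from the interview
    print(retime(left_eye, remap))   # frame 1 repeats, frame 2 is pulled
    print(retime(right_eye, remap))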

Did you produce photorealistic water simulations?

The brief was to make it look fabulous in an Aardman way. We needed to give the CGI the creative style of an Aardman project. It’s not quite photoreal, but it has photoreal qualities. We had close-up water, distance water, and oceans of water. We didn’t want to do something that looked like a real ocean, and we had to reduce the scale of the water to match the scale of the Aardman world. Also, we wanted a more plastic feel, a little more squash and stretch, to give the water a more modeled look.

What software did you use for the water?

We had a Maya pipeline with [Side Effects] Houdini and [Pixar] RenderMan. We used [Next Limit’s] RealFlow for the close-up splashes, and it all came through [The Foundry’s] Nuke. But, most of the water was proprietary.

Why did you build proprietary tools?

To make the water look like what the director wanted. Most simulation tools aim at copying nature. We needed to do more than that. We needed to do squash and stretch, we needed new kinds of shaders, and we needed to animate water. There isn’t a water solution available to do what we needed to do. We had about 10 different ways to do water because we had such a variety of water types. On a wide shot with a boat crashing through the water, we used particles, fluid simulations, and proprietary tools for our main mesh. On a close-up, when we needed splashes, we’d use RealFlow. For spray, particles. For a wider mesh, we’d animate geometry and use water shaders on top. The water had to change color in various locations, so we used photo reference. In the Caribbean, we had blue/turquoise water; in the river Thames, dark, menacing water. We achieved the look with RenderMan shaders.
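Squash and stretch on water comes down to volume-preserving scaling: elongate a splash along one axis and thin it on the other two so it still reads as the same amount of material. The sketch below is a toy, hypothetical version of that rule applied to raw vertex positions; Aardman’s proprietary deformers and shaders are not public.

    import math

    def squash_stretch(points, pivot, stretch, axis=1):
        """Scale points by `stretch` along one axis and by 1/sqrt(stretch)
        on the other two, so the shape's volume is preserved -- the classic
        squash-and-stretch rule. Purely illustrative.

        points:  list of (x, y, z) vertex positions
        pivot:   (x, y, z) point to scale about
        stretch: >1 stretches, <1 squashes
        axis:    0, 1, or 2 for x, y, or z
        """
        side = 1.0 / math.sqrt(stretch)   # compensating scale on the other axes
        scale = [side, side, side]
        scale[axis] = stretch
        return [tuple(pivot[i] + (p[i] - pivot[i]) * scale[i] for i in range(3))
                for p in points]

    # A splash stretched to 1.5x its height narrows to ~0.82x in x and z.
    splash = [(0.0, 0.0, 0.0), (0.1, 1.0, 0.0), (-0.1, 1.0, 0.1)]
    print(squash_stretch(splash, pivot=(0.0, 0.0, 0.0), stretch=1.5))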

Are there any matte paintings in the film?

We used matte paintings predominantly for skies, for projections on geometry, and for the back of shots. We’d have textured geometry in foregrounds. The island was completely geometry, as were the buildings at the edge of the river. With stereo, geometry just feels better.

Are there any fully digital shots?

Maybe a dozen shots were fully digital. They were digital because they were things we couldn’t shoot. For example, we had some wide shots of the ocean with a big ship, and the greenscreen sets couldn’t be big enough to put a camera far enough away, so we’d go with a digital ship and digital water. The island was predominantly digital.

Did you create any digital characters?

Yes. Again, when we have wide shots and when we need to see several hundred characters, it becomes more cost-effective to do digital. Imagine how slow it would be for one animator to move that many real characters on set, and you’d have to be careful not to knock into one of them. Sometimes we had live action in front that we used for lighting reference, and put digital characters next to them.

What did you use for crowd animation?

We have our own tools. Normally, the crowds were all doing the same thing. We might have all the characters sitting down in an amphitheater and then they all stand up. There was no need for AI. They usually followed the lead from some real puppets.
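Crowd animation of that kind needs little more than evaluating one animation curve for every character. The sketch below is a hypothetical stand-in for Aardman’s in-house tools: it drives a whole crowd from a single curve sampled from a lead puppet, with an optional small per-character delay (an assumption on our part, not something Morley describes) so a crowd doesn’t stand up in perfect unison.

    import random

    def crowd_pose(lead_curve, num_extras, frame, max_delay=0, seed=7):
        """Evaluate one animation curve for a whole crowd.

        lead_curve: function frame -> pose, sampled from a lead puppet
                    (hypothetical stand-in for exported keyframe data)
        num_extras: how many digital characters share the action
        max_delay:  small random per-character delay, in frames
        """
        rng = random.Random(seed)
        delays = [rng.randint(0, max_delay) for _ in range(num_extras)]
        return [lead_curve(max(frame - d, 0)) for d in delays]

    # Toy curve: characters rise from seated (0.0) to standing (1.0) over 24 frames.
    stand_up = lambda f: min(f / 24.0, 1.0)
    print(crowd_pose(stand_up, num_extras=5, frame=12, max_delay=8))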

Did you do cleanup on the puppets?

We had thousands and thousands of mouth shapes; every character has hundreds. Another department printed the mouth shapes using rapid prototyping. It was a massive undertaking, but it was a fabulous decision to go with that. The downside is that there was a horizontal line beneath the eyes or in the nose area that needed to be painted out by the visual effects department. We wrote tools in Nuke to do this. For example, if a character rotated its head and the lighting changed across the face, we had one set of tools. If the character was static, we had simpler tools. All these tools had to work in stereo. In some shots, artists painted out the lines. At the start of the project it was fairly painstaking, but it got faster and faster. It was such a long project that we were able to optimize how we did things.
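For the static-head case, a seam fix can be as simple as interpolating clean pixels across the join. The following NumPy sketch is a guess at the simplest possible such tool, not the studio’s actual Nuke gizmos; the seam rows are assumed to be known from the locked-off framing.

    import numpy as np

    def fill_seam_rows(image, top, bottom):
        """Paint out a horizontal seam by interpolating vertically across it.

        image:  float image array of shape (height, width, channels)
        top:    first row of the seam (the mouth-plate join)
        bottom: first clean row below the seam

        For a locked-off camera and a static head, linearly blending the
        clean rows above and below the join may be enough; a turning head
        would need tracked, lighting-aware tools instead.
        """
        out = image.copy()
        above, below = image[top - 1], image[bottom]     # clean bordering rows
        for i, row in enumerate(range(top, bottom)):
            t = (i + 1) / float(bottom - top + 1)        # 0..1 blend weight
            out[row] = (1.0 - t) * above + t * below
        return out

    # A stereo shot would run the same fill on the left- and right-eye images.
    frame = np.random.rand(480, 640, 3).astype(np.float32)
    clean = fill_seam_rows(frame, top=200, bottom=204)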

How was it for you personally to work at Aardman?

It’s a studio, really, so you’re closer to the script and to the director. Unlike doing visual effects in Soho, where the post house works for a production studio, there is no middle layer: Aardman is the studio. It was fun. I’ve seen the film now. It was fabulous to be in the audience and see the reactions. I haven’t seen one negative review in the UK press. Aardman does not make many films. Being a visual effects supervisor on one of them was fantastic.