Food for Laughs
Issue: Volume 32, Issue 9 (Sep. 2009)

Sony Pictures Imageworks artists twist computer graphics into a colorful, cartoony, and hilarious animated feature for Sony Pictures Animation



Weather station intern Sam Sparks (Anna Faris) reports on the latest climate change, a giant pancake that flopped from the sky.
In the totally uninhibited film Cloudy with a Chance of Meatballs, the characters stand open-mouthed while hamburgers fall from the sky, tomatoes roll down sidewalks, and children ride sleds on mashed-potato mountains. It’s difficult to imagine subject matter more fun for computer graphics professionals: The colors look saturated in the way that only digital colors can look. The characters stretch the bounds of 3D animation. Cartoon physics has never been so appetizing, bananas so aggressive.

Directed by Phil Lord and Chris Miller, the animated feature from Sony Pictures Animation, based on the popular children’s book of the same name, tells the story of a young mad scientist who converts water into food to help an impoverished town. Meatballs rain from the sky, and he becomes a hero. That is, until the welcome food showers become violent storms. An enormous pancake dripping with butter and syrup smothers a school. A giant banana stabs a building.

“I like to say this is the first cartoon I’ve ever worked on,” says Rob Bredow, visual effects supervisor, who moved onto this film after completing Sony’s Surf’s Up, which received an Oscar nomination. For Cloudy, Bredow led a crew of 250 people at Sony Pictures Imageworks who worked on the film. “It’s our most ambitious animated movie to date,” Bredow says.

The film features approximately 30 main characters who live in a world created with 4000 hard-surface models. “From the first character to the last soda can took exactly two years,” says modeling supervisor Marvin Kim, adding, “the can was a garbage piece that we crushed and put next to a dumpster.”

Flipping the Animation Style
The characters move using a style of animation inspired by United Productions of America (UPA), which was especially popular in the late 1940s and 1950s, with “Mr. Magoo” and “Gerald McBoing-Boing” among the most famous examples. “The [UPA] animators weren’t constrained by accurately moving volumes in space,” explains animation director Pete Nash. “They wanted something more abstract and conceptual, so they flattened the images and did simplified graphic designs. The principles of animation took a backseat to concept; that is, they could break any rule if it supported the concept. An arm could grow, a character could be off-balance as long as it supported a strong idea.”

Applying that idea to CG characters was tricky. “In 3D, you’re more confined by realistic stuff, like textures,” Nash says. “If you stretch a texture too far, it looks like rubber.” So the crew found ways to have concept drive everything without, necessarily, stretching limbs and body parts.

The main character Flint, for example, who is skinny, awkward, and nerdy, has hoses for arms. No elbows or knees. “He was a loose, gangly character,” Nash says, “so it made sense not to give him a skeleton in some cases. But we still kept him grounded.”

For example, when Flint walks, he often doesn’t move up and down; he moves in a straight line, with only his legs animating. That bolstered the concept of an intensely focused inventor, and it fit with the UPA style of animation.

“When a UPA character walks into a room,” Nash says, “the upper half might lock [while] the lower half keeps walking, and then the upper half would catch up. So, with our rig, animators could change proportions and lock parts of the body so those parts couldn’t move when others did. For example, we could put the chest in world space or in body space, and then switch back.” With that kind of control, the animators could have Flint and the other characters sometimes act in this unique style and still have seamless performances.
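
As a rough illustration of that space-switching idea (the function and matrices below are illustrative assumptions, not Imageworks’ rig code), a minimal Python sketch might look like this:

```python
# Minimal sketch of a "space switch": a chest control either follows the hips
# (body space) or holds the world transform captured when it was locked,
# while the lower body keeps walking. Illustrative only.
import numpy as np

def chest_world(hips_world, chest_local, space="body", locked_world=None):
    """Return the chest's 4x4 world transform for the chosen space."""
    if space == "world" and locked_world is not None:
        return locked_world            # upper body stays planted in world space
    return hips_world @ chest_local    # normal FK: chest follows the hips

hips = np.eye(4); hips[0, 3] = 2.0              # hips have walked 2 units forward
chest_local = np.eye(4); chest_local[1, 3] = 1.5
locked = chest_world(np.eye(4), chest_local)    # chest pose captured at lock time

print(chest_world(hips, chest_local, "body")[:3, 3])            # follows the hips (x = 2)
print(chest_world(hips, chest_local, "world", locked)[:3, 3])   # stays put (x = 0)
```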

By contrast, the animators created a realistic performance with accurate mechanics and follow-through for the self-important Brent, the town’s only celebrity, albeit a performance they still sometimes exaggerated. Earl, the town’s cop, on the other hand, moves in more extreme ways. He is an incredible athlete with nothing to do, so he overstates everything physically.

“When he moves, it’s like a sprinter exploding off the blocks, and he stops on a dime,” Nash says. “He does that even when it’s not necessary. He’ll do a flip and two somersaults, and land just before telling someone they got a parking ticket. I loved [this style of animation]. The very idea of concept driving everything is, to me, what animation is all about. It’s the reason to animate a movie instead of doing it live action.”

In addition to creating performances, the animators also sometimes created characters using the flexible rig to transform a generic model into a specific secondary character. “The directors wanted 80 characters,” Bredow says. “It wasn’t feasible to build that many modeled, rigged, textured characters in time, so we came up with a clever compromise.” Modelers built the 30 hero characters, which could morph into new characters, and added a generic male and a generic female to the digital crew list.

“We could change their proportions, skin color, costume, lots of things about them to create a rich set of background characters who could deliver lines at camera,” Bredow says. “Pete [Nash] and his team created characters on the fly as needed from artwork and inspiration.”

Imageworks’ proprietary crowd system provided the means to control large numbers of characters when needed. “It’s basically a [Side Effects] Houdini-based system that allows you to place characters, bring in motion libraries, and choose what executes when and where,” says Dave Davies, effects animation lead. “It uses goals and steering behavior to get characters from one place to another.”
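
A minimal sketch of that goal-and-steering behavior, assuming a simple agent class rather than the actual Houdini-based system, might look like this in Python:

```python
# Illustrative goal-seeking "steering behavior": each frame, an agent steers
# toward its goal, limited by how fast it can move and how sharply it can turn.
import numpy as np

class Agent:
    def __init__(self, pos, goal, max_speed=1.5, max_force=0.2):
        self.pos = np.asarray(pos, dtype=float)
        self.vel = np.zeros(2)
        self.goal = np.asarray(goal, dtype=float)
        self.max_speed = max_speed
        self.max_force = max_force

    def step(self, dt):
        to_goal = self.goal - self.pos
        dist = np.linalg.norm(to_goal)
        if dist < 1e-6:
            return
        desired = to_goal / dist * self.max_speed   # desired velocity: straight at the goal
        steer = desired - self.vel                  # steering force = desired - current
        norm = np.linalg.norm(steer)
        if norm > self.max_force:
            steer = steer / norm * self.max_force   # clamp how sharply the agent can turn
        self.vel += steer
        self.pos += self.vel * dt

agent = Agent(pos=(0.0, 0.0), goal=(10.0, 5.0))
for _ in range(100):
    agent.step(0.1)
print(agent.pos)  # the agent has steered most of the way to its goal
```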


Layout artists “filmed” about 20 percent of the colorful film using a handheld camera to compose the CG scene.

Chewandswallow
The characters populated the town of Chewandswallow, as in the book, so for the film, modelers built a city 14 blocks long from east to west and 22 blocks long from north to south, with eight buildings on each block. Four times.

“The modelers built this huge town, where pretty much every street was shootable for wide, medium, and close-up shots, and then in the second act, the mayor takes out a loan and converts it into a food utopia,” says Bredow. “So we built a second town. And then the food falls, so we rebuilt the town with a food layer. And then, it’s destroyed.”

Although texture maps handled some variations, for the most part, modelers created each version anew, including all the props that dressed the set—cars, street lamps, trash cans, and so forth. In addition, the modelers added items for the interiors to the parts library and did the initial set dressing for the layout teams.

“We built detailed interiors for places the characters would go into and out of, and low-res interiors when you could only see inside the windows,” Kim says. To simplify the modeling task, the team altered and replicated base models whenever possible. A hallway made from thousands of egg cartons, for example, started with two models that the crew bent and placed in various ways.

The largest interior set was Flint’s laboratory, and the biggest geometric part of that lab was a wall with 62,233 buttons. “We tiled sections and copied them, but we modeled each button that someone would push during the movie, so the amount of geometry on that wall was amazing,” Kim says. “And, all the buttons had to work.”

For modeling, rigging, and animation, Imageworks uses Autodesk’s Maya enhanced with custom tools; for cloth simulation, an in-house solver called Tango. Tami, a custom renderer, works with hair groomed in Maya. Swami manages motion cycles and paths for crowds. Painters and texture mappers work with Adobe’s Photoshop and Maxon’s BodyPaint 3D. Technical directors create the primary effects tools within Houdini. For some effects, they use a custom 3D sprite renderer called Splat, and for others, SVEA, a volumetric renderer. Imageworks’ Katana handles 3D lighting and 2D compositing. And Imageworks’ in-house version of Arnold, raytracing software developed originally by Marcos Fajardo, now a software architect at Imageworks, rendered the film.


Modelers built four versions of the town Chewandswallow. The mad scientist Flint and his sidekick Steve survey the damage in a detailed model of a building destroyed by the falling food.

Crazy Colors
“Our bible for the movie was the artwork the production designers gave us, and we adhered closely to the artwork, from lighting, to design, to the stylization of the food,” says Bredow, “so much so that when we compared our shots to the artwork, as we wiped over the art, you could hardly tell where the transition was.”

All well and good, except that stylized food doesn’t always look delicious enough to eat. It was up to the look development artists, four CG supervisors, and a team of TDs to make the 50 different edibles believable enough to be appetizing yet still fit within the film’s brightly colored style.

“We were excited that the directors wanted a stylized look in terms of characters and shapes in the environment,” says Danny Dimian, CG supervisor, who laid the groundwork for the Arnold shader writers. “We had two shader writers and a lot of support from the Arnold programming team,” he says. “For our show, they wrote the shaders in C++, so it wasn’t as simplified as it will be later with the open-source language Larry Gritz is working on.”

Early in the film, when the people in the town were poor and down on their luck, the lighting team created a muted environment using only a few colors. When the food began to fall, the colors intensified.

“We had purple skies and incredible orange skies,” Dimian says. “At one point, we heard that we weren’t pushing the colors and the look far enough and that we all needed to go to ‘crazy school.’ So we took that comment to heart. When the characters go inside the meatball, we went as far into the color space as we could to get lots of crazy hues. But, the lighting needed to behave realistically, partially because the movie would be in stereo 3D.”

The team believes that the global illumination built into Arnold made it easier to achieve the realistic lighting they wanted. “In Arnold, we’d set up a sun and a sky dome, and we’d get a really beautiful look right out of the box,” says Daniel Kramer, digital effects supervisor.

Because lighting with Arnold closely resembles lighting with real-world lights and because many of the lighters had worked only with scan-line renderers, Bredow sent them to classes at Mole-Richardson, a well-known Hollywood lighting company. “With Arnold, the size of a light dictates its shape, the size of the shadows, and how soft or hard the shadows are, which is much more like lighting on set with live-action photography,” Bredow says. “So, at Mole-Richardson, we had DPs walk our guys through live-action photography.”

The DPs lit sets mocked up from artwork of a bedroom, complete with a bed, tables, sheets, and a window, noting how the lighting they would use for characters walking through the set would differ from that in the artwork.

“It was refreshing for me to have conversations with the lighters about where to place bounce cards,” Bredow says. “Because Arnold is a raytracer, it renders the shadows right in every shot, whereas on previous movies, we needed to dial in shadow biases. And when we transferred our lighting rigs from one scene to another, the results were more predictable.”

The trick was in knowing when to stop. “If we turned the knobs too high and increased the sampling too much, it was easy to turn a four-hour render into a 16-hour render without any appreciable increase in quality,” Bredow points out.

Because Arnold renders the entire scene each time it renders, the Arnold crew created a system that let the lighting team work interactively: They rendered a first pass at low resolution with low anti-aliasing. “This gave the lighters a way to quickly show me the lights blocked in,” says Kramer, who spent much of his time during production sitting in a dark room and giving comments on lighting effects, cloth, and hair.


Animators used a flexible rigging system to create background characters from generic models. The studio’s version of Arnold rendered the colorful, highly saturated scenes.

“The artists had eight-core machines,” Kramer says. “So, we could load in all the geometry and move lights in near real time. The next level was 1k resolution with better anti-aliasing. Then, if we liked that, we’d go to full resolution and full quality.”
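
A hedged sketch of those quality tiers, with hypothetical resolutions and sample counts rather than the studio’s actual settings, could be expressed as a small configuration table:

```python
# Hypothetical quality tiers (illustrative numbers, not Imageworks' settings):
# a cheap blocking pass, a 1K check pass, and a full-quality final pass.
QUALITY_TIERS = {
    "blocking": {"resolution": (480, 270),   "aa_samples": 1},
    "check":    {"resolution": (1024, 576),  "aa_samples": 3},
    "final":    {"resolution": (2048, 1152), "aa_samples": 8},
}

def render_args(tier):
    """Return generic command-line style arguments for the chosen tier."""
    settings = QUALITY_TIERS[tier]
    width, height = settings["resolution"]
    return ["--width", str(width), "--height", str(height),
            "--aa-samples", str(settings["aa_samples"])]

print(render_args("blocking"))  # fast first look at the blocked-in lights
print(render_args("final"))     # full resolution, full quality
```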

One of the problems the group ran into, however, was in rendering hair. When the strands of hair became too fine, the raytracer had sampling problems. “We’d get lots of chattering,” Kramer says. “So Rob [Bredow] came up with a control so the hairs would never be smaller than the size of a pixel.” Once the characters moved away from camera and the hairs became too small, the system automatically thickened them and used transparency to render the fatter hair.
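
One plausible way to express that control, assuming a per-strand width clamp with opacity compensation (the function below is an illustration, not Bredow’s actual implementation), is:

```python
# Illustrative hair-width clamp: if a strand is thinner than a pixel's footprint
# at its distance from camera, fatten it to the pixel footprint and scale its
# opacity down by the same ratio so its apparent coverage stays the same.
def clamp_hair_width(true_width, pixel_footprint, opacity=1.0):
    """Return (render_width, render_opacity) for one strand."""
    if true_width >= pixel_footprint:
        return true_width, opacity
    ratio = true_width / pixel_footprint           # how much we over-thickened
    return pixel_footprint, opacity * ratio        # thinner look via transparency

# A 0.01-unit strand seen where one pixel covers 0.05 units:
print(clamp_hair_width(0.01, 0.05))  # (0.05, 0.2)
```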

“We accumulated a lot of transparency,” Kramer says. “But we didn’t have aliasing artifacts, which would have been expensive. The big challenge with inexperienced lighters was optimizing scenes to make the raytracing efficient.”

Arnold provided other advantages, though. Because the raytracer rendered entire scenes, and because the modelers built entire city blocks, the artists could freely move the camera and see fully rendered scenes. Kramer provides an example: “We were test-shattering a building by dropping a banana into it. The rendering artist placed cameras all around the building so we could intercut close up and far away to create the shot. To our surprise, when we looked at the backside of the building, we saw glass busting out. We didn’t know that was happening. Being able to set up lights for one simulation and get coverage without having to explicitly dial each view was cool.”

Food Fight
Although modelers, animators, and lighters adhered closely to the production artwork in order to create the stylized, saturated look of the film, effects artists were left largely on their own. “The only area where production designers gave us guidance, but didn’t take us all the way through, was in the area of effects,” Bredow says. “They’d give us inspirational paintings, and our effects supervisor Dan Kramer and his team worked out how things moved. It was great to give them a playground for blowing our socks off.”

At first, normal-size food rains down, but in the third act, the townspeople must fend off massive amounts of colossal chow. “We drop food that’s made of more than one piece, and that interacts with whatever it hits,” Bredow says. “A pancake hits a school. A jalapeño pepper explodes into a giant fireball. The interactions are extreme.”

To handle the dynamics—the dropping food and the destruction it caused—the crew used the Open Dynamics Engine (ODE), a rigid-body dynamics solver that the studio had integrated into Houdini for previous films. By connecting many rigid bodies with joints, they could simulate soft bodies, like food, as well as hard bodies, like buildings, and more easily simulate the interaction between them.

In sum, operators in Houdini shattered objects into complex, low-resolution parts connected with breakable constraints, simulated through ODE, and parented to high-resolution geometry that the team rendered through Arnold.

More specifically, although ODE easily handles simple volumes, the Cloudy team gave the dynamics engine the ability to calculate the movement of convex hulls so that it could handle organically shaped food, like bananas and hamburgers, as well as buildings.

“ODE is fast and stable when you use spheres and boxes,” Davies says. “But, it takes a lot of manual setup to approximate an arbitrary-shaped object, such as a turkey leg. You can’t easily represent it with a sphere or box, but if you can divide it into sections and each is a convex body, you can approximate the exact shape of many pieces. By using convex shapes, we could build any arbitrary shapes we needed. We got less interpenetration between objects, the simulation was faster, and it looked more realistic.”

To begin, the effects team used operators in Houdini to divide objects into convex shapes via a program called Qhull, which shatters volumes into convex hull parts based on predefined patterns. The hamburger bun, for example, became 24 rigid bodies bound together loosely. “Fortunately, it all happens procedurally in Houdini,” Davies says. “If you give it a shard, it returns a convex shape that wraps around it. We compiled those and fed them into the simulator.”
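
For readers who want to experiment, SciPy wraps the same Qhull library; a minimal sketch that wraps a shard’s points in a convex hull proxy might look like this (the point cloud is a stand-in, not production data):

```python
# Wrap an arbitrary "shard" point cloud in a convex hull -- the kind of proxy
# shape the effects team fed to the rigid-body solver.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
shard_points = rng.random((200, 3))            # stand-in for one shard's vertices

hull = ConvexHull(shard_points)                # SciPy calls into Qhull here
proxy_vertices = shard_points[hull.vertices]   # corners of the wrapping convex shape
proxy_faces = hull.simplices                   # triangles of the hull surface

print(len(proxy_vertices), "hull vertices,", len(proxy_faces), "hull faces,",
      "volume", round(hull.volume, 3))
```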

A “shard” might be pieces of pipes, drywall and studs, and other internal geometry for a house, or a hierarchy of shapes that make up a banana or a hamburger. “When you see a rough approximation of a banana, it looks like armor, with all these rigid shapes parented together in a chain,” Kramer says. “To rain hamburgers, we had many semi-rigid convex bodies stuck together with flexible constraints and layers for the buns, meat, lettuce, tomato, and pickles. Each was its own collision object, and all the layers had breakable constraints. When a hamburger hits the ground, the bun might pop off and a tomato might roll away on its edge. It was all based on physics and dynamics.”

Mathematical glue held the rigid-body parts together during the simulation; the joints between could be breakable or soft. Constraints specified how much force would break the glue for breakable joints to, for example, shatter a building into pieces, pop apart the layers of a hamburger, or, for soft joints, bend a noodle.
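
A minimal sketch of that breakable “glue,” using a hypothetical joint class rather than the actual ODE/Houdini setup, might read:

```python
# Illustrative breakable "glue" joint: it releases when the force across it
# exceeds a threshold -- the behavior used to pop hamburger layers apart or
# shatter a building -- while soft joints simply bend instead of snapping.
class GlueJoint:
    def __init__(self, body_a, body_b, break_force, soft=False):
        self.body_a, self.body_b = body_a, body_b
        self.break_force = break_force
        self.soft = soft            # soft joints bend (noodles) rather than break
        self.broken = False

    def apply(self, force_across_joint):
        if self.broken:
            return "free"
        if not self.soft and force_across_joint > self.break_force:
            self.broken = True      # the glue gives way; the bodies separate
            return "broke"
        return "held"

bun_to_patty = GlueJoint("bun", "patty", break_force=50.0)
print(bun_to_patty.apply(10.0))    # 'held' while the burger falls
print(bun_to_patty.apply(120.0))   # 'broke' on impact with the ground
```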

“Once the simulation is done,” Davies says, “we have tools for mapping the high-resolution data onto the rigid-body representation. If something bends or crumples, we used a polygonal bind to have the high-resolution geometry reflect what was going on in the sim.” In other words, the high-resolution banana is bound and deformed to match the movement of the low-resolution, rigid-body “armor.”
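
A rough sketch of such a bind, assuming a nearest-piece assignment and per-piece transform deltas (an illustration of the idea, not the studio’s tool), could look like this:

```python
# Bind each high-resolution vertex to its nearest rigid-body proxy piece, then
# move it by that piece's change of transform so the render geometry follows
# the simulation. Illustrative only.
import numpy as np

def bind_vertices(hires_verts, proxy_centers):
    """For each high-res vertex, return the index of the closest proxy piece."""
    dists = np.linalg.norm(hires_verts[:, None, :] - proxy_centers[None, :, :], axis=2)
    return dists.argmin(axis=1)

def deform(hires_verts, binding, rest_xforms, sim_xforms):
    """Move each vertex by its piece's change of transform (lists of 4x4 matrices)."""
    out = np.empty_like(hires_verts)
    for i, piece in enumerate(binding):
        delta = sim_xforms[piece] @ np.linalg.inv(rest_xforms[piece])
        point = np.append(hires_verts[i], 1.0)   # homogeneous point
        out[i] = (delta @ point)[:3]
    return out

verts = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]])     # two high-res vertices
centers = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]])   # two rigid-body pieces
rest = [np.eye(4), np.eye(4)]
moved = np.eye(4); moved[:3, 3] = [0.0, 2.0, 0.0]        # second piece moved 2 units up
print(deform(verts, bind_vertices(verts, centers), rest, [np.eye(4), moved]))
```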


TDs created the spaghetti tornado using chains of rigid-body boxes to facilitate collisions with other objects, and then rendered a curve drawn through each chain with noodle geometry.

When the bouncing burgers numbered in the thousands, the effects artists would drop 20 or so, then bake the resulting animation cycles out and attach them to particles later, again using Houdini operators. “The particles would tell the system, ‘Hey, I’ve hit the ground,’ and that would trigger a new animation state,” Kramer explains.
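
A toy version of that trigger, with hypothetical state names standing in for the baked cycles, might look like this:

```python
# Illustrative particle state machine: a falling burger switches to a baked
# "bounce" clip the frame it reports hitting the ground.
class FoodParticle:
    def __init__(self):
        self.state = "falling"      # which baked animation cycle is attached
        self.clip_frame = 0

    def update(self, hit_ground):
        if self.state == "falling" and hit_ground:
            self.state = "bounce"   # trigger the pre-baked impact cycle
            self.clip_frame = 0
        else:
            self.clip_frame += 1

particle = FoodParticle()
for frame, hit in enumerate([False, False, True, False]):
    particle.update(hit)
    print(frame, particle.state, particle.clip_frame)
```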

One of the most dramatic sequences in the film revolves around a tornado made of spaghetti—and meatballs—that rains down on people in a restaurant.

“Each strand of spaghetti was a chain of rigid-body boxes,” Davies says. “Using our [Houdini] tool sets, we built colliders for the characters and the tables in the restaurant. The rigid-body boxes in the chains could interact with the characters and tables, and could pile up on each other. For each chain, we generated a curve, and then generated noodle geometry for each curve that we sent to the renderer. I think the result looked convincing—technically and artistically.” And, funny.
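
As an illustration of the curve step, a simple Catmull-Rom spline drawn through a chain’s box centers (illustrative code, not the production tool set) could be sampled like this:

```python
# Sample a smooth curve through the centers of one noodle's rigid-body boxes --
# the curve that noodle geometry would then be swept along.
import numpy as np

def catmull_rom(points, samples_per_span=8):
    """Sample a Catmull-Rom spline through the given chain of box centers."""
    pts = np.asarray(points, dtype=float)
    padded = np.vstack([pts[0], pts, pts[-1]])   # duplicate the endpoints
    curve = []
    for i in range(1, len(padded) - 2):
        p0, p1, p2, p3 = padded[i - 1], padded[i], padded[i + 1], padded[i + 2]
        for t in np.linspace(0.0, 1.0, samples_per_span, endpoint=False):
            curve.append(0.5 * ((2 * p1) + (-p0 + p2) * t
                         + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
                         + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3))
    curve.append(pts[-1])
    return np.array(curve)

box_centers = [(0, 0, 0), (1, 0.5, 0), (2, 0.3, 0.4), (3, 1.0, 0.2)]
print(catmull_rom(box_centers).shape)   # (25, 3): a smooth path through 4 boxes
```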

“This film never got old for me,” Kramer says. “Even after seeing shots many times, I still laughed. The crew would be cracking up in dailies every day.”

One reason might have been that the directors encouraged them to come up with gags.

“We had such creative freedom,” Bredow says. “We had ideas come straight off a technical director’s desk and into the film.” He gives one example in which animators had posed Flint, who nearly electrocutes himself while working in a high-voltage area. The technical director took that cartoony pose, put a skeleton that he designed inside, and lit it to expose the skeleton. “The directors and production designer flipped,” Bredow says. “It was like Wile E. Coyote. They loved it.”

Fun with Stereo

Senior CG supervisor Grant Madden Anderson, who had last worked on Beowulf, took on the role of stereoscopic supervisor for Cloudy with a Chance of Meatballs. While some stereo films these days are using depth subtly, this film provided the perfect settings for stereo gags.

“This is a fun movie, so we could bring objects into the audience,” Anderson says. “You don’t want to do too much of that, but audiences seem to like it, and it can be fun. We had food falling, being thrown, exploding. It’s the very nature of this film, and also a personal preference, to have things coming out into the audience. It was like being in a food fight. It was a blast.”

The first scenes in the film, before Flint invents a way to turn water into food, are flat, and then as the film progresses, the scenes get deeper. “We have a shot at the end where Flint is holding onto the machine for dear life while it’s shooting food out the bottom,” Anderson explains. “We pull his legs way out into the audience, and you wonder whether he’ll make it or he’ll fall. It added to the sense of peril.”

Working with Anderson was a small team of assistant technical directors and camera artists who helped manipulate the left-eye and right-eye cameras to give the scenes depth and the characters roundness.

“Basically, we create two 3D cameras and dial two parameters: convergence and interocular,” Anderson details. Convergence, which describes how cameras are angled, determines the depth. If you angle the cameras in, objects push back. If you angle them out, objects push forward. Interocular, the distance between the cameras, stretches elements within a scene and, thereby, makes characters look rounder, especially ones close to the cameras. The stereo operators usually start with an interocular distance of 2.4 inches, the average space between human eyes.
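
A minimal sketch of those two dials, with hypothetical parameter names and the 2.4-inch default mentioned above, might look like this:

```python
# Build a left/right camera pair from the two stereo dials Anderson describes:
# interocular (separation between the eyes) and convergence (toe-in angle).
import math

def stereo_pair(interocular_in=2.4, convergence_deg=0.0):
    """Return left/right eye camera settings.
    Angling the cameras inward (positive convergence) pushes objects back;
    angling them outward pulls objects toward the audience."""
    half = interocular_in / 2.0
    toe_in = math.radians(convergence_deg)
    left = {"x_offset": -half, "toe_in": +toe_in}
    right = {"x_offset": +half, "toe_in": -toe_in}
    return left, right

# Start from the 2.4-inch default and angle the cameras slightly inward.
left_cam, right_cam = stereo_pair(interocular_in=2.4, convergence_deg=0.5)
print(left_cam, right_cam)
```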

“Generally, we’d dial in the interocular to get the roundness we wanted, and if that moved objects too close, we’d tweak the convergence to shove them back,” Anderson says. “If objects are too close, it’s painful. You can have only so much depth, close or far away, before it becomes too deep, so we tried to maximize that stereo budget to use it to our best advantage for every scene and still have the scenes cut together well. At the very end of the movie, we did an overall tweak of the convergence to make the scenes fit together better.”


The subject matter of this film, falling food creating catastrophic weather, created the perfect opportunity for stereo gags.

As is the case for most stereo productions, the Cloudy team used multiple cameras to help enhance story points. “Sometimes you might increase the interocular to make a character round, but that makes the whole scene too deep and separates the characters too much in depth. So, we might use multiple pairs of cameras to adjust convergence and interocular for characters and objects independently in the background, midground, and foreground.”

In one shot, for example, the mayor, who had been a small man, has eaten so much free food that he’s grown immensely. “Flint barely recognizes him,” Anderson says. “So we really tried to play with his roundness.”

One difficulty the stereo team had with Cloudy was that the scenes were so complex they couldn’t always dial in the cameras interactively. “The entire town was a model,” Anderson says. “It took so long to open the files, I’d give a camera artist parameters, say, an 8.6-pixel offset. The artist would make the change, and then we’d bring up the scene and review it.”

The huge town also made balancing the depth in the establishing shots tricky. “The city scenes used such wide lenses, we didn’t have a lot of depth at that distance,” Anderson says. “But, if we gave too much depth to objects far away, they looked like miniatures.”

One thing the crew learned with this film, though, is that they can push the depth back farther than they once thought. “We’ve learned that our eyes are more flexible than we’ve given them credit for,” Anderson says. “This stuff is all so new that for the first set of films, we concentrated on getting stereo 3D right technically. Now, we can start breaking the conventions to use it creatively.”

No better event than a food fight. —Barbara Robertson