Like in the Movies
Volume 27, Issue 9 (September 2004)

With the increasingly powerful real-time graphics capability of game consoles and PCs giving game developers the tantalizing possibility of creating ever more sophisticated graphics, the word "cinematic" is frequently used to describe the result. Often, though, "cinematic" is used as a synonym for "photorealistic," which itself is a synonym for "realistic."

Still, a few game developers have begun applying the game engine's power beyond making only realistic-looking characters and beautifully textured backgrounds. These developers are learning how to use techniques from the film world to enhance gameplay and create a more interesting experience. Thus, given the intense competition in game development, it's no wonder companies such as Electronic Arts have been actively recruiting art directors from film effects studios.

Two such senior art directors at EA, Henry LaBounta and Habib Zargarpour, have reinforced that trend by creating stunning game visuals for what have become two of EA's most successful games: the snowboarding classic SSX3 and the Need for Speed Underground racing game. The racing game alone has reportedly sold more than seven million copies.

LaBounta joined EA in Vancouver, British Columbia, two and a half years ago from PDI/DreamWorks, where he was PDI's visual effects supervisor for Minority Report. Before that, he parted the Red Sea in The Prince of Egypt for DreamWorks. And prior to that, he was at Industrial Light & Magic, where he received an Oscar nomination for the visual effects in Twister along with Zargarpour, who received a second Oscar nomination for his work at ILM on The Perfect Storm. Zargarpour is equally well-known for leading the team that developed the pod race in Star Wars Episode I: The Phantom Menace under the supervision of John Knoll.

While many of the tricks and techniques that LaBounta and Zargarpour have brought to EA are visible on screen, others lie deeper, in the inner workings of a game. "I was surprised by how easily and readily I could apply tricks and techniques from films to games," says LaBounta. "I thought there would be more technical limitations than there are."

One of the big differences LaBounta and Zargarpour noticed was that game developers paid less attention to developing a style for a game during pre-production than they had experienced in the filmmaking world. "Directors of photography and cinematographers in film spend a lot of time developing a visual language," says LaBounta. "But in games, I think people were comparing what they were doing to the competition or to what was done last year when they looked for ways to make it look better."

Thus, LaBounta and Zargarpour expanded the visual development pre-production phase to further refine the concept art. "It's hard to set a high initial visual target without going through reference gathering, editing, and concept art stages," says LaBounta. "Even if you know a game or a sport in your head, it's hard to make up a beautiful image from scratch unless you're a master painter." Moreover, it costs less to create concept art than full 3D scenes. "If a concept sketch is not working, we can throw it away and do another," he says. "It's a creative process."

When Zargarpour began work on Need for Speed Underground, he started developing the look of the game well before putting the ideas into production. "In film, I would have to prove the look before we created it, but this was not the norm in the game industry," he says. "People tried using videos here and there, but we homed in on what the game could look like just as we would in a visual effects process." To accomplish this, Zargarpour's team used assets and geometry from the game to create a video that represented the visual style they were trying to achieve and showed how the effects would enhance the sense of speed.
For the game Need for Speed Underground, Habib Zargarpour, senior art director at Electronic Arts, applied such visual effects techniques as actual velocity motion blur, honed while working at Industrial Light & Magic on sequences including the pod race in Star Wars Episode I: The Phantom Menace.

"There are some games that have strong art direction, where visual goals have been applied through the game, but in a lot of cases, I think the games look the way they do by accident," Zargarpour says. "You could argue that it's better to stay free and loose to the end of development, but I think that if people plan what they want to achieve early on, everyone can work toward that goal and know what they need to develop for it in terms of programming and rendering."

Another surprise for LaBounta when he moved into games was the lack of color control during production. "When I started at EA, and I've since found this to be true at other game companies, there was no calibration for the TVs or monitors; no one was looking at the video signals on a scope. As I walked around the floor, the colors of a game would change," he explains. He hired a contractor to calibrate the monitors, and he implemented a controlled lighting environment for consistency throughout the day. "We had artists with rays of sunlight hitting their screens while they tried to do the lighting," he says. "We're not working out of a garage anymore. This is a professional environment; we need professional standards."

For Need for Speed Underground, Zargarpour was given two requirements from executive producer Chuck Osieja for creating the look of a street racing game that would take place entirely at night. "There were a lot of concerns about visibility," he says, "about not getting claustrophobic and helping players see where they were going. Second was adding to the sense of speed."

Games are often brightly lit to make it easier for players to navigate, but Zargarpour had other ideas. "The constraint of night racing spurred us to find out how we could use visual effects to guide the eye."

Drawing on his film experiences, Zargarpour brought motion blur into play. "I think this was the first use of actual velocity motion blur on a console game," he says. "It helps give a sense of speed because your eye can follow an object off screen, rather than having a random set of flickering textures."

In addition, the team used streaking lights to give a sense of speed and to help with navigation. For reference, they used a music video by singer Terri Clark in which time-lapse footage and streaking lights created an illusion of speed. "It's part of motion blur," Zargarpour says, "but it's a High Dynamic Range effect, so we made our version appear like a light trail, and it became a unique look for Underground."

The team also selectively lit areas of the environment to help with navigation, using pools of light and colors to lead the driver. "All these effects were in the video we created in pre-production after making sure our rendering programmers thought each effect was feasible," Zargarpour says. "Arn Arndt, our chief rendering programmer, led a team that identified each effect, split them out, and implemented them."
Lighting effects such as these in Need for Speed Underground were previsualized in videos using game elements before being implemented in the game. The streaking lights helped players navigate, created an illusion of speed, and gave the nighttime races a distinctive look.

"People have said that Underground isn't photoreal," adds Zargarpour, "that it's its own thing. That was the idea. It was designed for night racing, to make you feel a certain way. The wet street makes you feel like you're driving on glass, like you're flying, and it gives you a little bit of a fear of heights; the motion blur brings back the sense of speed."

Although SSX3 takes place in daylight, LaBounta, too, used lighting and rendering techniques from the world of visual effects for film to enhance gameplay. "The executive producer wanted the game to be bright, shiny, and new," says LaBounta. "In the previous games, the programmers did a great job of putting in basic capability. As I started on SSX3, I came up with a list of 30 things I wanted to be able to do. They rolled their eyes at what I was asking for, but we implemented a new approach."

Rather than trying to do all the lighting with the real-time engine, LaBounta split the effects into pre-baked and real-time. "In SSX3, we have snow, trees, rocks, cliffs, and shadows cast from trees that don't change, so they could all be done off-line," he says. "I told the programmers, 'Rendering software programs like RenderMan and Mental Ray do a good job of calculating lighting, so why not spend your time on effects that are dynamic and pre-calculate all the lighting that can be done offline?' By implementing Mental Ray for our lighting calculations, we were able to go from a sticks-and-stones lighting method to sophisticated lighting that rivals what's being done in film."

That approach even extended to using High Dynamic Range radiance techniques to light the real-time snowboarding characters and integrate them into the environment. The real-time rendering engine probed "radiant spheres," that is, spheres into which images of the environment were mapped, to calculate lights and colors that were applied to characters as they moved down the mountain. "It didn't affect performance," LaBounta says. "We bake the lighting down to an array of vectors that can be applied as efficiently as the real-time lights, such as street lamps or lights from windows, that are typically used in games."

The artists working with LaBounta warned him that to make it easier for game players, the producers would want the characters super bright, to stand out from the background rather than integrating them into the background. "That went against everything I know," says LaBounta. "I want characters to be nicely integrated, to be believable. What we did instead was use a common film trick, rim lighting, to achieve the objective of gameplay, so that even in a dark environment we could see the edges of the player."
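The rim-lighting trick LaBounta describes is simple to state mathematically: the term is brightest where the surface normal is nearly perpendicular to the view direction, which is exactly the silhouette edge. A sketch of the standard formulation (parameter names are illustrative):

```python
def rim_light(normal, view_dir, strength=1.0, power=2.0):
    """Film-style rim term: near zero on surfaces facing the camera,
    approaching full strength at the silhouette, so a character reads
    against a dark background without being uniformly brightened."""
    ndotv = max(0.0, sum(n * v for n, v in zip(normal, view_dir)))
    return strength * (1.0 - ndotv) ** power
```

Because only the edges brighten, the character stays integrated into the scene's lighting while the gameplay requirement, a readable outline, is still met.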
By implementing such color-quality controls as monitor calibration and consistent lighting conditions, Henry LaBounta could be sure everyone working on SSX3 was using the same color palette.

Similarly, Zargarpour applied standard lighting tricks from film to Need for Speed Underground. "In visual effects, one of the basic CG tricks is that to make a CG car look like a real car, all the reflections are bright, overexposed. We also put reflections into the wet streets. And, across the frame, we made the black levels black. We know that if black levels are gray, the environment looks fake. But with the exception of a few games, black typically isn't used because the players need to see everything in the scene. We weren't afraid to hit those blacks."

One important part of the visual language Zargarpour and LaBounta developed for the games they art direct is the camera—the "photo" half of "photorealism." "If you go back a few years in games," says LaBounta, "the player's character would always be dead center on the screen all the time, even if it was jumping, falling, or turning, and the camera was always locked onto that character. But with that you lose the impact of the experience, and it's not interesting visually."

Thus, for the gameplay camera in SSX3, LaBounta took advantage of the fact that the camera can anticipate some things about the player's movement in advance. "If the player was likely to have the character snowboard to the right, we'd swing the camera left to get a better view of where he is going and help the gameplay in that way," he says. "When the character jumps, we let it go up to the top of the frame; when he's falling, we let him get closer to the bottom. In film, you never have a camera lead the character; it's always lagging behind, having to keep up with the action, and we tried to apply that in the game."
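An anticipating camera of the kind LaBounta describes can be built from a target offset driven by the player's steering input, smoothed over time so the swing feels deliberate rather than twitchy. A minimal sketch (the lead and stiffness constants are hypothetical tuning values):

```python
def update_camera(cam_x, steer, dt, lead=2.0, stiffness=3.0):
    """Swing the camera opposite the steering input so the frame opens up
    in the direction the character is about to go: steering right (+1)
    moves the camera left, revealing the terrain ahead on the right.
    The camera eases toward the target instead of snapping."""
    target = -lead * steer
    return cam_x + (target - cam_x) * min(1.0, stiffness * dt)
```

Called once per frame with the frame's delta time, the camera converges on its lead position while the player holds a turn and drifts back to center when the input releases.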

For non-interactive sequences—those scripted sequences in which a character might tell the player something about the game or congratulate the player—the trick was to give the camera a little brainpower. "We didn't know which characters would be used or where the sequences would take place, so we gave the camera rules to use to create interesting camera angles," LaBounta says. "It used to be that as long as the characters were in the screen and we could see them, that was good enough. But that's like movies in the 1920s. Today, cinematographers spend a lot of time setting up the right camera angles, and we wanted to have that quality in the game visuals as well."
The foreground blends seamlessly with the background sky box, or matte painting, because the artists paid attention to perspective lines and matched the color temperature of the shadows for SSX3.

Camera effects in Need for Speed Underground were also designed to enhance gameplay. "We learned on the pod race that when you have a camera locked to a car, it doesn't feel as fast," Zargarpour says. "We struggled to apply that to the game. We knew that it couldn't hinder the game experience by not allowing you to see what you were doing. So instead of locking the camera to a car, we simulated a cameraman hanging out of another car trying to film the player's car."

To help design camera moves, the Underground team created a tool, now called ICE for in-game camera editing, that let them edit cameras in real time. "We could do cuts and preview what was going on by scrubbing through time using the PlayStation 2 controller," says Zargarpour.

Zargarpour also helped develop cinematic camera techniques for the game James Bond 007: Everything or Nothing. A cameraman follows Bond when he's racing his bike into the sun, camera shake was built in to simulate a hand-held camera, and the non-interactive sequences in the game used an aspect ratio closer to letterbox than typical game screens.

For Bond, Zargarpour's lead rendering engineer Peter Doidge-Harrison used a trick from visual effects called multi-texturing. "You don't want to see repeatability in textures, so in large areas, we use different textures at different scales," he says. "Using streaming technology, we could load different parts of the world as we needed them."
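The multi-texturing idea is that two tiling textures repeated at incommensurate scales rarely line up, so their combination hides the repeat pattern across a large surface. A toy sketch with textures modeled as functions of UV coordinates (scale values are illustrative):

```python
def multi_texture(u, v, tex_a, tex_b, scale_a=1.0, scale_b=7.3):
    """Sample two tiling textures at different scales and average them.
    Each texture wraps (tiles) via the modulo, but because the two repeat
    periods don't coincide, the blended result shows no obvious tiling."""
    a = tex_a((u * scale_a) % 1.0, (v * scale_a) % 1.0)
    b = tex_b((u * scale_b) % 1.0, (v * scale_b) % 1.0)
    return 0.5 * (a + b)
```

In a real engine the two samples come from texture units blended in hardware, but the principle, break up the repeat by layering scales, is the same.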

Zargarpour also applied his knowledge of rigid-body simulations to oversee the creation of complex explosions for Bond. "There was no such thing as too much over the top, so it was a lot of fun," he says. "We'd use the game physics when we could; otherwise we'd do rigid-body simulations in (Alias) Maya or (Discreet) 3ds max, bake those, and play them back."

Explosions weren't important in SSX3, but large vistas were, so LaBounta applied techniques learned in creating matte paintings for film. "We had a great concept artist, Tom Johnson, who had worked on Final Fantasy, create the sky box, which in film terms is a matte painting," he says. "I wanted it to look like part of the same world as the path the snowboarder is on. As people in the film world know, you can get 80 to 90 percent of the way there quickly, but that last 10 percent takes a lot of tweaking. It was a shock for the people here, how much noodling I wanted to do to nail it. But now the color temperature of the shadows matches the foreground, and the perspective lines make it look like it was all shot through the same lens."
Artistic effects such as the light bloom in the cave (above) and the expressive trees (below) were first created as concept art to develop a consistent visual language for the game SSX3.

Although, for the most part, everything in a game is rendered at the same time in one pass, LaBounta experimented with a form of real-time compositing. "The light bloom that we used in the caves and color-timing effects were filters that could be considered compositing operations," he says. "We've just scratched the tip of that iceberg. I hope that with the next generation of hardware we can use more advanced techniques such as rendering multiple passes and applying separate mattes for image processing operations. These techniques will soften that artificial hard edge of video games."
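A light bloom of the kind LaBounta describes is a classic compositing operation: threshold the frame to isolate the brightest pixels, blur that "bright pass," and add it back over the original. A one-dimensional sketch of the pipeline (thresholds and weights are illustrative):

```python
def bloom(pixels, threshold=0.8, strength=0.5):
    """Bloom as a compositing pass on a 1D row of intensities:
    1. keep only brightness above the threshold (the bright pass),
    2. blur it with a 3-tap box filter,
    3. add the blurred glow back onto the frame, clamped to 1.0."""
    bright = [max(0.0, p - threshold) for p in pixels]
    n = len(bright)
    blurred = [(bright[max(0, i - 1)] + bright[i] + bright[min(n - 1, i + 1)]) / 3.0
               for i in range(n)]
    return [min(1.0, p + strength * b) for p, b in zip(pixels, blurred)]
```

Note how the glow bleeds onto neighbors of a hot pixel, which is what softens the hard, clipped edges characteristic of real-time rendering.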

LaBounta believes that in the future, people will see scenes from video games and not be able to immediately identify them as video game images. But while recognizing that cool visuals can affect sales and reviews, the goal is always to enhance gameplay. "We have a lot of areas to explore in invoking an emotional response," he says. "In movie theaters, you get scared, you cry, you're happy because you believe the images are real. But millions of polygons and beautiful texture maps alone won't make an experience more believable. You also need to pay attention to physics in the properties of light and know how cameras work. Games aren't limited to photorealism, but we haven't achieved photorealism yet. It's a good goal to have."

Barbara Robertson is a contributing editor at Computer Graphics World and a freelance journalist specializing in computer graphics, visual effects, and animation. She can be reached at
