As technology advances, game developers are given more choices when it comes to creating the next generation of protagonists and antagonists that captivate gamers for hours on end. During the past few years, as studios have become more acclimated to current-generation consoles, they have been pushing their processing power further, and as a result, gamers have seen a variety of diverse character types evolve. From photorealistic, motion-captured characters that seem to eerily live and breathe within the game worlds of Quantic Dream’s Heavy Rain to the epic, ripped-from-a-painting, blood-soaked beauty of Sony Santa Monica Studios’ God of War 3, there’s something for every graphic artist to dive into.
During the past several months, titles like Gearbox Software’s Borderlands took the shooter genre in an entirely new direction with a unique cel-shaded, “living comic” look that had never been seen before, especially running on Epic Games’ Unreal Engine 3. A derivative of that comic-book style can be seen in characters like Ryu in Capcom’s new Super Street Fighter IV, and that studio is pushing the vibrant, pop-out-of-the-screen style even further with the 2011 fight title Marvel vs. Capcom 3: Fate of Two Worlds. Other developers are focusing on story, building more robust, more human characters for titles such as 2K Games’ interactive crime story Mafia II and Visceral Games’ survival horror/adventure Dead Space 2. And then there are studios, such as Epic Games (Gears of War 3) and People Can Fly (Bulletstorm), that favor hulking, arcade-style caricatures who carry big guns and let the ammo do the talking.
Here we examine some of the unique characters in these game titles and the CG techniques used to create them.
Bulletstorm
People Can Fly/Epic Games
The folks at Epic Games liked Polish developer People Can Fly so much after working with the studio on the PC version of Gears of War that Epic bought the studio. Next spring, their first collaboration, Bulletstorm (published by Electronic Arts), will be released on the PC, PlayStation 3, and Xbox 360 platforms. The game introduces a “symphony of blood,” allowing players to methodically torture enemies before showing them mercy.
The game’s protagonist, Grayson Hunt, is a drunken space pirate who was once an elite mercenary. Cliff Bleszinski, design director at Epic Games, says Hunt was modeled after rogue antiheroes, such as Han Solo; players will journey with Hunt as he seeks revenge and, ultimately, redemption.
“Hunt is a member of Dead Echo, an elite group of mercenaries trying to keep the peace for the confederation of the galaxy,” explains Bleszinski. “He discovers that some of his commanding officers have been using him and his team to do their ill will, so he makes a decision to save his crew, and they end up in the dead of space.”
The game’s action picks up years later, with Hunt living a rogue pirate existence. After he crashes his small ship into the Ulysses, the confederation’s prized vessel, the action unfolds on Stygia, a resort planet overrun by mutants and now by confederation enemies, as well.
“Modern consoles, along with high-end game engines like Unreal Engine 3, can manage insanely detailed game characters with ease,” says Andrzej Poznanski, lead artist at People Can Fly. “Are there still restrictions and limitations? Sure, they’ll always be there, but these days it’s not about limitations, it’s about not getting overwhelmed and carried away with almost limitless possibilities.”
As Poznanski notes, good game characters need a tasteful balance of clean, simple shapes, complemented with meaningful details, which weren’t added just because there was empty space on a normal-map texture. He adds that it is important that even when players are squinting their eyes, they still clearly “get” the distinctive features of the model, including the character’s silhouette, props, and attitude.
The team at People Can Fly starts the character creation process with a mood concept drawing, “because we need to get the vibe and feel of the character before we go further,” explains Poznanski. Next, the artists make proper orthogonal drawings of the character in a default pose, and then make any adjustments necessary to ensure that the new character will work well with the studio’s standard skeletal rig for animation. “It’s important and lets us reuse all typical animations for our humanoid characters,” he adds.
To create the characters for its noir title Heavy Rain, Quantic Dream enlisted real actors to bring its CG characters to life. The artists spent a great deal of time creating realistic facial animations for the cast.
A 3D modeler then builds a base mesh, which is quickly rigged, given temporary textures, and exported to the Unreal engine so that the team can get an early feel for the character. Once the base mesh is approved, it is used to create a medium-resolution model. People Can Fly employs Pixologic’s ZBrush for this stage, then uses re-topology tools before exporting the character to Luxology’s Modo or Autodesk’s Maya or 3ds Max for further modeling. At People Can Fly, the art team usually juggles the software of choice a number of times, depending on the specific task at hand or the artist’s personal preference. Once the medium-res model is complete, the group again uses ZBrush to create a high-resolution pass, mixing default brushes with custom alphas and utilizing layers, morph targets, projections, ZSpheres, and the 2.5D tool set.
At this stage, the mesh often reaches 30 million to 40 million polygons, which are trimmed to roughly four million to five million using Pixologic’s Decimation Master plug-in. An artist then turns this decimated mesh into a low-res mesh using ZBrush’s re-topology tool, with all the details baked into it. Next, the character undergoes the time-consuming UV layout, an important technical step in creating a sharp, detailed protagonist.
To build characters for its title Bulletstorm, the team at People Can Fly uses a range of software, including ZBrush, Modo, Maya, and 3ds Max.
“Normal map, base color, and ambient occlusion are derived from the hi-res mesh, and hundreds of additional textures are layered in [Adobe’s] Photoshop,” says Poznanski. “A large part of the final effect can be attributed to Unreal’s powerful shader capabilities. We also can use the Fresnel effect, and breathe life into skin textures by emulating subsurface light scattering, and create more 3D models using bump offset mapping, and even animate geometry using vertex shaders.”
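The Fresnel and faux-subsurface effects Poznanski mentions come down to simple per-pixel math. Below is a minimal, illustrative Python sketch of two such tricks, Schlick’s common approximation of the Fresnel effect and “wrap” lighting, a widely used cheap stand-in for subsurface scattering; it is not actual Unreal shader code, and the function names and constants are this example’s own.

```python
def schlick_fresnel(cos_theta, f0=0.04):
    """Schlick's approximation of the Fresnel effect: surfaces grow more
    reflective as the view grazes the surface (cos_theta approaching 0).
    f0 is the reflectance when looking straight at the surface."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def wrapped_diffuse(n_dot_l, wrap=0.5):
    """'Wrap' lighting: light is allowed to bleed past the terminator,
    softening the falloff the way scattered light softens skin."""
    return max(0.0, (n_dot_l + wrap) / (1.0 + wrap))

# Head-on views reflect only the base amount; grazing views reflect strongly.
print(round(schlick_fresnel(1.0), 3))   # base reflectance, 0.04
print(round(schlick_fresnel(0.1), 3))   # much brighter near grazing, ~0.607

# Standard Lambert shading clamps to zero past 90 degrees; wrap lighting
# still contributes a little light there.
print(round(wrapped_diffuse(-0.2), 3))  # ~0.2 instead of 0
```

In a real material, the Fresnel term would scale the specular or rim contribution and the wrapped term would replace the plain N·L diffuse factor; both are a handful of shader instructions, which is why they are affordable on console hardware.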
At each step of the way, the character is tested in the game environment, because only after the artists see the character in a level, with in-game lighting, during actual gameplay can the final adjustments and fixes be made.
“Does the character look distinctive? Does it have screen presence? Does it work well in fast motion? Do detailed features work from a distance, or do they become meaningless noise?” asks Poznanski. “We are often forced to make significant changes at that stage, but when we’re done with them, then, and only then, can we finally say, ‘We no longer have just a character; we have a game character.’ ”
Uncharted 2: Among Thieves
Naughty Dog/Sony Computer Entertainment America
Developer Naughty Dog has pushed the idea of an interactive Hollywood action flick into new territory with the critically acclaimed Uncharted 2: Among Thieves. The group’s goal, says Hanno Hagedorn, lead artist at Naughty Dog, is to bring cinematic characters to life. The key focus is for these characters to deliver a believable performance in every way possible while meeting the studio’s high standards.
“An extremely high level of detail in our characters is crucial,” says Hagedorn. “Polygon counts can go up to 45,000, and texture resolutions up to 4000 by 2000, not only in cut-scenes, but within the game itself. Some of our high-resolution meshes went beyond the 100 million mark, which can be a challenge to work with. Despite all those numbers, the quality and believability of the final product purely comes down to what the artist is able to deliver.”
Naughty Dog begins with a concept that outlines the major attributes of the character. For the characters’ faces, the artists use a mixture of concepts, reference photos, and photos of actors. Giving the game characters a rough resemblance to their actor counterparts helps with delivering a solid performance across the board.
Naughty Dog’s game Uncharted 2: Among Thieves brings the cinematic characters to interactive life. To do this successfully, the team made sure the characters delivered a compelling performance.
When it comes to the look and personality of the characters, the artists work closely with creative director and writer Amy Hennig. In the end, it’s all about creating a character that fits and works with the story, notes Hagedorn. The Naughty Dog group uses motion capture as the base for the characters’ body animations, but all the facial performances are still 100 percent hand animated. “We’re actually proud of that fact,” he says. “Hand-animating facial movements goes along great with the stylized look of our characters and helps us avoid the biggest issues of the Uncanny Valley.”
When it comes to sculpting, the majority of the team use ZBrush, but some of the guys stick with Autodesk’s Mudbox. In the end, each artist picks his or her weapon of choice to deliver the best performance. For texturing, the artists at Naughty Dog use a mixture of Mudbox and Photoshop, and a little bit of ZBrush’s Polypaint once in a while. The ability of Mudbox to display and paint on normal and specular maps can be a great help, too, Hagedorn adds.
“In general, we put a big emphasis on maintaining an artistic, hand-painted look,” says Hagedorn. “Therefore, using photos as textures is not the path that works for us the majority of the time.”
However, the artists sometimes use photorealistic textures for minor surfaces, such as fabric patterns. The company’s shader system is hooked into Maya, enabling the artists to get a real-time preview of their shaders within Maya itself. The preview doesn’t take any postprocessing effects into account, but it is close enough to ensure a sophisticated workflow, Hagedorn maintains. It also allows the group to dynamically select the resolution for each texture separately without having to re-export any assets. “Using this feature is a great help in optimizing our assets,” he adds.
To satisfy the technical directors and to get better skinning results, Naughty Dog uses quad-heavy in-game meshes. One side effect is that the more evenly distributed polygons give good auto-LOD results. The other is that this, in combination with the polygon densities, allows the team to effectively use its in-game face meshes as its sculpting bases. As a result, the team can spend more time sculpting, thereby significantly easing various processes, such as the creation of wrinkle maps.
For Naughty Dog’s cinematic skin, the crew uses texture-space diffusion. The artists bake the lighting information into a separate map, which is blurred with different widths for the red, green, and blue channels; the five blurs are then combined into a single 12-tap blur kernel. For hair, the studio uses the Kajiya-Kay hair-shading model, giving the hair its anisotropic look. The group tweaks the shadowing so that the hair does not self-shadow, but instead uses a diffuse falloff that wraps around the hair strands. The direction of the sun used for the specular is always set at a grazing angle.
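The Kajiya-Kay model Hagedorn describes computes its specular from the hair strand’s tangent direction rather than a surface normal. Here is a rough, illustrative Python sketch of the idea, including a wrapped diffuse falloff of the kind he mentions; this is not Naughty Dog’s shader code, and the names and exponent are this example’s own.

```python
import math

def _normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def kajiya_kay_specular(tangent, half_vec, exponent=64.0):
    """Kajiya-Kay specular: intensity depends on the angle between the strand
    tangent and the half vector, producing the anisotropic streak that runs
    across hair instead of a round normal-based highlight."""
    t_dot_h = _dot(_normalize(tangent), _normalize(half_vec))
    sin_th = math.sqrt(max(0.0, 1.0 - t_dot_h * t_dot_h))
    return sin_th ** exponent

def wrapped_hair_diffuse(n_dot_l, wrap=0.5):
    """Diffuse falloff that wraps around the strand rather than hard
    self-shadowing."""
    return max(0.0, (n_dot_l + wrap) / (1.0 + wrap))

# A half vector perpendicular to the strand gives the strongest highlight;
# one aligned with the strand gives none.
print(kajiya_kay_specular((1, 0, 0), (0, 1, 0)))  # 1.0
print(kajiya_kay_specular((1, 0, 0), (1, 0, 0)))  # 0.0
```

Setting the specular light direction at a grazing angle, as the article notes, keeps that anisotropic streak visible along the length of the hair.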
According to Hagedorn, part of the game’s story is told with the look of the characters. Not only do they change outfits on a regular basis, but these characters sometimes become physically affected by what is happening to them as the narrative plays out. For some characters, Naughty Dog has as many as four different beat-up face textures. For each outfit, there is at least one dirty or beat-up variation. In addition, the characters can get dynamically wet or dynamically accumulate snow, also affecting their appearance in the title.
Mafia II
2K Games/2K Czech
Mafia II introduces a new cast of characters and an open-world environment for players to explore through a 10-year journey spanning the 1940s and 1950s. The game follows a colorful cast of young characters who enter the violent business of organized crime. According to Jack Scalici, director of creative production at 2K Games, who served as lead writer, music supervisor, casting director, and voice director for the title, one of the team’s goals was to bring these authentic-looking characters to life and build an emotional bond between the main characters and the nonplayer characters that populate the New York-inspired city of Empire Bay.
“We examined each character’s reason for existing in the game, their relationships with one another, and we made some adjustments to ensure they all feel real and have a defined purpose,” explains Scalici. “From there, I started working with the cast. The best thing you can do for your character is to cast a good actor and let him or her become that character. I ended up using the first draft of the script I was given as more of a blueprint than a script when it came to the characters and dialog. After the dialog was written, we still didn’t consider it 100 percent final. The guys at 2K Czech have some incredible tools, and they can respond to changes very fast, so I had the freedom to improvise during recording and to completely change certain scenes if they weren’t working out in terms of how they were intended.”
The characters and their relationships with one another were vital to the story behind Mafia II. A mix of proprietary tools and middleware were used to create the protagonists and antagonists.
Joe Barbaro, who is protagonist Vito Scaletta’s best friend and wingman for most of Mafia II, was brought to life by actor Bobby Costanzo. Scalici describes Barbaro as the life of the party but someone who is going to end up in a fight by the end of the night. Although it might seem like there were Hollywood inspirations for Mafia II, Scalici maintains that he did not watch any movies or TV shows to help craft these virtual characters.
“The development of Joe from what he was at the start of the process to what he has now become is a result of me putting a lot of myself into him, along with little pieces of so many guys I knew growing up in New York,” says Scalici. “The Godfather is one of my favorite movies, but for this game, we wanted our characters to be real wise guys, not an idealized vision of what you see in that film. Plus, we certainly didn’t want them to be the stereotypes you see in so many movies.”
The team at 2K Czech used its proprietary Illusion Engine to bring these characters to life, along with third-party middleware such as Autodesk’s Kynapse for AI, PhysX for physics simulation, and FaceFX for in-game facial animation. According to Denby Grace, senior producer on the title at 2K Games, the engine allowed the team to fully realize its vision for the game; as a result, the artists were able to deliver a hugely detailed, destructible world that loads without the player incurring any wait time after entering the city.
“The main difference between Mafia I and Mafia II in terms of technology has been the dramatic increase of texture resolution and poly count (from hundreds to thousands),” explains Ivan Rylka, lead character artist on Mafia II at 2K Czech. “Civilian characters have 4000 triangles on average, while major characters exceed 6500 triangles; Vito, the protagonist, has nearly 10,000 triangles.”
This higher visual credibility was achieved through complicated shaders, as well as using normal maps for wrinkles and expressions, and facial animation through FaceFX technology. Rylka says that physically simulated cloth on a wide range of Vito’s outfits was also something the team couldn’t have done in Mafia I.
“During the process of character production, we also used ZBrush for high-res models, which gave us incredible detail to bake into the normal maps of our in-game models created in 3ds Max,” details Rylka.
Grace believes that this sequel ultimately benefited from a larger development budget, thanks to the success of the original title. That allowed the team to provide more depth for not only how these characters look, but how they behave in the game.
“Everyone who has played the game has said the same thing to me: Our characters feel like real wise guys, and the story has a mob feel and atmosphere that’s there in a big way,” relays Scalici. “What many of them don’t realize is that this is achieved without the characters ever using the words ‘respect’ and ‘honor,’ and when you hear the word ‘family’ in Mafia II, 99 percent of the time it’s the main character talking about his mother and sister. Like the first Mafia game in 2002, Mafia II is not a story about the mafia. It’s the story of a regular guy who ends up in the mafia, and all the risks, rewards, and consequences that go along with it.”
Dead Space 2
Electronic Arts/Visceral Games
Electronic Arts brought gamers face to face with evil when it introduced Dead Space and its “strategic dismemberment” to shooters back in 2008. Following the 2009 Wii prequel Dead Space: Extraction, the game gets its first sequel this winter with Dead Space 2. It’s been three years in game time since engineer Isaac Clarke faced off against the Necromorph monsters aboard the mining ship Ishimura in the original game, and since that time, technological advances and upgrades to the Dead Space Engine have allowed the development team to further explore this protagonist. For one thing, players will actually see Clarke’s face and hear his voice for the first time.
“In the first game, Isaac always has his helmet on, so there was no facial work necessarily,” says Ian Milham, art director for the Dead Space franchise. “This time, it’s a much more complicated rig. All the shaders have been punched up.”
According to Milham, Clarke’s helmet can fold away, revealing a full head underneath. He’s been given full mocap performances for his body, which now has more fluid, lifelike movement, and for his uncovered human face, which has the ability to emote. Milham says the team came up with this dynamic system for the helmet to fold away so it could really play with the emotion on Clarke’s face. “All the technology had to go up to match,” says Milham. “A lot of it wasn’t necessarily technological advances as much as it was re-budgeting for a character that was much more complex and featured more bones, more shading, and more texture to support a greater fidelity of performance.”
The character’s space suit also was upgraded for gameplay purposes. Players will now have full control of Clarke in zero-gravity combat, so the suit has flaps, rakes, and jets that respond to player input. The visual upgrades to the suit are immediately recognizable and are the result of the pipeline the team employed for the sequel. Because the world of Dead Space is rooted in reality, all the engineering, starting with the early concept art, has to actually work and have real fundamentals behind it.
“Rather than using CG to fake-transform how Isaac’s helmet folds up and away, we created engineering schematics so that the helmet actually works,” says Milham. He explains the process: “Those get passed to a modeler who does a high-res base model in Maya. It’s not all done in ZBrush because sometimes you’re just doing panel lines and that sort of business. That high-res base model in Maya is taken into ZBrush and up-res’d and done up completely. Once that’s approved, a low res is done in Maya, and the normal maps off the ZBrush version are brought in. That gets passed off, and then a base set of textures is done for that. Next, a specific shader tech comes in and does the final shader punch-up. Our character’s pretty unique in that he has his health bars built in, so there’s actual gameplay information playing on the character. He has different helmet glows and things like that, so it goes through a whole technological pass before it is finaled up. Then it goes on to rigging, and everything else.”
Since Dead Space is a horror game, the environments the player will explore are dark and foreboding. Milham and his team are dealing with a world that tends to have a huge number of lights that are moving, animating, and flickering, but with relatively quick falloffs. As a result, they have tweaked much of the shading for the character.
“Because Isaac’s suit is now more shiny and metallic, there’s much more stuff for those lights to chew on,” explains Milham. “That’s just as much thinking back to the design as it was in the shading. We concentrated our new shader upgrading primarily on things that would pay off on a world that has a lot of light sources, as opposed to being outside where there is one sun. We’re on a spaceship with lots of blinking lights and lots of stuff moving around all the time.”
Part of that movement comes from the fact that this world, and those lights, are completely destructible, which meant more work for the team to bring the resulting effects to life as the player tears through the environments. The game employs live specularity and real-time reflections; in contrast, the original game used more canned content, with prebaked environment maps or cubic environment mapping. The end result is a character that fully comes to life, with a more realistic look and a new voice, whether he’s barking commands at his team or navigating the dark corridors, waiting to unleash his weapons on the aliens.
Heavy Rain
Quantic Dream/Sony Computer Entertainment
Developer Quantic Dream first pushed the envelope of interactive entertainment with its 2005 PC, Xbox, and PlayStation 2 title Indigo Prophecy. Since that time, the French developer has focused on its dream project: Heavy Rain. This PlayStation 3 exclusive was created using new technology that allowed producer/writer David Cage and his team to utilize real actors to bring virtual characters to life.
The game introduces four unique characters that the player interacts with throughout the noir thriller: Ethan Mars, an architect suffering from mental and emotional instability; journalist Madison Paige; FBI agent Norman Jayden; and private investigator Scott Shelby. Every decision made in the game directly impacts the outcome of the story, which involves a missing boy and the hunt for the so-called Origami serial killer. As Cage explains, these characters were critical to the gameplay: He wanted players to invest not only time, but also emotion, in them as they played through the game’s chapters.
“One of our ambitions was to create highly believable characters that look and move in a realistic fashion and express subtle emotions, captured from real actors’ performances,” says Thierry Prodhomme, lead character designer at Quantic Dream. “It necessitated rethinking the complete production pipeline and developing specific techniques and tools that we then used.”
To ensure a cinematographic vision of this film-noir thriller, the studio built a character team comprising concept artists, fashion designers, and 3D artists. “Fashion designers were in charge of defining the mood and style of each character, concept artists provided turnarounds and proportions, and 3D artists produced the final models,” explains Christophe Brusseaux, art director at Quantic Dream.
Isaac’s suit in Dead Space 2 features a fold-away helmet to reveal the character’s emotion, which is apparent in his face.
Quantic Dream cast real actors to perform live acting, motion-capture shooting, and voice recording, and used 3D scans of actors’ faces. A cast of 70 actors worked on the massive game, which required a record amount of work. The 3D scans were mostly used as templates for the artists, who worked in Autodesk’s Maya. Accompanying photo sessions provided all the skin details in high resolution.
To model the characters, the group first resurfaced low-resolution models of the faces with edge loops dedicated to facial deformations and animations. Then the artists created the high-resolution models in Pixologic’s ZBrush, unfolded UVs, and built the skin shader using specifically developed proprietary tools.
“Our proprietary Materials Editor is based on a nodal shading network system similar to Maya Hypershade,” says Brusseaux. “With this system, we can easily create a lot of complex shaders, in particular, skin shaders, including SSS, translucency, and thickness.”
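The nodal approach Brusseaux describes, familiar from Maya’s Hypershade, treats a shader as a graph of small computation nodes, each producing one value from its upstream inputs. The toy Python illustration below shows only the evaluation idea; the class names and the sample graph are invented for this example and have nothing to do with Quantic Dream’s actual Materials Editor.

```python
class Node:
    """A shading node: a function applied to the outputs of upstream nodes."""
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs

    def evaluate(self):
        # Pull values through the graph by evaluating upstream nodes first.
        return self.fn(*(node.evaluate() for node in self.inputs))

class Const(Node):
    """A leaf node holding a fixed value (a texture sample, a color, etc.)."""
    def __init__(self, value):
        self.value = value

    def evaluate(self):
        return self.value

# Wire a tiny network: diffuse * light, plus a fake "translucency" term.
diffuse = Const(0.8)
light   = Const(0.5)
thin    = Const(0.1)
lit     = Node(lambda d, l: d * l, diffuse, light)
shaded  = Node(lambda a, b: a + b, lit, thin)

print(shaded.evaluate())  # ≈ 0.5
```

The appeal of the approach is exactly what Brusseaux notes: complex effects such as subsurface scattering or translucency become just more nodes wired into the same graph, rather than hand-written one-off shaders.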
To bring Heavy Rain’s characters to life, Quantic Dream built an in-house mocap studio. There, the team filmed multiple actors on stage at the same time using props and basic sets. For the characters’ bodies, the crew used motion-captured data as a reference for building volumes and proportions. An initial skeleton was built for producing a body placeholder used by the animation team as a first reference and by the 3D artists for creating the final body model.
When the animators completed the basic skinning, they built an exoskeleton for each model: This additional skeleton was driven by the main skeleton, and contains automatic expressions, enabling special behaviors and increasing the quality of the mesh deformation.
For the faces, the group used only raw mocap data, captured in the company’s sound studio at the same time as the voice acting. A special marker set was used to capture the facial movements, while for the body motion, the crew used 1.5mm markers along with a Vicon setup that includes 14 MX cameras.
“For aesthetical reasons, we didn’t use blendshapes, as these often look too robotic,” says Prodhomme. “We focused on capturing raw mocap data to avoid heavy post animation work, which also retained the maximum information from the original performance of the actor.”
The animators also produced a special rig that handles 76 base bones for the body, 105 for the face, and 60 for the exoskeleton.
To enhance the characters’ appearances, the team used Havok Cloth to dynamically simulate certain clothing, such as long trench coats, as well as hair and special props.
On traditional shots, the team used classical light setups for directional, spot, and ambient lighting. However, Brusseaux noticed during production that specific, highly cinematic close-up shots required a higher quality of lighting. To solve this, Quantic Dream’s R&D team developed a tool to manage these special high-resolution lights, integrating the technology into the real-time directing bench used by the camera team to set all in-game cameras and to “direct,” among other segments, the real-time, in-game cinematic sequences.
“Technology, tools, and pipelines have greatly evolved during the past few years to allow us to create highly believable characters,” says Brusseaux. “The time when artists alone were crafting characters and animating them is probably over. By using scanning and motion-capture technologies, as well as through the use of advanced shading, skinning, and lighting tools, we were able to capture the performance of real actors, produce highly realistic characters, and bring them to life in a way that, we think, has further pushed the boundaries of emotion in games.”
Given the success of the game and the team’s ability to avoid the Uncanny Valley criticism that has plagued even some Hollywood CG films in recent years, Quantic Dream’s pipeline has solved many problems and opened a new doorway into character creation. By utilizing real actors and adding another layer of emotion to the game, the studio has pushed the boundaries of interactive entertainment.
And Quantic Dream is not resting on its laurels; the studio is already working on its next project, and as the game industry looks ahead to the next generation of hardware, this pipeline will breathe life into even more believable virtual characters in the near future.
Playing With Character
At the end of the day, many of today’s videogame characters have become as well rounded as anything seen on the big screen or on television. Technology has given the current generation of artists and character creators the ability to craft unique heroes, heroines, and villains using the methods that they prefer.
Ultimately, whether using motion capture or cel shading, these characters are leaving an indelible mark not only in gaming, but in the broader entertainment landscape. Hollywood has taken notice, developing big-screen versions of games like Gears of War, WarCraft, Uncharted, Dead Space, and EverQuest. Jerry Bruckheimer elected to turn Prince of Persia into a summer movie—and potential franchise—because of the character and story that Jordan Mechner created with Ubisoft.
Moving forward, these more believable game characters will migrate across media more easily. As gamers already know, one reason is that many of these characters stay with players long after the power button has been turned off.
John Gaudiosi has been covering the world of video games and the convergence of Hollywood and computer graphics for the past 16 years for outlets like The Washington Post, Wired Magazine, Reuters, AOL Games, and Gamerlive.tv.