Masters of the Game - Stranglehold
Volume 30, Issue 11 (Nov. 2007)

There’s a war erupting in the video game world, one as intense as the fiercest firefight between Halo’s Master Chief and the dreaded Covenant. But in this war, the foes are not polygonal characters, but game developers, hell-bent on claiming the crown of the first truly “next-generation” game.

The flashpoint for the fight occurred on August 27, when game designer Ken Levine introduced the world to a spectacular vision of a dystopian underwater city called Rapture in his game BioShock. With its Art Deco deep-sea visuals and a disquieting Orwellian story line tinged with themes of lost innocence, BioShock garnered almost every Game of the Show award at E3 2006. By this past September, the title had sold more than 1.5 million copies, sending shares in its publisher, Take-Two Interactive, soaring by nearly 20 percent.

Accolades from the mainstream media were unprecedented. The New York Times waxed rhapsodic: “Intelligent, gorgeous, occasionally frightening…. Anchored by its provocative morality-based story line, sumptuous art direction, and superb voice acting, BioShock can hold its head high among the best games ever made.” Meanwhile, the Los Angeles Times proclaimed: “It also does something no other game has done to date: It makes you feel.” Finally, the Chicago Sun-Times called it “the rare, mature video game that succeeds in making you think while you play.” In response to the game’s overwhelming critical, commercial, and artistic success, Take-Two chairman Strauss Zelnick declared that the company now considered the game part of a franchise.

Surely, then, BioShock had to be the standard-bearer for a next-generation title. Not so, according to highly regarded game designer David Braben, owner of Frontier Developments. “I loved the 1930s to 1950s atmosphere of BioShock. Overall, the whole game was beautifully executed, but the gameplay itself was not ‘next-gen,’” he claimed in a recent interview. Braben also refused to bestow the “next-gen” anointment upon Halo 3, which he called “great fun, but also a little disappointing. Although there were a few nice touches and improved graphical fidelity, it hadn’t moved on that much from Halo 2.” Braben’s comments ignited a firestorm of controversy and debate in gaming forums the world over.

Obviously, as a rival developer, Braben has an ulterior motive in criticizing these games: to draw attention to his forthcoming title The Outsider (with an anticipated release of late 2009), about a fugitive CIA agent, accused of murdering the president, who chooses between exonerating himself and seeking vengeance against those who incriminated him. According to Braben, The Outsider will be a proper “next-gen” game that will outshine both Bungie’s Halo 3 and Levine’s BioShock. By his definition, a proper “next-gen” game will give players the means to affect the story line more deeply than merely choosing the “good path” or the “bad path.” 

Indeed, given the power of next-gen consoles, Braben’s expectations are not unjustified. Through advances in AI programming, crowd simulation, and animation breakthroughs such as NaturalMotion’s Dynamic Motion Synthesis, massive numbers of nonplayer characters (NPCs) are now capable of reacting to the environment and the player more intelligently than ever before. They do so with unscripted, uncanned movements that not only make the gameplay unpredictable, but also create infinite potential for the story line—in effect, individualizing each player’s story. However, collating all these newly acquired gameplay and graphical advancements into a single game remains a feat few developers have achieved.

That’s where Ubisoft’s Jade Raymond enters the fray. As if in response to Braben’s claims, the producer of the visually stunning Assassin’s Creed (which was showered with awards at last year’s E3) told the BBC that the new game from the Prince of Persia masterminds would be the next-generation trailblazer, dropping the player into a massive, fully interactive environment set in three sprawling Holy Land cities populated by thousands of intelligently reacting people, where every single thing can be grabbed, climbed, or jumped on.

“Assassin’s Creed isn’t just about looking pretty; it’s about taking gameplay to the next level. Our mandate was to define what next-gen gameplay is, and a lot of that is about crowd simulation,” Raymond says. “When you’re not doing anything, the crowd has its own AI; the people have the need to socialize, they get hungry, they rest. Then you have their reactions to what you’re doing, so the gameplay is about understanding the crowd and using social rules to stay hidden or to help in your escape at different points in your assassination.”

As for setting the new standard for games, Raymond says, “As an entertainment medium, games are really just scratching the surface. I think we are at where we were with movies when we made the transition from silent films to films that were telling a story with sound and dialog. We are just at the point of discovering what we can do with interactivity and what’s an interactive story, where you are creating a story for players. But, it really has to become the player’s story.”

So what is the current state of next gen? To find out, we’ll take a look at the inner workings of three of the most celebrated and highly anticipated games of the new era. In this issue, we will feature Midway’s Stranglehold. In the following issues, we will look at Bungie’s Halo 3 and Ubisoft’s Assassin’s Creed.

 
 
Midway’s Stranglehold incorporates next-generation gaming technology to take players into the rich, interactive world of John Woo’s 1992 action film Hard Boiled.
 
While experts predict the next-gen era will be defined by advancements in AI, it could also be a coming-of-age for the “cinematic game,” a fusion of film art and interactive storytelling that developers have long sought to perfect. And if one game can make the case for achieving that fusion, it would be Midway’s Stranglehold, the so-called spiritual sequel to John Woo’s 1992 action film Hard Boiled.

Developed in collaboration with director John Woo, Stranglehold puts players into the shoes of Chow Yun-Fat’s character, Tequila, as he battles the Russian mafia and rival Chinese Triads to rescue a kidnapped police officer. Sliding down banisters with twin pistols blazing, gamers engage in Woo’s trademark bullet-time, ballet-like gunplay as the story line takes Tequila from the congested markets of Hong Kong to the museums of Chicago. So how did Midway go about translating John Woo’s cinematic aesthetic into the interactive world?

“We studied Woo’s films heavily and made a list of the must-have features,” states director Brian Eddy. “It was important to us that the most action-packed parts of the game occurred during gameplay, and not during a put-down-the-controller cinematic. We worked hard to be authentic to the John Woo universe. If you dissect the features (spin attack, precision aim, barrage, acrobatics, Mexican standoff), they can all be traced directly to iconic moments in his films.” While Midway Chicago did all the art direction for the game, John Woo and his production company, Tiger Hill, provided storyboards for the cinematics, which specified the choreography of the action and camera work.

The visual palette of the Hong Kong settings was crucial to establishing the cinematic atmosphere of the game, as it takes the player from the dense, rain-soaked neon-lit streets of the city, with its teahouses and opulent penthouses, to the aging flotillas in the murky harbor and the decrepit bamboo structures of the Kowloon slums.

“We wanted to have locations never seen before in a game, and we wanted a great deal of variety. It was important to mesh the two primary locations in the story (Hong Kong and Chicago), as well as create a sense of the fantastic. We wanted to contrast East/West, new/old, clean/dirty, day/night, and give the player a sense that he or she was moving through a rich, detailed, and varied world,” says Eddy. “Of course, choosing locations where there’s lots of stuff to destroy was also a big factor.”

To handle the ultraviolent gunplay and the cinematic rendering of the myriad shattering objects, Midway made extensive modifications to Epic’s Unreal Engine 3. The group’s main focus was upgrading the engine to cope with thousands of dynamic objects. This meant adding new lighting systems, optimizing the rendering pipeline, replacing the physics system with Havok, and creating the tools and pipelines to author all the content required to support massively destructible worlds.

Stranglehold’s characters have a realistic yet slightly exaggerated look, developed in Maya and 3ds Max.

Aside from raw optimization, Midway also gave the artists much finer control of performance trade-offs, which allowed them to choose which aspects of the scene needed high fidelity (real-time shadows, accurate physics, and so forth) and which objects could be rendered more cheaply but in greater volume. “To sell the illusion of destruction, we knew we needed large numbers of particles and objects, but we didn’t want to compromise the entire look of the game; hence, putting this trade-off in the artists’ hands was very important to us,” Eddy says.

In addition, the team rewrote the cinema tools to handle some of the more complicated tasks the artists wanted to achieve, replaced the audio and AI systems, and made numerous refinements to the editor to improve the artists’ workflow.

Character Design
Visual design director Stephan Martiniere wanted each character to have a unique emotional resonance, forged through their facial shapes and color palette. He wanted them to be realistic, yet slightly exaggerated so each one would fall within an archetype that reflected his or her role in the story.

For example, says Eddy, the character Yung features a prominent jaw and a somewhat tapered cranium that give him a strong, almost ape-like impression. Peanut, one of the gun-fodder enemies in the Golden Kane Triad, has a tiny chin, upturned nose, and protruding, crooked front teeth that convey more of a fidgety, nervous, rat-like personality.

“Occasionally, some of the more unique features needed to be toned down for technical reasons. In some cases, certain characters became a bit more homogenous in order to share items like skeletons and animations with other characters so that we could save a bit on memory,” Eddy points out.

Midway scanned both Chow Yun-Fat and John Woo, and used the resulting meshes as reference for modeling and texturing. “In the case of Chow, we had a specific request from Tiger Hill (Woo’s studio) to reverse-age him by about 10 years so that he appeared closer to what he had looked like in the film Hard Boiled,” notes Eddy. “This meant going back to a lot of photo references from his earlier films and massaging the scanned mesh and texture map until we had captured the younger, Hard Boiled-era Chow Yun-Fat.”
 
For the hero cop Tequila, Midway digitized actor Chow Yun-Fat, using the high-res mesh as a reference for the modelers and texture artists. During the modeling process, the team reverse-aged the character, making him appear as he did in the movie Hard Boiled.

Artists modeled all the other character faces from concept images created by artist Vince Proce—essentially photo collages of facial parts taken from random people. “We would bring these photorealistic ‘sketches’ into our 3D software and begin tweaking our in-game geometry into the desired shapes, using the images as modeling guidelines,” explains Eddy. “Because the quality of our reference images was high, we would often use them as the starting point for our texturing, projecting the images directly onto the geometry and then filling in texture gaps as needed.”

Artists modeled the in-game geometry in either Autodesk’s Maya or 3ds Max. Tequila’s highest level of detail (LOD) comprised 10,000 polygons, enough to withstand close scrutiny in the cut-scenes. Other characters ranged from around 6,000 polygons in their highest LODs down to approximately 500 polygons in their lowest LODs.

For texturing, artists used Maya, 3ds Max, Pixologic’s ZBrush, and Adobe’s Photoshop to create facial maps at a resolution of 1024x1024, first creating diffuse maps while sculpting the geometry, and then adding normal maps afterward by importing the geometry and diffuse map into ZBrush, where the artists could up-res and sculpt specific features.

For primary characters, the artists also added a “facial wrinkle” normal map overlaid on the base normal map. They created fine-grain features, such as wrinkles and pores, by running a normal-map filter on the diffuse map, which could then be added as an overlay to the ZBrush-generated normal map in Photoshop. “It was similar to a workflow you can do entirely in ZBrush, but I liked having the flexibility of adjusting the larger- and smaller-scale features independently as layers in Photoshop,” says Eddy.

By using the shader systems in Unreal 3, they were able to blend portions of these “wrinkle maps” in and out, with the values driven by the facial animation. In addition to diffuse and normal maps, the characters also had specular maps that were generally created by tweaking the diffuse map.
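
To make the mechanism concrete, the sketch below shows one way such a blend could work. It is our own illustration, not Midway’s shader code, and it assumes the maps are simple arrays of tangent-space normals with an animation-driven weight.

```python
import numpy as np

def blend_wrinkle_map(base_normal, wrinkle_normal, weight):
    """Blend a wrinkle normal map over a base normal map.

    base_normal, wrinkle_normal: HxWx3 arrays of unit-length tangent-space
    normals. weight: a 0..1 value driven by the facial animation (the role
    the wrinkle bone plays in Stranglehold's system). A straight linear
    blend with renormalization is an assumption; production shaders often
    use fancier schemes.
    """
    blended = (1.0 - weight) * base_normal + weight * wrinkle_normal
    lengths = np.linalg.norm(blended, axis=-1, keepdims=True)
    return blended / np.maximum(lengths, 1e-8)
```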

One of the first characters created during the making of Stranglehold, the villainous gangster Lok, received enormous attention. Unlike the other characters, whose high-res meshes were sculpted in ZBrush, Lok’s meshes were sculpted in Maya using NURBS, which were later converted to polygons. Under the direction of Jason Kaehler and Stephan Martiniere, artists gave Lok extra jowl mass as well as slightly deformed ears.

When the high-res version was approved, the artists built the in-game mesh around it, and then created the diffuse map by projecting the concept art onto the mesh and filling in any gaps with further projections in Photoshop. Afterward, the team imported both the in-game and high-res meshes into Max to generate the normal map.

While the team fashioned the main characters in excruciating detail, they were unable to give the supporting cast of enemies and the myriad extras roaming the streets the same individuality, because the technology lacked support for body swapping.
 
Artists first created sketches of these Russian characters before modeling them in 3D. Unlike the main characters, most of the supporting cast were built from a base body model.

“We built several base models for enemies and other NPCs, and used a few methods for adding variety, such as color tinting for texture maps; however, we needed to create each enemy and NPC as a unique asset, by starting with the base body model and attaching different heads. Quite frankly, this turned into an absolute nightmare to maintain,” says Eddy. “Every time a modeling or weighting tweak was requested, those tweaks had to be propagated through each enemy of the same class, as well as their subsequent LODs. As a result, we didn’t have the bandwidth to create as much variety among our characters as we would have liked.”

Hong Kong is known for its violent rainstorms, so throughout the game, great slanting sheets of rain pelt down as flashes of lightning illuminate the scene every few seconds. Artists created a soaked look for Tequila using a material effect within Unreal 3. By increasing his specular power values, the artists made him appear more wet and shiny, and used scrolling mask tricks in the normal channel to create the effect of water droplets hitting and flowing down his face and body. In addition to the material effect, the modelers sculpted a special version of his geometry to simulate the “matted-down” hair look.
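
As a rough illustration of that material logic (the parameter names and constants below are ours, not Unreal’s), the soaked look boils down to a boosted specular term plus a continuously scrolling droplet mask:

```python
def wet_material_params(time_s, base_spec_power=12.0, wet_boost=4.0,
                        scroll_speed=(0.0, -0.35)):
    """Per-frame parameters for a hypothetical 'soaked' material.

    Raising specular power makes the surface read as wet and shiny, while
    panning a droplet mask downward through the normal channel fakes water
    running over the character. All names and numbers are illustrative.
    """
    spec_power = base_spec_power * wet_boost
    # UV offset for the panning droplet mask, wrapped into [0, 1)
    u = (scroll_speed[0] * time_s) % 1.0
    v = (scroll_speed[1] * time_s) % 1.0
    return {"specular_power": spec_power, "droplet_mask_uv_offset": (u, v)}
```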

Rigging and Animation
Created in Maya, Tequila’s facial rig consists of joystick controllers driving Set Driven Key poses for an array of expressions to match performances. The facial system for the in-game and cinematic characters works exactly the same, albeit on different LODs of facial skeletons.

The cinematic models have double the number of facial bones and dramatically increased polygon counts, with facial bone-driven normal maps. Riggers created the poses using 72 bones, an inordinately high number that required a robust MEL-script-driven tool set to aid animators in pose-saving and general animation workflow.

The team used a boned skeletal system (no blendshapes or morph targets), which Eddy says is the most efficient option for the existing Unreal Engine 3 tech.

The facial animators also used the joystick controllers to drive the animated facial normal maps created in ZBrush for the wrinkle system. The engine applied multiple wrinkle displacement maps based on data from a wrinkle bone in the facial skeleton, which, in turn, was driven by the animation rig.

“Our facial guys also made a ton of custom MEL scripts to speed the workflow, from quick import/export scripts, to creating a character facial bible for each main character that contained frequently used poses,” says Eddy.

While diving across a room, riding on a rolling cart, or sliding down a banister, Tequila can perform a whole subset of shooting maneuvers in slow motion. Called “Tequila Time,” these slowed-down gunplay acrobatics put a huge strain on the number of animations the team had to create in order to prevent pops and ticks that would be amplified in slow motion.
 
The game features a number of unique environments, including this temple, which reflect the ancient and modern cultures of Hong Kong. This imagery gives the title a cinematic feel.

Eddy describes the situation: “For instance, not only can [Tequila] dive in eight directions (which means eight base dive animations), he can do that with four weapon types (dual pistol, single pistol, ‘shotgun,’ and no weapons). So now we’re up to 32 base animations. Now consider that there’s probably an aim grid on top of that, so he can aim a bit while he’s diving. If it’s even just a simple 3x3 aim grid (three vertical columns and three horizontal rows), that’s nine aim grid poses per dive direction. Now we’re talking about 72 additional poses for just one dive aim grid.”

Eddy continues: “Fortunately, he does not need to aim if he’s weaponless, so that’s 216 aim poses and 32 base animations. That doesn’t even include the ‘hitting the floor’ reactions from each dive angle and the resulting slides, aim grids for the slide, and the ability to barrel roll on the floor.”  
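
Eddy’s arithmetic is easy to verify. The snippet below reproduces the counts from the article; the breakdown into variables is ours:

```python
# Counting Tequila's dive animations, per Eddy's figures.
dive_directions = 8
weapon_types = 4             # dual pistol, single pistol, shotgun, unarmed
aiming_weapon_types = 3      # the unarmed dives need no aim grid
aim_grid_poses = 3 * 3       # three columns x three rows

base_dives = dive_directions * weapon_types               # 32 base animations
poses_per_weapon = dive_directions * aim_grid_poses       # 72 poses
total_aim_poses = aiming_weapon_types * poses_per_weapon  # 216 poses

print(base_dives, poses_per_weapon, total_aim_poses)      # 32 72 216
```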

In total, the group created approximately 7000 animations for Tequila, including aim grid poses. To handle the blending of all these poses, Midway had to create custom scripts inside the Unreal 3 engine. “A lot of our aim grids are composed of custom additive nodes, so we can apply single-pose animations to a looping idle animation; the engine then blends between those poses to create a living, breathing aim grid for a fraction of the memory that an aim grid of full animations would cost, but at the same level of quality,” says Eddy.
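
In spirit, an additive aim node stores each aim pose as an offset from a reference pose and layers it onto whatever the idle animation is doing. The toy function below is our sketch of the idea, with Euler angles standing in for an engine’s quaternions:

```python
def additive_aim_pose(idle_pose, aim_pose, ref_pose, weight):
    """Layer a single-frame aim pose additively onto a looping idle.

    Each pose maps joint name -> [x, y, z] rotation. The aim pose is stored
    as a delta from a reference pose, so only one frame per grid point has
    to live in memory; the blend weight comes from where the player aims.
    """
    out = {}
    for joint, idle_rot in idle_pose.items():
        delta = [a - r for a, r in zip(aim_pose[joint], ref_pose[joint])]
        out[joint] = [i + weight * d for i, d in zip(idle_rot, delta)]
    return out
```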

The team motion-captured nearly 99 percent of all the character animations for the cut-scenes and the in-game animations using a Motion Analysis system installed in Midway’s Chicago studio. After editing the motion data with Motion Analysis’ proprietary software, EVa RealTime, the artists applied it to their game models using Autodesk’s MotionBuilder.

With its robust auto-rigging tools, MotionBuilder enabled the team to massage the motion and apply additional additive keyframing to amplify the actors’ performances. Finally, the group imported the moves into the Unreal engine using Epic’s proprietary ActorX plug-in for Maya.
 

Nearly every object within the game’s environments is interactive. Here in Wall City, as elsewhere in Stranglehold, this is achieved through the use of AI.implant from Engenuity.
 
Cut-scenes
After the script was polished by one of John Woo’s writers, a storyboard artist boarded each scene in close collaboration with Woo himself. “It was important to the overall pacing of the story that the cut-scenes—or ‘Woo moments’—hit certain crescendos at key moments in the story,” says Eddy. All told, the cinematics team had to produce 80 cut-scenes totaling 60 minutes. Midway cinematic director Marty Stoltz developed animatics from each storyboard, working out 3D versions with a placeholder soundtrack and simple sliding characters.

Later, the animation department used each animatic as a guide to develop the motion capture needed for each shot. Once the animation was delivered, a cinematic layout artist began working on the scene, establishing the layout and camera locations suggested by the animatics.

“Because game design and environment art ran somewhat parallel to cinematic development, the scene would be reviewed by art director Kaehler, who would provide feedback and updated environment art to make sure the continuity from game to pre-rendered cut-scene remained valid. He would also assist in the direction of the scene’s lighting pass to provide similar continuity,” says Eddy. The final pass of each scene was submitted to Woo’s Tiger Hill company for final approval.

Effects Animation
Of course, the atmosphere of the teahouses could not be complete without the thick smoke permeating the air. To simulate the veils of smoke, artists created meshes the size of skylights, extruded them along the light vector, and applied a shader with blended, panning smoke textures to provide a volumetric rendering effect. Combined with the right fog values, the technique proved quite effective, according to Eddy. The method was also used to create the “light shafts” that burst through the opaque, backlit windows on the far side of the room after they’re shot out.
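
A common way to get that layered, drifting look (consistent with the description above, though the specifics here are our assumption) is to sample the same smoke texture twice with UVs panning in different directions and combine the results:

```python
def smoke_shaft_opacity(uv, time_s, sample,
                        speed_a=(0.02, 0.01), speed_b=(-0.015, 0.02)):
    """Fake a volumetric smoke shaft from two panning texture layers.

    sample: a function mapping (u, v) to a 0..1 opacity, standing in for a
    texture lookup. Drifting two copies of the texture in different
    directions and multiplying them keeps the smoke from reading as a
    static decal. Speeds are illustrative.
    """
    u, v = uv
    a = sample(((u + speed_a[0] * time_s) % 1.0,
                (v + speed_a[1] * time_s) % 1.0))
    b = sample(((u + speed_b[0] * time_s) % 1.0,
                (v + speed_b[1] * time_s) % 1.0))
    return a * b
```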

Midway opted against a real-time weather system after finding that it reduced visibility and made some levels, such as the Slum City, almost unplayable. Instead, the team used a combination of particles and artfully placed textured planes to create rain and fog. To prevent players from getting lost in the fog, artists used sheets with shaders that make the fog recede as the camera gets close. On the flip side, the team also used smoke grenades—created with dynamically placed particle effects—to force the player to fight in areas with poor visibility.
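
The receding-fog trick amounts to fading each sheet’s opacity with camera distance, along these lines (the distances below are placeholder values, not Midway’s):

```python
def fog_sheet_opacity(camera_dist, fade_near=2.0, fade_far=10.0):
    """Opacity for a textured fog plane that recedes as the camera nears.

    Fully transparent inside fade_near, fully opaque beyond fade_far, with
    a linear ramp in between, so the player never runs face-first into a
    visible fog card.
    """
    t = (camera_dist - fade_near) / (fade_far - fade_near)
    return min(max(t, 0.0), 1.0)
```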
 
Stranglehold’s environments teem with life. Each is filled with the tiniest of details, while atmospheric effects add richness and an air of mystery.

From the vast flotillas of boats on the harbor to the rain-swept streets, Midway invested an enormous amount of energy into perfecting the water simulations. Usually, the artists began with sprite-based particles to create mist and droplets, along with artfully textured meshes to give mass and shape. “We also employed trail particles, which use a particle system to define a path, and then draw and update a dynamic mesh based on that path,” says Eddy. “Trail particles made some of our favorite effects possible, like the streams of water that come from water barrels and the water tower in Tai-O.”

To simulate the rippling surface of the harbor, artists used an unlit, translucent pixel shader on a flat plane, then added normal maps together and panned them in different directions to create the impression of waves. Using falloff, they created a simulated Fresnel effect, blending between the water diffuse texture and the reflected horizon environment map. By applying depth bias, the artists created a simple approximation of a foam “edge,” where meshes intersected the water. To simulate the fogging effects of particulate matter in the water, they again used depth bias to fade the meshes to the water color as they went deeper and deeper. With displacement maps, the team gave the waves more depth when the water was viewed from an angle.
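
The falloff-driven blend can be approximated with a Schlick-style Fresnel term (our choice of approximation; the article says only “falloff”):

```python
def water_surface_color(view_dot_normal, diffuse_rgb, horizon_rgb):
    """Blend water diffuse color against a reflected horizon map.

    view_dot_normal: cosine between the view ray and the surface normal
    (1.0 looking straight down, near 0.0 at grazing angles). The Schlick
    approximation pushes the reflected environment map in at grazing
    angles, which is the look the Stranglehold artists describe.
    """
    f0 = 0.02  # reflectance of water at normal incidence
    fresnel = f0 + (1.0 - f0) * (1.0 - view_dot_normal) ** 5
    return tuple((1.0 - fresnel) * d + fresnel * h
                 for d, h in zip(diffuse_rgb, horizon_rgb))
```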

Finally, the artists created an additional shader of “crud” bobbing in the water and applied it to a simple quad mesh, which could then be placed near retention walls and dock supports to give the illusion of floating debris accumulating in these areas. “The bobbing effect was created using displacement maps; the ‘crud’ never lines up with the waves in the water shader, but we found that you tend to buy the effect anyway,” says Eddy.

Driving all the game’s fire effects—from explosions to muzzle flashes and cigarette-lighter flames—was Cascade, the Unreal particle system. “Even on the newest hardware, high numbers of particles are still hard on a game’s performance, so we used animation sequences derived from photographic reference to make the most out of our particle counts,” adds Eddy. “Large explosions rely on more than just smoke and fire to be convincing.” Thus, the group used additional particle systems to throw tumbling debris out of exploding objects and created dynamic, physical fragments for objects, like the exploding Stilt Houses found in Tai-O. Exploding objects also create a physical impulse that acts on objects around them, leaving a devastated environment behind, now cluttered with scorched and tattered debris.
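
That physical impulse is, in essence, a radial push with distance falloff. A minimal sketch follows; the field names and linear falloff are our assumptions, not Havok’s API:

```python
def explosion_impulses(center, strength, bodies, max_radius=6.0):
    """Compute falloff impulses for rigid bodies near an explosion.

    bodies: list of dicts with a 'pos' (x, y, z) entry. Returns (body,
    impulse_vector) pairs; bodies outside max_radius are untouched.
    """
    results = []
    for body in bodies:
        offset = [p - c for p, c in zip(body["pos"], center)]
        dist = max(sum(o * o for o in offset) ** 0.5, 1e-6)
        if dist > max_radius:
            continue
        falloff = 1.0 - dist / max_radius       # linear falloff with range
        direction = [o / dist for o in offset]  # push away from the center
        results.append((body, [d * strength * falloff for d in direction]))
    return results
```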

Ironically, Eddy says, it was the muzzle flashes, which last on-screen for only a fraction of a second, that posed the greatest challenge. “Our FX artists pored through a lot of photographic reference before painting a library of dynamic muzzle-flash elements that would look convincing and visually interesting, even when they were only on screen briefly.”

Using Unreal’s dedicated particle editor, Midway’s FX artists also created the copious blood splatter throughout the game. Each blood effect uses multiple layered components, including red-tinted mist and fine droplets that give volume to the blood hits. These complement prerendered elements for the viscous, coherent shapes that make up the body of the blood impacts. The size and form of these bold, viscous shapes allowed the team to draw a visual distinction between nonfatal shots that had connected with an enemy and those that had resulted in a kill.

Midway artists spent a great deal of time perfecting the various water simulations used in the game, from the rain-soaked streets of the city (shown here) to larger bodies of water.

Universal Destructibility
Another important gameplay feature allows Tequila to take cover behind pillars and posts as enemies close in around him. But hide behind a column for too long and enemy fire will slowly chip away at it, reducing it to rubble. Universal destructibility abounds in Stranglehold (see “Mind Over Matter,” July 2007).

Early on in Stranglehold’s development, the team had a vision of taking destruction to the next level. The early concepts focused on causing objects to break specifically at the location of each bullet impact, which meant designing the content in a new way so that the progression of each object from pristine to destroyed could be guided dynamically based on where the object took damage.

“Initially, we focused on making tools to author nonlinear breaking patterns,” says Eddy. Later, the team turned its focus to optimizing the run-time versions of these systems—allowing more fragments, but also adding other ways for the content creators to augment raw geometric breaks with particles and other effects. Finally, using the Havok physics system as a base, the crew developed new technology that allowed the artists to propagate damage through an entire object and make holistic decisions about how it would react.

“We only just started to realize the potential of this system during Stranglehold, and we plan to take it a lot further in our next game,” Eddy notes.

Artists modeled the destructible objects as clusters of fragments held together by glue networks. The result is uniquely damageable objects, since any damage done to an object is location-specific; they never disintegrate in a predetermined way. Of course, this meant that each destructible asset would get a full second round of modeling, during which it would be broken into its fragments, which were hand-modeled and textured. “This was a lot of work, but it allowed us to perfectly tune the look of the damage to feel just right, whereas procedural solutions usually give you unrealistic and bland-looking results,” Eddy contends.
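
Conceptually, a glue network is a graph of fragments whose edges break near each impact; anything cut off from an anchored fragment falls away. The toy class below is our own model of that idea, not Midway’s code:

```python
from collections import defaultdict

class GlueNetwork:
    """Fragments held together by breakable 'glue' bonds."""

    def __init__(self, fragments, bonds, anchors):
        self.fragments = dict(fragments)     # id -> (x, y, z) center
        self.adj = defaultdict(set)          # bond graph between fragments
        for a, b in bonds:
            self.adj[a].add(b)
            self.adj[b].add(a)
        self.anchors = set(anchors)          # fragments fixed to the world

    def damage(self, point, radius):
        """Break every bond whose midpoint lies within radius of the hit."""
        for a in list(self.adj):
            for b in list(self.adj[a]):
                mid = [(p + q) / 2 for p, q in
                       zip(self.fragments[a], self.fragments[b])]
                if sum((m - c) ** 2 for m, c in zip(mid, point)) <= radius ** 2:
                    self.adj[a].discard(b)
                    self.adj[b].discard(a)
        return self.detached()

    def detached(self):
        """Fragments no longer connected to an anchor become loose debris."""
        seen, stack = set(), list(self.anchors)
        while stack:
            f = stack.pop()
            if f not in seen:
                seen.add(f)
                stack.extend(self.adj[f])
        return set(self.fragments) - seen
```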

If a player shoots at the corner of a table, for instance, that corner most likely will disintegrate; remove a table leg with it, and in-game physics will cause the whole table to tip. If a table is shot in the center, the table will break differently and appropriately according to the hit location. Building the objects to work that way was crucial to achieving the sense that cover was eroding relevantly during gunplay, and also to give the player a sense of control when figuring out the destruction-based puzzles. Players can control how a damaged object will fall, for instance, when using objects to set off the laser mines in the penthouse, or when using the giant totem poles in the museum as weapons against enemies.

Unfortunately, universal destructibility poses a huge challenge for the AI programmers, who have to ensure that the NPCs can respond intelligently to a constantly altering dynamic debris field. “It complicates pathfinding considerably,” says Eddy. “The really difficult problem was figuring out which objects to avoid and which ones to ignore.  Having AI that avoids all objects in this manner wasn’t going to work because there are dynamic objects everywhere; but at the same time, it’s not acceptable for the AI to get stuck behind a pile of objects that’s blocking the way. In the end, we applied a technique that we had already used to help prevent the player from getting caught up in clutter: something we call our debris-pusher system.”

As the name suggests, the system literally pushes objects out of the way; so while the AI does its best not to tangle with obstructions, if a character encounters a pile of objects that could snag him, his debris pusher kicks in and clears a path. “We also tightly integrated our interaction system with the breakable object system. Our AI knows which objects can be used for cover and the circumstances in which the cover is valid,” explains Eddy. If, for example, a table that is tagged as a cover surface breaks into two large pieces, then the AI knows that it can use either piece as cover. If one piece has the wrong orientation, though, then it might not be viable, so that is checked as well. Below a certain threshold, the fragments will be too small to provide viable cover; the AI can detect this and remove the fragments from consideration. In this way, the AI continually tracks the dynamic objects in the world, a capability that is fundamentally built into the AI’s decision-making paradigm.
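
The cover-validity checks Eddy outlines reduce to a filter over the surviving fragments, roughly like this (the size threshold and facing test are illustrative stand-ins for whatever criteria Midway actually used):

```python
def viable_cover_pieces(pieces, threat_dir, min_area=0.25, min_facing=0.5):
    """Keep only broken fragments still usable as cover.

    pieces: dicts with 'area' (exposed surface, m^2) and 'normal' (unit
    vector the cover surface faces). A piece qualifies only if it is big
    enough to hide behind and roughly faces the incoming threat.
    """
    viable = []
    for p in pieces:
        facing = sum(n * t for n, t in zip(p["normal"], threat_dir))
        if p["area"] >= min_area and facing >= min_facing:
            viable.append(p)
    return viable
```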

Indeed, the need for advanced AI is the common thread weaving together all three games vying for the crowning achievement in next-gen titles. It also reflects both Jade Raymond’s and David Braben’s vision of “next gen” as an experience that reaches beyond the “right path/wrong path” structure and lets the player engage the world and its thousands of intelligent NPCs in almost infinite ways, shaping the story uniquely each time.

Martin McEachern is an award-winning writer and contributing editor for Computer Graphics World. He can be reached at martin@globility.com.