Larger than Half-Life
Volume 27, Issue 3 (March 2004)

Half-Life 2's story line picks up where the original left off, with Gordon Freeman, a government scientist, once again battling aliens that are pouring through an inter-dimensional doorway that he unwittingly opened as a result of a botched physics experiment. While the plot may seem typical for the genre, the execution of the original, and now the sequel, by the artists at Valve was anything but. According to Valve's founder and managing director Gabe Newell, Half-Life became the new high-water mark for first-person shooters because it was the first to present a "seamless string of surprising events," a mystery that unfolded with cinematic suspense and pacing, all within a consistently dark and spooky atmosphere.

Unlike similar titles in which the gameplay entails little more than reaching a level's end or finding color-coded keys to color-coded doors, Half-Life 1 and 2 are engrossing because their goals and structural high points elicit a wider variety of emotional reactions from the player. Gordon Freeman, far from being a trigger-happy cardboard cutout, is a multi-dimensional character whose actions and decisions are complicated by simulated real-life consequences and conflicting motivations, including survival and heroic altruism.

In fact, the intensity of the player's immersion in the game is achieved through two bold departures from current conventions in computer-game character development, both of which radically affect the player's point of view. First, at a time when developers are relying more and more on elaborate cut-scenes, third-person scripted sequences, and celebrity voice-acting talent to create fully formed characters, Half-Life's protagonist never says a word throughout the entire game. Second, the game doesn't allow the player to experience the story through anyone's eyes but Freeman's, so the player is never privy to any more knowledge than the character, thus encouraging the player to empathize with Freeman. As Newell states, "You are Gordon Freeman."

The sequel finds Gordon—now in the employ of the G-Man, a mysterious briefcase-toting figure from the original—charged with saving Earth before its resources are depleted and mankind is driven into extinction by the aliens. His mission takes him to City 17, a fictitious European locale resembling a hodgepodge of such cities as Budapest, Prague, and Amsterdam, where Old World quaintness collides with high technology and futuristic flourishes. Forging across the sprawling cityscape and beyond, Gordon uncovers the ambiguous agendas of those around him, discerns friend from foe, and tries to chart the right course toward his mission's objective.

Along the way, Gordon contends with the ubiquitous Combine soldiers—the city's corrupt and heavily armored police force—as well as a rogues' gallery of aliens, including a 50-foot-tall, flea-like creature with impaling legs called a strider; the pack-hunting ant lions; the water-dwelling hydra; and the parasitic headcrabs, which commandeer human bodies by affixing themselves to a victim's skull.

According to Newell, Half-Life 2 was designed from the outset to fulfill the original game's potential as a medium for first-person storytelling. The sequel, therefore, abstains from using cutaways and cinematics that disrupt the player's subjective point of view (POV) on the narrative. In addition, the player's immersion in the first-person POV is intensified through an array of technological advancements debuting in Half-Life 2. The majority of those are embodied in the title's new game engine, dubbed Source. While the original release was powered by Quake technology from id Software, Valve spent the past four years collaborating with Havok, developers of the Havok physics engine, to deliver enhanced character physics and environmental dynamics, which allow for unprecedented interactivity with objects adhering to the laws of nature.

The game features an array of characters, including the corrupt Combine soldiers (above) and a host of unusual alien creatures such as the 50-foot-tall strider, all created using Softimage|XSI.




Half-Life 2 also marks the arrival of Softimage as a major contender vying for game-authoring software supremacy. Throwing down the gauntlet to Discreet (3ds max) and Alias Systems (Maya), Softimage worked closely with Valve, making Softimage|XSI the sole digital content creation software used to create the game and, in the process, turning the title into an impressive showcase for its real-time 3D content creation capabilities.

Half-Life 2's environments, which are spread over 10 chapters, range from small indoor areas to the war-torn City 17, a massive urban dystopia where streets are lined with burnt-out buildings, and fires smolder beneath leaden skies. Valve's unique, multi-pronged pipeline for building the levels began with the formation of a "cabal," a team of modelers, level designers, and programmers who were responsible for the design and gameplay of a particular level.

In the first branch of the pipeline, the cabal's level designers built low-resolution geometry for the environment and dressed it in plain "orange" textures that accelerated play-testing and experimentation, saving the cabal from painting detailed textures for scenery that would likely undergo substantial changes later. In the second branch of the pipeline, the programmers worked on the code for the support "entities" in the level, such as monsters, vehicles, turrets, and gates. And in the third branch, the modelers built lightweight "placeholder" geometry for the various entities. Once there was enough content to "play," the three branches converged and the cabal members commenced testing their levels using the placeholder models, orange textures, and temporary code.
The game's dark, foreboding atmosphere resulted from the use of various CG light sources. This ominous look was further enhanced with volumetric fog and effects primitives, such as smoke and fire.




During this iterative process, every idea is discussed within the cabal, and interesting ones are tried, to see if they add to the gameplay experience. Any of these experiments has the potential to radically alter the scenery or entities in a level, but with Valve's cabal-based pipeline, each level can be previsualized and play-tested with the least possible effort expended on creating detailed content that ultimately will be changed or deleted. "By separating the art and gameplay branches of production, we were able to avoid a potentially huge amount of wasted time and effort on the art side by using placeholders instead of real art," says senior engineer Rick Ellis.

The developer further streamlined its pipeline by decoupling the art asset and level design streams of production as well. All the art assets were prefabricated in XSI by a team of artists and placed in a huge "backlot" of props for all the cabals to use. Designed to be easily editable in a cut-and-paste style, much of this virtual backlot will be made available to mod authors; it will house every texture map and model used in the game, including such ready-made pieces of scenery as power stations, airplane hangars, and brick buildings, complete with a lower floor, a roof, and an expandable mid-section, as well as generic items such as barrels, desks, chairs, tables, debris, and vehicles.

Valve's world-creation Hammer tool set also features a proprietary materials system that endows these objects with the physical properties of their applied textures, including weight, density, and sound. For example, applying a brick texture to a rectangular object will automatically make it behave like a brick of that exact size; it will sink if dropped in water, explode if shot, and so forth. Objects also will acquire the friction and collision properties of their textures. Therefore, if a chair is mapped with a wood texture, it will not only float like wood and sound like wood, but, if scraped against a wall, also splinter like wood.
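To make the idea concrete, the sketch below shows, in rough C++ terms, how such a texture-keyed material lookup might behave; the structure, material names, and property values are invented for illustration and are not Valve's actual data format.

```cpp
#include <iostream>
#include <string>
#include <unordered_map>

// Hypothetical per-material physical properties, keyed by the texture's
// material type. Names and values are illustrative only.
struct SurfaceProps {
    float densityKgM3;        // determines whether the object sinks or floats
    float friction;           // governs scraping and sliding behavior
    std::string impactSound;  // played on collisions
};

int main() {
    const float kWaterDensity = 1000.0f;

    // A tiny stand-in for the material table a texture would reference.
    std::unordered_map<std::string, SurfaceProps> materials = {
        {"brick", {1900.0f, 0.9f, "physics/brick_impact.wav"}},
        {"wood",  { 650.0f, 0.6f, "physics/wood_impact.wav"}},
        {"metal", {7800.0f, 0.4f, "physics/metal_impact.wav"}},
    };

    // Applying a "wood" texture to a chair implicitly assigns wood physics.
    const SurfaceProps& chair = materials["wood"];
    std::cout << "Chair floats: " << std::boolalpha
              << (chair.densityKgM3 < kWaterDensity) << "\n";
    std::cout << "Chair impact sound: " << chair.impactSound << "\n";
    return 0;
}
```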
Half-Life 2's level designers used the new Hammer world construction tool to place objects, edit the terrain, and control the AI.




After the levels were play-tested and textured, the group lit them with lightmaps, whereby each surface's texture was paired with maps of various lighting conditions for that surface. For illuminating the props, the artists used vertex lighting to store the lighting information at the model's vertices. Depending on the demands of each scene, they drew upon a wide variety of lights, including spots, directionals, points, and ambients, to maintain the game's ominous lighting and tone.
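The following is a minimal, hypothetical sketch of the vertex-lighting idea: the summed contribution of nearby lights is baked into each vertex so it can simply be read back at draw time. The vector types and the falloff formula are assumptions made for this example, not the Source engine's code.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Sketch of per-vertex light baking: sum each light's diffuse contribution
// at a vertex and store the result with the vertex.
struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  Sub(const Vec3& a, const Vec3& b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static float Length(const Vec3& v) { return std::sqrt(Dot(v, v)); }
static Vec3  Normalize(const Vec3& v) { float l = Length(v); return {v.x/l, v.y/l, v.z/l}; }

struct Vertex     { Vec3 position; Vec3 normal; float litIntensity; };
struct PointLight { Vec3 position; float intensity; };

void BakeVertexLighting(std::vector<Vertex>& verts,
                        const std::vector<PointLight>& lights,
                        float ambient) {
    for (Vertex& v : verts) {
        float total = ambient;
        for (const PointLight& l : lights) {
            Vec3 toLight  = Sub(l.position, v.position);
            float lambert = std::max(0.0f, Dot(v.normal, Normalize(toLight)));
            float dist    = Length(toLight);
            total += l.intensity * lambert / (1.0f + dist * dist);  // simple falloff
        }
        v.litIntensity = std::min(total, 1.0f);  // stored per vertex, read at draw time
    }
}

int main() {
    std::vector<Vertex> verts = {{{0, 0, 0}, {0, 1, 0}, 0.0f}};
    std::vector<PointLight> lights = {{{0, 2, 0}, 1.0f}};
    BakeVertexLighting(verts, lights, /*ambient=*/0.1f);
    return verts[0].litIntensity > 0.1f ? 0 : 1;
}
```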

The dark, foreboding atmosphere overhanging the entire adventure also was enhanced with volumetric fog, one of Source's numerous effects "primitives," which also include fire and smoke. "On top of all this, Source's shaders allowed us to make virtually any effect we could dream of," says Ellis, noting that no off-the-shelf tools or plug-ins were used for the special effects.

One of the pivotal sequences in the game unfolds on a dried-up seafloor, where Freeman tangles with soldiers and ant lions, and speeds across a sandy gorge in a land buggy to elude an aerial assault by alien hovercraft, his only shelter provided by the rocky terrain, some overturned cars, and a half-sunken skeleton of a submarine. While the level displays Hammer's power in creating dynamic natural environments, it is more impressive for its demonstration of Half-Life 2's advancements in contextual AI programming.

If Freeman tries to hide behind a rock, a car, or a closed door in the submarine, the AI programming provides the aliens and the soldiers with numerous paths to choose from in order to find him, such as pushing the car over the edge of the gorge, knocking out windows in the sub and peering through them, punching or shooting through the door, or climbing the struts and beams of the sub to continue the pursuit. These are not scripted sequences, but rather the AI working its way through the level, checking to see if the player is nearby, searching relentlessly for new ways to launch an attack, and then encountering and solving all the obstacles in its way until the target is destroyed. In a sequence set in City 17, for example, Freeman's attempt to elude the 50-foot-tall strider by bolting under an overpass is thwarted when the creature, through its AI programming, "learns" to get past the bridge by crouching down and clambering underneath it.
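Valve hasn't published the code behind this behavior, but the toy C++ sketch below illustrates the general flavor of such contextual decision-making: the AI scores a handful of candidate ways to reach a hidden player and commits to the most promising one. The options, costs, and scoring formula are invented for the example.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Each option is one way the AI could try to expose the hidden player.
struct ApproachOption {
    std::string description;
    float cost;           // time/effort to execute
    float successChance;  // estimated chance it exposes the player
};

// Pick the candidate with the best payoff-to-cost ratio.
const ApproachOption& ChooseApproach(const std::vector<ApproachOption>& options) {
    const ApproachOption* best = &options.front();
    float bestScore = -1.0f;
    for (const ApproachOption& o : options) {
        float score = o.successChance / (1.0f + o.cost);  // prefer cheap, likely routes
        if (score > bestScore) { bestScore = score; best = &o; }
    }
    return *best;
}

int main() {
    std::vector<ApproachOption> options = {
        {"push the car over the edge",      4.0f, 0.8f},
        {"smash the submarine window",      2.0f, 0.5f},
        {"shoot through the closed door",   1.0f, 0.4f},
        {"climb the struts to flank above", 6.0f, 0.9f},
    };
    std::cout << "Chosen: " << ChooseApproach(options).description << "\n";
    return 0;
}
```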

Maximizing the performance of this highly advanced AI system meant the artists had to refrain from overpopulating the levels with AI entities. Conversely, inserting too few entities would have resulted in an empty and uninteresting level.

Achieving optimal AI during gameplay by adjusting the number of entities in a level also was crucial to the performance of the brand-new physics engine developed for the game. The new engine not only melds the gameplay with unprecedented environmental interactivity and newfound realism in complex physics interactions, but, for the first time, also makes all the environmental objects available to the player for solving puzzles, avoiding enemy fire, or even using as weaponry for combat. "The new physics system allowed us to create interesting puzzle and gameplay experiences that were not possible with the Quake-based engine," says Ellis.

Valve's bar-raising mandate for the game's graphics and gameplay also extended to the modeling, texturing, and animation of the characters. From their detailed brows, to the radial bands in their irises, to the imperceptible furrows in their foreheads, Gordon Freeman and the rest of the main cast are the result of an aggressive mission to leapfrog the current state of the art in character creation.

All the characters, including the humans and the more geometrically complex creatures, such as the gargantuan strider, were surfaced using XSI's subdivision surface tools, which, says Ellis, provided not only a highly refined interface for intuitive, organic modeling, but also the expedience of automatically converting the finished subdivision surface models to their component triangles for in-game use.

For modeling each character, the team used photographic references of actual people, importing orthographic and perspective photos overlaid with grid markings into the XSI viewports to guide the modeling process. In addition, a second set of photos, comprising front and side images of the same people, was shot in more diffuse lighting for creating the face textures. The front and side textures, once resized, merged, and retouched in Adobe's Photoshop, were mapped to the models in XSI at a resolution of 2000x1500. The characters' eyes also were modeled elliptically, outfitted with a series of constraints, and placed off-center. Consequently, when facing the screen, the characters stare directly at the player without displaying the cross-eyed look that plagues most game characters.

Next, to ensure that the sequel would be completely devoid of the generic models that were used in the original game to furnish, for instance, the Black Mesa lab with a large staff of scientists, Valve developed a morphing technology that blends a core set of models for the game's common roles in almost infinite ways to render them entirely distinct from one another. Using the same morph targets sculpted for facial animation, the system automatically alters the facial geometry to create, for example, a flatter or broader nose, or a squarer jaw. As a result, all the scientists, soldiers, and other homogeneous characters appear as unique, differentiated models. Nevertheless, because the overall personality of a character's face is forged primarily by the facial texture, the system also draws upon a large database of facial texture maps to diversify and individualize the digital citizenry of City 17.
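Speaking generally, and not from Valve's actual implementation, the sketch below shows how a core head model can be varied with the same morph targets used for facial animation: each target stores per-vertex offsets, and a randomized weight per target yields a distinct face. The data layout and weight range are assumptions for illustration.

```cpp
#include <cstddef>
#include <cstdlib>
#include <vector>

struct Vec3 { float x, y, z; };

// One morph target, e.g., "broader nose": an offset for every head vertex.
struct MorphTarget {
    std::vector<Vec3> deltas;
};

// Blend random amounts of each target onto the base head to get a unique face.
std::vector<Vec3> BuildUniqueFace(const std::vector<Vec3>& baseVerts,
                                  const std::vector<MorphTarget>& targets,
                                  unsigned seed) {
    std::srand(seed);
    std::vector<Vec3> out = baseVerts;
    for (const MorphTarget& t : targets) {
        // Random weight in [0, 0.6] keeps the result plausibly human.
        float w = 0.6f * (std::rand() / static_cast<float>(RAND_MAX));
        for (std::size_t i = 0; i < out.size(); ++i) {
            out[i].x += w * t.deltas[i].x;
            out[i].y += w * t.deltas[i].y;
            out[i].z += w * t.deltas[i].z;
        }
    }
    return out;
}

int main() {
    std::vector<Vec3> base = {{0, 0, 0}, {0, 0, 0}, {0, 0, 0}};
    MorphTarget broaderNose{{{0.01f, 0, 0}, {0, 0.01f, 0}, {0, 0, 0.01f}}};
    std::vector<Vec3> face = BuildUniqueFace(base, {broaderNose}, /*seed=*/42);
    return face.empty() ? 1 : 0;
}
```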

For the characters' bodies, the alien creatures required custom IK rigs, while all the main human characters were bound to a single, scalable base skeleton that was easily modified to suit each character's proportions. Both the humanoid default rig and the custom alien rigs were constructed in multiple levels of joint detail, providing the appropriate degree of articulation for the corresponding levels of detail (LOD) created for all the characters. Artists used most of these LODs to regulate the geometric complexity of a character according to the camera's proximity. However, one low-polygon version and its respective low-bone rig were used to create collision models showcasing the breakthrough rag-doll effects of the Havok 2 physics engine that simulate the involuntary behavior of the human body when acted upon by external forces, such as gravity, inertia, or concussive blows.
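A minimal sketch of distance-based LOD selection appears below; the mesh names and distance thresholds are placeholders rather than Valve's actual values.

```cpp
#include <iostream>
#include <vector>

// One level of detail: which mesh to draw while the camera is within range.
struct LodLevel {
    const char* meshName;
    float maxDistance;
};

// Pick the most detailed mesh whose range covers the current camera distance.
const LodLevel& SelectLod(const std::vector<LodLevel>& lods, float cameraDistance) {
    for (const LodLevel& lod : lods)
        if (cameraDistance <= lod.maxDistance) return lod;
    return lods.back();  // beyond all thresholds: cheapest mesh
}

int main() {
    // Ordered from most to least detailed.
    std::vector<LodLevel> soldierLods = {
        {"combine_soldier_lod0", 10.0f},
        {"combine_soldier_lod1", 30.0f},
        {"combine_soldier_lod2", 80.0f},
    };
    std::cout << SelectLod(soldierLods, 25.0f).meshName << "\n";  // prints lod1
    return 0;
}
```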
Valve's powerful facial animation system uses 40 different muscles to control nearly every nuance of a character's expression, with the eyes, lips, and brows exhibiting the highest level of expressivity.




In addition to more sophisticated AI, real-world physics, new technology for facial expressions, body language, and lip-synching, Half-Life 2's Source engine features drivable vehicles and a terrain generator. It also sports advanced displacement mapping technology—another boon for mod makers—which enables dynamic, fluid scenery elements such as water, sand, or snow to deform in real time.
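The deformable-terrain idea can be illustrated with a simple height field: an impact lowers nearby height samples, leaving a dent in sand or snow. The grid layout and radial falloff below are assumptions made for this example, not the Source engine's displacement-mapping code.

```cpp
#include <cmath>
#include <vector>

// A toy height field that can be dented in real time.
class HeightField {
public:
    HeightField(int width, int depth, float initialHeight)
        : width_(width), depth_(depth), heights_(width * depth, initialHeight) {}

    // Press the surface down around (cx, cz) with a smooth radial falloff.
    void Deform(float cx, float cz, float radius, float depthAmount) {
        for (int z = 0; z < depth_; ++z) {
            for (int x = 0; x < width_; ++x) {
                float dx = x - cx, dz = z - cz;
                float dist = std::sqrt(dx * dx + dz * dz);
                if (dist < radius) {
                    float falloff = 1.0f - dist / radius;  // 1 at center, 0 at edge
                    heights_[z * width_ + x] -= depthAmount * falloff;
                }
            }
        }
    }

    float HeightAt(int x, int z) const { return heights_[z * width_ + x]; }

private:
    int width_, depth_;
    std::vector<float> heights_;
};

int main() {
    HeightField snow(64, 64, 1.0f);
    snow.Deform(32.0f, 32.0f, 5.0f, 0.2f);  // e.g., a footstep or falling debris
    return snow.HeightAt(32, 32) < 1.0f ? 0 : 1;
}
```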

Unfortunately, the engine is so revolutionary that Valve has already fallen victim to a major code theft by hackers eager to acquire the technology. While forcing Valve to delay the game's initial October 2003 release date until mid-March, the theft and continued assault by hackers are telltale signs that fans are bursting with anticipation for the sequel, not to mention a testament to the powerful hold the distinctive Half-Life narrative style has exerted over game players.
The artists gave the characters, including Alyx, Freeman's sidekick throughout the game, a wide range of facial expressions.




In attempting to trace the source of this power, Half-Life writer Marc Laidlaw points to the game's distinctive narrative structure, and its ability to transfer the same drama experienced by a passive observer into a medium where the audience is an active participant in the story. "Games require a new way of looking at the concept of a 'dramatic moment,' because the meaning of drama is different for a passive observer than it is for an active participant," he says. "Structure is not something the actors in a story are usually aware of. That awareness is traditionally reserved for the audience. But for an actor in a game (the player), the moments when you perceive structure and realize that you're playing a significant part in assembling some larger pattern can be very compelling."

In fact, the narrative structure in the Half-Life games is there to support this experience, Laidlaw notes, and to provide opportunities for surprise and revelation that feel inevitable rather than random. "Ideally, when you are most involved in playing Half-Life 2, whether you're part of a scene with other actors, solving a puzzle, or fighting enemies, you not only feel challenged and fully engaged, you feel that it all means something," he says. Mastering the dichotomy of simultaneously participating in and perceiving the story structure may, therefore, be the key to unlocking untapped levels of emotional involvement in computer-game storytelling. And if Laidlaw and his teammates at Valve can achieve that goal with Half-Life 2, their pioneering efforts could put them on the cusp of the next era of gaming.

Martin McEachern, a contributing editor for Computer Graphics World, can be reached at martin@globility.com.

Adobe www.adobe.com
Havok www.havok.com
id Software www.idsoftware.com
Softimage www.softimage.com




Garnering more than 50 Game of the Year awards, the first Half-Life spawned a number of successful expansions and, with the subsequent release of the game's basic programming code, yielded a flood of user-created modifications resulting in new, free downloadable levels and characters. These, in turn, have led to "total conversions," entirely new games freely distributed over the Internet (such as Counter-Strike) that have formed the basis for a massive online gaming community that is continually weaving new threads into the ever-expanding fabric of the Half-Life mythos. Consequently, during the last five years, fan enthusiasm and the boundless creativity of the mod community have stoked anticipation for a sequel to such a fever pitch that, upon its release this month, Half-Life 2 is expected to single-handedly spark a revival of the PC game industry.


To firmly entrench itself in the future of game development, Softimage will package XSI EXP, a lite version of XSI, with every PC copy of Half-Life 2. With this, Softimage joins Discreet (which offers gmax, a free, scaled-down version of 3ds max) in adopting the increasingly popular strategy of cultivating a user base from the massive mod communities forming around blockbuster PC games. (Valve will still provide exporters for both max and Maya, but contends that XSI EXP—which allows modders to build models of similar complexity to those in the game—will yield the best results.)

Rolled into the mod SDK will be XSI EXP, the programming code for the AI and client systems, and nearly all the other tools developed for the new Source engine, including the Hammer world construction tool. With Hammer, modders will be able to place objects and characters they've created in XSI in a world, edit the terrain, add water, provide pathing information for various AI entities, and control the AI through a system of inputs and outputs, just as Valve's level designers did. This move reflects Valve's continued commitment to the mod community, which has been responsible for much of Half-Life's staying power and commercial success over the past five years.


Surprised by how attached they had become to some of the characters after finishing the original game, Valve's founder, Gabe Newell, and his fellow designers were determined to make Half-Life 2 a more emotional experience, to give the cast full personalities, and to force the player to identify and empathize with many of them.

Heightening the game's emotional involvement was the impetus for the development of a highly sophisticated, proprietary face-acting technology based, in part, on the facial expression studies of Paul Ekman, a psychologist from the University of California, San Francisco. Ekman's systematized facial lexicon comprises 40 keyframes of expression, of which Valve selected 25 that could be blended into enough hybrid expressions to serve as the basis for a new facial animation system capable of endowing the characters with more natural speech and greater emotional expressivity.

To create these keyframes of expression, the team modeled 34 blend shapes for each character in XSI, and controlled them with an internal proprietary tool called Faceposer, which also offers precise control over lip synchronization. These targets were modeled below the level of actual expressions, depicting, instead, individual facial muscles in their flexed state, enabling the artists to create more natural facial animation. That's because each muscle group is accessed independently, so all the muscle movements don't occur with the same envelope and "hit" at the same time.
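In broad strokes, and not as Valve's actual Faceposer code, the sketch below shows how muscle-level shape targets can be layered onto a neutral mesh, each with its own weight, so that different muscle groups can peak at different moments rather than hitting in lockstep.

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// One muscle's target: per-vertex offsets at full flex, plus its current weight.
struct MuscleShape {
    std::vector<Vec3> deltas;
    float weight;  // 0 = relaxed, 1 = fully flexed
};

// Layer every muscle's weighted offsets onto the neutral mesh.
std::vector<Vec3> ComposeFace(const std::vector<Vec3>& neutral,
                              const std::vector<MuscleShape>& muscles) {
    std::vector<Vec3> out = neutral;
    for (const MuscleShape& m : muscles) {
        for (std::size_t i = 0; i < out.size(); ++i) {
            out[i].x += m.weight * m.deltas[i].x;
            out[i].y += m.weight * m.deltas[i].y;
            out[i].z += m.weight * m.deltas[i].z;
        }
    }
    return out;
}

int main() {
    std::vector<Vec3> neutral = {{0, 0, 0}, {0, 0, 0}};
    MuscleShape browRaiser{{{0, 0.02f, 0}, {0, 0.01f, 0}}, 0.5f};
    std::vector<Vec3> face = ComposeFace(neutral, {browRaiser});
    return face[0].y > 0.0f ? 0 : 1;
}
```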

In real time, the Source engine can combine the 34 blend shapes non-linearly to make the characters smile, sneer, snarl, or look fearful, menacing, sinister, victorious, condescending, and so forth. It also automatically lip-syncs the dialogue in any one of numerous languages, extracting phonemes from a .wav file and forming the corresponding phoneme blend shape for the mouth. In addition, the engine allows the characters to shift between expressions independently of the line being delivered. The result is a finely calibrated and unique performance for every exchange between the player and a character.
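As a rough illustration of the lip-sync step, the toy example below maps a phoneme stream (which the real engine extracts from the .wav file) to mouth blend-shape weights; the phoneme set and the weights are invented for this sketch.

```cpp
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

// A mouth pose expressed as blend-shape weights.
struct MouthPose {
    float jawOpen;
    float lipsPursed;
};

int main() {
    // Illustrative phoneme-to-pose table (a "viseme" map).
    const std::unordered_map<std::string, MouthPose> visemes = {
        {"AA", {0.9f, 0.0f}},   // open vowel
        {"OO", {0.4f, 0.8f}},   // rounded vowel
        {"MM", {0.0f, 0.3f}},   // closed lips
    };

    // Phonemes for a word, as they might arrive from the audio analysis.
    const std::vector<std::string> phonemes = {"MM", "AA", "OO"};
    for (const std::string& p : phonemes) {
        const MouthPose& pose = visemes.at(p);
        std::cout << p << ": jawOpen=" << pose.jawOpen
                  << " lipsPursed=" << pose.lipsPursed << "\n";
    }
    return 0;
}
```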