Mind Expansion
Volume 30, Issue 6 (June 2007)


The great American author Henry James once posed this question: “What is character but the determination of incident; what is incident but the illustration of character?”

To look for such depth in the actions of the non-player characters (NPCs) currently inhabiting modern video games would be laughable, to say the least. Often seen walking into walls, stymied by doors, or falling down readily apparent holes, NPCs have gained little in gray matter over the years. Indeed, while motion capture has brought lifelike authenticity to their motion cycles, and soaring polygon counts and intensive normal mapping have defined the pores in their skin and the weave of their garments, advancements in artificial intelligence have not proceeded apace. Today’s NPCs are, unfortunately, all beauty and no brains.

They’ve been lobotomized, in part, by a lack of processing power available for pathfinding—the term used for the technology required to make an NPC react to his or her situation and move from Point A to Point B. Pathfinding can be so taxing on the CPU that the cities and streets of video games—while always store-lined and well landscaped—are curiously bereft of crowds, sometimes resembling ghost towns. In addition, writing pathfinding algorithms can be so challenging that programmers often resort to “cheats”: oversimplifying landscapes, thoroughly scripting the characters’ actions, or manually placing pathfinding data in the world to direct the NPCs—like blind men with canes—around obstacles or toward hiding places.
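At its core, pathfinding is a graph search, and the classic A* algorithm shows why it taxes the CPU: every NPC that needs a route must expand and score candidate positions until it reaches its goal. The sketch below is a generic, simplified illustration on a uniform grid—real engines search navigation meshes or waypoint graphs and spread the work across frames—and is not drawn from any particular game.

```cpp
#include <cmath>
#include <cstdlib>
#include <queue>
#include <vector>

// Minimal A* on a uniform grid (illustrative only).
struct Node { int x, y; float g, f; };
struct Cmp  { bool operator()(const Node& a, const Node& b) const { return a.f > b.f; } };

// Returns the cost of the shortest 4-connected path from start to goal,
// or -1 if no path exists. 'blocked' marks impassable cells, row-major.
float FindPath(const std::vector<bool>& blocked, int w, int h,
               int sx, int sy, int gx, int gy)
{
    auto heuristic = [&](int x, int y) {
        return float(std::abs(x - gx) + std::abs(y - gy));  // Manhattan distance
    };
    std::vector<float> best(w * h, 1e30f);
    std::priority_queue<Node, std::vector<Node>, Cmp> open;
    open.push({sx, sy, 0.0f, heuristic(sx, sy)});
    best[sy * w + sx] = 0.0f;

    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    while (!open.empty()) {
        Node n = open.top(); open.pop();
        if (n.x == gx && n.y == gy) return n.g;     // reached Point B
        if (n.g > best[n.y * w + n.x]) continue;    // stale queue entry
        for (int i = 0; i < 4; ++i) {
            int nx = n.x + dx[i], ny = n.y + dy[i];
            if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
            if (blocked[ny * w + nx]) continue;
            float g = n.g + 1.0f;
            if (g < best[ny * w + nx]) {
                best[ny * w + nx] = g;
                open.push({nx, ny, g, g + heuristic(nx, ny)});
            }
        }
    }
    return -1.0f;  // no route from Point A to Point B
}
```

Multiply a search like this by dozens of NPCs in a constantly changing world, and the appeal of precomputed “cheats” becomes obvious.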

Even worse, developers might limit the NPCs’ interaction with the environment so that they cannot, for example, climb stairs or use an elevator. Moreover, the characters’ physical reactions are confined to a finite set of canned animation cycles, which soon grow stale, repetitive, and boring. These crutches not only rob the NPC of autonomy but, more importantly, rob the player of the anticipation of unexpected reactions, a feeling so crucial to fueling suspense in movies, novels, and other storytelling forms.

As the Xbox 360 and PlayStation 3 shatter the hardware barriers that have previously handicapped AI, several new technologies are emerging to capitalize on this newfound power, finally enabling NPCs to set their crutches aside and take their first steps on their own. And they will do so in unprecedented numbers, filling the ghost towns of yesteryear’s games with bustling, intelligent crowds. Thanks to advancements in behavioral AI and real-time, synthesized human movement, NPCs will be capable of reacting and moving on their own, with almost infinite freedom in responding to a situation; they will even learn from human players, game designers, and their own mistakes, like truly adaptive organisms.

NaturalMotion’s Euphoria

Of course, implanting a highly developed brain into an NPC would mean little without a sophisticated motor control and nervous system to make the characters’ bodies carry out those high-level decisions. Typically, animators would keyframe or motion-capture cycles for various actions, such as running, falling, or jumping, and then blend those same animations ad nauseam during gameplay. Take the example of a baseball player charging home plate and colliding with the catcher as he receives a throw from the outfield. Whether the runner slides headfirst or feetfirst, or tries to swerve around the catcher, the play at the plate can only unfold through a finite set of animations created for each player. The moment is always “canned” (so much for unexpected reactions). Now, imagine if every time the runner collided with the catcher, the collision would transpire according to the characters’ muscular responses, just like in real life. In effect, every single collision would be different.

Dynamic Motion Synthesis (DMS) is poised to cut the puppet strings off digital characters—both in films and now in games. Through Euphoria, the real-time version of NaturalMotion’s Endorphin software, DMS can assume control over a character at any time during gameplay and adaptively drive the character’s movements using AI motion controllers that simulate the character’s biomechanics, muscles, motor control, and nervous system in response to sensory input. As a result, it produces interactive animations and, more importantly, unique game moments.
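NaturalMotion has not published Euphoria’s internals, but the basic idea of simulated motor control can be illustrated with a proportional-derivative (PD) controller that torques a physical joint toward a target angle chosen by higher-level behavior logic. The sketch below is a generic illustration of that technique; the structure and the gain values are assumptions, not NaturalMotion’s implementation.

```cpp
// Illustrative sketch of physically driven motor control: a proportional-
// derivative (PD) controller torques a simulated joint toward a target angle.
// A generic technique, not NaturalMotion's actual code.
struct Joint {
    float angle    = 0.0f;  // current joint angle (radians)
    float velocity = 0.0f;  // current angular velocity (radians/sec)
    float inertia  = 1.0f;  // rotational inertia of the limb
};

// One simulation step: the "muscle" works against inertia to reach the pose
// requested by the behavior layer (e.g., "brace for impact").
void DriveJoint(Joint& j, float targetAngle, float dt)
{
    const float kp = 120.0f;   // stiffness: how hard the muscle pulls
    const float kd = 14.0f;    // damping: resists overshoot and jitter

    float torque = kp * (targetAngle - j.angle) - kd * j.velocity;
    float accel  = torque / j.inertia;

    j.velocity += accel * dt;
    j.angle    += j.velocity * dt;
}
```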

The first title to introduce this technology to the world will be LucasArts’ tentatively titled Indiana Jones, scheduled for release in 2008. At E3 2006, onlookers were astounded by a sequence set in 1939 San Francisco during which Indiana Jones balanced atop a moving trolley car, fending off enemies pursuing him in jeeps. The enemies drove the jeeps in real time, responding to the traffic around them; if the henchmen hanging onto the vehicle sides sensed an impending crash, they would jump onto the trolley—not to attack Indy, but to avoid the accident. When one was thrown into an oncoming truck, not only did the driver attempt to swerve out of the way, but as the enemy hit the windshield and rolled off the hood, he clung desperately to the grille before getting pulled under the tires.

Self-preservation dictates these behaviors, not a scripted routine or predefined animations. Through the use of a physics engine, Euphoria-enabled characters acquire sensory information about the position, direction, and speed of other characters or objects, and adjust their behavior accordingly. (For Indiana Jones, LucasArts is using Havok Physics for both collision detection and rigid-body simulation.) 

Astonishingly, many viewers reacted with empathy for characters that seemed to be engaged in an independent pursuit of their own self-preservation. Judging by this early reaction, consumer expectation for unique game moments and heightened identification with NPCs may force the entire industry to adopt DMS. Obviously, animators and programmers alike are nervous about how such technology will affect their futures. Will it spell the demise of ragdoll, keyframed, or motion-captured animation?

NaturalMotion’s Dynamic Motion Synthesis (DMS) technology uses the processing power of the computer’s CPU to create character movements in real time that result in adaptive behaviors like those in the football tackles shown above.

Rewriting the Rules of Animation

Haden Blackman, project lead for Indiana Jones, believes that traditional ragdoll animation will eventually become obsolete. “Ragdolls typically look like sacks of flour tied together; characters using Euphoria behave in far more realistic and natural ways because they are literally infused with a central nervous system that takes into account the ways in which muscles, nerves, and skeleton all interact in a real human body,” he says. “Ragdolls flop around when knocked over or thrown; Euphoria-enabled characters protect their heads, roll with punches, try to brace themselves when falling, and even try to regain their balance. You’ll never see a falling ragdoll character grab for another character or object in the world, but at LucasArts, we have Euphoria characters that can perform these types of [self-preserving] behaviors.”

With the Xbox 360 running on three processors and the PlayStation 3 firing on as many as seven, Torsten Reil, CEO and co-founder of NaturalMotion, believes that the new generation of consoles will take their place in gaming history as the birthplace of intelligent, interactive animation. It’s an inevitable evolution, because, as Reil says, “We finally have the CPU power and technology to simulate characters, rather than just playing back animation data. Moreover, it is what gamers want. You just need to take a look at some of the major gaming forums on the Web. People want characters that are believable, that act differently every time. Rendering quality is very high already, but people are dismayed by the artificial nature of static animation playback.”

Euphoria comprises two components: an authoring tool chain for tuning DMS Behaviors (Euphoria:Studio) and a run-time engine (Euphoria:Core) to execute them during gameplay. After modeling and rigging a character—in Autodesk’s Maya or 3ds Max, for example—an artist uses Euphoria’s Maya or Max plug-in to create the Euphoria skeleton based on the full character rig. This skeleton also includes collision volumes representing the character’s mesh.

Using the Euphoria skeleton, an animator—often working closely with a behavior engineer and AI programmer—determines and tunes a character’s behavior during a scene. The artist can direct a character to act drunk, look at another character, attempt to cling to an object or another character, or pursue any other goal. In essence, an animator works much like a director directing actors. To trigger the DMS behavior during gameplay, the game engine sends the current frame of the running animation to Euphoria:Core, which seeds its own skeleton with the in-game skeleton’s current pose and then takes over. The process simply reverses itself on the handover back to the animation data.
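Neither Reil nor Blackman describes the handover in code, but the pattern outlined here—seed the simulation with the current animation frame, let it take over, then hand back—might be wired into a game loop roughly as follows. Every type and function name in this sketch is an invented placeholder, not Euphoria’s actual API.

```cpp
#include <vector>

// Hypothetical sketch of handing a character from animation playback to
// run-time simulation and back; all names are placeholders for illustration.
struct Pose { std::vector<float> jointAngles; };

struct AnimationPlayer {            // plays back authored animation data
    Pose Step(float /*dt*/) { return Pose{}; }
    void ResumeFromPose(const Pose&) {}
};
struct SimulationEngine {           // drives the character physically
    void SeedFromPose(const Pose&) {}
    bool HasSettled() const { return false; }
    Pose Step(float /*dt*/) { return Pose{}; }
};

enum class MotionSource { AnimationClip, Simulation };

struct Character {
    MotionSource source = MotionSource::AnimationClip;
    Pose pose;
};

// Called once per frame: switch to simulation on impact, seeding it with the
// current animation frame so the transition is continuous, and hand control
// back to the animation data once the simulated motion settles.
void UpdateCharacter(Character& c, AnimationPlayer& anim, SimulationEngine& sim,
                     bool impactDetected, float dt)
{
    if (c.source == MotionSource::AnimationClip && impactDetected) {
        sim.SeedFromPose(c.pose);
        c.source = MotionSource::Simulation;
    } else if (c.source == MotionSource::Simulation && sim.HasSettled()) {
        anim.ResumeFromPose(c.pose);
        c.source = MotionSource::AnimationClip;
    }
    c.pose = (c.source == MotionSource::Simulation) ? sim.Step(dt) : anim.Step(dt);
}
```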

Since Euphoria is skeleton-agnostic, it can assume control over any kind of skeleton created in any modeling software—be it biped, quadruped, or the more exotically articulated. In fact, the software does not affect a developer’s existing modeling, rigging, or animation pipeline, nor does it place a greater burden on the AI programmer. “Your existing rig, including muscle deformers, weightings, and blendshapes, continues to work as usual,” adds Reil. Euphoria can also play canned animation cycles alongside a DMS simulation, so a simulated action can run simultaneously with facial animation or lip-syncing, for example.

 For armor-clad characters, such as the stormtroopers in the upcoming next-generation Star Wars: The Force Unleashed (scheduled for a Spring 2008 release), Euphoria will make the armor and other accoutrements, such as hats and weapons, interact naturally with the simulation of the body. Though Euphoria’s focus is currently character simulation, it can also control vehicles and other rigid objects while interfacing seamlessly with all the major physics engines, such as those from Havok and Ageia.

LucasArts on DMS

So, what will become of the traditional keyframing animator in this new era? “Keyframed animations and motion capture will still have a prominent role in game development, and always will,” says Blackman. “At LucasArts, the size of our animation teams hasn’t really changed. However, rather than wasting time animating the tenth variation of a punch impact or a fall, for instance, these animators are able to focus on character performances and signature animations such as attacks. So, we have the best of both worlds: endless variation supplied by Euphoria, and handcrafted and memorable animations where they are really needed most.” Blackman asserts that in addition to handcrafting animations, LucasArts animators will work closely with engineers to develop Euphoria behaviors, and both will be able to adjust parameters to achieve the best effect and most authentic reactions.

Having unlimited interactivity within a game sequence has triggered a radical mind shift in the way LucasArts now approaches the creation of game environments. “I think that, as an entire team, our mentality has shifted towards creating environments and situations that take advantage of our character’s behaviors and capabilities,” says Blackman. “The Euphoria-enabled characters can do some surprising things, and finding ways to spotlight these behaviors and interactions is a totally different—and sometimes challenging—mind-set for designers and engineers. We’re always asking: In this encounter or area, where are the opportunities to show the player something they’ve never seen before?”

A huge amount of variation in behavior can result from the slightest changes in the environment. “A character thrown from a balcony might try to catch his fall when he hits the ground, but a character thrown from a balcony over a canopy of trees will try to grab hold of branches or perhaps shield his face before he hits the ground,” Blackman notes. Moreover, changing the size, weight, and build of a character—from fat to skinny, for example—will also alter the simulation.

LucasArts is using NaturalMotion’s Euphoria to generate intelligent characters in its upcoming title based on the Indiana Jones film series.

Collaboration Across the Pipeline

By strengthening the collaborative relationship among character TDs, animators, and AI and gameplay engineers, Euphoria is breaking down the compartmentalization of the production pipeline. This unifying effect extends to the physics team as well. “Our [behavior] engineers need to be aware of the impact of physics simulation on the characters and their behaviors. We all have to collaborate, iterating on the behaviors to ensure that we get the best payoff for everything the player does,” says Blackman.

While Indiana Jones 2007 will only feature Euphoria-enabled humanoids, Blackman says that LucasArts is considering applying the technology to the creatures, droids, and even vehicles of forthcoming, next-generation Star Wars games. Another revolutionary technology set to debut on LucasArts’ next-generation titles will be Pixelux’s Digital Molecular Matter (DMM).

A breakthrough in material physics simulation, DMM enables every substance in the virtual world—be it organic, inorganic, rigid, or soft—to behave with the properties of its real-world counterpart. Glass shatters like glass, wood splinters and breaks like wood, rubber bends like rubber, stone crumbles like stone, and so forth. Thanks to DMM, even Jabba the Hutt’s blubberous rolls of fat and the loose wattles of flesh dangling from the cackling Salacious Crumb will jiggle and jostle with unprecedented realism.

“We’re truly bringing together two bleeding-edge, simulation-based technologies to make the interactions with characters and environments much more rewarding, surprising, and authentic,” notes Blackman. “A stormtrooper thrown at a DMM wooden beam knows that beam exists and might try to grab onto it. The DMM beam also knows about the stormtrooper, which means the weight of the stormtrooper might cause the beam to splinter and eventually break, resulting in the stormtrooper losing his grip or falling, at which point he might flail or attempt to break his fall.”

Havok’s Behavior

While Euphoria is set to imbue next-generation NPCs with neuromuscular autonomy, new behavioral tools are enabling animators to author extremely complex behaviors quickly by combining huge numbers of animation assets into graphically created blend trees based on “finite-state machines” (branches of motion). Two of these middleware solutions are Havok’s Behavior and NaturalMotion’s Morpheme, and they’re giving artists control over the transition logic and blends of their in-game animations—a power previously reserved for programmers.

 With either of these tools, animators can layer animations for a given situation and evaluate them with a “what you see is what you get” result. Moreover, NaturalMotion’s Morpheme can seamlessly integrate with Euphoria to realize an infinitude of emergent behaviors. While Behavior does not use DMS, it does offer some behavioral controllers, such as grab, tackle, and climbing, to add emergent performance to an NPC. To illustrate this capability, imagine a character standing inside a building just as a missile strikes. As bricks and rubble rain down, the character can use Havok Physics to query for information about collidable objects, such as the proximity and velocity of the debris, and then, using Behavior, procedurally cover his head, run for cover, duck under a doorway, or access any number of other “states” to handle the event.
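Havok’s actual API is not documented here, but the sense-then-react pattern just described can be sketched generically: query the physics world for nearby collidables, then pick a reaction state based on their proximity and speed. The types, thresholds, and state names below are illustrative assumptions, not Havok Behavior code.

```cpp
#include <cmath>
#include <vector>

// Generic sketch of an event-driven reaction state machine; an illustration
// of the pattern described above, not Havok Behavior's API.
struct Vec3 { float x, y, z; };

struct Debris {                      // result of a hypothetical physics query
    Vec3  position;
    Vec3  velocity;
    float distance;                  // distance from the character
};

enum class NpcState { Idle, CoverHead, RunForCover, DuckUnderDoorway };

// Choose a reaction based on the proximity and speed of incoming debris.
NpcState ChooseReaction(const std::vector<Debris>& nearbyDebris,
                        bool doorwayNearby)
{
    for (const Debris& d : nearbyDebris) {
        float speed = std::sqrt(d.velocity.x * d.velocity.x +
                                d.velocity.y * d.velocity.y +
                                d.velocity.z * d.velocity.z);
        if (d.distance < 1.5f && speed > 5.0f)
            return NpcState::CoverHead;          // immediate threat overhead
        if (d.distance < 6.0f)
            return doorwayNearby ? NpcState::DuckUnderDoorway
                                 : NpcState::RunForCover;
    }
    return NpcState::Idle;
}
```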

Moreover, Behavior-driven characters will deflect when they brush up against a wall or another person. They can reach out to touch or grab a nearby object, stagger differently based on the direction from which they are hit, and blend continuously between a walk, run, and turning without losing traction or requiring a discrete change of state. If they fall, they can automatically lunge toward a protective position. They can climb ropes, tackle others, or be tackled themselves—all in unscripted ways.

“Our goal is to keep creative control in the hands of the artists, without requiring a lot of custom programming. With Havok Behavior, artists can immediately pull animation and character assets directly from 3ds Max, Maya, or XSI, and combine them with physics, procedural animation, real-time IK, and even facial animation, to create event-driven character performances that react to changes in the game,” says Jeff Yates, vice president of product management at Havok.

According to Yates, Havok Behaviors comprise “states,” each representing a specific mode of movement for the character (such as cover, run, or hide). Within each state, the artist can empower a character with a wide array of capabilities through the use of blend trees. Using a collection of built-in and user-written nodes, artists can blend different motion types while taking the physical world into account, altering movements and changing states in response to events.

Blended transitions between the states provide a smooth bridge for shifting a character seamlessly between different modes when a key event occurs. “The real backbone of Havok Behavior is the generalized node processing tree that comprises each state,” says Yates. “The processing tree for a particular state is analogous to shader trees in today’s 3D modeling tools, except that in Havok Behavior, the nodes of the tree are motion generators, not shader programs.”

Havok’s Behavior enables artists to control the transition logic and animation blends in a game.

Character Behavior

Once animation cycles have been developed for a character, the animator brings the character into Havok Behavior. Here, a behavior “container” is filled with related states, each comprising a component of the behavior. Within each state, a blend tree is built that synthesizes animation for the character, using a variety of operators, or nodes. While the simplest of these operators is an animation clip, more complex operators can blend a variety of clips; at a higher level, they can incorporate physics, IK, and purely synthetic or procedural operators that sense the environment using collision detection from Havok Physics. In those cases, sensory information is incorporated into the resulting motion, allowing the character to reach for and grab a nearby object, for instance.
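Yates’s shader-tree analogy suggests what such a graph might look like in code: leaf nodes that sample animation clips and interior nodes that blend their children’s output. The sketch below is a deliberately simplified, hypothetical structure—linear angle blending stands in for the quaternion math a real engine would use—and not Havok Behavior’s actual classes.

```cpp
#include <memory>
#include <vector>

// Simplified sketch of a blend tree of motion-generator nodes.
struct Pose { std::vector<float> jointAngles; };

// Linearly blend two poses joint by joint (a real engine blends rotations
// with quaternion interpolation, which this sketch glosses over).
Pose Blend(const Pose& a, const Pose& b, float weight)
{
    Pose out;
    out.jointAngles.resize(a.jointAngles.size());
    for (size_t i = 0; i < a.jointAngles.size(); ++i)
        out.jointAngles[i] = (1.0f - weight) * a.jointAngles[i] + weight * b.jointAngles[i];
    return out;
}

struct MotionNode {                               // base motion generator
    virtual ~MotionNode() = default;
    virtual Pose Evaluate(float time) = 0;
};

struct ClipNode : MotionNode {                    // leaf: an animation clip
    std::vector<Pose> frames;
    float fps = 30.0f;
    Pose Evaluate(float time) override {
        if (frames.empty()) return Pose{};
        size_t i = size_t(time * fps) % frames.size();
        return frames[i];
    }
};

struct BlendNode : MotionNode {                   // interior: blends two children
    std::unique_ptr<MotionNode> a, b;
    float weight = 0.5f;                          // exposed to artists and game AI
    Pose Evaluate(float time) override {
        return Blend(a->Evaluate(time), b->Evaluate(time), weight);
    }
};
```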

“The process of building the behavior is akin to rigging a character,” explains Yates. “It is a task that can be allocated to a single person, like a character TD or a game designer who is in charge of the ‘logic’ of a particular character’s motion graph. This does not need to directly involve the animators, but it can.”

To program a character to catch a football, for example, artists use Behavior to create nodes for a character that sense the environment (through collision detection and raycasting) to determine when the ball is within reach; the character then attempts to reach it through an IK end effector. Complex, procedural interactions between characters, such as tackling, combine balance nodes and keyframe animation with environmental sensing. When the player senses the other character, Behavior drives the end effectors of the arms to the proper location and then into a pose that closes the hands around the other player. Simultaneously, the “tacklee” senses collision events and determines whether they are severe enough to cause a “recoil” or to warrant a larger state transition—perhaps to a fully ragdoll-driven state.
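The “IK end effector” step can be made concrete with a standard two-bone analytic solver: given the sensed target position, the law of cosines yields shoulder and elbow angles that put the hand on the ball. The 2D sketch below is a textbook simplification of that technique, not the engine’s full-body solver.

```cpp
#include <algorithm>
#include <cmath>

// Illustrative two-bone analytic IK in 2D: given a sensed target position
// (e.g., the incoming ball), compute shoulder and elbow angles that place
// the hand on the target. A hypothetical, simplified stand-in shown only to
// make the "IK end effector" idea concrete.
struct ArmAngles { float shoulder, elbow; };      // radians

ArmAngles SolveTwoBoneIK(float upperLen, float forearmLen,
                         float targetX, float targetY)
{
    // Clamp the target into the reachable annulus of the two-link chain.
    float dist = std::sqrt(targetX * targetX + targetY * targetY);
    dist = std::clamp(dist, std::fabs(upperLen - forearmLen) + 1e-4f,
                            upperLen + forearmLen - 1e-4f);

    // Law of cosines gives the elbow bend needed to span that distance.
    float cosElbow = (dist * dist - upperLen * upperLen - forearmLen * forearmLen)
                     / (2.0f * upperLen * forearmLen);
    float elbow = std::acos(std::clamp(cosElbow, -1.0f, 1.0f));

    // Aim the shoulder at the target, corrected for the elbow bend.
    float shoulder = std::atan2(targetY, targetX)
                   - std::atan2(forearmLen * std::sin(elbow),
                                upperLen + forearmLen * std::cos(elbow));
    return {shoulder, elbow};
}
```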

At any time, the game’s AI can modify Havok Behaviors by tapping into the values that control blends, transition times, animation speeds, ease in/out values, and so forth, based on the particular circumstances of an event. At critical moments, just before transitions, the AI can intercept event traffic and alter the behavior based on more global conditions that perhaps only the AI knows or understands.

Havok Behavior also fully exploits other physics-based capabilities of the Havok physics engine, including ragdoll simulation and ragdoll “muscle” or constraint systems, which, together, drive the pose of the character in controllable ways. A developer can choose, for example, to transition from an animation-driven state to a ragdoll node. “This transition equates to the familiar ‘death by ragdoll’ effect,” says Yates. “But even better, a game developer may choose to blend the ragdoll death with a [keyframed] ‘death pose’ or ease it slowly into a ‘getting up’ pose so that the character is lying in the right position to return to its feet.”
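The easing Yates describes amounts to ramping a blend weight over time between the physics-driven pose and the keyframed one. A minimal sketch follows, with simple angle interpolation standing in for the quaternion blending a production engine would use.

```cpp
#include <vector>

// Sketch of easing out of a ragdoll-driven pose: over 'duration' seconds the
// physics-driven joint angles are blended toward a keyframed "getting up"
// pose, so the character ends up lying in the right position to stand.
std::vector<float> EaseToKeyframe(const std::vector<float>& ragdollAngles,
                                  const std::vector<float>& keyframeAngles,
                                  float elapsed, float duration)
{
    float t = elapsed / duration;
    if (t > 1.0f) t = 1.0f;
    float w = t * t * (3.0f - 2.0f * t);          // smoothstep: gentle ease in/out

    std::vector<float> out(ragdollAngles.size());
    for (size_t i = 0; i < out.size(); ++i)
        out[i] = (1.0f - w) * ragdollAngles[i] + w * keyframeAngles[i];
    return out;
}
```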

The Havok Behavior tool and SDK extend and build upon other Havok products, including Havok Animation and Havok Physics, all of which target the PS3, Xbox 360, and Nintendo Wii—making the suite attractive to developers hoping to produce games across multiple platforms. “Havok Behavior augments [traditional animation] tools by harvesting the keyframe animations they produce, and giving the game creator a tool designed specifically for an event-driven, run-time world,” says Yates.

Of course, Havok demurs at LucasArts’ grim forecast for the future of ragdoll animation, which is integral to Havok’s suite of software. “This is someone’s personal opinion,” counters Yates. “This seems to imply that Euphoria characters are unique in performing self-preservation behaviors, and that they do it alone. Euphoria characters are very much dependent on physics and AI to tell them where they are in the world, and that there is a threat approaching. Ragdolls are still the basic building blocks for creating character performance, whether you’re using DMS or Behavior, or any other tool. Without a stable skeleton with defined joint constraints and correct mass distribution parameters, you have nothing to apply your higher-level behaviors to.”

Indeed, ragdolls have advanced since Havok pioneered them six years ago. At GDC 2005, Havok demonstrated a new generation of ragdolls that can be imbued with sophisticated behaviors, such as ducking to avoid a missile or rolling to protect the body from blows. These new-generation ragdoll behaviors are created by blending procedural controllers, such as reach IK, with physics.

NaturalMotion’s new Morpheme, an advanced animation engine and graphical authoring tool chain, gives animators unprecedented control over the look of their in-game animations.

Softimage’s Face Robot

As progressively autonomous characters cultivate greater empathy and identification in the player, the need for advanced, real-time facial animation systems to express and heighten their emotions will only increase. For example, in Valve’s Half-Life 2 (see “Larger than Half-Life,” March 2004), the game engine and AI combined 34 blendshapes non-linearly to make the characters express a wide range of emotions. 
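For readers unfamiliar with the technique, a blendshape stores per-vertex offsets from a neutral face, and the final expression is the neutral mesh plus a weighted mix of those offsets. The sketch below shows only the basic linear combination; it does not attempt to reproduce the non-linear blending used in Half-Life 2.

```cpp
#include <vector>

// Minimal blendshape sketch: each shape stores per-vertex offsets from a
// neutral face, and the final face is the neutral mesh plus a weighted mix.
struct Vertex { float x, y, z; };

std::vector<Vertex> EvaluateFace(const std::vector<Vertex>& neutral,
                                 const std::vector<std::vector<Vertex>>& shapeDeltas,
                                 const std::vector<float>& weights)
{
    std::vector<Vertex> out = neutral;
    for (size_t s = 0; s < shapeDeltas.size(); ++s) {
        float w = weights[s];                     // e.g., 0.7 "smile", 0.2 "brow raise"
        for (size_t v = 0; v < out.size(); ++v) {
            out[v].x += w * shapeDeltas[s][v].x;
            out[v].y += w * shapeDeltas[s][v].y;
            out[v].z += w * shapeDeltas[s][v].z;
        }
    }
    return out;
}
```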

To address this growing need, Softimage Face Robot has now been updated for real-time use. Face Robot provides artists with tools for creating high-quality facial animations. At the heart of the system is a proprietary soft-tissue solver (referred to as the Jellyfish Solver) that procedurally simulates the flesh, muscles, and bones of the face using motion-captured data or keyframed poses. It will work with any facial mesh that follows the flow lines of the face. Once key points on the mesh are selected, Face Robot automatically determines the underlying musculature and binds the soft-tissue solver to the skin, then allows the artist to animate the face and fine-tune its deformations to achieve the desired look.

With the new game export tool, Face Robot can transfer the entire performance onto a game-ready version of the face, which is typically, but not necessarily, lower resolution, by computing an optimal envelope and animating a user-specified set of bones to closely match the original performance. To capture the highly detailed wrinkles and furrows that could only be achieved through a denser model, Face Robot’s game export tool also generates a series of blendable normal maps and applies them to the face as the bone-weighted mesh deforms, thus re-creating all the fine creases found on the high-resolution mesh.

“Face Robot is all about making it much easier for the artist to create those intensely lifelike facial expressions,” says Gareth Morgan, senior manager of business development for Softimage. “So, if you have a dynamic emotion engine at runtime, and a list of X number of facial emotional states that you have to create, Face Robot will help you make those facial states on your mesh more easily and more quickly.”

Typically, says Morgan, setting up a robust facial animation system within a game pipeline—a challenge that game developers must now inevitably confront—takes more than a year of work. “If you want to do something with the quality level that next-generation gamers are going to expect, Face Robot will reduce to a matter of weeks, even days, that long and complex process of getting faces set up and into an animation pipeline.”

As Morgan points out, building a facial animation system from scratch is something that happens at the pre-production stage, and typically a developer doesn’t get that much lead time to build an entirely new animation pipeline. “Facial animation is different and specialized compared to full-body character animation; what Face Robot offers is an end-to-end solution for that part of their pipeline, shortening the development cycle and providing facial animation at a speed and level of realism that would be otherwise impossible,” he adds.

LucasArts is also recognizing that the need for greater facial expressivity will only intensify in the wake of advancing AI. According to LucasArts’ senior engineer Steve Dykes, “We’re presently working closely with Industrial Light & Magic on new motion-capture techniques, including facial mocap techniques that will allow our characters to actually act and show an incredible range of emotions.”

For both the player and the characters, these emotions stem from the constant thwarting of expectations during gameplay. Watching a character struggle to cope with such an unyielding world is what allows the player to root for or against their success. But since this struggle has, until now, always been pre-programmed, video games have been unable to exploit this rooting mechanism within the player and, hence, unable to unlock the full emotional potential of the medium.

“Soon gamers will feel like they’re no longer playing a programmer, but a thinking entity. It puts them in an entirely new head space,” says Dr. Paul Kruszewski, chief technology officer at Engenuity, a leader in artificial intelligence solutions. While experts disagree over the specifics of the impending AI revolution, one thing is certain: This is the generation that will sow the seeds of emergent intelligence, seeds that may ultimately grow to realize Henry James’ ideal in the interactive world.

Though not a game AI tool per se, Softimage’s Face Robot allows artists to create high-quality facial animations for characters that are far more expressive.
 
Next month, Part 2 of this series looks at several AI middleware tools aimed at improving AI in next-generation games.

Martin McEachern is an award-winning writer and contributing editor for Computer Graphics World. He can be reached at martin@globility.com.