Mind over Matter
Volume 30, Issue 7 (July 2007)

Last month, we explored the revolutionary impact of NaturalMotion’s dynamic motion synthesis technology (Euphoria) on next-generation games, and looked at the power of behavioral AI tools to add event-driven behaviors to an NPC’s repertoire. While Euphoria is giving NPCs the ability to react autonomously to objects and characters in their immediate vicinity, a new breed of AI “middleware,” led by companies such as Kynogon and Engenuity, is sowing the seeds of their emergent intelligence, helping characters reach their goals and destinations in increasingly complex worlds.

For example, if a character’s goal is to exit a building by descending a winding flight of stairs, slipping out a back door, and crossing a street, AI middleware will calculate the best path through the building, while Euphoria will ensure the character’s body moves and reacts intelligently to overcome every obstacle in the way—be it enemy guards, bombs, collapsing floorboards, and so forth—to execute the goal. “AI is a huge field right now. And I would say that Euphoria is solving bottom up, whereas we are solving top down,” says Paul Kruszewski, chief technology officer for Engenuity, provider of artificial intelligence tools for game development.

Formerly the founder and CTO of BioGraphic Technologies, Kruszewski helped develop the pathfinding middleware software AI.implant, which was subsequently purchased by Engenuity. “We are at the other extreme, establishing the goal and motivation, asking, ‘Where should I go, why should I go there, and how do I get there?’ Our programs are at opposite ends of the spectrum, but I think they’ll meet somewhere in the middle, and they are beginning to do that now.”

Incorporating AI middleware into a production pipeline is a growing trend in game development because it relieves the developer of the cost and tedium of manually programming AI behaviors. Midway Games and Vivendi Games have standardized on Engenuity’s AI.implant for all their next-generation titles, while Electronic Arts, Eden Games, and Lionhead Studios are entrusting their pathfinding problems to Kynogon’s Kynapse. According to both companies, the multi-core power of next-generation consoles has been the long-awaited missing piece needed to unlock the potential of middleware such as Kynapse and AI.implant to revolutionize game AI.

“It was hard slogging in the PS2 and Xbox era,” says Kruszewski. “Next-generation platform development is all about AI. While rendering and physics engines dominated past generations, this is the time for AI to take center stage.” And it’s about time, too; as complex, dynamically modified environments become more pervasive under the power of sophisticated physics engines such as Havok and Ageia’s PhysX, so, too, do the challenges of providing intelligent navigation through them.
 
Midway’s new title Stranglehold focuses on “maximum destructibility,” whereby nearly every object in the environment can be blown apart. This was achieved with AI.implant.

Engenuity’s AI.implant

Engenuity’s AI.implant 5.0 addresses those challenges by providing dynamic obstacle avoidance and pathfinding in both wide-open spaces and dense urban settings. Offering integration with Epic’s Unreal Engine 3, as well as with many animation packages, including Autodesk’s Maya and 3ds Max, AI.implant allows developers to author AI behaviors inside the production environment most familiar to them. Moreover, it is designed to make AI programming accessible to the least technically inclined artists, allowing them to create behavior scripts graphically using an intuitive GUI. In this way, it is building bridges across the archipelago of technical and non-technical stations that normally divide and compartmentalize a production pipeline.

This approach stems from AI.implant’s origins in the broadcast and film markets. While AI.implant’s mandate was always to support intelligent digital humans in any computer graphics application, when the company began six years ago, Kruszewski and his team tailored the software for creating digital extras. In 2004, it helped Stargate Digital simulate a cast of thousands for USA Network’s epic miniseries Spartacus (see “Building an Empire,” April 2004), and more recently, it generated the crowd scenes for the 2005 golf drama The Greatest Game Ever Played.

Realizing the potential military applications, the company adapted the software for real-time combat simulation. For example, to help the Marine Corps defend embassies, Firearms Training Systems (FATS) builds simulation caves that are placed in the basement of almost every embassy. With a projector providing a fully immersive environment inside a 2- by 2-meter box, the marines enter the cave for a few hours every day, plug in an M-16, and fire off rounds at virtual enemies guided by AI.implant.
 
Engenuity’s AI.implant lets artists “program” dynamic obstacle avoidance and pathfinding into a game.

Pathfinding

Pathfinding is usually accomplished through automatic or manual placement of waypoints that direct the NPCs to tactically advantageous positions throughout an environment. Another form of pathfinding uses navigation meshes, which, while yielding more detailed information about walkable surfaces, can be more expensive to compute at runtime. AI.implant supports both.
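To make the waypoint idea concrete, here is a minimal, self-contained C++ sketch, our illustration rather than AI.implant’s API, of pathfinding over a hand-placed waypoint graph; production middleware layers cost-aware searches such as A* and navigation-mesh queries on top of the same structure.

```cpp
// Minimal waypoint-graph pathfinding sketch (illustrative only; not the
// AI.implant API). Waypoints are hand-placed nodes; edges mark walkable links.
#include <cstdio>
#include <queue>
#include <vector>

struct Waypoint {
    float x, z;                  // position on the ground plane
    std::vector<int> neighbors;  // indices of directly reachable waypoints
};

// Breadth-first search: returns the chain of waypoint indices from start to
// goal, or an empty vector if the goal is unreachable.
std::vector<int> FindPath(const std::vector<Waypoint>& graph, int start, int goal) {
    std::vector<int> cameFrom(graph.size(), -1);
    std::queue<int> frontier;
    frontier.push(start);
    cameFrom[start] = start;
    while (!frontier.empty()) {
        int node = frontier.front();
        frontier.pop();
        if (node == goal) break;
        for (int next : graph[node].neighbors)
            if (cameFrom[next] == -1) {
                cameFrom[next] = node;
                frontier.push(next);
            }
    }
    if (cameFrom[goal] == -1) return {};  // goal unreachable
    std::vector<int> path;
    for (int n = goal; n != start; n = cameFrom[n]) path.push_back(n);
    path.push_back(start);
    return {path.rbegin(), path.rend()};  // reverse into start-to-goal order
}

int main() {
    // Four waypoints ringing a courtyard: 0-1-2-3.
    std::vector<Waypoint> graph = {
        {0, 0, {1, 3}}, {10, 0, {0, 2}}, {10, 10, {1, 3}}, {0, 10, {2, 0}}};
    for (int n : FindPath(graph, 0, 2)) std::printf("waypoint %d\n", n);
}
```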

If an animator, having modeled a guard in Maya, wants to make the character patrol the area surrounding a compound, for example, he or she loads the AI.implant plug-in in Maya or in a level editor such as Unreal Engine 3’s, places a network of waypoints in the environment for the guard to follow, and then assigns the behavior “Seek to Via Network” to make the guard vary its path during the patrol. Assigning other behaviors, including “Avoid Obstacles” and “Avoid Barriers,” will make the guard steer clear of trees, fences, and so forth.

Using AI.implant’s software development kit (SDK), a programmer can link AI.implant’s engine into the Unreal engine, drag and drop “human brains” into the environment, and then use markup tools to define waypoint networks and points of interest, such as obstacles, rally points, and cover. The AI.implant system takes over during gameplay and, using this knowledge of the environment, carves out a path.

“AI.implant is unique in that we can change the information on the navigation mesh dynamically,” says Kruszewski. “For example, we can mark up dynamically that an area is on fire. So, while the underlying geometry hasn’t changed, the character knows not to run through the fire but around it.” Sensors help the NPCs perceive events in their environment, while binary decision trees guide their response to the events.
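The fire example can be made concrete with a small, hypothetical sketch (not Engenuity’s API): the geometry below never changes, but raising a danger multiplier on one node of the navigation graph makes the planner route around it.

```cpp
// Sketch of dynamic navigation markup (illustrative; not AI.implant's API).
// Geometry is unchanged; a runtime cost tag makes the planner avoid the fire.
#include <cstdio>
#include <functional>
#include <limits>
#include <queue>
#include <vector>

struct NavNode {
    std::vector<std::pair<int, float>> edges;  // (neighbor index, base cost)
    float dangerMultiplier = 1.0f;             // raised when the area is "on fire"
};

// Dijkstra over the marked-up graph: entering a node costs edge * danger.
std::vector<float> CheapestCosts(const std::vector<NavNode>& mesh, int start) {
    std::vector<float> cost(mesh.size(), std::numeric_limits<float>::infinity());
    using Entry = std::pair<float, int>;
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> open;
    cost[start] = 0.0f;
    open.push({0.0f, start});
    while (!open.empty()) {
        auto [c, node] = open.top();
        open.pop();
        if (c > cost[node]) continue;  // stale entry
        for (auto [next, base] : mesh[node].edges) {
            float c2 = c + base * mesh[next].dangerMultiplier;
            if (c2 < cost[next]) { cost[next] = c2; open.push({c2, next}); }
        }
    }
    return cost;
}

int main() {
    std::vector<NavNode> mesh(3);
    mesh[0].edges = {{1, 1.0f}, {2, 3.0f}};  // short route runs through node 1...
    mesh[1].edges = {{2, 1.0f}};
    mesh[1].dangerMultiplier = 100.0f;       // ...until node 1 is marked on fire
    std::printf("cost to node 2: %.1f\n", CheapestCosts(mesh, 0)[2]);
}
```

Before the markup, the cheapest route to node 2 runs through node 1 at a cost of 2.0; once the area is flagged, the planner pays 3.0 to go around it instead.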

AI.implant divides the environments into three classes of objects: static (walls), semi-static (chairs), and dynamic objects (other NPCs). To prevent NPCs from running into one another, AI.implant uses a process called dynamic obstacle avoidance. The NPC’s perception of the environment is achieved primarily through waypoints and navigation meshes. The latter can require the use of raycasting, a form of environmental perception achieved by shooting rays from an NPC’s eyes to map objects in its path.
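As a rough illustration of raycast perception, again our sketch rather than the shipping implementation, the following code fans rays out from an NPC’s eye and reports the nearest dynamic obstacle along each one.

```cpp
// Illustrative raycast-perception sketch (not the AI.implant API): fan rays
// out from the NPC's eye and report the nearest dynamic obstacle on each ray.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec2 { float x, y; };
struct Obstacle { Vec2 center; float radius; };

// Distance along the ray (origin + t * dir, dir normalized) to the circle,
// or a negative value if the ray misses.
float RayHit(Vec2 origin, Vec2 dir, const Obstacle& ob) {
    float ox = ob.center.x - origin.x, oy = ob.center.y - origin.y;
    float proj = ox * dir.x + oy * dir.y;           // closest approach along ray
    float perp2 = ox * ox + oy * oy - proj * proj;  // squared off-axis distance
    float r2 = ob.radius * ob.radius;
    if (proj < 0 || perp2 > r2) return -1.0f;       // behind the eye, or a miss
    return proj - std::sqrt(r2 - perp2);            // entry point on the circle
}

int main() {
    std::vector<Obstacle> dynamicObjects = {{{5, 0}, 1}, {{3, 4}, 1}};
    Vec2 eye = {0, 0};
    for (int i = -2; i <= 2; ++i) {                 // five rays across a ~60-degree cone
        float angle = i * 0.26f;
        Vec2 dir = {std::cos(angle), std::sin(angle)};
        float nearest = 1e9f;
        for (const auto& ob : dynamicObjects) {
            float t = RayHit(eye, dir, ob);
            if (t >= 0 && t < nearest) nearest = t;
        }
        if (nearest < 1e9f) std::printf("ray %d: obstacle at %.2f m\n", i, nearest);
        else std::printf("ray %d: clear\n", i);
    }
}
```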

Programming Artists

Four years ago, Disney purchased AI.implant for its high-end broadcast and film work. Like Stargate Digital, the studio’s staff comprises artists, not programmers. By watching how those artists worked, noting where they stumbled, and incorporating their feedback, Kruszewski and his team progressively refined AI.implant into a tool set that effectively lets artists program in a visual way.

“The problem with game development today is that all the bots are still being done by programmers, either using C++ or script, which is both time-consuming and sets up a wall between the creative person and the end result,” says Kruszewski. “We’re eliminating the programming stage; now the designer, using these visual tools, will just create the brain himself and hit Go. It’s data-driven, so there’s no recompiling. If the character should be more aggressive, he can change it literally on the fly. So a process that once took eight hours now takes an hour.”

AI.implant also excels at realistic, real-time crowd simulation. Using simple paint tools to fill an environment with a swarming mass of people, an artist can assign such behaviors as milling, panicking, rioting, or looting.

Many games now have ambient crowds wandering and milling about: going up and down a street, obeying streetlights, and popping into a store or a restaurant, for instance. Some of these behaviors would be prebuilt; others would be carved out of the personality of the character. The artist designates stores of interest with mouse clicks, then tells each character to choose one at random to visit, then another, and so forth, capping the number of characters that can enter any one place to avoid congestion. “It’s a lot like directing a simple film,” Kruszewski says.
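A toy version of that store-hopping logic might look like the sketch below; the Store structure and its occupancy cap are our invention for illustration.

```cpp
// Minimal ambient-crowd sketch (our illustration, not a shipped tool): each
// character picks a random store of interest, skipping any that is full.
#include <cstdio>
#include <cstdlib>
#include <vector>

struct Store { const char* name; int occupants; int capacity; };

// Choose a store with room, or return -1 to keep wandering the street.
int PickStore(const std::vector<Store>& stores) {
    std::vector<int> open;
    for (int i = 0; i < (int)stores.size(); ++i)
        if (stores[i].occupants < stores[i].capacity) open.push_back(i);
    if (open.empty()) return -1;
    return open[std::rand() % open.size()];
}

int main() {
    std::vector<Store> stores = {{"teahouse", 0, 2}, {"market", 0, 1}};
    for (int npc = 0; npc < 5; ++npc) {
        int s = PickStore(stores);
        if (s >= 0) {
            ++stores[s].occupants;
            std::printf("NPC %d enters the %s\n", npc, stores[s].name);
        } else {
            std::printf("NPC %d keeps milling about\n", npc);
        }
    }
}
```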

Psychological Programming

When designing brains, developers can use AI.implant to individualize each NPC’s personality and differentiate how otherwise identical models react to similar situations. If, for example, two NPCs are thrown from their path, one may look for the closest edge on the navigation mesh to resume its course; the other might ignore the previous path entirely and bolt, devil-may-care, for its target, ignoring traffic laws or anything else that gets in its way.
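In code, such a personality split can be as simple as a branch on a temperament parameter; the following sketch is hypothetical, with invented names.

```cpp
// Hypothetical sketch of personality-driven path recovery (invented names;
// not AI.implant's API): two temperaments react differently to a knockdown.
#include <cstdio>

struct Vec2 { float x, y; };

enum class Temperament { Cautious, Reckless };

// Where should an NPC head after being thrown from its path?
Vec2 RecoveryTarget(Temperament t, Vec2 nearestNavmeshEdge, Vec2 finalGoal) {
    // A cautious NPC rejoins the navigation mesh and resumes its old course;
    // a reckless one ignores the mesh and bolts straight for the goal.
    return (t == Temperament::Cautious) ? nearestNavmeshEdge : finalGoal;
}

int main() {
    Vec2 edge = {2, 1}, goal = {40, 25};
    Vec2 a = RecoveryTarget(Temperament::Cautious, edge, goal);
    Vec2 b = RecoveryTarget(Temperament::Reckless, edge, goal);
    std::printf("cautious heads to (%.0f, %.0f); reckless bolts to (%.0f, %.0f)\n",
                a.x, a.y, b.x, b.y);
}
```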

“For me, computer graphics in 2007 is about AI; it’s about proceduralism in animation. And in this new era, a new kind of animator is slowly emerging—one who, in a sense, now has the technology to get inside the head of a character and express psychology through motion,” says Kruszewski.

Navigating Debris Fields

While companies such as Havok and Ageia have made it trivial for a developer to set up thousands of rigid-body simulations in a game, sending NPCs through the resulting debris field can be a nightmare for the AI programmer: because the physics simulation is dynamic, the field is riddled with local minima, or dead ends. So Engenuity worked with Midway on the studio’s big launch title, Stranglehold, shipping this summer.

“The big selling point of the game is ‘massive destructibility,’ so we had to deliver,” says Kruszewski. The game is set among the congested streets, restaurants, and tea shops of Hong Kong, where everything from walls, chairs, and glass windows, to tea cups and chopsticks, can be blown to pieces. To this end, the AI literally analyzes the dynamically changing world in real time, carving out mini-paths for the NPCs through the ubiquitous debris.

The newly released AI.implant Version 5.0 enables massive destructibility and open-ended gameplay through a unique Dynamic Path Refinement (DPR) technology that adapts and responds to ever-changing in-game physics. DPR allows developers to create games with far more content while sidestepping the tedium of scripting every possible gameplay scenario. Version 5.0 also features new obstacle-traversal and vaulting options that let characters navigate environments in multiple ways, such as vaulting over obstacles rather than simply walking around them, with an emphasis on cinematic movement.

In most games, of course, NPCs spend the majority of their lifetime pursuing the player, making stealth and furtiveness two desirable qualities. Thus, by using its knowledge of the floor material, AI.implant can direct the NPC to pursue a destination via the most carpeted path, because it will be quieter. AI.implant can also give the NPC a path that keeps it out of the player’s sight or at least minimizes its exposure.
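One way to express that preference is to fold noise and exposure into the path cost, as in this hedged sketch; the surface types and weights are ours, not AI.implant’s.

```cpp
// Illustrative cost shaping for stealthy pursuit (not the shipping API):
// each step is penalized by floor noise and by exposure to the player's
// line of sight, so the cheapest path is the quiet, concealed one.
#include <cstdio>
#include <vector>

enum class Floor { Carpet, Wood, Tile };

struct Step { Floor floor; bool visibleToPlayer; };

float NoiseCost(Floor f) {
    switch (f) {
        case Floor::Carpet: return 0.1f;  // quietest surface
        case Floor::Wood:   return 0.6f;
        default:            return 1.0f;  // tile is loudest
    }
}

// Total penalty for a candidate path; the planner picks the lowest-cost one.
float PathPenalty(const std::vector<Step>& path, float exposureWeight) {
    float total = 0.0f;
    for (const Step& s : path)
        total += NoiseCost(s.floor) + (s.visibleToPlayer ? exposureWeight : 0.0f);
    return total;
}

int main() {
    std::vector<Step> hallway  = {{Floor::Tile, true}, {Floor::Tile, true}};
    std::vector<Step> backRoom = {{Floor::Carpet, false}, {Floor::Carpet, false},
                                  {Floor::Carpet, false}};
    // The longer carpeted detour wins: 0.3 versus 12.0 for the exposed hallway.
    std::printf("hallway: %.1f  back room: %.1f\n",
                PathPenalty(hallway, 5.0f), PathPenalty(backRoom, 5.0f));
}
```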

Not only will AI middleware yield more highly intelligent characters, but the newfound hardware power will ensure the environments are teeming with them. Kruszewski estimates that very soon we will see crowds of over 100, and by the end of the Xbox 360 and PS3 era, crowds will soar to well over 1000. AI.implant’s Special Everything Wrangler (SPEW) can already process a cast of thousands in real time. If that sounds like overkill, think again. “I can’t disclose details, but we’ve seen specs for upcoming MMOGs (Massively Multiplayer Online Games) that call for over 1000 NPCs interacting at any given time. They are, of course, splitting it over a couple of processors,” he says. “Right now, we’re doing demos in the dual-core PC world of about 300, and so with these new quads (see “The Power of Four,” pg. 44), soon there will be about 1000 on a PC. I think that in the military space and MMOGs, simulations of over 10,000 will become commonplace.”

But what role will AI-controlled NPCs play in the online gaming world, where every character is potentially human-controlled? Will the NPCs be relegated to clerks and waiters? An upcoming MMOG is addressing the problem with what may be the most novel use of NPCs yet, which Kruszewski can only hint at right now. “All I can say is people are building complex virtual worlds, and they could be entertainment facilities, and people will sign on to be entertainment.”

Kynogon’s Kynapse

Unfortunately, as gaming environments expand to encompass huge numbers of NPCs and massive destructibility becomes a priority, manually mapping these massive levels with a grid of waypoints could become impossible. That’s because a complete, high-density description of the world could cripple the CPU and overburden memory, while a low-density description may leave the NPCs blind. Hence, next-generation games may require an automatic method of generating pathfinding data.

Enter Kynogon’s Kynapse, an AI middleware tool that automatically generates a model of the world, called Pathdata. The Pathdata is extracted not only from the polygonal terrain, but also from movement and physics-based collision models. Other AI middleware requires a navigation mesh provided by the client, restricting the information to ground-based entities. When computing the Pathdata, Kynapse can identify three models of movement: ground-based, flying, and ant-like (entities attached to walls). Using the Pathdata, NPCs can perform a topology analysis, and then use spatial reasoning at runtime to identify places of interest, such as hiding places, fleeing and attack positions, and so forth.
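As a flavor of what such runtime spatial reasoning might look like, the sketch below, our construction rather than Kynogon’s actual Pathdata query, scores candidate positions to pick a hiding place that is far from a threat and behind cover.

```cpp
// Illustrative spatial-reasoning sketch (our construction; not Kynogon's
// Pathdata API): score candidate spots in the world model to pick a hiding
// place away from a threat, strongly preferring positions with cover.
#include <cmath>
#include <cstdio>
#include <vector>

struct Spot { float x, y; bool hasCover; };

int BestHidingSpot(const std::vector<Spot>& spots, float threatX, float threatY) {
    int best = -1;
    float bestScore = -1.0f;
    for (int i = 0; i < (int)spots.size(); ++i) {
        float d = std::hypot(spots[i].x - threatX, spots[i].y - threatY);
        float score = d + (spots[i].hasCover ? 20.0f : 0.0f);  // cover bonus dominates
        if (score > bestScore) { bestScore = score; best = i; }
    }
    return best;
}

int main() {
    std::vector<Spot> spots = {{5, 5, false}, {8, 2, true}, {12, 9, false}};
    int s = BestHidingSpot(spots, 0, 0);  // threat at the origin
    std::printf("hide at spot %d (%.0f, %.0f)\n", s, spots[s].x, spots[s].y);
}
```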

Used by such developers as EA, Sega, Atari, and Real Time Worlds, and also integrated into Unreal Engine 3, Kynapse can handle pathfinding, team behaviors, and spatial reasoning among the increasingly complex collision simulations.

“Ten years ago, we had small, static environments with a few entities. Today, there are three main challenges. The first is dealing with the large number of entities (we’re already dealing with over 1000) functioning completely independently in a real crowd. The second challenge is the size of the environments; next-gen clients are dealing with a 10x10 km, highly detailed urban world. The third challenge is the preponderance of more and more dynamic objects,” says Pierre Pontevia, CEO of Kynogon.
 
Kynogon’s Kynapse AI middleware solution computes Pathdata from a game, identifying various modes of movement. As a result, NPCs can analyze the topology to make intelligent decisions concerning hiding places, attack positions, and so forth.

Opening an NPC’s Eyes

To enable NPCs to utilize all the dynamic objects in their world and reach their destinations, Kynapse employs Path Objects. For example, a locked door that can be unlocked and opened on an NPC’s way to its destination would be considered a Path Object.
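One can imagine a Path Object as an interface that dynamic objects implement so the planner can treat them as traversable edges; the code below is a hypothetical rendering of the idea, not Kynapse’s actual interface.

```cpp
// Hypothetical Path Object sketch (invented interface; Kynapse's differs):
// dynamic objects advertise how an NPC can move through them, so a locked
// door becomes an edge in the path rather than a wall.
#include <cstdio>

class PathObject {
public:
    virtual ~PathObject() = default;
    virtual bool CanTraverse() const = 0;    // is this link usable right now?
    virtual float TraverseCost() const = 0;  // extra path cost (e.g., unlock time)
    virtual void Traverse() = 0;             // play the action that gets us through
};

class LockedDoor : public PathObject {
    bool locked_ = true;
public:
    bool CanTraverse() const override { return true; }  // this NPC carries the key
    float TraverseCost() const override { return locked_ ? 3.0f : 0.5f; }
    void Traverse() override {
        if (locked_) { std::printf("unlocking door...\n"); locked_ = false; }
        std::printf("opening door and walking through\n");
    }
};

int main() {
    LockedDoor door;
    // The planner adds TraverseCost() to the edge while searching; when the
    // path is executed, the Traverse() action delivers the NPC through.
    std::printf("edge cost: %.1f\n", door.TraverseCost());
    door.Traverse();
}
```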

With Kynapse, getting an NPC to its destination requires three steps. The first is path planning, which involves extracting a viable path that will take an NPC from point A to point B. The second is path following, which smooths the path as it is traversed to yield a natural, non-robotic trajectory. And the third is dynamic avoidance, which involves steering around all the obstacles that could get in the way, including other NPCs.
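Condensed into code, the three stages might look like this illustrative sketch: a stubbed planner, one round of corner-cutting for smooth following, and a nudge around a dynamic obstacle.

```cpp
// Condensed sketch of the three stages (illustrative; not Kynapse's API):
// plan a coarse path, smooth it into a non-robotic trajectory, then nudge
// it around a dynamic obstacle at execution time.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec2 { float x, y; };

// Step 1: path planning (stubbed here as a fixed coarse path from A to B).
std::vector<Vec2> Plan() { return {{0, 0}, {10, 0}, {10, 10}}; }

// Step 2: path following -- one round of Chaikin corner-cutting rounds off
// the sharp turn so the trajectory reads as natural rather than robotic.
std::vector<Vec2> Smooth(const std::vector<Vec2>& p) {
    std::vector<Vec2> out = {p.front()};
    for (size_t i = 0; i + 1 < p.size(); ++i) {
        Vec2 a = p[i], b = p[i + 1];
        out.push_back({0.75f * a.x + 0.25f * b.x, 0.75f * a.y + 0.25f * b.y});
        out.push_back({0.25f * a.x + 0.75f * b.x, 0.25f * a.y + 0.75f * b.y});
    }
    out.push_back(p.back());
    return out;
}

// Step 3: dynamic avoidance -- push any point inside an obstacle's radius
// back out to its rim.
void Avoid(std::vector<Vec2>& path, Vec2 c, float r) {
    for (Vec2& p : path) {
        float dx = p.x - c.x, dy = p.y - c.y, d = std::hypot(dx, dy);
        if (d < r && d > 0) { p.x = c.x + dx / d * r; p.y = c.y + dy / d * r; }
    }
}

int main() {
    std::vector<Vec2> path = Smooth(Plan());
    Avoid(path, {10, 2}, 2.0f);  // another NPC is standing near the corner
    for (const Vec2& p : path) std::printf("(%.1f, %.1f)\n", p.x, p.y);
}
```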

According to Pontevia, there are a few major challenges facing developers and software vendors in the next-gen era, including assimilating all the new technologies so they function synergistically in a game. Another is finding the best way to exploit the new processing power.

“On the Xbox side, you have three processors. If the GPU [utilizes] one processor, the rest will be dedicated to physics and AI. So the challenge is, how do you best use six native threads in three processors for that purpose?” Pontevia questions. “On the PS3 side, you’ve got an incredible multi-processing and multi-core architecture bearing the core processor and eight CPUs, yielding an awesome power of parallelism. It has to be mastered, but by whom? The answer is the physics and AI people. So, the learning curve is quite steep, but eventually we’re going to be able to deliver thousands of entities, smarter entities, much larger worlds, and much more dynamic worlds. We’re going to break a lot of barriers gamers have grown accustomed to.”

While the potential may exist to create thousands of autonomous, highly intelligent NPCs capable of outmatching the player at every turn, Pontevia and Engenuity’s Kruszewski caution that such invincibility can result in a frustrating game experience for the player. “Although it’s possible to make an NPC so smart that the player will never see them, is it really worth having them? It’s a game; it’s not a simulation,” Pontevia notes.

Artificial Contender

Imagine any high-impact scenario from last year’s Super Bowl, Stanley Cup, or NBA finals. While Euphoria could procedurally simulate those interactions, what if these simulations could transpire according to each athlete’s unique style of play?

TruSoft’s Artificial Contender (AC), a new behavior-capture AI middleware technology, is making it possible. Already used in Sony Computer Entertainment’s This Is Football 2005, AC lets developers and game players “train” NPCs to learn a human player’s behavior and playing style. Once trained, they’re known as AC agents.

The implementation of AC agents into a game requires three major steps: integration of AC technology into a game using AC’s SDK; tuning and testing of AC agents’ performance using AC tools, which are accessible to both programmers and designers; and training of AC agents by playing a game in the role of each agent that is being trained.

“Game designers sit down with a console or PC and play the game as it appears to the player, or end user,” says TruSoft’s technical director Iskander Umarov. They play in the role of an AC agent, which learns tactics and strategies from the player without the need for coding. An AC agent can also fine-tune a learned behavior through self-learning (playing the game by itself). In the future, TruSoft hopes to capture data from the real world, such as video recordings of an athlete.

Left is an image from Sony’s This Is Football 2005, which used TruSoft’s AC behavior-capture middleware. Above are shots from the AC Game Viewer tool applied to a soccer game.

Training an NPC 

By grouping similar gameplay situations during learning—adding them as nodes on a graph and relating them through a generalization tree—AC agents can later search the graph for comparable situations and find the best response.

“This functionality is the heart of AC technology. AC automatically creates representations for game situations on different levels of abstraction called Zoom Levels,” says Umarov. As the designer or end user plays, AC stores the Zoom Levels in a combination of graphs and generalization trees that create a very compact and fast representation of knowledge. The graphs and trees are grown and analyzed hierarchically in real time, giving the NPCs the means to make strategic and tactical decisions during gameplay. While more training yields better strategic and tactical decisions, the average time required to train an AC agent is 15 to 30 minutes for fighting games, 30 to 40 minutes for sports games, and 40 to 90 minutes for real-time strategy games.
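To make the idea tangible, here is a speculative sketch in which Zoom Levels are approximated by quantizing the same situation at finer and coarser grids; TruSoft’s actual representation is proprietary, and every name here is invented.

```cpp
// Speculative behavior-capture sketch (invented stand-in for TruSoft's
// proprietary structures): "Zoom Levels" are approximated by quantizing the
// same situation at finer and coarser grids. Training stores an action per
// key; recall tries the finest match first and falls back to coarser ones.
#include <cstdio>
#include <map>
#include <string>

struct Situation { float distToOpponent; float ownHealth; };

// Quantize a situation; a larger cell means a coarser zoom level.
std::string Key(const Situation& s, int cell) {
    return std::to_string((int)s.distToOpponent / cell) + ":" +
           std::to_string((int)s.ownHealth / cell);
}

class Agent {
    std::map<std::string, std::string> memory_[3];        // zoom levels 0 (fine) to 2
    static int Cell(int level) { return 1 << (level + 1); }  // cell sizes 2, 4, 8
public:
    void Train(const Situation& s, const std::string& action) {
        for (int level = 0; level < 3; ++level)
            memory_[level][Key(s, Cell(level))] = action;
    }
    std::string Act(const Situation& s) const {
        for (int level = 0; level < 3; ++level) {         // finest match wins
            auto it = memory_[level].find(Key(s, Cell(level)));
            if (it != memory_[level].end()) return it->second;
        }
        return "default behavior";
    }
};

int main() {
    Agent agent;
    agent.Train({3.0f, 80.0f}, "attack");    // close and healthy: press forward
    agent.Train({20.0f, 15.0f}, "retreat");  // far and hurt: back off
    std::printf("%s\n", agent.Act({2.0f, 81.0f}).c_str());   // fine-level match
    std::printf("%s\n", agent.Act({5.0f, 86.0f}).c_str());   // only coarse level matches
    std::printf("%s\n", agent.Act({50.0f, 50.0f}).c_str());  // unseen: default
}
```

Recall first tries the exact, fine-grained memory and degrades gracefully to coarser generalizations, which is the essence of matching similar rather than identical situations.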

Depending on the game and the situation within it, one minute of training can generate about 10 to 120 training samples. A typical fighting game, for example, will usually generate about 60 to 120 training samples per minute; a real-time strategy game, 10 to 60. Plus, a designer can add more training samples at any time.

While many types of games will benefit from behavior-capture AI, including sports, fighting, real-time strategy, first-person shooters, and action games, sports games will obviously reap the greatest rewards. For the computer-controlled teams and athletes, the style of play will more closely reflect that of their real-life counterparts. “More specifically, in soccer, Brazil will play like the real Brazilian team, Italy will play like the real Italian team, Chelsea FC will play like the real Chelsea FC team, and David Beckham will play like the real David Beckham,” says Umarov. The integration of AC into a game will also provide players with new game modes, allowing them to play as a teacher or a coach, training allies or training AC bots to compete with one another or human players.

Additionally, if game designers decide to turn on self-learning, players will find that allies and opponents will learn quickly and improve during gameplay. How their favorite athlete or team adapts to a situation the first time they load up a game will not be the same a few months later, making the gameplay more challenging, exciting, and realistic. AC also seamlessly interfaces with a game’s regular AI. For instance, in This Is Football 2005, AC controls the player with the ball, while the rest of the players operate under the game’s regular AI, which could be homegrown or middleware.

Moreover, TruSoft maintains that AC holds its own unique role in the development pipeline that doesn’t conflict with other middleware offerings.

“For example, let’s imagine a typical modern real-time strategy game. While a technology like Euphoria will be responsible for realistic real-time animations, technologies like AI.implant or Kynapse will be responsible for pathfinding, collision-avoidance, and crowd simulation; Artificial Contender’s role will be to provide tactical and strategic commands to armies and units corresponding to specific pre-trained routines by game designers or end users, or dynamically adaptable styles of play,” says Umarov.

While TruSoft’s main focus right now is to integrate AC into more games and support more genres by creating game and genre-specific AC engines, its goal for the future is twofold: enhanced, multi-agent cooperative behavior and training AC agents using captured data from the real world, in addition to that captured from humans playing the video game.

While breakthroughs and creative innovations in AI middleware promise to revolutionize the gameplay experience and streamline the production process, Alex J. Champandard, a professional AI programmer who runs aigamedev.com, an AI Web site and blog, cautions that advancements are still needed. “Kynapse, AI.implant, and Artificial Contender are all great solutions to get projects jump-started, involve the designers early in the development cycle, and not produce anything embarrassingly buggy in the end,” he says. “But it’s certainly the case that industry experts will be able to do a better job with a game-specific system; all the games that are famous for their AI could have never shipped with these products.”

Following the arrival of next-generation consoles, Champandard’s vision for the new norm in video games includes more lifelike animation (like that offered by NaturalMotion’s Euphoria or Havok Behavior) as well as realistic goal-driven behaviors for a massive number of characters.

“Just replace the word ‘poly’ with ‘people,’ and you’ll have some idea of how crowds will increase. Think of how unacceptable low poly counts in an environment are; that’s how unacceptable low NPC counts will be, especially for Grand Theft Auto-type games where you’ve got 30 seats of Maya or Max cranking out huge worlds, but only 20 people who don’t react. That’s unacceptable now,” says Engenuity’s Kruszewski. Hence, for those developers ill-equipped to handle the Herculean programming challenges of the next-gen era, Kruszewski and his fellow AI middleware proprietors may be their only hope of keeping up with the new norm in artificial intelligence. 


AI.implant intelligently guides NPCs through a crowded street scene.


Martin McEachern is an award-winning writer and contributing editor for Computer Graphics World. He can be reached at martin@globility.com.