Volume 23, Issue 7 (July 2000)

Virtual Agents Get Smart

While he can't guarantee intelligent life in the real world, Craig Barnes wants to make it easy to program it into virtual ones. The University of Illinois computer-graphics researcher has designed a system called HAVEN that enables agent behavior to be specified easily from within a virtual environment. HAVEN (Hyperprogrammed Agents for Virtual Environments) uses a visual programming language to specify agent actions by example, from low-level movement to higher-level reactive rules and plans.

In a virtual environment, intelligent agents are software elements that are programmed to perceive their environment and perform actions in reaction to change. "They can be as simple as creatures that populate the environment, or they can be as complex as tour guides or instructors. They provide a means of enriching a virtual environment with other inhabitants that exhibit believable behavior," says Barnes.
To teach a virtual agent how to avoid an obstacle, users of a new visual programming system called HAVEN can manipulate the object to simulate the desired path. This action is then embedded as a rule governing the agent's behavior under the defined conditions.

Though the concept is simple, the specification of agent behavior is typically not. "Current systems tend to be bound to a programming language, and implementing agent behavior usually requires a considerable programming effort," says Barnes, whose goal with HAVEN is to provide a means for designating agent behavior with as little programming effort as possible. He achieves this through a set of high-level visual tools that encapsulate the basic movements an agent can perform and its reactive behaviors.

HAVEN employs basic motion capture techniques to define the range of movements for a given agent. Simple rules consisting of an enabling condition and an action dictate appropriate reactions. "To build a rule, a user sets up the conditions and then visually [using representative icons] demonstrates the action needed to 'resolve' the rule," says Barnes. For example, an object in the agent's path might be an enabling condition to which the agent must react by, say, walking around it, stepping over it, or backing away from it. "Once a rule has been specified, it becomes part of the agent's behavior, but the rules can be edited or generalized to allow for a larger set of enabling conditions."
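
In code, such a condition-and-action pairing might be sketched along the following lines. This is a minimal Python illustration, not HAVEN's actual interface; the Rule and Agent classes and the one-dimensional obstacle "sensor" are invented for the example.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    """Toy agent: a position plus a list of demonstrated rules (illustrative)."""
    x: float = 0.0
    obstacle_x: float | None = None          # what its "sensor" currently sees
    rules: list["Rule"] = field(default_factory=list)

@dataclass
class Rule:
    """A reactive rule: an enabling condition paired with an action."""
    name: str
    condition: Callable[[Agent], bool]
    action: Callable[[Agent], None]

def tick(agent: Agent) -> None:
    """Each update, fire the action of every rule whose condition holds."""
    for rule in agent.rules:
        if rule.condition(agent):
            rule.action(agent)

# Enabling condition: an obstacle lies within one unit ahead of the agent.
def obstacle_ahead(agent: Agent) -> bool:
    return agent.obstacle_x is not None and 0 < agent.obstacle_x - agent.x < 1.0

# Action: a stand-in for replaying the user's demonstrated detour.
def walk_around(agent: Agent) -> None:
    agent.x = agent.obstacle_x + 0.5         # end up past the obstacle

agent = Agent(x=1.5, obstacle_x=2.0)
agent.rules.append(Rule("avoid-obstacle", obstacle_ahead, walk_around))
tick(agent)
print(agent.x)  # 2.5 -- the agent has moved around the obstacle
```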

HAVEN builds on an existing, vertically layered agent architecture (InterRap) in which each level is designed to handle increasingly complex tasks. HAVEN enhances this architecture through its reliance on a distributed scene graph, which is a database of geometry and transformations stored as nodes in a tree. The agent's input system, which is made up of sensors, is bound to the nodes on its scene graph, enabling it to perceive its environment and communicate that perception back to the graph to relay actions and reactions.
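
The sensor binding Barnes describes might be sketched as follows. The SceneNode class and its notification callbacks are assumptions for illustration, not the distributed scene graph HAVEN actually uses.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class SceneNode:
    """One node in the scene graph: geometry plus a transform, stored in a tree."""
    name: str
    transform: tuple[float, float, float] = (0.0, 0.0, 0.0)  # translation only
    children: list["SceneNode"] = field(default_factory=list)
    sensors: list[Callable[["SceneNode"], None]] = field(default_factory=list)

    def set_transform(self, t: tuple[float, float, float]) -> None:
        """Changing a node notifies any agent sensors bound to it."""
        self.transform = t
        for sensor in self.sensors:
            sensor(self)

root = SceneNode("world")
door = SceneNode("door")
root.children.append(door)

# An agent binds a sensor to the door node so it perceives the change.
door.sensors.append(lambda node: print(f"agent saw {node.name} move to {node.transform}"))
door.set_transform((1.0, 0.0, 0.0))
```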

To facilitate an easy interface with virtual reality applications, HAVEN incorporates "world" and "display" modules. The world module is the central manager: it maintains the world's appearance, global-state information, and the registration of agents and users as they enter and leave the environment. The display module contains information about the appearance of the user (as an avatar) or the agent. User or agent actions that change the local scene graph are reflected back to the world module, which in turn updates the other users and agents.
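
A rough Python sketch of that world/display split follows; the class and method names are invented for illustration, not HAVEN's API.

```python
class DisplayModule:
    """Holds the appearance of one user (avatar) or agent."""
    def __init__(self, name: str):
        self.name = name

    def update(self, who: str, pose: tuple) -> None:
        print(f"{self.name} sees {who} at {pose}")

class WorldModule:
    """Central manager: global state plus user/agent registration."""
    def __init__(self):
        self.displays: dict[str, DisplayModule] = {}

    def register(self, display: DisplayModule) -> None:
        self.displays[display.name] = display

    def unregister(self, name: str) -> None:
        self.displays.pop(name, None)

    def local_change(self, who: str, pose: tuple) -> None:
        """A change to one participant's local scene graph reflects back here,
        and the world module pushes the update to everyone else."""
        for name, display in self.displays.items():
            if name != who:
                display.update(who, pose)

world = WorldModule()
world.register(DisplayModule("avatar:alice"))
world.register(DisplayModule("agent:guide"))
world.local_change("avatar:alice", (2.0, 0.0, 1.0))
```
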
By manipulating visual programming icons such as these spheres and blocks, HAVEN users teach virtual agents to perform basic motor skills such as moving forward, turning, and jumping. The user "demonstrates" the desired action, which then becomes part of the system.

The HAVEN programming environment supports three distinct behavioral layers: motor skills, reactive rules, and plans. Motor skills, such as walking or grasping, are made up of degrees of freedom and are tied to nodes on the scene graph. Motion is specified by manipulating the nodes. For forward motion, for example, the user can drag a representation of the root node forward and release it; the distance dragged and the time the drag takes are used to compute the default forward speed.
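
Computing a default speed from such a demonstration is simply distance over duration. The DragRecorder API below is a hypothetical stand-in for the drag interaction the article describes.

```python
import time

class DragRecorder:
    """Records one demonstrated drag of a scene-graph node (illustrative)."""
    def __init__(self, start_pos: float):
        self.start_pos = start_pos
        self.start_time = time.monotonic()

    def finish(self, end_pos: float) -> float:
        """Return the demonstrated speed: distance dragged over time taken."""
        elapsed = time.monotonic() - self.start_time
        return (end_pos - self.start_pos) / elapsed

rec = DragRecorder(start_pos=0.0)
time.sleep(0.5)                             # the user drags for half a second...
default_speed = rec.finish(end_pos=1.0)     # ...covering one unit of distance
print(f"default forward speed: {default_speed:.2f} units/s")  # ~2 units/s
```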

The reactive layer uses the defined motor skills to perform rule-based behaviors, such as moving around an obstacle. The programmer tells the agent that a new rule is being defined, with the obstacle as its enabling condition, then moves the agent around the obstacle until the path is clear and signals that the definition is complete. The demonstrated action then becomes the rule.
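
That record-and-package flow might be sketched like this; the begin/end recording calls and the waypoint representation are invented for illustration.

```python
class RuleRecorder:
    """Captures a demonstrated detour as a condition-plus-path rule (illustrative)."""
    def __init__(self):
        self.recording = False
        self.waypoints: list[tuple[float, float]] = []

    def begin_rule(self, enabling_condition: str) -> None:
        """'This is a new rule, and the obstacle is its enabling condition.'"""
        self.condition = enabling_condition
        self.waypoints = []
        self.recording = True

    def move_agent(self, x: float, y: float) -> None:
        """While recording, every demonstrated move becomes part of the action."""
        if self.recording:
            self.waypoints.append((x, y))

    def end_rule(self) -> dict:
        """'The definition is complete': package condition + path as a rule."""
        self.recording = False
        return {"condition": self.condition, "action": list(self.waypoints)}

rec = RuleRecorder()
rec.begin_rule("obstacle ahead")
for step in [(0.0, 0.5), (0.5, 1.0), (1.0, 0.5)]:   # detour around the obstacle
    rec.move_agent(*step)
rule = rec.end_rule()
print(rule["condition"], len(rule["action"]), "waypoints")
```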

Finally, the plan layer is used to build sequences of rules and goal-directed behavior. This layer comprises two sub-layers: plan specification and goal specification. In the former, the programmer specifies the "plan" by defining the enabling condition and performing a set of actions (versus a single action). In goal specification, the programmer specifies a goal state, and the agent computes a plan to achieve the goal using information and rules from its knowledge base. In this way, the hierarchical design of the HAVEN programming interface not only eases the specification of agent behavior, it also enables the creation of increasingly complex behaviors without having to program everything from scratch.
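
Goal specification, in this spirit, amounts to searching the rule base for a sequence of actions that reaches the goal state. The toy planner below uses breadth-first search over invented states and rules; it illustrates the idea rather than HAVEN's actual planner.

```python
from collections import deque

# Knowledge base of rules: (precondition state, action, resulting state).
# States and actions are invented for the example.
RULES = [
    ("at_door",   "open_door",    "door_open"),
    ("door_open", "walk_through", "inside_room"),
    ("start",     "walk_to_door", "at_door"),
]

def plan(start: str, goal: str) -> list[str] | None:
    """Breadth-first search over the rule base for a plan reaching `goal`."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for pre, action, post in RULES:
            if pre == state and post not in seen:
                seen.add(post)
                frontier.append((post, actions + [action]))
    return None   # no plan found

print(plan("start", "inside_room"))
# ['walk_to_door', 'open_door', 'walk_through']
```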

Barnes is considering a number of enhancements to HAVEN, including the development of better techniques for motor-skill specification, possibly in the form of keyframe animation or inverse kinematics. Another improvement, he suggests, "is to enable agents to affect the geometry of other objects in the environment. Since the behaviors are rule-based, it should be possible to specify rules that would allow agents to construct, tear down, or alter shapes." Finally, the programming approach could be further simplified by including voice recognition, he says. "A user could combine motion and voice instruction, making it easier to specify rules."

Diana Phillips Mahoney is chief technology editor of Computer Graphics World.