About 12 months from now, in the heart of the Big Apple, an estimated 52,000 fans will stream into a new baseball stadium to watch the inaugural pitch. On game day, they’ll race up the steps of the courtyard, walk through the marbled lobby, and pause to admire portraits of baseball heroes on the giant banners. At present, however, oversize cranes tower over home plate, and scaffolding leans against the windows.
Nevertheless, to fulfill the resident team’s desire for a fan-friendly stadium, the operators need to know how visitors will react to the environment, currently still under construction. Based on this information, they can refine the fans’ experience. For instance, if they know it takes too long to reach the nearest washrooms from the luxury suites and the club suites, they might consider relocating the toilet facilities. Or, if they find out that most first-time visitors tend to assemble in a specific corner of the lobby, they can pitch that spot to advertisers for top dollar.
Short of herding 52,000 volunteers into an incomplete stadium and observing their random movements, the owners have few options for figuring out the anticipated foot traffic. So they enlisted the help of Baljinder Bassi, a project manager at Hatch Mott MacDonald, a global architecture and engineering consultancy. He can let loose a horde of virtual pedestrians into a digital replica of the stadium. (Per instructions from the client, the firm is not permitted to release the name of the stadium.)
Four years ago, the concerned managers of another stadium approached Kynogon, a company specializing in artificial intelligence solutions and recently acquired by Autodesk. Their stadium was about to host a major sporting event that could attract not just thousands of spectators, but also demonstrators. Could Kynogon help them study a number of riot scenarios at the location? Pierre Pontevia, Kynogon’s CTO, was happy to oblige, commanding an army of computer-driven rioters ready to do his bidding.

(Above) Autodesk’s Kynapse, an AI solution for driving virtual entities in games, can be used for architectural simulations, especially to study evacuation scenarios like this one.

(Bottom right) Massive Software is introducing AI-driven agents specifically designed for architectural simulation.
A couple of years ago, Diane Holland, CEO of Massive Software, was contacted by marine engineering consultancy BMT to deploy Massive’s AI engine to study marine traffic within Hong Kong’s harbor and the busy ports of Southeast Asia. This came from BMT’s recognition that human factors underpin the majority of ship-collision incidents. Following development of appropriate collision-avoidance rule sets, BMT used Massive, which once helped Peter Jackson re-create the traffic conditions of 1930s New York for the movie King Kong, to simulate the frenetic activity of Hong Kong harbor, which generates more than 15,000 vessel movements daily—everything from the world’s largest container ships to high-speed ferries, tugboats, and sampans.
AI was once considered pure science fiction, but in the last few years, architects, facilities operators, and government agencies have come to regard it as legitimate science. The same algorithms that once brought Orc armies to life in The Lord of the Rings now drive the behaviors of enthusiastic fans and frantic evacuees in computer-run simulations. The experiments in this emerging discipline reveal not only the potentials of the new application, but also the elusive nature of human behavior.

Watch Your STEPS
To simulate the conditions of a game day still a year away, Bassi and his team first built a digital copy of the new stadium in Autodesk’s 3ds Max, using the Autodesk AutoCAD drawings supplied by the client as a foundation. Then they brought the model into STEPS (Simulation of Transient Evacuation and Pedestrian Movements), a software program developed by Hatch Mott MacDonald and available for commercial licensing.
STEPS can read 3ds Max’s ASCII Scene Export (ASE) format, as well as DXF files with elevation information. The software has its own built-in 3D modeling tools, but they’re better suited to refining and correcting imported geometry than to building new structures from scratch. Once the architectural environment is in place, the user selects all the planes representing the surfaces on which pedestrians can walk. A simulation scenario usually involves more than one plane; these planes are stitched together into a path network, a collection of surfaces connected by exits through which the virtual humans can move.
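The idea of stitching walkable planes into a path network, and routing crowd-aware agents over it, can be illustrated with a small sketch. The names and structure below are invented for illustration; they are not STEPS’s actual data model or API.

```python
import heapq

def stitch_network(planes, exits):
    """Stitch walkable planes into a path network.
    planes: iterable of plane names; exits: (plane_a, plane_b) pairs.
    Returns an adjacency map of which planes connect to which."""
    network = {p: set() for p in planes}
    for a, b in exits:
        network[a].add(b)
        network[b].add(a)
    return network

def route(network, start, goal, congestion=None, impatience=0.0):
    """Dijkstra's shortest path over the plane graph.
    `congestion` maps a plane to a crowding cost; a higher `impatience`
    steers the agent away from crowded planes onto longer, emptier routes."""
    congestion = congestion or {}
    frontier = [(0.0, start, [start])]
    done = set()
    while frontier:
        cost, here, path = heapq.heappop(frontier)
        if here == goal:
            return path
        if here in done:
            continue
        done.add(here)
        for nxt in network[here]:
            if nxt not in done:
                step = 1.0 + impatience * congestion.get(nxt, 0.0)
                heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None  # goal unreachable from start
```

With two concourses linking a lobby to the stands, an agent indifferent to crowding takes either concourse, while an impatient agent detours around the congested one.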
In the stadium project, programmable digital entities, also called agents, represent the fans. These agents are equipped with a field of vision. Simply put, they can detect the solid objects defined by the user as they travel through a plane. They use this knowledge—or intelligence, if you will—to execute the navigation commands issued to them. So if they are programmed to move from point A (the box office) to point B (the concession stand), they will avoid the obstructions (pillars, walls, and furniture) along the way to get to the destination using the shortest and quickest route. These behaviors can be further refined so that if a chosen route is too lengthy or many other fans are using the same route, the select few representing impatient fans may change direction and use alternative routes available to them.

Massive Evacuation
Massive, which the firm describes as “an AI authoring solution for simulation and visualization using autonomous agents,” was previously available on Linux only, beyond the reach of the PC-dominated architecture industry. But at SIGGRAPH 2007, Massive’s founder and product manager, Stephen Regelous, declared Massive to be available on Windows, throwing the doors wide open to the architectural community.
One company that has embraced Massive is Arup, a consulting engineering firm. Arup is using Massive to study occupant behavior in the celebrated Transformation: The Los Angeles County Museum of Art Campaign, a master plan for the museum designed by Renzo Piano.
“Massive is unique in that it simulates reality from the bottom up, modeling an individual’s decision-making process that produces emergent behavior,” says Arup fire discipline leader Nathan Wittasek. “Each agent finds its own way around an environment based on what it sees, hears, touches, and remembers, just like a real person would.”
Massive’s AI engine, employed to re-create the traffic conditions of 1930s New York City for King Kong, is also used to simulate the freight and shipping activities of Hong Kong harbor, among other applications.
Targeting the AEC market, Massive plans to demonstrate the technology’s versatility in accurately simulating anything from humans to planes, trains, and automobiles. “Massive gives architects, engineers, and city planners a range of tools to design better, safer buildings and urban infrastructure,” Holland says. “You could even use Massive to simulate a grocery store, using the behaviors and motion patterns of retail customers to determine how best to place products on the shelves.”
Massive imports 3D geometry in OBJ format, the open format exportable from all major DCC software, including Autodesk Maya and 3ds Max. “One of our current development strategies is focused on using Autodesk’s FBX to further integrate Massive with architectural models, lighting setups, and skeletons,” adds Holland.
Massive’s AI system works with autonomous agents (approximately $1500 to $2500 each) that can detect and interpret nearby pixels. The AI system itself is available for permanent licensing in two flavors: Massive Jet ($5999, with $1299 annual upgrades and a support fee) or Massive Prime ($17,999, with $3999 annual upgrades and a support fee).

Kynapse Brain Configurator
Autodesk’s newly acquired Kynapse AI solution includes a Map Builder Service component that automatically generates the topology of the virtual world and the navigation paths within it. According to the company, the topology and path data are the equivalent of polygons for rendering or collision meshes for physics; they constitute the basis for the agents’ 3D navigation and 3D perception.
The program imports geometry in several formats, including the open-standard OpenFlight (originally developed by MultiGen-Paradigm). The agent’s thinking logic is written in C++ or Lua scripts. Users can set the agent’s base behavior (for example, attack, flee, hide, overtake, explore, and so on), and further refine the behavior by placing other constraints (find the shortest route possible, take the stealthiest path, avoid zones marked as hazardous, and so forth). Furthermore, the dynamic object management feature lets users simulate the doors and elevators that the agent encounters.
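Layering refinement constraints over a base behavior can be sketched as a simple filter-then-rank step. This is a hedged illustration in the spirit of the description above; the function, route fields, and preference names are invented, not Kynapse’s actual C++/Lua API.

```python
def pick_route(candidates, hazard_zones, preference="shortest"):
    """Discard candidate routes that cross hazardous zones, then rank
    the survivors by the agent's refinement preference.
    Each candidate is a dict with "zones", "length", and "exposure"."""
    # Constraint: avoid zones marked as hazardous.
    safe = [r for r in candidates if not (set(r["zones"]) & hazard_zones)]
    if not safe:
        return None  # no route satisfies the constraints
    # Refinement: shortest route possible, or stealthiest path.
    key = {"shortest": lambda r: r["length"],
           "stealthiest": lambda r: r["exposure"]}[preference]
    return min(safe, key=key)
```

The same candidate set yields different choices as the refinement changes, which is the point of separating base behavior from constraints.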
According to Olivier Pujol, Kynogon’s business development manager, the company has been contacted by those interested in using Kynapse to produce architectural demonstrations. “As the architect put it,” he recalls, “[the design] is not just a building, but a life center. And it cannot be presented empty. So the architect wants to let his customers move inside the building like a human would—or a player in a first-person 3D game—and mingle with people in a sensible way.” Such a demonstration would involve random patrons entering and exiting the lobby, the housekeeping staff performing their daily chores, passersby, and even urban traffic in the surrounding area.

Training Your Agents
Computer game players who have watched the industry mature—from the blocky graphics that made up opponents in Id Software’s original 1993 Doom to the pixel-perfect NFL stars from EA Sports’ lineup today—will quickly point out that AI-driven secondary characters have become much smarter recently. In the early days, their logic was confined to pacing back and forth between two points and attacking anything that came within the kill zone. But in the latest games, nonessential characters are endowed with memory (see “Mind Expansion” and “Mind Over Matter,” June and July 2008, respectively). To the player’s detriment, these non-player characters can actually remember a player’s previous strategy and prepare for it when they are assaulted the second time around.
Software developers are now infusing these AI tools with a similar kind of memory management system that will make the agents’ behavior more realistic outside the gaming world.
“Using Massive’s patented vision process and a new memory feature we’ve added specifically for AEC, Massive agents can remember the directional signs they’ve seen. And that memory can be set so it decays at different rates for different people,” says Holland.
The introduction of memory, or the virtual entities’ ability to recall their surroundings, provides added realism to airport traffic simulations, like this one.
Such behavioral details add realism when simulating the foot traffic at London’s Heathrow Airport, for instance. The user can adjust the memory parameters so that an agent traveling along a path the second time would move faster, as would indeed be the case with commuters who have become familiar with the environment.
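Decaying signage memory of this kind is commonly modeled as exponential decay with a per-agent rate. The sketch below is purely illustrative, suggested by the feature Holland describes; the class name, parameters, and speed bonus are invented, not Massive’s actual interface.

```python
import math

class SignMemory:
    """Per-agent memory of directional signs, decaying over time."""

    def __init__(self, decay_rate):
        self.decay_rate = decay_rate  # higher rate = more forgetful agent
        self.last_seen = {}           # sign -> time it was last noticed

    def notice(self, sign, t):
        """Record that the agent saw `sign` at time `t`."""
        self.last_seen[sign] = t

    def recall(self, sign, t):
        """Memory strength in [0, 1]: 1.0 just after seeing a sign,
        decaying exponentially with elapsed time."""
        if sign not in self.last_seen:
            return 0.0
        return math.exp(-self.decay_rate * (t - self.last_seen[sign]))

    def walking_speed(self, sign, t, base=1.2):
        # A familiar route (strong recall) is walked up to 30% faster,
        # mimicking a commuter who already knows the way.
        return base * (1.0 + 0.3 * self.recall(sign, t))
```

Giving a daily commuter a small decay rate and a first-time visitor a large one reproduces the effect described above: on a second pass along the same path, the commuter agent moves faster.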
In addition to navigating using sight (or the detectable pixels), an AI agent may rely on sound, too. For instance, an agent in search of a musical event might move faster once it enters the designated zone where it can perceive the sound cues issued from the stage.

Behavioral Experts Wanted
AI developers point out that simulation results are much more accurate if the client seeks expert input on modeling the behavior of the target demographics. Technology lets folks customize the agent’s vision, speed, fatigue level, and other aptitudes, but some basic information about the norms and the social protocols among the target population is vital, too.
For instance, if someone is modeling the crowd movements at an overseas sporting event where alcohol intake is a consideration, the person might want to examine the alcohol tolerance levels of those who live in the area. This way, the agent’s navigational intelligence can be set to deteriorate at the appropriate pace based on its alcohol consumption. In some cultures, isolated individuals tend to stand farther apart (they demand more personal space). In others, they tend to travel in groups. Thus, the agent parameters from the simulation of Grand Central rush hour in New York City won’t be applicable when simulating similar types of movement in a Tokyo subway station.
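Such demographic tuning often reduces to a set of per-population parameter profiles plus a degradation curve. The profile names, numbers, and curve below are purely illustrative assumptions; a real study would calibrate them against local behavioral data.

```python
# Hypothetical demographic profiles: personal-space preference and
# typical group size differ between populations (values invented).
PROFILES = {
    "nyc_grand_central": {"personal_space_m": 0.9, "typical_group": 1},
    "tokyo_station":     {"personal_space_m": 0.5, "typical_group": 1},
    "saturday_game":     {"personal_space_m": 0.7, "typical_group": 4},
}

def navigation_accuracy(base, drinks, tolerance):
    """Navigational accuracy stays at `base` until intake exceeds the
    local tolerance level, then falls off smoothly per extra drink.
    An illustrative curve, not drawn from any published model."""
    excess = max(0.0, drinks - tolerance)
    return base / (1.0 + 0.5 * excess)
```

Swapping in a different profile changes spacing and grouping without touching the navigation code, which is why the Grand Central parameters cannot simply be reused for a Tokyo subway station.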
Indeed, this is the type of information that was requested when simulating the traffic flow for the new stadium project in New York. “The client wants us to simulate different game dates, which is quite complicated,” Bassi notes. “During the weekdays, for example, there’ll be lots of families with children, so you have to consider that fans may be grouped together. However, on Friday or Saturday nights, there’ll be far more young people who are more inclined to consume alcohol and probably more boisterous.”
Children and the disabled, for instance, require different walking speeds in simulations such as this. Therefore, Bassi is constantly on the lookout for hard data that will tell him more about these two segments of the population.
However, modeling the behavior of a young adult with a few beers in his or her system requires more of a nod to behavioral science than to AI. So to refine its technology, Hatch Mott MacDonald is partnering with researchers from the TC Chan Center for Building Simulation and Energy Studies at the University of Pennsylvania. The center explores, among other things, the incorporation of realistic human behaviors in crowd-movement simulation.

Pioneers Needed
For some segments of the AEC industry, switching from 2D to 3D was a leap of faith. Today, these groups are beginning to exhibit a comfort level in the latest 3D building information modeling (BIM) systems, such as Graphisoft’s ArchiCAD or Autodesk’s Revit. In recent years, the AEC segment also began using computer-based energy-efficiency simulation tools, such as EcoTech from Green Building Studio (see “Measuring Green Footprints,” May 2008). So AI-driven simulation might not be such a hard sell after all.
As a matter of fact, it’s no coincidence that Kynogon (and its Kynapse product) was acquired by Autodesk, makers of Revit, one of the leading BIM solutions in the architecture market. The deal raises the possibility that, by crossbreeding its entertainment and media portfolio and its architecture products, Autodesk could give birth to an AEC-specific simulation solution.
But as with any emerging discipline, adopting the new technology requires patience and resources. Pujol points out: “For a single evacuation simulation, you need hundreds of individual behaviors. They can be identical, with small parameter differences, or completely unique. Yet, there is no standard today for creating such behaviors. In other words, there’s no ‘behavior library’ where you can buy off-the-shelf behavior changes.”
This STEPS simulation provides a visualization of a commuter train station evacuation at different times. The lack of hard data on the average walking speeds of children and the disabled presents a challenge.
Massive, for one, realizes this situation. “Despite the extensibility of Massive,” Holland states, “we realize that architects don’t always have the time to study and build in a lot of basic behaviors. Massive has been working with a select number of architects, engineers, and institutions to develop simulation and visualization products for the AEC industry, including a Massive ready-to-run pedestrian and evacuation agent with built-in behaviors.”
As a Massive user, Wittasek notes that most simulation solutions incorporate the model of the building into the agent’s AI, but real people navigate a building using vision and other natural senses. “Massive has a flexible AI-authoring environment for modeling the idiosyncrasies of complex, real-life behaviors into agents who use visual and auditory cues, as we do in real life,” he adds. “Because of this, Massive holds the promise of revolutionizing the architectural and engineering industry.”
Concerning regulatory bodies’ attitude toward this new field, Wittasek says, “In the past decade, regulatory agencies have increasingly embraced the use of computer simulations in the context of fire hazard and occupant movement analyses. The use of such models has been predicated upon a certain level of transparency, requiring the users to document key assumptions, limitations, and design inputs in a rigorous fashion.”
Wittasek admits that such simulations are not standard practice, though responsible designers have traditionally been expected to complete an adequate number of simulations to bound the set of possible outcomes, thus facilitating their engineering judgments. Simultaneously, computer simulations have found favor as communication tools, he adds, permitting the designers to more effectively interface with the authorities having jurisdiction, thus improving the likelihood that the associated projects will gain the necessary approvals.
Because mathematical equations and hierarchy logic dictate the virtual entities’ decision-making in AI, the technology is a dependable simulator of rational behavior. But, for better or worse, human behavior is not always governed by logic or common sense. Sometimes, real-world behavior stumps AI developers.
Eric Pellissier, a full-time consultant at Hatch Mott MacDonald, once ran an experiment to see if the simulation results in STEPS were reliable. To do that, he compared the film footage of a real evacuation to a re-enactment of the same scenario in STEPS.
“It was an L-shaped floor, with a fire exit at each end,” Pellissier describes. In the STEPS simulation, the agents behaved as you might expect ordinary people to react: They moved toward the exit closest to them. The real footage told a different story: The people on one side of the floor walked the full length of the L-shaped corridor to use the exit at the other end, because they saw their friends on that side and, knowing it was an exercise, felt no urgency.
Pellissier cautions, “AI is great at predicting average behaviors, but it cannot predict how an incident would play out in reality.”
Kenneth Wong is a freelance writer who focuses on the computer game and CAD industries, exploring innovative usage of technology and its implications. He can be reached at Kennethwongsf@earthlink.net.