Moving on up
Volume 30, Issue 10 (October 2007)

Motion-capture technology has been on the verge of breaking into wide use for nearly 20 years.
Has the time come?
In Part I of a two-part series, we look at some non-entertainment applications of mocap.
 
Figures of $50 million to $100 million are commonly bandied about to describe the entire motion-capture market. Compared to the digital content creation market, for example, which Jon Peddie Research estimates hit $3 billion in 2006, the mocap market is the size of a dot. And yet, at SIGGRAPH 2007, more than 20 vendors—a significant percentage of the total hardware and software vendors—were on the trade show floor pitching a variety of motion-capture solutions.

Why so many? Three reasons. First, SIGGRAPH has become the go-to place for people looking for motion-capture systems. Second, even though the market isn’t large, it has many niches. Vendors creating new technology for those niches are exploring other possible markets and using SIGGRAPH as a means to do that. And third, aggressive young companies are devising lower-cost systems to nip at the ankles of the well-established leaders.

Although much of the mocap publicity and consumer awareness centers on the application of the hardware and software in entertainment and game development, the tools are widely used in a variety of other industries. We’ll look at those industrial applications in this article, and at entertainment and games next month.

Real Data
At first glance, it might seem that the same products work in both types of applications, and the basic technologies are indeed the same. “We have the same problems to solve for industrial and entertainment applications,” says Tom Whitaker, Motion Analysis president, “aligning a virtual scene with the real world.”
 


The differences are in the details. Animators often massage the data captured for films and games to enact a director’s vision. Medical applications, on the other hand, require exacting precision and repeatable results. Design and manufacturing applications also necessitate repeatable results as well as precise data, while for sports science, extreme flexibility is often the prime directive. In VR and for many military applications, accurate, real-time response wins the battle.

These applications primarily use optical marker-based systems, markerless optical systems, and inertial sensors, although sonic and magnetic motion-capture systems have a home here, as well. Prices tend to land in the $50,000 to $100,000 range, but that can vary widely. We’ll look at some low-cost contenders in the next issue.

Mark My Moves
When people think of motion-capture systems, they often picture someone wearing a tight-fitting Lycra suit with attached markers. Marker-based optical systems typically use infrared cameras to film reflective markers placed on something or someone moving in space. Sophisticated software then triangulates the position of those markers in 3D and, in many cases, applies that data to a virtual character or scene in real time. Vicon and Motion Analysis are the two leading and arguably the longest-lived companies with marker-based systems. Both firms estimate that about half their business is industrial, half entertainment.
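
Conceptually, each camera records only a 2D dot; the 3D position comes from intersecting the rays of two or more calibrated cameras. The sketch below illustrates that step with textbook linear (DLT) triangulation in Python. It is an illustration of the principle, not any vendor’s pipeline; the projection matrices and pixel coordinates are assumed to come from prior calibration and marker detection.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Estimate a marker's 3D position from two calibrated camera views.

    P1, P2 : 3x4 camera projection matrices (from calibration).
    x1, x2 : (u, v) pixel coordinates of the same marker in each view.
    Returns the 3D point as a length-3 array (linear DLT solution).
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous least squares: the right singular vector with the smallest
    # singular value minimizes |A X| subject to |X| = 1.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```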

In medicine and life sciences, marker-based systems are widely used for gait analysis, orthopedics, and other applications that depend on accurately analyzing skeletal movement, and years of research centered on marker-based systems have validated the approach.

“For biomedical and biomechanical applications, the number one requirement is accuracy,” says Robin Pengelly, who heads Vicon’s Los Angeles office. “The number two requirement is repeatability. Research centers have agreed on standardized mathematical modeling of such things as feet, knees, and skeletons, and rely on putting repeatable, accurate data into those models.”
 


Also, Pengelly notes that the medical community uses standard marker sets. “As long as they apply markers using the standard sets, they expect the results to be comparable between labs,” he says. “The standard allows multiple vendors like Vicon and Motion Analysis to produce repeatable points.”

On the other hand, in real-time simulations, accuracy is important, but the data itself often is not. “The data isn’t going into a spreadsheet,” says Pengelly. “The system is used to track something and tell something else where it is. [The data] is useful momentarily, and it needs to be accurate, but it isn’t stored.”

Pengelly gives an example of a real-time simulation to determine how easy it would be for a racing team to switch wheels on a prototype of a motorcycle. As a member of the virtual pit crew wearing a head-mounted display (HMD) moves his head, the graphics inside the HMD change accordingly. Similarly, motion tracking is helping the military simulate rescue operations with unmanned aerial vehicles. “The system can control in real time a tiny simulation of what’s actually going to happen in the real world,” Pengelly explains. During the simulation, the motion-tracking technology feeds to the rescue team the orientation and location of little software-controlled helicopters flying inside a warehouse.

“Virtual reality is not the disappointment it was in the late ’80s and early ’90s,” says John Francis, vice president for industrial sales at Motion Analysis. “It’s coming full circle; it’s becoming beneficial.” As an example, he cites systems used to simulate maintenance operations for submarines and nuclear power plants. “The radiation is virtual,” he says of the latter.

These types of visualizations are a subset of the engineering market, which Pengelly believes has the largest potential for growth. “Until two years ago, real time was a major roadblock,” he says. “But now, you can render full-resolution, full-scale models in real time with stereo. A couple years ago, that cost half a million dollars. Now, it’s a tenth of that.” He points to Japan-based FiatLux’s Easy VR product as an example.

“I think the visualization market is poised for explosion,” Pengelly says. “When small design shops, even artists and graphic designers, are able to use real-time immersive technology, the market will really take off.”

Look, No Spots
Mova’s Contour markerless facial-capture system was the talk of SIGGRAPH 2006, especially for its potential in entertainment applications, but 3DMD has been working on capturing human surfaces, primarily faces, for medical applications for 11 years. Eleven years ago, the company’s system would integrate 52 static images into an animation. Five years ago, 3DMD developed a prototype dynamic system, and three years ago, in 2004, it installed the first one. “It was only 48 frames per second,” says Chris Lane, CEO. “We used it for a psychology project. It wasn’t medical quality.” The company installed the first medical-quality system in April 2007, in the dental school at the University of North Carolina.
 


“The technology is now running at 60 frames per second,” says Lane. “We generate 60 separate 3D models every second. In each case, we have a perfect surface anatomy and vast amounts of detail.”

With the 3DMD system, four cameras, two on each side of a face, capture stereo pairs—each pair in 1/500th of a second—to produce the 60 stereo models per second. To capture movement on the face, the system tracks approximately 40 facial features. As the subject’s face moves, the system deforms and reshapes an accurate mesh of the face accordingly. “Any frame is accurate to .2 millimeters,” says Lane. “We can’t afford to estimate. We have to be sure every frame we build from has provable medical accuracy.”
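
One way to picture the tracking-and-deformation step Lane describes: move each mesh vertex by a distance-weighted blend of the displacements of the tracked features nearest to it. The Python sketch below uses simple inverse-distance weighting; it is a toy stand-in, not 3DMD’s algorithm, and the vertex and feature arrays are assumed inputs from the capture system.

```python
import numpy as np

def deform_mesh(vertices, ctrl_prev, ctrl_curr, eps=1e-6):
    """Warp mesh vertices by interpolating the motion of tracked features.

    vertices  : (N, 3) mesh vertex positions from the previous frame.
    ctrl_prev : (K, 3) tracked feature positions in the previous frame.
    ctrl_curr : (K, 3) the same features in the current frame.
    Uses inverse-distance weighting of the feature displacements; real
    systems fit far more constrained deformation models.
    """
    disp = ctrl_curr - ctrl_prev                        # (K, 3) feature motion
    # Distance from every vertex to every tracked feature
    d = np.linalg.norm(vertices[:, None, :] - ctrl_prev[None, :, :], axis=2)
    w = 1.0 / (d + eps)                                 # closer features dominate
    w /= w.sum(axis=1, keepdims=True)
    return vertices + w @ disp
```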

The first use of the dynamic 3DMD system is for treating cleft palates. Lane expects the next application will be for smile dynamics, to help orthodontists understand how to engineer teeth to improve facial appearances. But, he also believes game developers will be interested in capturing facial expressions and, using full-body scanning, characters in full costume.

3DMD and Mova aren’t the only markerless companies with camera-based systems, of course. Image Metrics, for example, began life as a medical imaging company, but split into two—Image Metrics for entertainment and Octavia Medical—to serve the two different markets. “For medical applications, you want to take measurements from the image,” says Mike Rogers, senior research engineer at Image Metrics. “It needs to be repeatable, accurate, efficient, and every step has to be auditable. In entertainment, the image just has to look good. We drive animation. They see whether a drug is having an effect.”

And, at SIGGRAPH this year, a new mocap company, Organic Motion, unveiled what it claims is the first commercial markerless system. The system, developed by Andrew Tschesnok, founder and CEO, captures motion with 14 proprietary cameras typically set up in an eight- to 12-foot-square space fitted with a white, reflective backdrop. But the magic is in the software. “We don’t see everything,” says Tschesnok. “We only see what’s important given the way humans move and the way we see each other.”
 


Tschesnok predicts his biggest market will be in life sciences. But, because the system requires nothing of the subjects being captured—someone simply walks into the space—Tschesnok believes it will open new markets. “You’ll see systems in retail outlets within a year,” he predicts. Meanwhile, researchers at Harvard’s Spaulding Rehabilitation Hospital are testing it for clinical gait evaluations of children with cerebral palsy. And James Oliverio at the University of Florida’s arts and engineering Interdisciplinary Institute is developing applications for gait analysis and choreography, as well. Look for the first systems to roll out of Organic Motion’s doors this quarter.

Whee, Moving Free
With most optical systems, real-time motion capture can happen only within a prescribed and calibrated space, whether or not the subjects wear markers, and it’s most efficient if that space is indoors. Three companies, Animazoo, Xsens, and Innalabs, offer an alternative: full-body motion capture using inertial sensors. Put simply, the sensors are tiny gyroscopes. The person being captured wears as many as 20 sensors that all connect to a wireless transmitter worn on the body. The small, matchbox-sized sensors of Xsens’ Moven are built into a full-body mocap suit that someone could wear under a costume or uniform, and Animazoo uses a Lycra suit for convenience, too. By contrast, people strap on Innalabs’ 3D “suit.”

The vendors claim a high degree of accuracy for the systems—within .1 degree of resolution. “It’s comparable to motion-capture systems based on cameras, but the errors are different,” says Per Slycke, chief technical officer at Xsens. “Our sensors can track orientation accurately; we work out position based on orientation. Markers work out orientation based on position.”
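
Slycke’s distinction, position worked out from orientation, is essentially forward kinematics: once each body segment’s orientation is known, joint positions follow by chaining segment lengths down the skeleton. Below is a minimal Python sketch of that chaining; the segment names and lengths are hypothetical, it is not Xsens’ or Animazoo’s actual model, and real systems also fuse accelerometer data to limit drift.

```python
import numpy as np

# Hypothetical leg chain: (segment name, parent index, offset in parent's frame).
# Segment lengths are made-up example values, not a calibrated skeleton.
SEGMENTS = [
    ("pelvis", -1, np.zeros(3)),
    ("thigh",   0, np.array([0.0, -0.45, 0.0])),
    ("shank",   1, np.array([0.0, -0.40, 0.0])),
    ("foot",    2, np.array([0.0, -0.08, 0.06])),
]

def positions_from_orientations(rotations, root_pos=np.zeros(3)):
    """Forward kinematics: turn per-segment orientations into joint positions.

    rotations : list of 3x3 world-frame rotation matrices, one per segment,
                of the kind an inertial suit estimates from sensor fusion.
    Returns a dict mapping joint name to its 3D position.
    """
    positions = {}
    for name, parent, offset in SEGMENTS:
        if parent < 0:
            positions[name] = root_pos   # the root's absolute position must come from elsewhere
        else:
            parent_name = SEGMENTS[parent][0]
            positions[name] = positions[parent_name] + rotations[parent] @ offset
    return positions
```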

Industrial applications for inertial-sensor systems center on the science of movement. Animazoo’s Gypsy system is analyzing golf swings. And, the Slovenian ski team recently used a Moven system to track data through entire downhill ski runs. So, it’s easy to imagine that the systems will attract attention from game developers.

But, inertial motion systems can’t do everything. “You can capture two people with no problem using Moven,” says Slycke. “But you can’t capture the mutual distance between them.”

Perhaps some day we’ll see systems that combine the best of all these techniques.

Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at BarbaraRR@comcast.net.
 

Sitting Pretty
Motion capture helps Ford design cars
 
At Ford Motor Company, Elizabeth Baron, a virtual reality and advanced visualization technical specialist, helps engineers and designers sit inside their designs before the company makes a physical car. To do this, she uses motion capture to track hands and the head in real time.
 


“We started with an Ascension magnetic tracking system in 1999, and switched to optical in 2003,” Baron says, noting that the company chose a Vicon system for its small, four-megapixel cameras, which the group could install inside a vehicle as well as around it.

The vehicle is a scalable prop. “We can set it up for whatever is being designed and marry it with virtual data, then track hands and the head in that physical virtual world,” Baron says. Passengers wear a head-mounted display (HMD) to have a stereo view and gloves for hand tracking. “If they grab the physical wheel, they see their virtual hands turning the virtual wheel,” she explains. “If they turn to the right and physically grab the seat, they see their hand on the [virtual] seat.”

The drivers are scalable, too. “We can have a six-foot person experience what it would feel like to be a five-foot person,” she says. “They’re always astounded.”
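
Registering the physical prop to the virtual data, and rescaling the driver, both come down to applying a calibrated transform (plus an optional scale) to every tracked pose. The Python sketch below shows the idea under those assumptions; the rotation, translation, and scale are presumed outputs of a setup calibration, and this is not Ford’s or Vicon’s actual software.

```python
import numpy as np

def to_virtual(points, R, t, scale=1.0):
    """Map tracked positions from the capture volume into the virtual car's frame.

    points : (N, 3) tracked head or hand positions from the optical system.
    R, t   : rotation (3x3) and translation (3,) registering the physical
             prop to the CAD data, found once when the prop is set up.
    scale  : optional uniform scale about the tracking origin (assumed to
             sit on the floor), e.g. 5/6 so a six-foot driver experiences
             the cabin at a five-foot driver's eye height.
    """
    return scale * (points @ R.T) + t
```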
 


The drivers check reachability and visibility: whether they can see virtual cars passing them on the right, whether they can see a stoplight if they’re five feet tall, and so forth. Ford can also change the prop to a competitor’s vehicle to make comparisons. And, the firm can record and play back the session later so engineers can see where the driver put his or her hands and where the person looked.

“It’s submillimeter accurate,” says Baron. “That’s incredibly important.”

In addition to the scalable prop, Baron puts designers and engineers in a four-sided CAVE (ceiling and three walls) and tracks eye view and perhaps the viewer’s finger using various devices and the motion-tracking system. She also has a motion-capture stage where designers wearing an HMD can walk around a virtual car and see it moving in traffic.

From this design lab, the data moves to a manufacturing lab where a virtual-reality system that uses Motion Analysis equipment helps engineers and designers look at the car’s manufacturability.

Baron, who has been working with visualization systems since 1995 and with graphics systems for 20 years, is now pushing the bounds of realism and interaction. “When I first started, I was limited to 60,000 polygons for an entire scene,” she says. “Now we animate other cars in the environment, pedestrians crossing the road. And, we can add shadow maps, transparencies, glossiness, shininess, shaders, and texture information to the models. We’ve come a long way from generating spheres.” —BR


A Leg Up
Motion capture aids anthropologists
 
David Raichlen, an anthropology professor at the University of Arizona, studies the evolution of locomotor systems. For humans and such animals as goats, sheep, and dogs, he uses Vicon’s marker-based system. In the past, he also worked with Qualisys’ marker-based system. For chimpanzees, he uses digital video.

“You can’t really put 3D markers on chimps because they pull them off,” Raichlen says. “So, we use 2D high-speed video, and instead of markers, we use nontoxic paint.”

Raichlen enters data captured by the Vicon system into MathWorks’ MATLAB, a numerical computing environment and programming language. Similarly, he manually identifies painted markers on the chimps and enters that data into MATLAB. “I always prefer 3D marker-based infrared over video because it speeds the process,” he says. “Everything I do uses the same kind of math; it’s just a matter of how automated the process is.”

The anthropologist also integrates the Vicon system with other pieces of equipment. “I use a force plate,” he says. “When you step on the plate, it tells you the forces being generated. Also, I can put all sorts of sensors on people and plug them into the same box. The software can integrate the 3D points with forces and biomechanical instruments that tell you what the body is doing.”
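
Integrating the cameras and the force plate amounts to computing joint kinematics from marker trajectories and lining them up in time with the plate’s force samples. The Python sketch below shows both steps in toy form; the marker names, the angle convention, and the nearest-sample pairing are illustrative assumptions, not Raichlen’s actual pipeline.

```python
import numpy as np

def knee_included_angle(hip, knee, ankle):
    """Included angle at the knee (degrees) from three marker positions; 180 = straight leg."""
    thigh, shank = hip - knee, ankle - knee
    cosang = thigh @ shank / (np.linalg.norm(thigh) * np.linalg.norm(shank))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def nearest_forces(mocap_times, plate_times, plate_forces):
    """Pair each mocap frame with the force-plate sample closest in time.

    Force plates usually sample much faster than the cameras; a real
    pipeline would filter and interpolate rather than pick the nearest reading.
    """
    idx = np.abs(plate_times[None, :] - mocap_times[:, None]).argmin(axis=1)
    return plate_forces[idx]
```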

Why is Raichlen doing this? He and his colleagues are trying to answer such questions as why humans started walking on two legs. “Energy efficiency is important to locomotion,” he notes. “The way we combine motion analysis, forces, and energy consumption tells us something about fossil records and evolution. We can apply the data back to fossil records to see if fossil hominids walked the way we predicted.”

Apart from answering fundamental questions about our history as a species, this research has some practical applications. “We’re starting a project that looks at running,” Raichlen says. “We think running played an important role in our evolutionary past, but we were running with different mechanics. We were running barefoot.” So, the study might help shoe manufacturers, for example, design running shoes that help prevent injuries.

But, Raichlen’s passion is answering the big questions about evolution. “As you start to put together the whole story of human evolution, you get a sense of where we fit in the natural world, which I think matters a great deal in how we treat the world.” —BR

Dots in Space
Real-time motion tracking powers VR
 
The Virtual Reality Applications Center (VRAC) at Iowa State University has the highest-resolution immersive VR system in the world, according to professor Eliot Winer, who works there. The C6, built by Mechdyne Corporation, is a six-sided CAVE. Each side is a 10- by 10-foot projection surface. Twenty-four Sony projectors beam 4096x2160-resolution images onto the walls, ceiling, and the acrylic floor. By stacking four projectors vertically for each surface, the VRAC crew achieves 4K-by-4K resolution in stereo.
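
The quoted numbers roughly work out if the four projectors on each surface are read as two stacked per eye, a back-of-the-envelope reading rather than a confirmed spec; the short calculation below just checks that arithmetic.

```python
# Back-of-the-envelope check of the C6 figures quoted above (one plausible
# reading of the setup, not a confirmed spec).
projectors, surfaces, eyes = 24, 6, 2
per_surface = projectors // surfaces        # 4 projectors on each wall
stacked_per_eye = per_surface // eyes       # 2 stacked vertically per eye
width, height = 4096, 2160                  # per-projector resolution
print(width, stacked_per_eye * height)      # 4096 x 4320 -> about "4K by 4K" per eye
```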

The C6 is but one of several CAVEs and immersive environments at VRAC, and the center offers various ways to track the motion of people in virtual worlds. “One way is with a magnetic field,” Winer says, using Ascension’s Flock of Birds as an example. “You stick a marker in the middle that creates a disturbance, and that gives you the position of the marker. The disadvantage is that you can’t use metal, so it doesn’t work for driving simulators, and the magnetic field can’t provide submillimeter accuracy. But, it’s portable.”
 


With a second system, viewers wear inertial-acoustic tracking equipment from InterSense. “The line-of-sight trackers emit sonic chirps,” Winer explains. With those, they can track with submillimeter accuracy. “If you want to touch a bolt on the hubcap of a virtual vehicle, you can,” he adds.

For the C6, however, Mechdyne installed a real-time optical capture system from Motion Analysis. “We’ve become so sold on optical, we’re purchasing one to use in our four-wall CAVE,” says Winer.

Applications range from educational to industrial to military. For example, to study photosynthesis in a soybean plant, students riding in a virtual vehicle shoot water into subcell structures. Meanwhile, designers from John Deere interact with virtual tractors. And, a military application under development would have pilots operating real unmanned aerial vehicles by flying virtual planes.

The C6 is so new that VRAC is only just beginning to produce high-quality visuals. “But that’s going to happen quickly,” says Winer, who claims people will soon believe they’re interacting with real objects in the CAVE, in real time.

“When that happens,” Winer says, “things will get really interesting.” —BR