Setting the Stage Act II
Volume 26, Issue 12 (December 2003)


By Karen Moltenbrey

VIRTUAL REALITY | OPERA

From opera and film to ballet and even marionette renditions, Mozart's masterpiece The Magic Flute has been performed in just about every type of production since the composer finished the piece more than 200 years ago. Yet the students at the University of Kansas's (KU) University Theatre delivered a performance unlike any other by infusing the presentation with real-time computer-generated scenography and virtual characters created by the school's Institute for the Exploration of Virtual Realities.

The production, which marks the theater's seventh experimental work and its first venture into opera, featured live virtual scenes that could be navigated in real time during the performances, making each presentation unique—as live theatrical events are intended to be.

Photos and images courtesy The University of Kansas and the University Theatre.

KU's University Theatre used projected video (top) and CG imagery (bottom) to provide visual references to explain the story line of The Magic Flute, which was performed in German, as is customary.

University Theatre started using live, real-time computer graphics in 1994 with its rendition of Elmer Rice's classic American expressionist play The Adding Machine. "Back then, we were way out on a limb using stereoscopic imagery, which required the audience to wear polarized glasses," says Mark Reaney, KU theater professor and designer/technologist at the Institute for the Exploration of Virtual Realities. "Now we're taking small, steady steps, but still trying to do something different each time. With The Magic Flute, for instance, it was the first time we used rear projections on a wide-screen format and front projections on portable screens. This approach enabled the singers to interact with the imagery, rather than having it used only on a backdrop, as we had done in the past."

The Magic Flute—which incorporated 3D dragons, sorcerers, and other fantastic creatures—builds on the techniques the group used to create "digitally enhanced characters," or synthetic/human-performer hybrids, for its 2001 production of Edward Mast and Lenore Bensinger's Dinosaurus, originally written as a shadow-puppet play in which actors work behind a screen. During The Magic Flute performance, the group projected digital images onto specially designed costumes, props, and masks. In one scene, a singer was given the body of an extraordinary beast, while in another, actors performed in front of giant representations of their characters or appeared to float through imaginary landscapes.

"To create these creature-performers, we needed to experiment with new techniques of projection," says Reaney. "In our previous productions, our main objective was to create virtual settings, so we relied on fairly standard arrangements of rectangular rear-projection screens. For The Magic Flute, the actors stood alongside the imagery, so we needed surfaces that would move with the performers and be manipulated by them."

The concept of using visible projection screens on stage evolved from lab research Reaney had done previously using a smoke curtain, onto which he projected images. The difficulty, though, was maintaining a consistent column or pattern of smoke. "I'd either run out of smoke, or the room would fill with smoke, and I'd lose the image," he explains. Next, Reaney looked at moving screens, and in The Magic Flute, there are six different types, including a "lollipop" screen, onto which the group projected a bird. Then, a stagehand carried it while running in front of the audience, thus making the bird appear to fly.

According to director Delbert Unruh, the fantastical world of The Magic Flute proved an ideal project for the technology. Despite its complex, multi-level plot, The Magic Flute often is the first opera to which children are taken because of its charming, dreamlike settings and fairy-tale story. In the plot, the young Prince Tamino embarks on a bizarre journey to rescue Princess Pamina, and along the way, he encounters enchantment, dungeons, and an assortment of wild beasts. By using virtual techniques, the group was able to stage this imaginative universe in a fluid and seamless fashion so that the imagery moved almost as fast as the music. Because nearly every element on the stage was computer-generated, scene changes could be done instantaneously for the opera's numerous settings, offering unprecedented freedom and flexibility compared to traditional scenery. Additionally, the VR scenery was malleable, allowing the environments to move, grow, and change to reflect the development of the drama.

"VR enabled us to achieve effects that could never have been done otherwise," says Reaney. For example, a number of scenes occur inside a castle, and the initial plan was to have the VR navigator traverse the various rooms and stairs using the computer. "But we ran out of time, so we rendered those scenes with the spatial inconsistencies of an M.C. Escher painting," he adds. In one instance, each time the computer operator turned the stairs in a different direction—left, right, and even upside down—the look of the room changed, thereby allowing the audience to infer that the characters were traveling from room to room in the palace, without the artists having to model and render the imagery between the location points. Escher's work, in fact, became the inspiration for a number of other scenes, and appropriately so, says Reaney, as the opera is about bewilderment and navigating a fantastical maze.

"Because we could do scene changes so quickly, this was probably one of the shortest versions of the 200-minute opera that anyone has ever seen," says Reaney. "But, at 150 minutes, it was still a lengthy production, requiring a tremendous amount of CG."

The artists also had to create additional content to fill the wide format of the main screen (36 feet wide by 13 feet high), which required the use of two Epson rear projectors situated behind the screen in the center of the stage. Furthermore, a 22-foot-high screen on each side of the stage filled out the visual field, creating a cinemascope-like effect. "We gave the objects the appearance of depth, in the traditional sense, through forced perspective and careful manipulation of the light—something that theater artists have been doing for thousands of years—only this time it was done within a virtual setting," Reaney adds.

Reaney created the majority of the digital content himself while on sabbatical, receiving artistic assistance from former graduate student Aaron Dyszelski and alumni Nathan Hughes and Avraham Mor. In a few instances, they also incorporated live video elements into the production. For the most part, though, the imagery was three-dimensional, created in Discreet's 3ds max running on Windows-based PCs from Dell Computer, though Dyszelski built the dancing beast characters in Maxon Computer's Cinema 4D. The textures were generated in Adobe Systems' Photoshop and Corel's Painter.

To bring these models to life, the group used Breider Moore & Co.'s WorldUp VR development tool for programming the interactivity that enabled the live actors to traverse the virtual worlds and the synthetic animals to jump and dance, all at the hand of a computer operator using a joystick and keyboard. The live actors, in turn, responded accordingly, interacting in real time with the virtual imagery. "I don't think there was a single scene in which one or more digital elements weren't moving on the stage at any given time," recalls Reaney.
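The article doesn't describe WorldUp's API, but the operator's role—joystick axes mapped to camera motion every frame—can be illustrated with a generic sketch. This is a minimal, hypothetical example (all names and speed values are invented, not from the production):

```python
import math
from dataclasses import dataclass

@dataclass
class Camera:
    x: float = 0.0      # position on the stage floor plane
    z: float = 0.0
    yaw: float = 0.0    # heading, in radians

def update_camera(cam, stick_x, stick_y, dt,
                  move_speed=2.0, turn_speed=1.5):
    """One per-frame update: the stick's X axis turns the camera,
    the Y axis drives it forward along its current view direction."""
    cam.yaw += stick_x * turn_speed * dt
    cam.x += math.sin(cam.yaw) * stick_y * move_speed * dt
    cam.z += math.cos(cam.yaw) * stick_y * move_speed * dt
    return cam
```

Run in a loop at the display's frame rate, a scheme like this lets an operator improvise the traversal of a virtual set live, which is why no two performances look alike.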

The virtual elements, like the actual actors, had an extremely visible role in the opera, as they were projected directly onto the stage. For this setup, the group used a front projector located in the orchestra pit and a rear projector placed upstage. These were trained on six mobile screens that were pushed, pulled, wheeled, or carried by stagehands during the performance. "From the beginning, our intent was to open the edge of the illusion to the audience," says Reaney.

The technologists took this a step further by hanging the main screens a foot or so off the ground, revealing the legs and feet of the stagehands. "By bringing the audience in on the mechanisms of the illusion, they became partners in making the magic. Therefore, the effect became stronger rather than weaker," explains Reaney. "Just as when the audience can see the workings of the animal puppets in the stage version of The Lion King, one gains, rather than loses, appreciation for the theatricality. Penn and Teller do a magic trick, and we enjoy it. But then they show us how it was done, and we marvel at the cleverness and imagination of its creation." In this way, nothing has to be perfect, either. "And in live theater, it never is," he continues, "especially when it is experimental and you are doing it for the first time."

A lengthy production, The Magic Flute required a great deal of digital content, including this backdrop model (bottom), which was later projected onto a large center-stage screen (top).

According to Unruh, the digital technology had little effect on the performances of the singers/actors. "From the viewing angles on stage, the actors really didn't see the images on the screens," he says. "They just had to hit their marks and know what happens and when. The images were there for the audience."

Looking back, there is very little that Reaney would do differently. "Right now it all seems so perfect, but back then I remember wanting to pull my hair out," he recalls. Still, he believes that the use of CG enhanced the storytelling capabilities of The Magic Flute, which was performed in German, as is customary. "In the typical format, the performers stand in front of painted backdrops and sing in German, and the audience either knows the story or they don't," he explains. "By using CG and projected video, we helped tell the story visually so everyone understood it." In one scene, for example, Prince Tamino sings while gazing at a tiny picture of Princess Pamina. To make this point apparent to the audience, the technologists projected a live video feed of Pamina's face onto a large screen, made to look like a huge picture frame, above the prince.

The technologists let the audience in on the mechanics of the effects by projecting digital models (right) onto portable screens that are wheeled in front of the audience by stagehands (left).

When Mozart completed The Magic Flute in 1791, he probably never realized how popular his composition would become. By the 19th century, no major operatic center was without a production of The Magic Flute, and today it ranks among the most staged operas of all time. Yet through cutting-edge computer graphics technologies, University Theatre was able to present this time-honored opera in an entirely new way, and in so doing, has opened the stage doors to even more novel theatrical approaches.

CG | PERFORMANCE ART

Capacitor, a contemporary fusion dance company, blends a range of eclectic movements from aerial dance, martial arts, world dance, and even the circus to create complex stories that forge an equally unusual mix of concepts from scientists, artists, and thinkers. To enhance these unique live performances, the troupe adds a rich mixture of computer-generated backdrops, props, and animations to give the modern art form its new-age slant.

According to Capacitor's artistic director/performer Jodi Lomask, the group's aim is to create live-performance pieces that explore the impact of technology on our culture. "Productions like these are both heightening the audience's sense of what it means to see live performance and digital images, and blurring the line between the two," she says. "They bridge the visual magic of film and animation with the visceral world of dance. People know how to watch movies, but they often feel as though they don't know what to look for when they watch dance. So the visual component of our work gives them an 'in' to the dance and movement."

Images courtesy Capacitor.

Fusion dance company Capacitor integrates CG imagery into its productions as a way of helping audiences interpret the group's complex story lines and eclectic dance numbers, including this one from Avatars.

Since its inception six years ago, Capacitor has produced three such works that were born from its Capacitor Lab creative brainstorming sessions. One of these productions, last year's Avatars, combined live performance, digital technology, art, and animation to illustrate the mythical journeys of six characters inside a video game. "Video games are a place where people play out new aspects of the self," says Lomask. "People can experience the kind of power and triumph that they may feel is out of reach in the real world. I wanted audiences to see how the fantasy world of gaming is a contemporary expression of a natural human desire toward heroism and fulfillment of the self."

The group's current project, Digging in the Dark, unites the concepts of geophysicists and psychologists to create a geological-based metaphor depicting the exploration of the human consciousness. To reinforce this concept, the group is using a wide range of CGI created by digital artist Steve Vargas.

"During the lab sessions, people from a number of disciplines provide an opinion as to how a given piece should play out," says Vargas. "And everyone builds on those concepts to evolve the overall project."

A work in progress, Digging in the Dark incorporates motion data of Lomask that was captured during a session at the Los Angeles studio House of Moves (HOM) using a 24-camera Vicon V-8 mocap system. After cleaning up the data in HOM's Diva editing software, technical project manager Joshua Ochoa and technical director Garry Gray applied the motions to a 3D skeleton of a synthetic dancer that Vargas had created in Alias Systems' Maya. Vargas then skinned the skeletal model and rigged it with additional controls so that many of the dynamic simulations could be achieved. "In one scene," he says, "it appears as if the character is dancing in a particle cloud."

For the most part, the group is using the 3D models—which are projected onto large background screens—to create an intimacy between the movement of the dancers and the motion graphics. For example, in the second scene of the show, titled Majestic Body, Majestic Earth, the performers move slowly in front of an animated 3D model that's textured with geophysical maps and data and whose movements parallel those being executed on stage. "We are relating the body and the movement of the body to the Earth and the movement of the Earth," Lomask explains, "and describing the body [through the use of textures] as terrain."

In a different scene, the motion derived from the HOM capture session is used to illustrate imagery of convection currents (which scientists believe shift the Earth's magnetic fields). By packaging the imagery and motion in this way, Capacitor is able to visually represent complex and often abstract theories and processes addressed in its production. When you add scientific and technological content into that relationship, says Lomask, the audience is better able to connect to the "what" and the "why" of the performance.

According to Vargas, the biggest challenge thus far has been getting the various tools and technologies to work together. For this project, he had to incorporate geological tomography data as a set of Cartesian coordinates within Maya, where he would program a camera to move through the data particle system and, at times, have a 3D character interact with the information.

"However, Maya wasn't sure how to interpret the data," says Vargas. "Most of the time the process would just shred my model to pieces or produce a fog effect whereby the colors changed constantly."

Through trial and error, the team found that the fewer coordinates they fed to Maya, the cleaner the results became. Using an algorithm that selected coordinates at set intervals, they were able to control the look and dynamic properties of the data representation, generating a mix of both low-resolution imagery and cinematic effects. With this method, Vargas eventually achieved the desired result—a cinematic-quality model that interacts with a low-res graphic so that, as he puts it, "the character looks like it is passing through a block of Neapolitan ice cream."
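The interval-selection idea is simple enough to sketch. The helper below is a hypothetical stand-in for whatever the team actually ran (the function name and target count are invented): it thins a dense coordinate list by keeping every Nth point so the total never exceeds a budget.

```python
def thin_points(points, max_points):
    """Select coordinates at a fixed stride so the count never
    exceeds `max_points`; fewer points fed to the particle system
    meant cleaner results."""
    if len(points) <= max_points:
        return list(points)
    stride = max(1, len(points) // max_points)
    return points[::stride][:max_points]
```

Stride-based sampling preserves the overall spatial distribution of the tomography data while discarding the density that was overwhelming the particle system.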

"Taking data that is consistent with visualization systems and geophysical graphs, and blending it with a high-quality human model, provides a unique way of showing the connection between ourselves and the Earth," Vargas says, "which is closer than most people realize."

To enhance the scenes, the artists are creating additional effects, such as particles, using Discreet's combustion, software they also are using to color-correct the imagery. The team is employing Maya for rendering the geographic data and Mental Images' mental ray for rendering the cinematic imagery in order to take advantage of its global illumination capability, which gives the graphics more of a dynamic look. To composite the imagery, which consists of various resolutions and bit depths, the group is using Apple Computer's Shake. Once the animations are completed, Vargas will edit the piece using Canopus's Edius and place it onto a DVD that will be played during each live performance. Because the visuals are prerendered, the actors must synchronize their movements perfectly during the performance with the animations on the DVD.

Capacitor is projecting real-time images (above), captured using an infrared camera, onto a backdrop (left) to visually illustrate the connection between boring through the Earth's mantle and boring through a person's mind, one of the concepts in Digging in the Dark.

According to Vargas, the CG should augment the stage performances, rather than become the focal point. "Essentially, the performance is about the dancers, and if you're not careful, the CG can own the stage," he says. "We use it to set the tone, and we'll pull it back if necessary."

Capacitor beta-tested the production this past August, eliciting audience feedback prior to the May 2004 premiere in San Francisco. Like all of Capacitor's audiences, this core group does not fit the profile of traditional theater-goers. Rather, the people attending Capacitor's performances are younger on average (in the 20- to 30-year-old range), digitally oriented, and, perhaps most important, interested in alternative events and performances, according to Lomask. The group has a strong following of college students as well.

"While those who like classic arts come to our shows, we have a contingent who would not normally attend a theatrical performance," Lomask says. "This includes people who can be turned off by 'high culture' but can enjoy an alternative experience because it combines the majesty of theater with intriguing, relevant, and entertaining content."

In Digging in the Dark, a performer struggles to uncover memories buried in his subconscious, while CG images representing his thoughts rotate in the background, triggered by the ball hitting the surface on which he is standing.

In Lomask's opinion, audiences are drawn to Capacitor's shows because they want to experience something that can't be found in film or traditional theatrical performances. "In a world where 'action' means The Terminator, 'ballet' means The Nutcracker, and 'digital' means computer games, how do you come up with a one-word description for a scientific concept based on a fusion performance of dance, martial, acrobatic, aerial, and fire arts, and still translate how fun and exciting the show is?" she asks. "If we solve that issue, I think it will be easier to interest a broader audience in this unique style of theatrical performance."

MOTION CAPTURE | BALLET

When the Montreal-based dance company La La La Human Steps wanted to add drama to the complex choreography of its production Amelia, it turned to computer graphics to create some impressive 3D dancers. Because the virtual performers would share the stage with the troupe's real dancers—often described as among the most talented in the world—the movement of the characters had to reflect the same precision and discipline displayed by their live counterparts. So choreographer Édouard Lock looked to the best source possible—the dance company's own performers, whose physical likenesses and movements were replicated using sophisticated CG techniques.

La La La Human Steps is recognized internationally for pushing the boundaries of dance to reflect the tastes of a contemporary audience. And Lock's latest work continues this tradition with the exploration of human gesture through state-of-the-art computer graphics, according to Lock. Integrating digital imagery into the performance allows him to better represent his vision of contemporary man and woman, whose stability appears threatened in the chaotic world in which they exist. This portrait is achieved through the powerful interplay of extremes in human motion, as illustrated through the digitally altered moves of the virtual dancers. Meanwhile, a dizzying play of lights and shadows on the stage adds another dimension of movement to the performance.

Images courtesy La La La Human Steps and Kaydara.

La La La Human Steps used its own dancers as the animation source for the 3D characters in the troupe's presentation of Amelia. The precise steps of the real performers were motion-captured and mapped onto the models.

Amelia begins with a 3D ballerina rendered in real time and projected onto a 14-foot screen that's suspended above the stage. In a style reminiscent of the opening to the classic film 2001: A Space Odyssey, the audience's eyes are drawn to an expanding image against a black background. For Amelia, that image is the virtual ballerina, which appears suspended in midair. Meanwhile, the virtual camera moves around her, allowing a multitude of lights and shadows to fall across the model's face in a disorienting pattern.

Then, the real dancer, on whom the character was based, prances across the floor. The screen retracts, and the other live performers appear, moving chaotically on the dark stage lit only by a spotlight, which pairs of men and women chase frantically from one illumination point to the next.

At several times during the nearly two-hour production, the screen reappears with an image of one of five different virtual ballerinas projected onto a giant suspended mirror. In these scenes, the women appear to lose their sense of self, as their "images" look like plastic dolls. Their movements—deliberately slow and puppet-like—contrast strikingly with the fast tempo of the show's live dancers, adding to the staged confusion orchestrated by Lock through this choreography of extremes.

Yet this slowed motion also provides the audience with a glimpse of the precision and technical difficulty demanded of the actual dancers, which is nearly impossible to see in real time as they perform.

Creating the most realistic models possible required the collaborative effort of two CG technology vendors—real-time animation provider Kaydara and scanning specialist InSpeck. Also assisting in the creative process was Realities, a Montreal-based 3D animation boutique. First, InSpeck captured full-body surface point data of the five ballerinas using its white-light scanning technology. With its InSpeck EM editing and modeling software, the company pieced together the scans to create complete high-resolution models of each dancer. Next, the InSpeck and Kaydara teams used a Vicon motion-capture system along with motion-sensing gloves from Immersion to acquire nearly 15 minutes of precise animation that matched the actual movements of the women.

The performers appear to lose their individuality amid the staged frenzy of the ballet, as seen in the face of this doll-like character, created using scanned data from one of the dancers.

The group also added a great deal of automated secondary animation such as eye blinks, toe flexes, slight hand movements, and other mannerisms into the final images, making the models appear almost real as the camera zooms in on their faces, hands, and feet. Kaydara then assembled the mocap data using its Online software to create the animation files.
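Secondary animation like blinks is typically generated procedurally rather than keyframed by hand. As a minimal, hypothetical sketch (the function name, intervals, and seeding are illustrative, not from the production), randomized timing keeps the mannerisms from looking mechanically periodic:

```python
import random

def blink_times(duration, mean_interval=4.0, seed=0):
    """Generate randomized blink timestamps over `duration` seconds.
    Intervals vary between 0.5x and 1.5x the mean so the blinks
    don't fall on a fixed beat."""
    rng = random.Random(seed)  # fixed seed keeps playback repeatable
    t, times = 0.0, []
    while True:
        t += rng.uniform(0.5 * mean_interval, 1.5 * mean_interval)
        if t >= duration:
            return times
        times.append(t)
```

The same jittered-interval trick applies to toe flexes and hand movements; layering several of these independent streams is what makes a model read as "almost real" in close-up.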

During each live performance, La La La Human Steps technical directors use Kaydara's Online software suite of interactive animation and rendering tools running on a Boxx Technologies graphics workstation to control both the animation and the playback of the computer graphics. Because the imagery is rendered in real time, the team can change the camera angles and movements as they wish, providing every audience with a unique performance.

According to Kaydara president Michel Besner, using digital technology allowed for more flexibility in the overall performance and expanded on what can be achieved with onstage entertainment. "Combining traditional elements with digital techniques," he says, "creates an entirely new dimension in the performance arts."

Karen Moltenbrey is a senior technical editor at Computer Graphics World.

Adobe Systems www.adobe.com
Alias Systems www.alias.com
Apple Computer www.apple.com
Boxx Technologies www.boxxtech.com
Breider Moore & Co. www.breidermoore.com
Canopus www.canopus.com
Corel www.corel.com
Dell Computer www.dell.com
Discreet www.discreet.com
Epson www.epson.com
House of Moves www.moves.com
Immersion www.immersion.com
InSpeck www.inspeck.com
Kaydara www.kaydara.com
Maxon Computer www.maxoncomputer.com
Mental Images www.mentalimages.com
Vicon www.vicon.com