Spying in Stereo
Volume 26, Issue 8 (August 2003)

Stereoscopic effects place moviegoers in the middle of the action in Spy Kids 3-D

In his latest Spy Kids film, released last month, director Robert Rodriguez takes movie audiences to a whole new dimension—literally. Approximately 66 minutes of the 80-minute film are presented in stereoscopic 3D. "It's a cool approach," says Chris Olivia, previsualization supervisor at Troublemaker Digital Effects, the previz and effects division of Troublemaker Studios, Rodriguez's Texas-based production company. "The use of stereoscopic 3D lent itself perfectly to this film, and to its target audience of kids ages 6 through 12."

In Miramax's Spy Kids 3-D: Game Over, the third installment of Rodriguez's popular Spy Kids series, youth sleuth Carmen (Alexa Vega) is caught in a virtual-reality game designed by the Kids' new nemesis, the Toymaker (Sylvester Stallone). Juni (Daryl Sabara) must make his way through the game to save his sister and, ultimately, the world. According to Olivia, the second act—about 60 minutes—takes place inside the VR game in stereo, as do some portions of the first and third acts. Spy Kids 3-D also boasts approximately 700 stereoscopic digital effects shots and 55 traditional digital effects shots.

To ensure that the film could be shown in conventional movie theaters using conventional projectors and screens, Rodriguez utilized stereo technology based on the red/cyan filtration technique known as anaglyph. In use since the earliest days of 3D cinema, anaglyph doesn't require special projection techniques, unlike the more recently developed polarized stereo technology commonly used in IMAX theaters, and it needs only a single projector.

Simply put, anaglyphs are based on the principle that a colored filter passes light of its own color and blocks light of the complementary color, so each eye sees only the image encoded in the matching channel. To use anaglyph 3D technology in film, one camera shoots the equivalent of what a person's left eye would see, and another camera shoots the equivalent of what a person's right eye would see, with both cameras aimed at the same point of interest, or convergence point.

When the left-eye version of the image is projected in the red channel, the right-eye version is superimposed onto it in the cyan (green and blue) channels, and you view the result wearing special "3D" glasses with a red filter over the left eye and a cyan filter over the right, you see a stereoscopic image. "By using anaglyph 3D rather than polarized 3D technology, we could release this as a mass-market film," Olivia says. "The movie theaters just hand out the 3D glasses to viewers, and they're good to go."
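
In broad strokes, that channel assembly can be sketched in a few lines of code. The Python snippet below (using NumPy and imageio) is purely illustrative; the file names are placeholders, and it does not represent the pipeline any of the film's facilities actually used.

    # Minimal sketch of red/cyan anaglyph assembly as described above.
    # Assumes the left- and right-eye frames are already aligned RGB images;
    # the file names are hypothetical.
    import numpy as np
    import imageio.v3 as iio

    left = iio.imread("left_eye_frame.png")[..., :3].astype(np.float32)
    right = iio.imread("right_eye_frame.png")[..., :3].astype(np.float32)

    anaglyph = np.empty_like(left)
    anaglyph[..., 0] = left[..., 0]   # red channel carries the left-eye image
    anaglyph[..., 1] = right[..., 1]  # green and blue (cyan) channels carry
    anaglyph[..., 2] = right[..., 2]  # the right-eye image

    iio.imwrite("anaglyph_frame.png", anaglyph.clip(0, 255).astype(np.uint8))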

Although viewing the stereo imagery in Spy Kids 3-D requires simply donning a pair of throwaway red-and-cyan 3D glasses, shooting the film and generating its popcorn-spilling digital effects required special technology and months of R&D.

With most films that incorporate digital effects, CG models are composited into live-action backgrounds. But in Spy Kids 3-D, nearly every character is real, and it's the backgrounds—primarily the VR game's different levels—that are computer-generated. As such, almost all the live action was filmed on a greenscreen stage. To shoot in stereo, the production crew used high-definition 24P cameras developed by Pace Technologies, with each unit comprising two cameras stationed side by side like a pair of giant eyes. The camera on the left shoots the left-eye version of the footage, while the camera on the right shoots the right-eye version.

In Level Two, Juni battles a rival player while standing atop CG robots in a digital environment. The 6-minute sequence, referred to as the Robot Battle, was crafted by Hybride Technologies.
All images ©2003 Dimension Films/Troublemaker Studios.




The digital effects houses working on the film—Troublemaker, Computer Café, Hybride Technologies, Janimation, The Orphanage, and CIS—had to render left- and right-eye versions of their CG imagery as well. What's more, they had to ensure that the lens, height, angle, and convergence used in their left and right CG cameras matched those used in the Pace cameras. "Only if these were matched could we end up with an accurate final comp for the left and right eyes," Olivia says. "We had to have these comps to give to Post Logic and eFilm, where the final anaglyph and color correction processes were conducted."

None of the facilities, except for CIS, had previous experience in creating CG for stereo. So before any of the facilities began working on their respective shots, Olivia conducted preliminary tests to get the ball rolling and to make sure the animatics were taking full advantage of "3D goodness," a term Rodriguez used throughout production to push the studios into thinking in stereo.

Hybride also spent several months researching the best way to create digital effects for the film, and compiled that information into The Hybride Recipe, which details how anaglyph works and certain elements to consider, including issues regarding convergence points and which colors to avoid. The report was subsequently distributed to the other facilities to use as a guideline, along with information from Rodriguez regarding the cameras used for the greenscreen shoot, such as the distance between the left and right cameras, the distance between the actors and cameras, and other data pertaining to the perspective they needed to match.

In addition, all the artists received original concept sketches and 3D rendered keyframes created in Alias|Wavefront's Maya by Troublemaker, which handled the previz for this film, as it did for the previous Spy Kids releases.

The VR game—the focus of the film's stereoscopic effects—contains five levels. Janimation created Level One, which contains three subsequences: PogoLand, Sublevel, and BetaLand. In the PogoLand segment, Juni sees other kids, the game's beta testers, as they frantically try to avoid being squished by CG frogs bouncing on pogo sticks or ensnared by the frogs' tongues. In the Sublevel, an underground industrial environment, a CG character named Orbit explains the game's rules to Juni, who then moves on to BetaLand, where he encounters other beta testers playing the game as they attempt to progress to the next level. BetaLand combines lush green fields, mountains, and a desert, and is full of flying targets, one of which Juni hits, projecting him through space to Level Two.

According to Steve Gaçonnier, executive producer/visual effects supervisor at Janimation, all the backgrounds and characters in this 5-minute sequence—except for Juni and the beta testers—were created and animated in Softimage|XSI 3.5 and rendered with Mental Images' Mental Ray 3.2, using the BatchServe render-management tool on Boxx Technologies' hardware. Chroma-keying and rotoscoping were done in Softimage|DS, and both XSI and DS were used to composite the left-eye greenscreen footage with the left-eye CG and the right-eye greenscreen footage with the right-eye CG to make the anaglyphs.

Janimation also used the XSI compositor to "prove" the anaglyphs. According to Ludovick William Michaud, technical supervisor/3D anaglyph camera operator at Janimation, all the facilities rendered their imagery using the left CG camera in their respective rendering programs. "The right-eye version of the imagery is based on the left-eye version, with the right CG camera offset slightly," he says. "The amount of that offset differs depending on the shot. If we needed a greater stereo effect, like when the frog's tongue darts out at a kid, we shifted the right-eye camera more."
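
In broad strokes, the offset Michaud describes amounts to sliding a copy of the left camera along its own horizontal axis while keeping both cameras aimed at the convergence point. The Python sketch below illustrates only that idea; the Camera class, the numbers, and the roughly eye-width default spacing are assumptions for illustration, not Janimation's XSI rig.

    # Illustrative sketch: derive a right-eye camera from the left-eye camera.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Camera:
        position: np.ndarray  # world-space position
        target: np.ndarray    # convergence point both eyes look at
        up: np.ndarray        # world up vector

        def right_axis(self) -> np.ndarray:
            forward = self.target - self.position
            forward = forward / np.linalg.norm(forward)
            right = np.cross(forward, self.up)
            return right / np.linalg.norm(right)

    def make_right_eye(left: Camera, interaxial: float) -> Camera:
        # A larger interaxial exaggerates the stereo effect (as in the
        # frog-tongue shot); a smaller one keeps it subtle.
        offset = left.right_axis() * interaxial
        return Camera(left.position + offset, left.target, left.up)

    left_cam = Camera(np.array([0.0, 1.7, -5.0]),
                      np.array([0.0, 1.5, 0.0]),
                      np.array([0.0, 1.0, 0.0]))
    right_cam = make_right_eye(left_cam, interaxial=0.065)  # roughly eye spacing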

Juni navigates through action-packed Level One, complete with three subsequences developed by Janimation, including PogoLand (above), Sublevel, and BetaLand.
Image courtesy Janimation.




Figuring out how much to offset the right camera required each facility to test and adjust its shots as necessary. For Gaçonnier, Michaud, and the rest of the team at Janimation, that process involved donning 3D glasses and then testing and tweaking the results in the XSI compositor. "When a stereo effect is too pronounced, it hurts your eyes and the stereo effect is ruined," says Gaçonnier. "When it's not pronounced enough, it's boring."

To get Rodriguez's approval of the layout, timing, and general effect of its shots, Janimation used the QuickTime Synchro System, a feedback tool developed by Hybride and leased to all the facilities working on the project. With this system, the facilities uploaded QuickTime files of their shots to Troublemaker, where Rodriguez viewed the effects while wearing 3D glasses and discussed the work with the artists via conference call. Rodriguez could also mark up the frames using a digital pen, so the artists could make the required changes based on those notations.

Once Juni leaves PogoLand, he enters Level Two of the game, where he must battle another player while both are standing atop CG robots in a digital environment meant to resemble a boxing arena on the moon. At the end of this 6-minute sequence, Juni progresses to Level Three, an 11-minute sequence in which he hops onto a CG vehicle and joins several other live-action kids on virtual vehicles as they make their way through various synthetic environments, including a city, an underground tunnel, and a desert. Creating these two action-packed sequences—called the Robot Battle and the Bike Chase, respectively—was Hybride's responsibility.

Using Troublemaker's previz as a guide, the artists created all the CG elements and animations in XSI. "We also did animatics in XSI of some action shots in the Bike Chase," notes visual effects supervisor Daniel Leduc. To composite the actors onto the robot bodies and CG vehicles, they used Discreet's inferno. The artists also used NewTek's LightWave for particle effects, Science-D-Visions' 3D-Equalizer for tracking, and Mental Ray in XSI to render the anaglyphs.

To prove their anaglyphs, the Hybride artists developed a special rig that worked with XSI and enabled them to output their tests quickly. "This allowed us to look at four or five types of convergence for the same scene and choose the best for each shot," Leduc explains. "Early in the project, the efficiency of our rig helped us discover 'magic ratios' representing the convergence point in relation to the distance between the two cameras, helping us optimize the 3D effect."
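
Leduc doesn't spell out the ratios themselves, but the idea can be sketched as a simple relationship between the interaxial spacing and the convergence distance. The snippet below uses the 1:30 rule of thumb common among stereographers as a stand-in value; Hybride's actual figures are not published here.

    # Hedged sketch of a convergence "ratio": treat the convergence distance as
    # a fixed multiple of the interaxial spacing and derive the toe-in angle
    # each camera needs to aim at the shared convergence point.
    import math

    def convergence_setup(interaxial: float, ratio: float = 30.0):
        convergence_distance = ratio * interaxial
        # Each camera pivots inward by the angle subtended by half the interaxial.
        toe_in = math.degrees(math.atan((interaxial / 2.0) / convergence_distance))
        return convergence_distance, toe_in

    dist, angle = convergence_setup(interaxial=0.065)
    print(f"converge at {dist:.2f} m, toe-in {angle:.2f} degrees per camera")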









Level Three encompasses virtual vehicles and various synthetic environments, including a city, an underground tunnel, and a desert. Hybride Technologies developed all of the CG vehicles and environments employed in Level Three using XSI.
Images courtesy Hybride Technologies.




Hybride also developed a pre-processing algorithm, applied to its imagery, that minimized pure, "illegal" colors without greatly modifying the general color scheme of the anaglyph. "You cannot use pure colors like red, green, blue, or cyan in the image because they won't be visible on both sides," Leduc says. With the pre-processing algorithm, however, Hybride could add details to both the left and right renders of objects or sections of an image that were composed of pure colors.
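
Hybride hasn't published the algorithm itself, but one plausible form of such a pre-process, sketched below as an assumption rather than a reconstruction, is to mix a small amount of each pixel's luminance into any channel that is nearly empty, so a pure red or pure cyan object still carries some detail for both eyes.

    # Hypothetical pre-process: lift nearly empty channels so that pure colors
    # remain visible through both filters. Not Hybride's actual algorithm.
    import numpy as np

    def soften_pure_colors(img: np.ndarray, floor: float = 0.12) -> np.ndarray:
        # img: float RGB array with values in [0, 1]
        luminance = img @ np.array([0.2126, 0.7152, 0.0722])    # per-pixel luma
        lifted = np.maximum(img, floor * luminance[..., None])  # fill dead channels
        return np.clip(lifted, 0.0, 1.0)

    pure_red = np.array([[[1.0, 0.0, 0.0]]])
    print(soften_pure_colors(pure_red))  # the red pixel gains a trace of green/blue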

After Juni successfully completes this level, he moves to Level Four, a 10-minute sequence created by Computer Café, in which he must fight another beta tester using a staff resembling a light saber, while other testers look on. "Juni and the others are on metallic blue blocks that constantly form and re-form," explains Jeff Barnes, executive producer and a partner at Computer Café. "Juni and his opponent jump from block to block as they fight, and the pieces move around in the sky." The artists built the CG elements in LightWave. They then tracked the scene using 2d3's boujou, and later composited the greenscreen footage into the digital environment and the staffs into the actors' hands with eyeon Software's Digital Fusion. Last, the team rendered the anaglyphs in LightWave on Boxx systems.

The artists, wearing 3D glasses, proved the anaglyphs in Digital Fusion, tweaking the shots as necessary. "We were constantly experimenting, tweaking, massaging, re-rendering, and re-aligning things to make it look and feel right," says Barnes.

Computer Café employed NewTek's LightWave, 2d3's boujou, and eyeon Software's Digital Fusion to create the unique battle sequence in Level Four.
Image courtesy Computer Café.




After finishing Level Four of the game, Juni advances to the fifth and final level. The journey to Level Five is depicted in an 8.5-minute sequence created by The Orphanage. According to Stuart Maschwitz, the company's digital effects supervisor, the artists created several digital environments and characters that Juni and the other kids encounter on their way to Lava Mountain. "They confront Tinker Toy characters that erupt from the ground. They surf on lava using surfboards made from hardened lava. They go up against the Lava Monster, who throws lava balls at them," he says. "We also did an interesting under-lava sequence in which the kids are swimming in the lava."

The artists primarily used Maya to create the 3D environments and objects. Because the lava, which is nearly red in color, would be difficult to see clearly when proving the anaglyphs while wearing 3D glasses, the artists tested the stereo effect early in the modeling process by crossing their eyes. "When you view an anaglyph wearing 3D glasses, your left eye sees only the red channel and your right eye sees only the green and blue channels," explains Maschwitz. "We wanted to previz the lava shots in 3D within the Maya interface, so we trained several of our leads and supervisors to cross their eyes while looking at the screen. This way, they could see the 3D rendered in front of them, in full color and full resolution, without having a filter over their eyes."
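
The layout needed for that trick is easy to generate: place the right-eye view on the left and the left-eye view on the right, so that crossing your eyes fuses the pair into a single stereo image. The sketch below shows one way to build such a preview frame; the placeholder arrays and dimensions are illustrative, not The Orphanage's Maya viewport setup.

    # Cross-eyed free-viewing layout: [ right-eye | gap | left-eye ].
    import numpy as np

    def cross_eyed_pair(left: np.ndarray, right: np.ndarray, gap: int = 16) -> np.ndarray:
        # left, right: HxWx3 image arrays of identical size
        h, w, c = left.shape
        spacer = np.zeros((h, gap, c), dtype=left.dtype)
        return np.concatenate([right, spacer, left], axis=1)

    # Placeholder frames standing in for the two rendered views.
    left_view = np.zeros((540, 960, 3), dtype=np.uint8)
    right_view = np.zeros((540, 960, 3), dtype=np.uint8)
    preview = cross_eyed_pair(left_view, right_view)  # 540 x 1936 preview image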

Juni's journey takes him to Lava Mountain, where he encounters the Lava Monster (created by The Orphanage).
Image courtesy The Orphanage.




When Juni and Carmen leave the game, they come face to face with giant CG robots (created by Troublemaker).
Image courtesy Troublemaker.




Other tools the Orphanage artists used include Photron's Primatte for keying, Pinnacle Systems' Commotion for rotoscoping, Next Limit's RealFlow computational fluid dynamics software to create the lava splashing and burbling, boujou and RealViz's MatchMover for tracking, and Adobe Systems' After Effects for compositing. To render anaglyph tests, they used the Maya renderer on Boxx systems.

Once Juni and Carmen reach Level Five, they leave the game, and the Toymaker unleashes giant CG robots to chase them. According to Leduc, Troublemaker and Hybride created the CG robots in this sequence using Maya and XSI, respectively. In addition, Hybride used inferno to composite these shots into a live-action background of the Austin, Texas, city streets.

Aside from these effects, there are 71 stereoscopic effects shots created by CIS Hollywood that are sprinkled throughout the film. In these shots, the Toymaker appears in a CG environment, called the Brain Room, which was modeled by Troublemaker in Maya. "Troublemaker sent us the Maya files, and we did the tracking and worked out camera angles in inferno," says visual effects supervisor Ken Jones. Also, CIS composited Stallone into the Brain Room using inferno, and rendered the anaglyph shots using the Maya renderer.

Although creating the effects for Spy Kids 3-D required far more work than usual, those involved with the film say it was worth the extra effort. "Creating a stereo film is challenging from beginning to end, but that's why Robert did the film this way. He was looking for an additional challenge," says Troublemaker's Olivia, "and to do something new and different for today's kids."

Adds Janimation's Gaçonnier: "The end result really looks cool. Kids will eat it up."

Contributing editor Audrey Doyle is a freelance writer and editor with more than 17 years of experience covering the computer graphics industry. She is based in Boston.

2d3 www.2d3.com
Adobe Systems www.adobe.com
Alias|Wavefront www.aliaswavefront.com
Boxx Technologies www.boxxtech.com
Discreet www.discreet.com
eyeon Software www.eyeonline.com
Mental Images www.mentalimages.com
NewTek www.newtek.com
Next Limit www.nextlimit.com
Photron www.photron.com
Pinnacle Systems www.pinnaclesys.com
RealViz www.realviz.com
Science-D-Visions www.sci-d-vis.com
Softimage www.softimage.com