Turbo-Charged FX
By Martin McEachern
Issue: Volume: 29 Issue: 8 (Aug 2006)

Rhythm & Hues stages a high-speed race through Shibuya Square in The Fast and the Furious: Tokyo Drift  
Surrounding one of Tokyo’s largest railway stations, Shibuya Square is Japan’s mecca for shopping, entertainment, and fashion: a massive commercial complex of stores and theaters that is home to the world’s busiest Starbucks and eclipses even New York’s Times Square in scale and candlepower. At night, the square is aglow in the brilliant colors of animated signs and teems with enough people to fill a major airport. With three billboard-sized televisions erected high above and Carmina Burana thumping in the night air, wave upon wave of pedestrians flow into the famous seven-way scramble crossing featured prominently in 2003’s Lost in Translation. Nestled anachronistically in this ultramodern setting is one of Japan’s national treasures, a bronze statue of Hachiko the dog. Erected in 1948, the statue commemorates the purebred Akita that would greet his owner every day after work at Shibuya Station, and, when his owner died, kept vigil there in hopes of seeing his master again.
Needless to say, staging a high-speed car race through a crowded Shibuya Square at night would be a ludicrous proposition. Or would it? Because that’s just what director Justin Lin had to do for the latest installment in The Fast and the Furious film series, which takes viewers on a high-octane ride through the underground world of Japanese drift racing. The script for Universal Pictures’ The Fast and the Furious: Tokyo Drift called not only for a high-speed chase through Shibuya Square, but for the camera to rotate 360 degrees around the cars during the chase. It was dubbed the “Kurosawa” shot—after legendary Japanese director Akira Kurosawa—and soon seemed impossible when Lin could not obtain permission to close down the square.
Undeterred, Lin turned to the digital wizards at Rhythm & Hues Studios (R&H) to make the impossible possible. After executing principal photography of the cars on a parking lot set in Atwater Village in Los Angeles, Lin charged visual effects supervisor Raymond Chen with digitally transforming the parking lot into Shibuya Square. This required the construction of an entirely digital Shibuya Square.
In addition to combining shots of the real cars with their CG square, the crew also had to composite CG cars into stitched background plates of the real square. These plates, along with the digital set, also had to be added to the many close-ups of the drivers shot on a greenscreen stage. Special effects coordinator Matt Sweeney built an air-caster rig that floated multiple car bodies on a large greenscreen stage. “This gave the director organic movement between the cars and the camera,” says Chen. In the close-ups, the cars do not have windshields or windows, which were added digitally, along with reflections.
These close-ups were crucial to Lin’s visceral approach to the film. Unlike the first two films, which reveled in impossible, over-the-top action, Lin wanted the racing in Tokyo Drift to feel more realistic. To that end, he kept his camera as close to the drivers as possible. This required a special multi-camera rig for shooting plates that would not result in parallax problems when they were composited with the extreme close-ups.
These panoramic plates were shot using two sets of three Arriflex 435 cameras from the ARRI Group, each bearing a 20mm lens, affording a 150-degree field of view for each set. Designed by overall visual effects supervisor Mike Wassel, the system allows objects to get very close to the camera without producing the parallax problems. “The ability to use these stitched plates for backgrounds gave Lin a lot of freedom in placing the camera, which was closely focused on the actors,” says Chen.
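The geometry of the three-camera rig can be sketched with a little lens math. In this hypothetical calculation, a Super 35 gate width of roughly 24.9mm is an assumption (the article does not state the film format), but it shows why three 20mm lenses comfortably cover a 150-degree sweep with overlap left over for stitching:

```python
import math

def horizontal_fov_deg(focal_mm: float, gate_width_mm: float) -> float:
    """Horizontal field of view of a rectilinear lens on a given film gate."""
    return math.degrees(2 * math.atan(gate_width_mm / (2 * focal_mm)))

# Assumed Super 35 gate (~24.9 mm wide); the article does not specify the format.
single_cam = horizontal_fov_deg(20.0, 24.9)   # ~63.8 degrees per camera

# Three cameras fanned to cover 150 degrees must overlap at each seam --
# that overlap is what makes seamless stitching of the plates possible.
total_coverage = 150.0
overlap_per_seam = (3 * single_cam - total_coverage) / 2  # ~20.7 degrees

print(round(single_cam, 1), round(overlap_per_seam, 1))
```

The generous per-seam overlap also gives the compositors room to blend away lens distortion at the edges of each plate.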
Tokyo, LA Style
“While the LA set was built to the same dimensions as Shibuya Square, it had no built structures or facades, save for a couple of sidewalks, streetlights, and subway entrances. We had to create the entire intersection, including CG buildings, signs, trees, as well as CG crowds and CG cars, to augment what was already there,” says Chen. 

The fast action and complex camera moves scripted for Tokyo Drift required extensive digital work, including a virtual version of Tokyo’s Shibuya Square, complete with CG buildings and people.
The digital square comprises both 3D models—of buildings, trees, signs, and so forth—and multi-layered 2D matte paintings placed on cards in 3D space. Some of the buildings also have fully furnished interiors, either because they were visible through the exterior glass or because they were open storefronts. Using point-cloud data from LIDAR scans of the square as reference, artists modeled the buildings as polygons using Autodesk’s Maya and R&H’s proprietary modeling software, dubbed And. With the square as the focal point of the chase, the modeling team built the location in the highest level of detail, dividing it into five pieces, while coordinating the detail of the rest of the surrounding cityscape geometry with its proximity to the camera.
Although the Tokyo Drift production also provided Chen with LIDAR scans of the vehicles, the team found the point clouds to be too inaccurate to reproduce the highly streamlined, reflective surfaces of the cars. After abandoning an attempt at photomodeling, the group found greater accuracy by using an arm digitizer. To model the cars, the artists again used Maya as well as R&H’s And software to construct the polygonal geometry.
To ensure that the CG set and cars sported accurate shadows and reflection maps, R&H went on location to the Shibuya set in December 2005 to shoot reference photography and take HDRI cube captures using its proprietary HDRI cube-capture system. The system, which was also used to map the lighting environment of the Atwater Village set, employs a “cube” camera to record various views of the sets through six lenses. Using proprietary software, the views are then stitched together into a spherical image, which is used to reproduce the lighting conditions inside the digital environment. For further accuracy, the team filmed a reflective silver ball mounted on a camera truck traveling in the path of the cars through Shibuya Square. In addition, the artists had thousands of digital still photographs shot through a fish-eye lens mounted on the same truck. The Tokyo Drift production also provided R&H with film from the multi-camera rig drive-throughs of Shibuya and Ginza, which the group used extensively for adding deep vistas to the digital Shibuya Square set.
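R&H’s cube-capture system is proprietary, but stitching six cube-face views into a single spherical (lat-long) image rests on a standard mapping between directions, cube faces, and panorama coordinates. The sketch below is a generic illustration of that mapping, not R&H’s actual code; face names and coordinate conventions are assumptions:

```python
import math

def direction_to_cube_uv(x, y, z):
    """Map a direction vector to (face, u, v) on a cube map.
    The dominant axis picks the face; u, v land in [0, 1]."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        face = '+x' if x > 0 else '-x'
        u, v = (-z / ax if x > 0 else z / ax), -y / ax
    elif ay >= az:
        face = '+y' if y > 0 else '-y'
        u, v = x / ay, (z / ay if y > 0 else -z / ay)
    else:
        face = '+z' if z > 0 else '-z'
        u, v = (x / az if z > 0 else -x / az), -y / az
    return face, (u + 1) / 2, (v + 1) / 2

def latlong_to_direction(s, t):
    """Map normalized lat-long (spherical panorama) coords to a direction.
    Each output pixel of the stitched image gets sampled from the cube
    face this direction falls on."""
    lon = (s - 0.5) * 2 * math.pi
    lat = (0.5 - t) * math.pi
    return (math.cos(lat) * math.sin(lon),
            math.sin(lat),
            -math.cos(lat) * math.cos(lon))
```

To build the spherical image, the stitcher walks every lat-long pixel, converts it to a direction, and pulls the color from the corresponding cube-face photograph.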

In the film, the busy location is a composite of real and computer-generated cars, and CG
sections of the square combined with background plates of the actual locale.
In order to capture the smooth, flowing specular highlights of the metallic car bodies, the crew rendered each car in two passes: one overall pass and another for the flaring, Fresnel edges. For texturing, artists painted maps using Adobe’s Photoshop.
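The separate “flaring, Fresnel edge” pass reflects a physical fact: reflectance rises sharply at grazing angles. A common way to model it is Schlick’s approximation; this is a generic sketch of that behavior, not R&H’s shader:

```python
def schlick_fresnel(cos_theta: float, f0: float) -> float:
    """Schlick's approximation of Fresnel reflectance.
    cos_theta: cosine of the angle between view direction and surface normal.
    f0: reflectance at normal (head-on) incidence."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# Head-on (cos_theta = 1), reflectance stays at the base value,
# while near grazing (cos_theta -> 0) it climbs toward 1.0 --
# the bright flare along car-body silhouettes that the second pass isolates.
base = schlick_fresnel(1.0, 0.05)   # 0.05
edge = schlick_fresnel(0.05, 0.05)  # ~0.79
```

Rendering the edge term separately lets compositors dial the silhouette flare up or down without re-rendering the overall car pass.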
Rapid-Fire Reflections
Because the scene is set at night, the team had to contend with an ever-changing kaleidoscope of reflections dancing across the surface of the cars, from mirrored animated signs to intense specular reflections from lights. With the many animated signs, illuminated storefronts, and billboards, getting the timing, positioning, and resolution of the reflections on both the cars and buildings right was one of the greatest challenges, according to Chen.
Further complicating the reflection mapping was the fact that some of the close-ups of the drivers shot on the greenscreen stage were done using moving, interactive lighting designed to simulate the rapid-fire reflections from Shibuya Square. Lin used everything from a helicopter blade-like rig to dollies carrying banks of lights pushed by hand. To match this interactive lighting, R&H used sequences of digital stills and multi-camera footage from Shibuya Square to add reflections of signs, advertising, and lights to the cars in sync with the practical lighting.
“It was so challenging we often had to re-time the footage so our added reflections would match the greenscreen shots,” adds Chen. “For the buildings, however, we didn’t need to match any principal photography. We were primarily concerned with the positioning of specific reflections. We ended up modifying the reflected environment significantly in order to get the buildings to look the way we wanted them to look. This involved mapping both still and animated textures of neon signage and positioning cards to get the reflections to look interesting.”
For lighting and rendering the cars and city, R&H used its proprietary rendering software, Wren, to perform numerous lighting passes on each scene, which were then combined in compositing. The lighting methods used for each pass included: raytracing, for reflections and refractions; global illumination, for diffuse lighting; and ambient occlusion.
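How those separate passes recombine in the composite can be illustrated with a per-pixel sketch. The exact pass math inside Wren and R&H’s comp pipeline is not described in the article; the convention below (occlusion attenuating the diffuse terms, raytraced terms added on top) is a common one and the pixel values are made up:

```python
def combine_passes(diffuse, gi, ao, reflection, refraction):
    """Combine per-pixel lighting passes into a beauty value.
    Ambient occlusion darkens the diffuse and global-illumination
    contribution; raytraced reflection and refraction add on top."""
    return (diffuse + gi) * ao + reflection + refraction

# One hypothetical pixel of a car panel under a neon sign:
beauty = combine_passes(diffuse=0.30, gi=0.10, ao=0.85,
                        reflection=0.25, refraction=0.0)
# (0.30 + 0.10) * 0.85 + 0.25 = 0.59
```

Keeping the passes separate until the composite is what lets lighters rebalance, say, the neon reflections against the diffuse fill without a costly re-render.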
Matchmoving and Massive Agents
Previsualization was done for the Shibuya Square sequence, and could be implemented for the all-CG shots; however, for any shots involving real cars, it was almost impossible for the stunt drivers to adhere to the previz, which made matchmoving all the more important. To matchmove the footage of the real cars at Atwater Village with the digital environment, the crew had to contend with the problem of elevation differences between the topology of Shibuya Square and the LA tarmac set. “To overcome the problem, we ended up re-positioning buildings one by one to fit onto the tarmac ground,” says Chen. Artists then used Voodoo, R&H’s in-house matchmoving software, to matchmove the rest of the Kurosawa shot, which was assembled from two different takes that were re-timed and stabilized.
Lin’s decision to focus on the characters rather than the cars during the races meant that the frequently used stunt drivers had to be replaced with the actors. In one such shot, the Yakuza villain’s car, driven by a stunt driver, speeds head-on toward the traveling camera car. R&H matchmoved the shot using Voodoo, then exported the camera information to guide a motion-control shoot using the actor playing DK. “By combining the motion-controlled footage of the actor with the original stunt footage, we were able to place DK into his car,” says Chen. Surprisingly, animators drove all their digital cars using keyframed animation, even controlling suspension compression by hand. No rigid-body dynamics or expression-controlled animation was used.

From top to bottom: Rhythm & Hues
generated bystanders using Massive’s
crowd-sim software, and then added
them into the heavily composited shot,
thereby achieving the final street scene.
In any race, of course, the illusion of speed is enhanced by scattering sparks and smoking tires. To add tire smoke to both the CG cars and the real ones inside the digital set extensions, R&H created particle animation in Side Effects Software’s Houdini, refining its motion with the studio’s in-house fluid simulator, Ahab. The particles were then rendered through Wren using R&H’s in-house field expression language, Felt.
To create the throngs of people along the many storefronts and the crush of humanity colliding in the zebra crossing, R&H used Massive’s crowd-simulation software, motion-capturing approximately 200 distinct actions for the Massive agents at Giant Studios. In the most crowded shots, some 5500 to 6000 agents are interacting simultaneously, leading to high-memory renders that needed to be tiled for efficiency. Modelers sculpted six scalable base models for the men and women, and outfitted them with a male or female rig. To differentiate the character models, artists added various props such as suitcases and shopping bags. In high-angle shots, the crowd is composed solely of Massive agents, while ground-level shots required both Massive agents and greenscreened extras in the foreground. To add tangles of traffic to the streets, artists created four base car models for use as Massive agents, and then varied their color and texture maps during rendering.
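Varying thousands of agents without hand-dressing each one typically comes down to deterministic per-agent randomization: seed a random stream with the agent’s id so every render of the same frame produces the identical crowd. The sketch below is a hypothetical illustration of that idea; the model names, ranges, and prop list are all assumptions, not R&H’s or Massive’s actual setup:

```python
import random

BASE_MODELS = ['man_a', 'man_b', 'man_c', 'woman_a', 'woman_b', 'woman_c']
PROPS = [None, 'suitcase', 'shopping_bag', 'umbrella']

def agent_variation(agent_id: int) -> dict:
    """Deterministically derive one crowd agent's look from its id,
    so re-renders of the same frame stay consistent."""
    rng = random.Random(agent_id)  # per-agent seed
    return {
        'model': rng.choice(BASE_MODELS),
        'scale': round(rng.uniform(0.92, 1.08), 3),      # height variation
        'hue_shift': round(rng.uniform(-0.1, 0.1), 3),   # clothing color
        'prop': rng.choice(PROPS),
    }

# A 6000-agent crowd, matching the article's busiest shots:
crowd = [agent_variation(i) for i in range(6000)]
```

The same trick extends to the traffic: four base car models plus a seeded color and texture variation per vehicle yields streets that read as fully populated.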

(Top left) To accurately represent the square,
artists filled the streets with thousands of Massive
“agents” that all interact simultaneously. (Top
right) The CG people were then composited into
the shot. (Bottom) More CG bystanders and
objects help flesh out the scene.
Finally, the artists used Autodesk’s Inferno to composite the backgrounds into the greenscreened windows of the close-up interior shots of the drivers. Meanwhile, to incorporate the digital cars, crowds, and Shibuya Square into the shots, they relied on Apple’s Shake and Icy, R&H’s in-house compositing software. “Using Icy and Shake, we were able to resolve the inherent difficulties of shots that combined elements that originated in very different places,” says Chen. “For example, we were able to combine greenscreened extras with CG crowds, add them both to the tarmac street of the [parking lot] set, which was then inserted into the virtual cityscape of CG and matte-painted buildings.”
Tokyo Drift’s Shibuya Square sequence is one of the most dramatic foreshadowings yet of the impending union between the digital artist and the production designer. With their fates and futures inextricably intertwined, the digital artist will graduate from the periphery of the filmmaking process to a more integrated and collaborative role, as indispensable to any production as the director, editor, or cinematographer. And it is the software companies that are in the unique position of catalyzing this convergence, argues digital set expert Vlad Bina (see “Staging Action on Virtual Sets,” pg. 36), by bringing within the same software pipeline the three aspects of film production that still remain relatively distinct: production design, previsualization, and digital set design. “Switching back and forth between digital and analog data as well as between built and virtual constructs is still very complex and must be organized better,” he says. “Once these three aspects are fully integrated, the result will be a new pipeline that will streamline the moving image from concept to printed stock.”

Martin McEachern is an award-winning journalist and contributing editor for Computer Graphics World. He can be reached at martin@globility.com.