Inside Job
Issue: Volume 33 Issue 11: (December 2010)

Pioneers, people say, catch arrows in their backs, and that was certainly true for Disney’s groundbreaking film TRON. Released in 1982, it was the first feature film to use CGI extensively, not to create photorealism, but to visualize computer technology of the future. Today, many people credit TRON with the birth of computer graphics in feature films, and the story of why TRON did not receive a visual effects Oscar is the stuff of CG legend.


A 50-polygon Recognizer in the 1982 TRON became this giant machine in TRON: Legacy that captures
Sam when he first arrives in the digital world. All enemy vehicles and Programs have orange glow lines.


The snub was ironic even in 1982, given TRON’s story line: Kevin Flynn, a video game developer played by actor Jeff Bridges, searches through a Master Control Program (MCP) to prove that another programmer had stolen his code. Digitized by a laser beam, Flynn enters the computer, a dark place where circuits of light outline characters, objects, and vehicles in an expansive virtual world. Once inside, Flynn, the computer User, battles captured Programs in life-or-death games, sometimes while riding speedy Lightcycles inside a Game Grid. Helping Flynn is Clu, a hacker Program Flynn wrote that looks like a mirror image of himself. Flynn escapes at the end, but the MCP “de-rezzes” Clu.

“The guys who made that film were very much ahead of their time,” says producer Sean Bailey. “Steve Lisberger [writer, director] envisioned people living as avatars in a digital world, and that was in 1982, when most people didn’t have computers.”

Today, visual effects and computer graphics are synonymous, people live as avatars in digital worlds, computer games are nearly photorealistic, and the former writer/director has now taken the role of producer. “I made my TRON,” Lisberger says. “I didn’t want to compete with myself 28 years later.”

Jeff Bridges, however, does compete with himself 28 years later. With the help of computer graphics, he plays two characters in TRON: Legacy. One is Kevin Flynn, at Bridges’ current age. The other is Clu 2.0. In the story, we learn that Flynn re-entered the Grid and programmed a new version of Clu about two years after the first. Clu 2.0 looks like a clone of Jeff Bridges at about age 35.

A team at Digital Domain—led by visual effects supervisors Eric Barba and Steve Preeg, who had won an Oscar for the effects in Benjamin Button—performed the technical and artistic magic. This time, not to imagine an elderly Brad Pitt, but to replicate a familiar face. And this time, they created a digital human in stereoscopic 3D.

“One of the freakiest things was when we scanned Jeff [Bridges],” Lisberger says, referring to a step in Digital Domain’s Emotion pipeline: a facial capture of Bridges using Mova’s Contour system. The scan produced a lifelike, animated digital doppelganger. “For the first film, I just made up that idea,” Lisberger says. “Now, I could see it on the Grid.”

Back to the Future
As TRON: Legacy opens, we learn that Flynn disappeared when his son, Sam, was a child. Now 27 years old, Sam (actor Garrett Hedlund) finds himself investigating a strange signal apparently sent from his father’s old video-game arcade. He discovers his father’s hidden workroom and, while there, the same laser beam digitizes him and sends him into Kevin Flynn’s digital world. Once there, we discover that when his father revisited and upgraded the digital world he had left, things didn’t work out so well. Flynn’s creation, Clu 2.0, took control of the system and trapped him. The Grid is now even more dangerous than before, and Sam must fight in gladiatorial games to survive.

All told, TRON: Legacy has close to 1500 visual effects shots, of which approximately 170 feature Clu 2.0, the younger version of Bridges. “When Joe asked us to do this job, we knew we’d do the character Clu here based on our work on Button,” Barba says, referring to director Joseph Kosinski. “But we also had to create an entire world in 3D, a world never seen before. We kept the major sequences that establish TRON here at Digital Domain: the disc game battle, the iconic bike sequence, the small, difficult character pieces. And Joe wanted to keep the two big sequences at the end here. We found outsource partners to do the rest.”

Those partners included Mr. X, Ollin Studio, Prana Studios, Prime Focus, Whiskey Tree, and Yannix. Electronic Arts did motion capture in Vancouver, where Kosinski shot the film, and House of Moves assisted in postproduction.



Actors fought the disc games on bluescreen stages. Artists rotoscoped
the images to create accurate reflections on the glass floors. At right, crowds of digital agents
surrounding the ever-changing courts cheered the action.


So that crews at the outsource partners and at Digital Domain could publish assets directly into shots and move into shot production quickly, previs and layout supervisor Scott Meadows led a team that blocked out the entire film in stereo. “The film was conceived to be filmed with a 3D system from the first,” Kosinski says. “We spent a year getting ready. We shot the film in 75 days, and then spent a year and a half in postproduction.”

Also to help speed postproduction, sequence supervisors at Digital Domain gave the outsource partners assets and asked them to switch to The Foundry’s Nuke, Autodesk’s Maya, and Chaos Group’s V-Ray. “Additionally, we gave them scripts in Nuke to handle vertical disparity [for misaligned stereo cameras],” Barba says. “We got them up to speed on a short schedule.”
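
Digital Domain’s actual Nuke scripts aren’t published, but the correction they describe reduces to measuring the vertical offset between matched features in the two eyes and translating one plate to compensate. A minimal sketch of the idea in Python (all values hypothetical):

```python
import numpy as np

def vertical_disparity(left_pts, right_pts):
    """Estimate the vertical misalignment between stereo plates.

    left_pts, right_pts: (N, 2) arrays of matched (x, y) feature
    positions from the left- and right-eye cameras. Returns the
    y-offset to apply to the right eye so matched features land
    on the same scanline.
    """
    dy = left_pts[:, 1] - right_pts[:, 1]
    return float(np.median(dy))  # median resists bad matches

# Three matched features with roughly a 2-pixel vertical offset:
left = np.array([[410.0, 300.5], [1022.0, 544.1], [1500.0, 820.3]])
right = np.array([[396.0, 298.4], [1008.0, 542.2], [1486.0, 818.1]])
print(vertical_disparity(left, right))  # ~2.1 pixels
```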

On Set
Kosinski’s director of photography, Claudio Miranda, shot the film in stereo using six Sony CineAlta F35 cameras on three Pace Fusion 3D rigs; the second unit used a fourth rig and cameras. “Joe [Kosinski] wanted to build sets for the actors to interact with, to ground us and to lighten our load, so we shot much of the film on sets,” Barba says. The disc games and Lightcycle sequences, however, are predominantly full-CG, as are two climactic sequences at the end of the film, which include a spectacular Lightjet dogfight.

Track Stars
Approximately 25 trackers worked on TRON: Legacy, some as long as a year. “We had the whole film,” says Ross MacKenzie, 3D integration supervisor, “1500 shots, and they were long shots. Most were over 300 frames, and they kept getting longer and longer.”

Although trackers at Digital Domain sometimes use commercial software, the mainstay at the studio and the workhorse for TRON: Legacy was the studio’s in-house software Track, for which software engineer Doug Roble received an Academy Award in 1998. Simply put, Track calculates the position of the camera used to film a shot as it moves through three-dimensional space, frame by frame. Similarly, the software can also track the path of any objects or characters as they move in the scene. With this information, artists can position a virtual camera in a 3D scene built to match footage shot on set, insert CG objects into the footage, and/or replace something in the scene with a CG object. Once rendered, the inserted objects fit correctly into each frame of the 2D scene, even as the camera changes perspective.
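
Track itself is proprietary, but the core operation Roble describes, recovering a camera’s position and orientation each frame from features whose 3D locations are known, can be illustrated with OpenCV’s pose solver (the numbers here are invented, not Digital Domain’s code):

```python
import numpy as np
import cv2

# Surveyed 3D positions of six set features, in meters (made up).
object_pts = np.array([
    [0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.0, 1.5, 0.0],
    [2.0, 1.5, 0.5], [1.0, 0.5, 1.0], [0.5, 1.0, 2.0],
])

K = np.array([[2000.0, 0.0, 960.0],   # hypothetical intrinsics
              [0.0, 2000.0, 540.0],
              [0.0, 0.0, 1.0]])

# Synthesize a "filmed" frame from a known camera pose, then recover
# that pose, which is the per-frame solve a tracker performs on plates.
rvec_true = np.array([[0.1], [-0.2], [0.05]])
tvec_true = np.array([[-1.0], [0.3], [8.0]])
image_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true, K, None)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
print(ok, tvec.ravel())  # True, approximately (-1.0, 0.3, 8.0)
```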

“I started writing Track in 1993, 17 years ago,” Roble says. “Every once in a while I come back to it, and for TRON, I realized, well everybody realized, it needed significant upgrades. The software has 175,000 lines of code. I probably modified about 20 percent of it, and I added things that we needed.” Roble primarily updated the user interface and added math to manage more than one camera and do calculations with enough precision for stereo. “I wrote my tracking software with a main camera in mind,” Roble says. “With TRON and other movies nowadays, people have learned that witness cameras and other cameras on set can really help. So on TRON, a lot of the shots had the two main left- and right-eye cameras, plus multiple witness cameras. I had to add a heck of a lot more data and make it easy for the artists.”

The artists begin by lining up one frame and attaching markers to various features in the filmed footage to help the software calculate the camera path. “It’s all about managing errors,” Roble says. “The pipeline was to track the main camera, choosing one eye [camera] as the main eye. Then, if everything was perfect, you would only have to know how far away the other eye was. But nothing is perfect. There might be an error in the left eye that you notice only because the right eye is off.”

The mechanical rig might have vibrated one camera more than the other, or one camera might have pointed up a little more than the other, and that would mean the tracked points would be in different positions from one frame to the next. “In the CG world, you assume perfect cameras and perfect lenses, but in the real world, the lenses aren’t even exact,” Roble says. Moreover, simply finding points to track wasn’t always easy. “Lots of sets have brick, corners, and other details the computer-vision programs can hang onto,” Roble says. “We had smooth sets. All we had were lines separating one smooth thing from another. Even the corners were nicely rounded and smooth.”

“We used pieces of dust,” MacKenzie adds. “We used what we could. We also had a lot of reference photography that we used to reconstruct the geometry. We brought those photos into Track and used them as an additional camera.”

The artists could also use data from the witness cameras and survey data—measurements of the set—which gave the tracking team the distance between objects in an image. “We have a strong survey team,” MacKenzie says. “We can feed their measurements into the software, and Track will solve using those parameters.”

In fact, the artists could add new points at any time, and those points could affect the track. “Tracking becomes an organic pipeline of adding and building enough points,” Roble says. “The solvers take in identified points on the image and any other information we have in the scene, and as you add new data, the original tracks get better. It’s an iterative process.”

Once the artists tracked the camera, they ran test footage and looked for any problems, checking to see if the track looked good in both stereo “eyes,” that is, from the left- and right-eye camera views. This was especially important for the tracked head: to replace the head of John Reardon, Jeff Bridges’ body double, with Clu’s CG head, the team needed to know the exact position of the double’s head in every frame.

“When these guys were tracking Reardon’s head, they’d figure out where the camera was, and then figure out where the head was in the scene,” Roble says. “But, tracking deformable objects like a head is particularly difficult. If it moves, or if the eyes move, or if the hair moves, the track is squishy. There’s no solid place on the head that you can rely on. The points shift all over the place. There’s a point at the top of the nose where it meets the eyes, but even that part moves when someone frowns. The back of the head attaches to the neck, and the neck changes shape all the time.”

To have Clu’s digital hair fall properly from his CG head onto the image of Reardon’s collar and jacket, the trackers also needed to precisely determine where the body double’s costume was in 3D space. “We had to be careful that there was no high-frequency jitter that would cause the hair simulation to freak out,” MacKenzie says.

Even though a track might look good visually, any vibration in the collar would affect the simulation. “The hair is like tiny springs,” Roble says. “If there is any vibration, the hairs would be like a needle on a record player bouncing like crazy; they’d do a little dance. The precision had to be rock solid. Everything is sub-pixel accurate.”

If the animators saw that the CG model didn’t match the body double’s head—Clu’s chin might not dip as much as Reardon’s, for example—they’d send the track back for a better fit, even if the track was correct. “You’re putting someone’s head on another person’s body,” MacKenzie says. “Nuance plays into it. Sometimes you need to be less objective and more artistic. Otherwise, they just don’t look right together.”

Before Roble updated Track, he spent time researching computer-vision literature and technical papers, looking for ways to compute the tracking calculations. He didn’t find much help.

“There isn’t much research in deformable head tracking, deformable object tracking,” he says. “A lot of the computer-vision literature deals with tracking rigid objects or reconstructing scenes. I found other papers on tracking deforming things, like cloth. But most of that research was in constrained circumstances. We had moving cameras, moving sets, moving everything, and we had to track deformable objects. We could use some help from the academic community [on this problem].” —Barbara Robertson

The narrative sequences filmed in the End of Line Club and inside Kevin Flynn’s house, on the other hand, take place on enormous sets built with glass floors. “I wanted to capture as much in camera as possible,” Kosinski says, “the characters’ reflections in the floor, the sets, each other’s eyes.”

Each actor on set wore a costume patterned with TRON’s iconic glow lines, foam rubber suits laced with encapsulated wiring that connected to electroluminescent lamps on the surface. The lamps, made from a flexible polymer film, emitted yellow glow lines on Clu’s suit, orange on the other bad guys, and white and blue for the heroes. Extras on set controlled their own suits, but principal actors had their glow lines operated by remote control.

Costume designers conceived the Lightsuits using Autodesk’s Softimage and Pixologic’s ZBrush, and scaled the designs over digitized bodies of the actors. Quantum Creation Fx sent the data to computer numerical-controlled (CNC) manufacturing machines.

“It was fun working out what sort of Program I was,” says actor Michael Sheen, who plays the flamboyant club owner Castor, provider of any and all entertainment and diversions. “But, it took skill to make it look like I could breathe and move easily in the suit.”

On each Lightsuit’s back is an all-important disc. The disc contains the character’s identity, and when removed, becomes a dangerous, Frisbee-like weapon. Glow lines outline the discs, as they do for the suits, vehicles, and other objects and structures in TRON. The prop discs housed batteries and inverters, and glowed thanks to 138 LEDs.

These glow lines would become a major part of the visual effects effort: Artists fixed and enhanced the light on the real suits, imitated the light on the digital doubles, added a glowing jagged edge to the discs, and created the light for the vehicles and the CG environments (see “Glow in the Dark,” pg. 16). “The key to this movie is how light integrates into every object,” says Darren Gilford, production designer. And that includes the CG characters.

Clu 2.0
Two supervisors at Digital Domain, Jonathan Litt and Greg Teegarden, led lighting teams that altogether included approximately 40 artists. “We basically divided the work into the Jeff Bridges facial replacement shots and everything else, and it ended up pretty much equal,” Litt says. “There are around 170 head shots, so that’s a small part of the total work, but the amount of mental effort is high.”

To create Clu 2.0—that is, the young Jeff Bridges—Digital Domain’s artists fitted an animated digital head onto a stunt actor’s body using many of the same methods they had developed to put an aged version of Brad Pitt’s face onto a child-sized body (see “What’s Old Is New Again,” January 2009), and many of the artists who worked on that film moved on to TRON: Legacy. But, there were differences, and the differences mattered.

“Clu is the most difficult thing we’ve ever done,” Barba says. “We had just come off Benjamin Button, which was the most difficult thing then. This, however, proved to be much harder. No one knew what Brad Pitt would look like at age 80, but everyone knows what Jeff Bridges looked like when he was younger. And little Benjamin was a bit passive; he moved through the world, but he wasn’t a driving force. Our character Clu is the opposite. He’s a major character who gives major speeches. He’s a bad guy, the one Jeff Bridges plays against.”

It may be the first time that an actor has performed in scenes with a younger version of himself. And, in one flashback scene, Bridges does double digital duty when a young Kevin Flynn shares a close-up scene with his clone.

Capturing the Performance
“We started with the same methods we had used and learned from on Benjamin Button,” Preeg says. “The main difference was that with Brad [Pitt], we did everything post shoot, but Jeff wanted to be Clu in the moment.”

As they had for Benjamin Button, the team began by capturing Bridges performing a set of FACS expressions with the Mova system to help modelers identify which facial muscles the actor moves and how much they move as he smiles, frowns, and creates other facial expressions, and as he enunciates particular phonemes. Modelers used Mova’s captured data to sculpt shapes they applied to a digital model of the younger Bridges/Clu. To build that model, they referenced a scanned Rick Baker maquette of a young Jeff Bridges’ head, “scanned into the computer like in the [first] film,” Kosinski says with a smile. “But we did it for real.”
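
The article doesn’t detail the rig internals, but FACS-style blendshape animation of this kind generally comes down to adding weighted per-vertex offsets, one sculpted shape per expression, onto a neutral head. A toy sketch of that mechanic:

```python
import numpy as np

# Tiny stand-in for a head mesh: 4 vertices, shape (V, 3).
neutral = np.zeros((4, 3))

# Each FACS shape is stored as a per-vertex offset from neutral
# (values invented; real shapes come from the Mova capture).
deltas = {
    "smile_left": np.array([[0.0, 0.0, 0.0], [0.3, 0.2, 0.0],
                            [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]),
    "brow_raise": np.array([[0.0, 0.4, 0.0], [0.0, 0.0, 0.0],
                            [0.0, 0.4, 0.0], [0.0, 0.0, 0.0]]),
}

def apply_blendshapes(neutral, deltas, weights):
    """Deform the neutral mesh by a weighted sum of shape offsets."""
    out = neutral.copy()
    for name, w in weights.items():
        out += w * deltas[name]
    return out

posed = apply_blendshapes(neutral, deltas,
                          {"smile_left": 1.0, "brow_raise": 0.5})
```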



Jeff Bridges as he appears in the role of Kevin Flynn. At right, Clu 2.0, a digital clone
of Bridges at age 35 created by Digital Domain and animated using dialog and facial expressions
captured from Bridges.


The studio also captured lighting reference from the maquette using a system from LightStage, but afterward, the filmmakers changed Clu’s age. “The original maquette was no longer valid for what we wanted to do, so we had to scrap it and rebuild the model at a younger age,” Barba says. Animators worked with a low-resolution version of the final model and blendshapes, while lighters received a higher-resolution version.

For Button, after the director had filmed the shots, Pitt had watched the footage with the double playing his part, and then performed the dialog in a controlled environment. “He was locked into position with four high-definition cameras shooting him,” Preeg says. That made it easier for Digital Domain to accurately motion-capture and track his performance. Bridges, however, performed his scenes on set with the other actors.

At first, the crew thought Bridges might do the entire performance and Digital Domain would apply motion data from his performance to a CG character. “We were ready to capture the body and face simultaneously,” Barba says. “We did some tests, but no offense to Jeff, he’s not 35 now. You could tell in the motion capture that he was older.”

So instead, the crew captured Bridges’ facial performance as he acted out the scenes with the other actors, while John Reardon, his body double, watched. Then, they shot the scene again with Reardon—in Clu’s Lightsuit costume—acting the part. Because the tracking team would need to locate Reardon’s head in the filmed footage so they could replace it with Clu’s CG head, Reardon wore a gray hoodie with markers and had dots on his face.

“I had told Joe [Kosinski] that the key would be in getting a body double who could mimic Jeff [Bridges] as closely as possible,” Barba says. “John Reardon was perfect. He nailed Jeff’s nuances, timing, movement, and the eye lines.”

To capture Bridges’ on-set performance, Digital Domain put 52 marker dots on his face and a helmet on his head. The helmet had four cameras mounted on it, two on each side of his face near the jaw line—lip sync would be especially important for this character. “The cameras were arranged so that two of them would see every point and we could triangulate and get true positions in space—locators in space for every marker,” Barba explains.
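
With every marker covered by at least two calibrated cameras, each dot’s 3D position follows by standard two-view triangulation. A rough illustration with OpenCV, using an invented camera setup:

```python
import numpy as np
import cv2

K = np.array([[800.0, 0.0, 480.0],
              [0.0, 800.0, 270.0],
              [0.0, 0.0, 1.0]])

# Two calibrated helmet cameras, 10 cm apart (hypothetical rig).
Rt1 = np.hstack([np.eye(3), np.zeros((3, 1))])
Rt2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
P1, P2 = K @ Rt1, K @ Rt2

# The same facial marker as seen by each camera, in pixels (2 x N).
pts1 = np.array([[500.0], [300.0]])
pts2 = np.array([[280.0], [300.0]])

X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous (4 x N)
X = (X_h[:3] / X_h[3]).ravel()
print(X)  # ~(0.009, 0.014, 0.364): about 36 cm in front of the rig
```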

In addition, two witness cameras and two sets of primary stereo cameras (four cameras) shot the performance—for a total of 10 cameras. “We would shoot with Jeff [Bridges], and then we would shoot with Reardon, which meant we could use data from 20 cameras for any shot,” Barba says.

A Data-Driven Double
The team ran the triangulated data from the four helmet cameras into a converter that produced FACS-based individual muscle firings—curves the animators could use in Maya to tweak the performance. “We spent a decent amount of time on that and got pretty good results,” Barba says.

On Benjamin Button, they had relied on Image Metrics to provide that data. This time, the team brought the process in-house. “We wanted to give the animators direct control of the data,” Preeg says. “The animators determined whether what was coming out of the motion-capture solver was in line with Jeff’s performance. They could put weighting on shapes, solve only for the eyebrows, and so forth. They could interact quickly with the data and make decisions.”

To make sure the “solve”—that is, the conversion of data from dots to animation curves—was accurate, the crew decided not to interpolate the motion for a younger Bridges. “We wanted to know precisely how far he pulls the corner of his mouth to the left in his current age because that’s where the data came from,” Preeg says. And in fact, when they compared the FACS data captured from dots on Bridges’ face during his performance to the data captured from his skin with the Mova system, they discovered a good correlation.
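
Digital Domain’s converter is in-house, but one common way to pose such a solve is as a least-squares fit: find the shape weights whose combined marker offsets best match the triangulated dots, keeping the weights non-negative because muscles do not fire negatively. A sketch under that assumption:

```python
import numpy as np
from scipy.optimize import nnls

# Columns: how each FACS shape moves the tracked markers
# (M marker coordinates x N shapes), measured from the Mova session.
# Values here are invented for illustration.
A = np.array([[0.30, 0.00],
              [0.20, 0.00],
              [0.00, 0.40],
              [0.05, 0.35]])

# This frame's observed marker offsets from the neutral pose.
b = np.array([0.15, 0.10, 0.20, 0.20])

weights, residual = nnls(A, b)  # per-shape "muscle firing" values
print(weights)  # ~[0.5, 0.5]: animation-curve samples for this frame
```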

Hair Today
To give Clu a hairstyle appropriate for a 35-year-old Jeff Bridges, Digital Domain artists relied on an in-house system that works within Autodesk’s Maya. To move the hair, they used a proprietary dynamics solver written by Mattias Bergbom, digital hair supervisor, and Robert Luo, hair technical director. As with most hair systems, artists place guide hairs—typically between 500 and 700—to shape the style. “We had different styles to fit different sequences,” Bergbom says.

The guide hairs are curves with control points, usually between 15 and 30, that become 30,000 to 200,000 hairs when interpolated. “Once you interpolate the hair, it tends to look like cotton candy, so we added clumping to break it up and get enough negative space to convey the shape, surface quality, and depth,” Bergbom says. “It’s like sculpting. If you look at artists sculpting hair out of clay, what’s important is where they carve the gaps and spaces. We used negative space to show detail at a distance. For close-ups, we had a different groom.” Texture maps and parameters controlled clumping, color, stiffness, how prone the hair is to clump and break apart, and so forth.
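
The groom tools are proprietary, but the clumping Bergbom describes can be sketched simply: interpolate each render hair from nearby guides, then pull it toward a clump center, more strongly toward the tip, to carve out that negative space. A minimal version with hypothetical data shapes:

```python
import numpy as np

def interpolate_hair(guides, guide_weights, clump_center, clump_amount):
    """Build one render hair from guide curves, then clump it.

    guides: (G, P, 3) nearby guide curves, P control points each.
    guide_weights: (G,) blend weights that sum to 1.
    clump_center: (P, 3) the guide curve this hair clumps toward.
    clump_amount: 0 = loose, cotton-candy look; 1 = fully clumped.
    """
    hair = np.einsum("g,gpc->pc", guide_weights, guides)
    # Clump harder toward the tip so the roots stay on the scalp,
    # carving the negative space that conveys shape at a distance.
    t = np.linspace(0.0, 1.0, hair.shape[0])[:, None]
    return hair + clump_amount * t * (clump_center - hair)

# Interpolate one hair from three guides, half-clumped:
guides = np.random.rand(3, 20, 3)
hair = interpolate_hair(guides, np.array([0.5, 0.3, 0.2]),
                        guides[0], clump_amount=0.5)
```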

“The dynamics were the big challenge,” Bergbom says. “For Button, we used Maya’s nCloth or Maya hair, but we had such extreme variation in the shots for this show, we needed to have more flexibility and control. So, we based our solver on literature coming out of Stanford University on mass and springs, like a soft-body solver.”
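
Bergbom doesn’t publish the solver, but a bare-bones mass-and-spring strand in the spirit he describes, with points connected by springs, semi-implicit Euler integration, and the root pinned to the scalp, might look like this (constants invented):

```python
import numpy as np

def step_strand(x, v, rest_len, dt=1.0 / 240.0, k=400.0, damp=4.0,
                gravity=(0.0, -9.8, 0.0), wind=(0.0, 0.0, 0.0)):
    """Advance one pinned hair strand by one mass-spring substep.

    x, v: (P, 3) point positions and velocities; x[0] is the root,
    pinned to the scalp. rest_len is each segment's rest length.
    """
    root = x[0].copy()
    f = np.asarray(gravity) + np.asarray(wind) - damp * v  # (P, 3)
    seg = x[1:] - x[:-1]
    length = np.linalg.norm(seg, axis=1, keepdims=True)
    spring = k * (length - rest_len) * seg / np.maximum(length, 1e-9)
    f[:-1] += spring        # a stretched segment pulls its two
    f[1:] -= spring         # endpoints back toward each other
    v = v + dt * f          # semi-implicit Euler, unit mass per point
    x = x + dt * v
    x[0], v[0] = root, 0.0  # keep the root glued to the scalp
    return x, v
```

Turbulence and drag enter through the wind and damp terms, the dials the artists balanced for shots like the skydive.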

In one shot, for example, Clu is skydiving, and the artists needed to send wind through his hair at high speed without having it look too extreme. “Once we had the solver and whole dynamics framework, it was easy to find the balance in wind forces,” Bergbom says. “We might add more turbulence and less wind drag to give the hair the right motion.” The artists didn’t want to mimic skydiving reality; they wanted to mimic an actor working on a soundstage with a big fan.

“People are so used to seeing actors with hair blown by a fan that when you show them the real thing, it looks too extreme,” Bergbom says. –Barbara Robertson

The animators, however, saw Bridges at his younger age—that is, as Clu 2.0. “They’re looking at a 35-year-old Jeff, but behind the scenes, the rig is Jeff at his current age,” Preeg says, referring to a system rigged to give animators control over the shapes that form facial expressions. “When we interpolate to a younger skin, we look at that movement to see if it looks like skin moving on a younger version of Jeff. If there is inaccuracy, we can fix the skin movement as part of an artistic process, but we know the underlying motion is correct, assuming we did the conversion well.”

As the animators worked, they could look at video of Bridges’ performance in a single view or with four cameras at a time to compare the captured movement they had tweaked on the digital head to that performance. They also used the performance from the body double, Reardon. “If the eye line depended on the dialog, we used Jeff’s eye line,” Preeg says. “If there was a lot of head and body movement, we would get the eye line from the body double because it related more to the head motion.”

Two of the biggest challenges remained: fitting the CG head onto the body double’s body, and making the CG Clu look, not just move, like a young Jeff Bridges.

Find the Head
Because TRON: Legacy is a stereo 3D film, the digital head had to sit on the stunt actor’s body with sub-pixel precision, not just in the camera’s 2D view, but in depth. Otherwise, stereo would reveal the magic trick.

Before anyone could place the CG head on Reardon’s shoulders, a tracking team needed to find Reardon’s head in three-dimensional space in every frame of the footage from both cameras in the stereo rig, the camera shooting the left-eye view, and the camera shooting the right-eye view. Tracking is never easy, even when all a studio needs to do is, say, place a CG car on a highway shot with a locked-off camera. Tracking moving cameras and moving objects in stereo is much more difficult.

“This was the toughest tracking show I’ve been on,” says Ross MacKenzie, 3D integration supervisor. “It was hard to begin with, and stereo amplified any error. We had head replacement, disc tracking, multiple objects to match in both cameras. Any 2D cheating that we were used to doing on a non-stereo show became apparent right away. So, we got the Button band back together. We needed the experienced trackers, the best we could find.”



At top, lighters used maps with tones ranging from white to gray to modulate the sharpness of deep
reflections in surfaces. At bottom, body double John Reardon, wearing Clu’s Lightsuit, matched Jeff
Bridges’ performance as Clu leads his pack of Programs. Digital Domain then replaced Reardon’s head
with a CG head to create the clone of young Bridges.


The tracking team sometimes began working on a scene by using commercial software programs that they have in the studio: 2d3’s Boujou, The Pixel Farm’s PFTrack, Andersson Technologies’ SynthEyes, among others. “They were a good starting point,” MacKenzie says. “Then, [our] Track came into play. A lot of the other software programs force you into a solve, and if it’s wrong, you’re stuck with it. And, there is no way to lock a parameter.” For shots with Bridges’ body double, the artists tracked the cameras in each frame; that is, determined the viewpoint and movement of the camera, and then they tracked Reardon’s head and other objects, such as discs, if need be.

Doug Roble, software engineer at Digital Domain, received a Technical Achievement Academy Award in 1998 for Track, the studio’s in-house tracking software. For TRON: Legacy, Roble updated and modified the software to handle data coming from multiple stereo and witness cameras (see “Track Stars,” pg. 8). Even so, tracking was so difficult and important that many of the tracking artists stayed on the show until the end. “There was always something that needed to be tweaked,” MacKenzie says.

Having the digital head mouth Bridges’ words correctly, and placing the digital head on the double’s shoulders precisely, took the team halfway toward photorealism. The rest was up to the artists who refined Clu’s look.

Looking Good
“People know what Jeff Bridges looked like at 35,” Barba says. “But, if an image of Clu comes on screen and it’s supposed to be Jeff at 35, your friend might remember him from the film Against All Odds, and you might remember him from Starman. So you’re trying to match people’s memories of Jeff, not his reality. Plus, he’s a movie star. He looks different in different roles.”

Moreover, creating young digital humans is more difficult than creating believable older characters. “All the wrinkles, age spots, and so forth that we had on Benjamin Button helped us visually,” Litt says. “Younger Jeff has smoother skin. We had to find that Jeff-ness, his essence, with less help physically. Every lighter had a dozen or more photos of Jeff. We were always thinking about what we could change to capture that Jeff-ness.”

A technique the crew had tested and used sparingly on Benjamin Button helped: high-resolution displacement maps tied to the animation. Modelers working in Autodesk’s Mudbox “painted” Clu’s face in high-resolution 3D and converted fine details, such as pores, to displacement maps. The displacement maps, based on shapes captured from Bridges and massaged for the younger clone, added other details beyond those in the model used by animators.

“There wasn’t enough resolution in the animation model to capture subtle wrinkles, nasal labial folds, crow’s feet, things like that,” Litt says. “We had between 50 and 100 of these high-resolution displacement maps to use in lighting, and we created a pipeline to have them blend properly. It’s all dynamic; it happens on the fly in every frame. You don’t really see it in Maya until you render. The system builds a final map in Nuke every time you render.”
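
The blending pipeline itself is proprietary and runs inside Nuke at render time; the core operation, though, is presumably a per-frame weighted blend of the high-resolution maps driven by the active shape weights. A toy version:

```python
import numpy as np

def blend_displacement(neutral, shape_maps, weights):
    """Blend per-shape displacement maps for the current frame.

    neutral: (H, W) fine-detail displacement of the relaxed face.
    shape_maps: dict of (H, W) maps captured per FACS shape.
    weights: dict of shape name -> animation weight this frame.
    """
    out = neutral.copy()
    for name, w in weights.items():
        # Layer in each active shape's extra wrinkle detail.
        out += w * (shape_maps[name] - neutral)
    return out

# E.g., at a frame where the brows are half-raised and a smile peaks:
H, W = 64, 64
neutral = np.zeros((H, W))
maps = {"brow_raise": np.random.rand(H, W), "smile": np.random.rand(H, W)}
frame_map = blend_displacement(neutral, maps,
                               {"brow_raise": 0.5, "smile": 1.0})
```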

For rendering the CG head, the crew used Mental Images’ Mental Ray to match the futuristic lighting used on sets and the bluescreen stage. Usually Clu appears with other characters, sometimes CG, sometimes actors, but all wearing Lightsuits. “There was lighting under the glass floors and glow strips on the characters’ suits,” Litt says. “We had to add the head to a body lit with those lights. And, there was a lot of blue in the light, but when skin is too blue, it’s scary-looking. Or, in a couple sequences, he’s in entirely red light, which is also scary. And some environments weren’t available until later in post. So, it was difficult.”

In fact, in the disc game, the Lightcycle sequences, and in most of the final sequences, light defines the environments and the action within.

Light the Way
Objects in the Grid are made of metal and glass, all of which reflect light, often primarily from the glow lines on vehicles, structures, and the characters’ suits. Thus, one of the most interesting lighting challenges was modulating the sharpness of reflections across surfaces. For rendering those reflections, the team developed new techniques within the V-Ray raytracing software.

“On buildings, we used panel maps to change up the reflections from sharp to broad, to create a layer of depth,” Teegarden says.



The maps had tone values that could range from white, which told the renderer the surface was a mirror, to gray, which told the renderer to increase the cone angle of rays bouncing light onto the surface until the reflection became diffuse. Teegarden used these maps, for example, during a sequence in which Quorra (Olivia Wilde), wearing a Lightsuit, drives a vehicle outlined with glow lines across a mishmash landscape of reflective geometry at various angles. The map changed the reflections from sharp to diffuse depending on the angle.
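
V-Ray’s native controls aside, the mapping Teegarden describes amounts to driving the reflection cone angle from the panel map’s tone. A sketch of that remap, with invented ranges:

```python
def reflection_cone_angle(tone, max_angle_deg=60.0):
    """Map a panel-map tone to a reflection cone angle.

    tone: 1.0 = white = perfect mirror (zero cone angle); lower
    grays widen the cone until the reflection reads as diffuse.
    """
    tone = min(max(tone, 0.0), 1.0)
    return (1.0 - tone) * max_angle_deg

print(reflection_cone_angle(1.0))  # 0.0  -> mirror panel
print(reflection_cone_angle(0.5))  # 30.0 -> broad, blurry reflection
```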

“To do this, a raytracer needs good control of how it samples the surfaces,” Litt says. “We could hit performance bottlenecks, but it was doable. It’s pleasing to see something with the subtlety of blurry reflections.”

The lighting supervisors worked closely with compositors to build the lighting templates and then, for consistency and to make compositing faster, tried to output images with as much of the final look accomplished in rendering as possible. This was especially important for the Lightcycle sequence. During this sequence, Sam and four or five Programs, all wearing the Lightsuits with glowing white lines, race motorcycle-like bikes, also outlined with white light, against Clu, in a Lightsuit glowing yellow, and his riders, wearing suits and riding bikes outlined in orange light. Trailing streams of white or orange light stretch wide and far behind all the bikes.

“The Lightcycle sequence was the hardest,” Teegarden says. “We had three layers of glass and characters riding these bikes up and down the glass. And then the Game Grid has squares stenciled onto the surface with glow lines. As a result, we also had to figure out how much light they’d cast. If we treated the surfaces like real glass, they would be two mirrors facing each other, which got confusing quickly and skyrocketed the render time. So, we cheated. The bike reflects on the surface it is traveling on, but that surface doesn’t reflect on the one above. It probably took a good 60 to 90 days to nail down, but it’s my favorite. When people think of TRON, the thing that pops into their head is guys in suits with glow lines on light bikes [Lightcycles] on a Game Grid.”

Compositing supervisor Paul Lambert notes that this sequence would not have been possible to create using environmental reflection maps. For this show, the reflections needed to be physically correct. When an object is close to the surface, the reflections on that surface needed to be sharp, but the part of the object farther away needed to fall off into nothing, especially when viewed in stereoscopic 3D.

“We shot TRON with two cameras and created the CG with two cameras, and that gives it a different look,” Lambert says. “It’s a shiny world, stylistic and beautiful, and you see reflections everywhere. If you tried to create that using projected textures, you wouldn’t get the depth in the reflections and refractions. You just couldn’t do it; the reflections would be on the surface. If you look closely at the light-bike sequence, you can see that our reflections have their own depth. The glass floors are reflective and have a surface texture. You can see when a reflection is far away.”

Although the Lightcycles are basic black outlined in light, they do have wear and tear. “The vehicles and environments aren’t pristine,” says Nikos Kalaitzidis, sequence supervisor for the Lightcycle and Lightjet sequences, and look development supervisor for tent-pole shots throughout the film. “They have paint scratches, chipped paint. It’s a realistic world even though it’s inside a computer. We had a lot of texture painting for each asset as part of the look development phase.”

To put riders on the Lightcycles, the team motion-captured stunt actors riding a rig, applied that data to CG doubles, and then projected photographs of faces onto the bodies. For the disc game, though, the director shot the actors on bluescreen.

“Our biggest challenge was developing the look of this world, getting it right, and getting the approvals,” Kalaitzidis says. “We were responsible for the look development on each of our sequences.”

The disc game takes place in a huge stadium with glass courts floating in the middle. Thousands of cheering and booing spectators ring the stadium—digital people/Programs all controlled by Massive Software’s crowd-simulation software. The studio also used Massive to march Clu’s army of digital prisoners later in the film.

After the opponents battled, the glass courts would reconfigure themselves based on the outcome, so the crew built digital models they could rotate and view from any angle.

“We had rotomation done of all the actors so we could generate reflections,” says Kalaitzidis. “The reflections had to be accurate because [the characters] were running around on a glass surface.”

It’s during this game, which takes place soon after Sam enters the TRON world, that he and the audience see de-rezzing for the first time—initially when part of a glass floor shatters into tiny cubes, and then when Sam fires his disc at his opponent and a Program explodes into similar tiny glassy cubes (see “De-rez,” pg. 19).

Glow in the Dark
The suits, discs, Lightcycles, Lightjets, all the vehicles, and nearly all the objects in the TRON world had glow lines, whether practical or digital. “I think figuring out how to deal with the glow lines was the most challenging part of shading,” says Jonathan Litt, lighting supervisor. “There wasn’t an out-of-the-box way to generate them.”

One question was whether the glow should emit light. “Initially, we turned [the digital glow lines] into lights or achieved the same effect with global illumination, but we decided not to do that unless they needed to shine on a head,” says Greg Teegarden, lighting supervisor. “Having them cast light made only about a two or three percent difference. It didn’t buy us anything.”

Another question was color. “We started with about 12 colors,” Litt says. “But it was confusing, and the glow line was so intense we couldn’t tell the difference.” Instead, they settled on four colors: yellow, orange, blue, and white.



For the original 1982 TRON, the crew had filmed people wearing suits with lines in black-and-white on black sets, printed the images on high-contrast film, and rotoscoped the images to create the glow lines. The result was not consistent. Thus, to pay homage to the original film, compositors added a glow pass that gave the Lightsuits worn by actors a subtle, pulsing flicker. “The glow lines on the costumes were impressive and actually emitted light,” Litt says. “If one character walked in front of another, the glow lines would light up the face. They couldn’t have shot the movie without that. But, we tweaked them for color and other things.”

Also, the suits would sometimes switch off or flicker on their own. So, rather than shooting the scene again, compositors often would fix the problem in post, sometimes, ironically, by rotoscoping the glow lines. “We had to come up with a methodology for pulling keys or roto’ing the glow lines when Joe wanted them brighter,” Lambert says. “And in some shots, where the suits had ripples in them or sections turned off, we had to roto in those lines. It was painful. We outsourced a lot of that work.”

To give digital characters and objects their iconic TRON glow, lighting artists started by sampling values from the live-action plates. “Our glow lines mimic the behavior of a neon tube,” Teegarden says. “The illumination emits from the core and falls off at the outside edges.”
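
That neon profile is easy to state: full intensity in a core band, then a smooth falloff toward the edges. A sketch with invented widths:

```python
import numpy as np

def glow_profile(d, core=0.2, falloff=0.6):
    """Intensity of a glow line at distance d from its center line.

    Full brightness inside the core, then a smooth exponential
    falloff toward the outer edge, like a neon tube.
    """
    d = np.abs(np.asarray(d, dtype=float))
    tail = np.exp(-((d - core) / falloff) ** 2)
    return np.where(d <= core, 1.0, tail)

# Sampled across a strip: bright core, soft edges.
print(glow_profile([0.0, 0.2, 0.5, 1.0]))  # [1.0, 1.0, ~0.78, ~0.17]
```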

Painted maps specified the color and amount of falloff: “quite a number of maps to control the glow lines,” adds Litt. “We used shaders provided by the renderer and combined everything into a network. Two-thirds of the network dealt with glow lines.”
–Barbara Robertson



At top, the Lightcycle and the riders are digital; motion captured from stunt actors on rigs helped the
animators perform the digital doubles. At bottom, a digital Lightjet flies through digital clouds in a
shot during an all-CG sequence.


De-rez
All the material in the TRON world—Users, Programs, buildings, discs, Lightcycles, the Game Grid…everything—is made of 3D voxels. When Sam swings his wand to form a Lightcycle or Lightjet, the vehicles “rez” on—that is, change from wires and cubes into their 3D forms. When one thing smashes into another—a disc flung during a fight into a Program, one Lightjet into another—the object explodes into tiny cubes. “They’re like glassy ice cubes,” says sequence supervisor Nikos Kalaitzidis. “The material of the object is on an outer shell, and the inside is a glassy surface.”
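
The production tool is a Houdini particle and rigid-body setup; stripped to a sketch, the idea is to seed cubes over the object’s shell and hand each one an initial rigid-body state flying away from the impact (all values invented):

```python
import numpy as np

rng = np.random.default_rng(7)

def derez(shell_points, impact, speed=2.0):
    """Turn an object's shell into flying 'glassy ice cubes'.

    shell_points: (N, 3) cube seed positions scattered over the surface.
    impact: (3,) where the disc hit; cubes fly away from it.
    Returns per-cube positions, velocities, and spins for the RBD step.
    """
    away = shell_points - impact
    away /= np.linalg.norm(away, axis=1, keepdims=True) + 1e-9
    vel = speed * away + rng.normal(0.0, 0.3, shell_points.shape)
    spin = rng.normal(0.0, 5.0, shell_points.shape)  # rad/s per axis
    return shell_points.copy(), vel, spin
```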

Effects lead Byron Gaswick’s team developed a procedural particle-based system within Side Effects’ Houdini that uses rigid-body dynamics to manage the colliding cubes when something de-rezzes. But, causing something to rez on or off was a combined effort between animation, effects, lighting, and compositing. “Animators would have a certain amount of geometry to provide the timing for when the effect starts and ends,” says Kalaitzidis. “Once approved, effects artists did all their crazy things. Once the cubes come out of effects, they go to lighting for rendering in [Chaos Group’s] V-Ray. And that output goes to compositing for pretty glows, color, and flares. And then you see piles of cubes acting like they would in the real world.” –Barbara Robertson

Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at BarbaraRR@comcast.net.