Lemony Aid
Volume 28, Issue 1 (January 2005)

The film Lemony Snicket’s A Series of Unfortunate Events may be about a series of mishaps, but happily the creation of its 505 visual effects shots at Industrial Light & Magic occurred under a more fortunate set of circumstances. Directed by Brad Silberling, the Paramount Pictures release stars one of the most compelling digital doubles yet created: a toddler named Sunny, who was so ready for her close-up that the camera came within inches of her face.

Based on the popular series of children’s books of the same name, the fantastic tale about the three rich and unusual Baudelaire orphans stars Jim Carrey as the evil Count Olaf, who is determined to get the children’s money; Emily Browning and Liam Aiken as the orphans Violet and Klaus; and twin toddlers Kara and Shelby Hoffman as Sunny, a baby with a mighty bite.





In fact, in one scene, Sunny grabs onto the edge of a wooden table with her teeth and hangs off the ground, swinging her feet; in another, she catches a spindle of yarn in her mouth. Such feats are beyond the capabilities of the tiny human stars, so in these shots, and in others impossible for the real babies to accomplish, a digital Sunny created at ILM under Stefan Fangmeier’s supervision toddled in front of the camera. Fangmeier, who boasts three visual effects Oscar nominations, for Twister, The Perfect Storm, and Master and Commander, describes this film as “Fellini for the family.”

“The hardest effect to do was Sunny,” says Fangmeier. “She had to be cute, she had to be endearing, and she had to be on model.” That is, she had to match the real baby exactly.

Fortunately, one of the real babies turned out to be a better actress than expected, which reduced the load on the digital crew from an anticipated 50 shots of the wee double to fewer than 20. “We had just the key moments,” says Fangmeier. “And that allowed us to get it right. We had time to finesse.”

To create the baby’s body, the modelers started working before the twins were cast by using director Silberling’s one-and-a-half-year-old daughter as reference. And, as modelers often do for digital doubles, they began with a scan. But working with a baby wasn’t as easy as working with an adult actor.

“The baby wouldn’t sit still,” says lead creature modeler Martin Murphy, “so the R&D guys here developed a way to do a scan in seconds.” He expects that the split-second accuracy of the new technology could also be helpful in modeling animals.

Thus, starting with scan data made possible by the new technology and with Silberling’s home movies as reference, the crew created a viable baby body using B-splines in Alias’s Maya. “We spent several weeks on the body, and then we found out that she would wear one long dress for the whole movie,” Murphy says, which meant the body model hadn’t needed as much detail as they had created. The dress, however, did require careful fabrication.

The crew built Sunny’s elaborate dress from individual pieces and long lengths of ribbons. “We started with flat surfaces that were wrinkled during cloth simulation,” Murphy says, “but the wrinkles had to be so small, we ended up sculpting them and using displacement maps for even finer detail.”
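The displacement step the article mentions is a standard one: push each vertex along its surface normal by a scalar amount read from a texture. The Python sketch below illustrates that general idea; the function name, data layout, and nearest-neighbor lookup are illustrative assumptions, not ILM’s cloth or rendering code.

```python
import numpy as np

def displace_vertices(positions, normals, uvs, disp_map, scale=1.0):
    """Offset each vertex along its normal by a scalar displacement
    sampled from a texture map (nearest-neighbor lookup for brevity).

    positions : (N, 3) vertex positions
    normals   : (N, 3) unit vertex normals
    uvs       : (N, 2) texture coordinates in [0, 1]
    disp_map  : (H, W) scalar displacement texture
    """
    h, w = disp_map.shape
    # Convert UVs to clamped integer texel coordinates.
    px = np.clip((uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
    py = np.clip(((1.0 - uvs[:, 1]) * (h - 1)).astype(int), 0, h - 1)
    d = disp_map[py, px] * scale  # per-vertex displacement amount
    return positions + normals * d[:, None]
```

In a RenderMan-style pipeline the displacement would typically be applied at render time, at micropolygon resolution, rather than baked into the mesh the animators handle.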
Left: By using blue for the specular highlight map, ILM’s CG supervisors discovered they could closely match the real baby’s skin. Right: The digital double for Sunny was rendered with subsurface scattering, which added translucency by bouncing light beneath the surface of the skin.




Once the twins were cast, the digital model was refined to match. “We did a photo session with the baby that took an entire day,” Murphy says. “We had to know how thick her eyelids were, how to shape the inside of her mouth, and how many wrinkles to put in her forehead. If it was off just a bit, she wasn’t believable.”

To simulate the soft curls in the Hoffman babies’ fine hair, the crew used a complex design that included several levels of guide splines, each having different parameters that controlled the geometry and simulation for her full head of hair.
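The article does not detail ILM’s hair system, but guide-spline setups of this kind generally work by interpolating the full head of render hairs from a sparse set of simulated guide curves. Below is a minimal, hypothetical Python sketch of that interpolation step; the names and the inverse-distance weighting are assumptions for illustration.

```python
import numpy as np

def interpolate_child_hair(root, guide_roots, guide_curves, k=3):
    """Build one render hair by blending the k nearest guide curves.

    root         : (3,) scalp position where the child hair grows
    guide_roots  : (G, 3) scalp positions of the guide hairs
    guide_curves : (G, P, 3) guide hairs, each with P points from root to tip
    """
    # Distance from the child root to every guide root.
    d = np.linalg.norm(guide_roots - root, axis=1)
    nearest = np.argsort(d)[:k]
    # Inverse-distance weights (epsilon avoids division by zero).
    w = 1.0 / (d[nearest] + 1e-6)
    w /= w.sum()
    # Blend the guide shapes, expressed relative to their own roots,
    # then attach the result to the child root.
    offsets = guide_curves[nearest] - guide_roots[nearest][:, None, :]
    return root + np.tensordot(w, offsets, axes=1)
```

Using several levels of guides, each with its own parameters, as the article describes, lets broad motion and fine stray curls be controlled separately.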

To form Sunny’s head and face, Murphy used ILM’s I-sculpt software to manipulate the geometry and create facial expressions. For the latter, he sculpted nearly a thousand shapes for animating her face, including special sets for scenes in which the baby talks. “Some were one-offs,” he says. “Others, like the one that was used to lower her chin down, were used all the time.”
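Sculpted shapes like these are typically combined as blend shapes: each target contributes its offset from the neutral face, scaled by an animation weight. The short sketch below shows that standard technique in Python; the shape names and data layout are assumptions, not ILM’s facial pipeline.

```python
import numpy as np

def blend_face(base, shapes, weights):
    """Standard blend-shape combination: add each sculpted shape's
    offset from the neutral face, scaled by its animation weight.

    base    : (V, 3) neutral face vertices
    shapes  : dict mapping shape name -> (V, 3) sculpted target
    weights : dict mapping shape name -> weight, usually in [0, 1]
    """
    result = base.copy()
    for name, w in weights.items():
        if w != 0.0:
            result += w * (shapes[name] - base)
    return result

# e.g. lowering the chin while starting a smile (hypothetical shape names):
# face = blend_face(base, shapes, {"chin_lower": 1.0, "smile_left": 0.3})
```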

The shapes were so important, in fact, that Murphy, who spent 13 years in the theater before switching careers to digital modeling, became a facial animator as well as a sculptor. “It was just easier,” he says. “If I needed another shape for the animation, I could sculpt it right then.” Simple shapes could be fashioned quickly, but some took entire days to mold.

For Sunny’s facial performance, Murphy used photo references of the twins. For her body, animators started with data captured from babies wearing tiny motion-capture suits.

“First, we used the director’s baby,” says Philippe Rebours, CG supervisor. “Then we used a bunch of ILM kids. But, even though we had the motion capture data, we did a lot of shape work to be dead-on.” In one scene, for example, a digital Sunny jumps up and turns to the camera only seconds after the real baby was on screen. “Her shape had to be perfect,” says Rebours.
Top: Animators placed a proxy in the matchmoved digital set. Middle left: Fine wrinkles in the simulated cloth were sculpted and created with displacement maps. Middle right: Guide hairs controlled the baby’s fine hair. Notice how lighting adds expression.




To fit the baby into such scenes as this, the crew took photographs on the soundstages using Panoscan’s high-dynamic-range MK-3, a rotating camera that scans 360-degree images in a single pass; shooting from one point on the set and then another captured the complete environment. Next, they modeled a matching digital set based on the photographs and measurements taken on location, projected textures onto the rough geometry, and matchmoved the camera. Animators then placed the double in the resulting digital set, which matched the real set precisely. Thus, they could, for example, put the baby’s mouth around a table edge.
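As a rough illustration of projecting panoramic photographs onto rough set geometry, the hypothetical sketch below maps 3D points on the set model to lookup coordinates in an equirectangular 360-degree image shot from the camera’s position. It assumes a z-up world and an equirectangular panorama; the article does not describe the actual Panoscan or ILM projection pipeline.

```python
import numpy as np

def panorama_uv(points, cam_pos):
    """Map 3D set points to lookup coordinates in an equirectangular
    (360-degree) panorama shot from cam_pos, so the photograph's colors
    can be projected onto the rough set geometry.

    points  : (N, 3) world-space positions on the set model
    cam_pos : (3,)   world-space position of the panoramic camera
    Returns (N, 2) UVs in [0, 1].
    """
    d = np.asarray(points, dtype=float) - cam_pos
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    lon = np.arctan2(d[:, 1], d[:, 0])        # angle around the vertical (z) axis
    lat = np.arcsin(np.clip(d[:, 2], -1, 1))  # elevation above the horizon
    u = (lon + np.pi) / (2.0 * np.pi)
    v = (lat + np.pi / 2.0) / np.pi
    return np.stack([u, v], axis=1)
```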

In addition to capturing objects in the set, the Panoscan photographs provided information about the huge area lights used to create diffuse lighting for the film. One key to slipping the CG baby into the scenes was having her skin look as if it were reacting to the real lights.

Christophe Hery, who won an Academy Award for technical achievement for his work on subsurface scattering equations, led the R&D effort. Hery collaborated at first with CG supervisor Gerald Gutschmidt, and then, as Gutschmidt began working on location, with Rebours. View painter Terry Molatore led a team of photo processors and texture painters. Rebours, working with Hery and Molatore, took charge of shader writing, with Jeff Hatchell helping craft the baby’s hair. For image manipulation, the artists used Adobe’s Photoshop 7 running on Apple’s Macintosh computers; for rendering, they used Pixar’s RenderMan.

For subsurface scattering, Hery started with two sets of photographs of one Hoffman twin: unaltered photographs for reference and photographs taken with polarizing filters on the camera so there were no reflections on the skin. Of the latter, Hery says, “This gave us a pure base with diffuse lighting.” These “diffuse” photos were converted to texture maps, stitched together, and put onto the digital model. They became the input for Hery’s subsurface scattering equation.

“When I want to render with the subsurface scattering equation,” Hery says, “I need to give it scattering coefficients, the scattering depth.” To calculate that scattering depth, Hery assumes that the diffuse color map made from the photographs taken with the polarizing lens is the result of uniform lighting on the character, which means the skin color it records is purely the product of subsurface scattering. With that assumption, he simplifies the subsurface scattering equation so that it handles only uniform lighting, and runs it in reverse.

“I run the equation backwards, so that instead of giving me color, it gives me the coefficients to get the color,” Hery says. When he has derived the varying scattering depth over the surface from the input images, he’s done with the images.

“Once I have the coefficients, I can run the equation forward in new lighting to calculate the color for the shots,” Hery says.
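The article does not give Hery’s equations, but the general idea (treat the cross-polarized color map as the skin’s appearance under uniform lighting, then invert the scattering model to recover its coefficients) can be illustrated with the dipole diffusion model of Jensen and colleagues. The Python sketch below inverts that model’s total diffuse reflectance per texel by bisection; it is a stand-in for, not a reproduction of, ILM’s solver.

```python
import numpy as np

def diffuse_reflectance(alpha_p, eta=1.3):
    """Total diffuse reflectance R_d of the dipole diffusion model
    (Jensen et al. 2001) as a function of the reduced albedo alpha'."""
    f_dr = -1.440 / eta**2 + 0.710 / eta + 0.668 + 0.0636 * eta
    a = (1.0 + f_dr) / (1.0 - f_dr)
    s = np.sqrt(3.0 * (1.0 - alpha_p))
    return 0.5 * alpha_p * (1.0 + np.exp(-(4.0 / 3.0) * a * s)) * np.exp(-s)

def invert_reflectance(rd_target, eta=1.3, iters=60):
    """Run the model 'backwards': bisect for the reduced albedo that
    reproduces the observed diffuse color (R_d is monotonic on [0, 1])."""
    lo, hi = np.zeros_like(rd_target), np.ones_like(rd_target)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        too_low = diffuse_reflectance(mid, eta) < rd_target
        lo = np.where(too_low, mid, lo)
        hi = np.where(too_low, hi, mid)
    return 0.5 * (lo + hi)

# Per-texel, per-channel inversion of a (stand-in) cross-polarized color map.
diffuse_map = np.clip(np.random.rand(4, 4, 3), 0.05, 0.95)
alpha_prime = invert_reflectance(diffuse_map)
assert np.allclose(diffuse_reflectance(alpha_prime), diffuse_map, atol=1e-4)
```

Once the reduced albedo, and from it the scattering coefficients, is known per texel, the same model can be evaluated forward under any new lighting, which is the step Hery describes next.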

To further make the digital Sunny convincing, Hery combined the subsurface equations with global illumination, something he had not done before. “With the old method of subsurface scattering, we had to light the characters with direct lighting because we needed the Z buffer,” Hery says. “Now, we can use lighting that is physically modeled from the environment.”

First, Sunny’s digital model was lit using ILM’s lighting tools, but rather than rendering a finished image, the crew rendered her face as a point cloud. Each particle in the point cloud recorded how much light it received, and that information became the input for the subsurface scattering calculations.

Next, after placing Sunny’s model in a digital set, the crew calculated the global illumination by bouncing light around the set and again rendering the result as a point cloud. The digital set matched the real set and was created by projecting photographic textures onto simplified geometry.

“Global illumination is slow because you’re bouncing light, so we decoupled it, simplified the model, and used fewer points for this point cloud,” Hery says. Also to speed the process, the illumination was calculated using one bounce. The data that resulted was fed into the higher resolution point cloud as one component of the lighting used for subsurface scattering on Sunny’s skin.
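A simplified picture of this point-cloud gathering: each baked point stores the irradiance it received and the surface area it represents, and the color leaving a point is a distance-weighted average of its neighbors’ irradiance. The sketch below uses a plain exponential falloff as a stand-in for a real diffusion profile and a brute-force loop where a production renderer would use a spatial hierarchy; all of it is illustrative, not ILM’s code.

```python
import numpy as np

def gather_subsurface(points, areas, irradiance, scatter_depth):
    """Point-based subsurface gathering over a baked point cloud.

    points        : (N, 3) positions of the baked samples
    areas         : (N,)   surface area represented by each sample
    irradiance    : (N, 3) light arriving at each sample (from the bake)
    scatter_depth : scalar mean free path controlling the falloff

    Returns (N, 3): a normalized, area- and distance-weighted average of
    the baked irradiance at each point, i.e. light that has diffused in
    from the surrounding surface.
    """
    out = np.zeros_like(irradiance, dtype=float)
    for i in range(len(points)):
        r = np.linalg.norm(points - points[i], axis=1)
        weights = areas * np.exp(-r / scatter_depth)  # stand-in diffusion profile
        out[i] = (irradiance * weights[:, None]).sum(axis=0) / weights.sum()
    return out
```

For the one-bounce global illumination pass described above, the same kind of cloud can be reused at lower resolution, with each coarse point acting as a small emitter whose bounced light is accumulated onto the finer cloud.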
ILM rendered the model of the digital baby in neutral light to check her against reference photos before rendering her with shot-specific lighting conditions.




Lastly, to match the effect of the six huge 8-foot by 10-foot area lights shining down onto the actors from the top of the set, exact measurements of the lights were given to ILM technical directors who placed digital replicas in the digital sets. As a result, the area lights put specular highlights on the digital baby’s skin and tiny rectangles of light in her eyes just as on the real baby.
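As an illustration of why a measured rectangular light produces a broad specular highlight and a rectangular catchlight, the hypothetical sketch below Monte Carlo samples points across the light’s face and accumulates a simple Blinn-Phong specular response. It is a toy model, not ILM’s lighting tools.

```python
import numpy as np

def area_light_specular(p, n, view, light_corner, edge_u, edge_v,
                        intensity, shininess=80.0, samples=64, seed=0):
    """Monte Carlo estimate of the specular highlight a rectangular area
    light throws onto a surface point (Blinn-Phong lobe for simplicity).

    p, n, view   : (3,) surface point, unit normal, unit direction to camera
    light_corner : (3,) one corner of the rectangle
    edge_u/v     : (3,) the two edge vectors (e.g. 8 ft x 10 ft, in scene units)
    """
    rng = np.random.default_rng(seed)
    uv = rng.random((samples, 2))
    # Jittered sample positions across the face of the light.
    lp = light_corner + uv[:, :1] * edge_u + uv[:, 1:] * edge_v
    l = lp - p
    l /= np.linalg.norm(l, axis=1, keepdims=True)
    h = l + view
    h /= np.linalg.norm(h, axis=1, keepdims=True)
    spec = np.clip(h @ n, 0.0, 1.0) ** shininess
    facing = np.clip(l @ n, 0.0, 1.0)
    return intensity * np.mean(spec * facing)
```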

With this process, the lighting behaves properly up close or far away. “We were lucky to get one picture of the baby that wasn’t blurry,” Hery says. “If you can’t capture the actor or if you want full control over the lighting, this is the way to go.”

To create texture maps that added details to Sunny’s skin, view painter Molatore worked with seven photographs of one of the twins. “First we sharpened the images and then, bit by bit, processed the photos in Photoshop,” she says. “We didn’t want to paint them because that would soften the images, but we needed to remove all the shadows and highlights. Also, we needed to keep the tiny veins, and if we started painting, we’d lose them.”

In addition, Molatore created some 40 texture maps that controlled the density of Sunny’s hair, the wetness of her skin, and the placement of subsurface scattering and specular highlights. She painted Sunny’s eyebrows, but created the lashes with geometry. Bump maps helped replicate the pore structure on the baby’s skin. “In some shots, we replaced only the lower part of her face, so it had to be seamless,” Molatore says. “There couldn’t be any difference in the quality of her skin.” She worked with Rebours, who wrote the shaders, tweaking the colors to match the baby’s pearlescent skin tone.

To render the baby’s hair, Rebours, working with Hatchell, calculated the difference in colors between the polarized and non-polarized images of the Hoffman baby. “The difference is the reflection,” he says. “I discovered that the specular highlights were blue on her skin, and where there was less specular, it was redder. But with her hair, the specular value was yellow.” By assigning different colors to the diffuse and specular lighting in various groups of digital hairs, he simulated the layers of soft, fine, wavy hair on the Hoffman baby’s head.
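Rebours’s measurement can be mimicked with a few lines of image arithmetic: subtract the cross-polarized (diffuse-only) photo from the normal photo to isolate the reflection, then average that difference over a region to read off its specular tint. The helper below is an illustrative assumption, not the production code.

```python
import numpy as np

def specular_component(unpolarized, cross_polarized, mask=None):
    """Estimate the reflection (specular) component as the difference
    between the normal photo and the cross-polarized photo, and report
    its average color over an optional region mask.

    unpolarized, cross_polarized : (H, W, 3) aligned float images
    mask                         : optional (H, W) boolean region (e.g. skin)
    """
    spec = np.clip(unpolarized - cross_polarized, 0.0, None)
    region = spec if mask is None else spec[mask]
    return spec, region.reshape(-1, 3).mean(axis=0)

# e.g. spec_image, tint = specular_component(plain_photo, polarized_photo, skin_mask)
```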

Creating the digital baby involved a lot of handwork by the ILM crew at great expense, both in terms of people time and render time, but the crew felt it was worth it. And, luckily, the reduction of shots during production (because one Hoffman twin was a better actress than expected) gave them more time than they anticipated. As a result, even though Sunny appears in only a handful of shots, if she weren’t doing things no baby can do, it would be nearly impossible to tell when she’s digital, even when she’s right in front of the camera.

“Sunny is in only a few shots, but she’s really great,” says Fangmeier. “Like we did with the dinosaurs in Jurassic Park, it was wise to get our feet wet before trying a full movie with digital humans. The next question is how to scale what we’ve learned.”

The answer seems to be: one baby step at a time.

Barbara Robertson is an award-winning journalist and a contributing editor for Computer Graphics World.