Night Vision
Volume 30, Issue 1 (Jan 2007)

The premise is simple: At night, everything in the Museum of Natural History comes alive—the skeletal T. rex, the animals in the African pavilion, Egyptian gods, and, in the dioramas, tiny Roman centurions, Civil War soldiers, Mayan warriors, and Wild West cowboys. But, there are issues: The Romans and the cowboys get territorial. The lions want to eat something…or someone. And, the T. rex wants to play. Play? Of course. Night at the Museum is, after all, a comedy, directed by Shawn Levy. It stars Ben Stiller as Larry Daley, a hapless museum security guard, Robin Williams as Teddy Roosevelt, Owen Wilson as a cowboy, and a legion of other comedians. The 20th Century Fox production, which is based on an illustrated children’s book by Milan Trenc, opened in time to catch holiday season moviegoers.

Three-time Oscar winner Jim Rygiel puzzled through the visual effects, which were largely accomplished by Rhythm & Hues, with Rainmaker Animation & Visual Effects handling a large sequence and two Egyptian jackals. In addition, The Orphanage caused an Easter Island statue to blow bubbles, and Weta Digital animated a water-spewing whale. Image Engine Design prevized the show, and Giant Studios provided the motion-capture data. The digital makeup work by Lola Visual Effects remains, as always, a highly guarded secret.

"One of the reasons I picked this show was for the range of effects," says Rygiel. "And it was a comedy. Some shows, even though they’re done very well, instantly scream ‘effects.’ I like the challenge of making things so real that you know you’re watching something odd, but you’re not quite sure where that line is."

Wild Animals

Although the film was prevized, Rygiel knew that the director, who was working with effects for the first time, might want to enhance the chaotic feeling of the sequences by adding more CG animals later. "We had Shawn [Levy] shoot the scenes so that we could add the CG animals or not," Rygiel says.

To capture the lighting on set and on location, Rhythm & Hues brought along a six-sided camera the studio invented to capture HDRI data. "It’s a cube with six cameras," explains Rhythm & Hues’ visual effects supervisor Dan DeLeeuw. "We can gather a range of exposures to determine how intense the lights were and where they were. It captures from only one position, so it isn’t exact, but you can get very close."
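Rhythm & Hues’ capture rig and lighting tools are proprietary, but the underlying idea of HDRI capture, merging bracketed exposures into a single estimate of scene radiance, can be sketched in a few lines. The function below is a minimal illustration using hypothetical image arrays; it assumes linear pixel values in the 0-to-1 range and is not the studio’s actual pipeline.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge bracketed exposures (linear values in 0..1) into an HDR radiance map.

    Each pixel's radiance is a weighted average of (pixel / exposure_time),
    trusting mid-range values most so clipped highlights and noisy shadows
    contribute least. This illustrates the general idea of HDRI capture,
    not Rhythm & Hues' proprietary pipeline.
    """
    radiance_sum = np.zeros_like(images[0], dtype=np.float64)
    weight_sum = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        img = img.astype(np.float64)
        weight = 1.0 - 2.0 * np.abs(img - 0.5)   # hat function peaking at mid-gray
        radiance_sum += weight * (img / t)       # scale each exposure back to radiance
        weight_sum += weight
    return radiance_sum / np.maximum(weight_sum, 1e-6)

# Hypothetical example: three exposures of the same view at 1/4s, 1s, and 4s.
frames = [np.random.rand(4, 4) for _ in range(3)]
print(merge_exposures(frames, [0.25, 1.0, 4.0]).shape)
```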

Equally important, using this gizmo kept the action moving on the set. "It was a huge saving grace," says Rygiel. "Normally, we have to shut down the crew for 15 minutes. Instead, we’d pop this thing in there, and in 30 seconds we’d be done. Then, when we’d start lighting, we were 80 percent there, so we could work on fine details rather than starting from scratch. It was a big help."

As the production progressed, Rhythm & Hues’ shot count grew from approximately 280 to around 400. "That’s the nature of comedy," says DeLeeuw. "Everything changes by the minute. Ben [Stiller] and Shawn [Levy] would walk on set and Ben would get a brainstorm. He’d say, ‘We need a moose that gets stuck at the door.’ I’d say, ‘We’re not supposed to build a moose. But, OK, sure, we can build a moose.’"

Rhythm & Hues’ work on Aslan and other animals for Narnia helped the studio land this show, but the moose wasn’t the only new animal created for Night at the Museum. "With each show, our library of animals gets bigger," says DeLeeuw. "For this one, we added 10 animals, including an antelope, elephant, mammoth, oryx, ostrich, moose, zebra, and a T. rex." The studio builds models in Autodesk’s Maya, creates particle and other effects with Side Effects’ Houdini, and paints in Adobe’s Photoshop. For everything else, the artists use proprietary software.

Even though the lion and the other animals in Night at the Museum first appear to be the work of taxidermists, Rygiel and the director wanted them to be believable once they come alive. The effects crew needed to convince the audience that the security guard would think these are real animals, not lifelike "stuffies."

To do that, the modelers started with reference data from the taxidermy animals on set because the CG animals needed to be the same size and shape as the exhibits. For the lion, they built on technology developed for Aslan, but changed his face and eyes. Giving the creatures’ fur more texture and sheen than the stuffies helped bring them alive.

Aslan’s mane, for example, helped the crew create the woolly mammoth’s long hair, which they draped over its 14-foot-tall body. "Our mammoth just fits through the hallways," DeLeeuw says. "It was a lot of work designing the hair to keep the right scale because hair doesn’t scale proportionately."

To help the hair groomers create appropriately sized clumps of fur, and the technical animators move the hair at the right speed, DeLeeuw had them position a human stand-in next to the digital creature. "A strand of mammoth hair might be three feet long," he explains. "If you didn’t keep that in mind, it would swing too wildly and blow the scale."
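The scale problem DeLeeuw describes has a simple physical root: a hanging strand behaves roughly like a pendulum, and a pendulum’s swing period grows with the square root of its length. The sketch below, with assumed strand lengths rather than production data, shows why three-foot mammoth hair animated at short-hair speeds would "blow the scale."

```python
import math

def swing_period(length_m, gravity=9.81):
    """Swing period of a hanging strand approximated as a simple pendulum.

    T = 2 * pi * sqrt(L / g): a strand ten times longer swings about
    sqrt(10), or roughly 3.2 times, slower.
    """
    return 2.0 * math.pi * math.sqrt(length_m / gravity)

short_strand = swing_period(0.09)   # assumed ~3.5-inch lion hair
long_strand = swing_period(0.9)     # assumed ~3-foot mammoth strand
print(f"short: {short_strand:.2f}s  long: {long_strand:.2f}s  "
      f"ratio: {long_strand / short_strand:.1f}x slower")
```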

One of the more interesting characters that Rhythm & Hues built was the T. rex. Even though the animal is a skeleton, the bare-boned behemoth behaved as if it were a fully muscled dinosaur. And, the lack of fur gave the animation crew freedom to transition the animal from an evil, angry creature into a playful puppy. "With this animal, the animators could play with the performance," says DeLeeuw. "They didn’t have to worry about technical aspects. But, they did have to give the skeleton the weight of an actual T. rex." The skeleton runs as if it weighed tons. In fact, it’s the only animal that shakes the camera as it runs through the museum. Compositors added the camera shake by vibrating the live-action plate.
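The plate-vibration trick amounts to offsetting the live-action frame by a small oscillating amount each frame. The following sketch, with made-up amplitude and frequency values, generates that kind of per-frame offset; Rhythm & Hues’ actual compositing setup is proprietary.

```python
import numpy as np

def shake_offsets(num_frames, amplitude_px=6.0, frequency_hz=11.0, fps=24.0, seed=0):
    """Per-frame (dx, dy) pixel offsets for a simple camera-shake pass.

    Sums two sine waves with random phases per axis so the plate appears
    to vibrate. The amplitude and frequency here are made-up values; a real
    comp would also add motion blur and tie the shake to the footfalls.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(num_frames) / fps
    channels = []
    for _ in range(2):  # one channel each for x and y
        p1, p2 = rng.uniform(0.0, 2.0 * np.pi, size=2)
        wave = (np.sin(2 * np.pi * frequency_hz * t + p1)
                + 0.5 * np.sin(2 * np.pi * frequency_hz * 2.3 * t + p2))
        channels.append(amplitude_px * wave / 1.5)
    return np.stack(channels, axis=1)  # shape (num_frames, 2)

print(shake_offsets(5))
```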

"Obviously, the T. rex was not complicated," says Rygiel, "but it’s cool that we see this literal skeleton of bones, on which we didn’t do any squash or stretch, turn from a looming character in the beginning that Ben [Stiller] thinks will tear his head off, into a puppy dog that only wants to play fetch. And it’s just bones moving. You can’t get a smile from that mouth. It’s all in the animation."


Visual effects artists at Rhythm & Hues matched the taxidermist’s lion and then brought the creature to life by applying some of the hair and fur technology they used to create Narnia’s Aslan.

Although the lion and the T. rex were the hero characters, one animal squeezed past them to center stage—the ostrich. To create feathers for the big bird, Rhythm & Hues modified the studio’s fur system to make the hair longer and wider. To create plumes, they had the hair grow hairs.

It was the creature’s actions, though, that pushed this animal into the foreground: Animators couldn’t resist giving the bird bizarre behaviors. "It’s like the extra that always wants to get his face in the shot," says DeLeeuw. "If you look to the left in the African hall, you can see the ostrich hamming it up in the background. It was funny, so he got more and more shots."

In fact, at the end of the film, the animators put the ostrich right into the camera. "It’s in a shot that we called the ‘widow-maker,’" DeLeeuw explains. "Everyone is playing soccer. The ostrich runs across the room, gets in front of the camera, and breaks the fourth wall. It gave the animators a chance to have more fun."

That sequence was one of two that DeLeeuw singles out as most difficult. It takes place in the museum’s main hall, which he estimates to be more than 100 feet long. "The shot has every asset we built, every animal, every diorama," he says. "We hooked a Spydercam to one end and flew it to the other, and then repeated it using different layers of greenscreen so that we could insert CG characters between layers of extras." The Spydercam system is a computer-controlled rig that can speed a camera along a cable at up to 60 miles per hour.

The second difficult sequence was an animal stampede down a staircase. "It was hard because we had to animate each and every animal ducking and running and jumping over other animals, and the ostrich gets tossed from side to side," DeLeeuw says. "In CG, there’s nothing to stop one model from interpenetrating another."


Animators at Rhythm & Hues flexed their muscles to turn the bony T. rex from a scary skeleton into a playful puppy, while giving the creature the same weight it would have had if fully fleshed.


The ostrich, elephant, zebra, and oryx are among the 10 new animals Rhythm & Hues produced for Night at the Museum. To create ostrich feathers, the crew modified the studio’s fur software.

To create the scene, an animator performed each animal separately, starting with one, then adding the animal next to it, saving out that section, and so forth. "It was a puzzle," says DeLeeuw. "Once the director approved the animation, we corrected everything from the camera view with brute force." They removed the lioness’s leg from inside the zebra, slid feet, and pulled aside rib cages.

The Littlest Cowboys

Perhaps the biggest puzzles, however, were the dioramas. At one point in the film, all the little characters in the dioramas, the tiny figures that depict historical scenes, come to life and attack Stiller’s character. The most important of these from a visual effects standpoint were the Wild West diorama, with its cowboys and railroad workers, and the Roman exhibit, with centurions and archers. The miniature people were actors shot on greenscreen stages with digital crowds following behind. One of the cowboys is played by Owen Wilson.

The film crew shot the dioramas on huge greenscreen stages in Vancouver; the diorama set was eight feet by six feet. "In some shots, the camera on the greenscreen had to be 20 or 30 feet in the air to match medium shots on Ben [Stiller]," says DeLeeuw. "When the stage wasn’t big enough, we had to scale the photography in compositing and use CG little people. The rule we used for the CG characters was that we kept them to one-third the size of a frame. And we didn’t want to ever replace actors like Wilson with CG."

When the characters leave their dioramas and fight for territory on the floor of the museum, Rhythm & Hues called on Massive software to create behaviors using motion-captured cycles and pre-programmed brains. "It was a comedy, so we couldn’t let the fight get violent," says DeLeeuw. "They threw punches, but there weren’t any sword hits." The Massive characters also knew to move out of the way when Stiller’s character and a monkey walked through the fight. Rhythm & Hues scanned Stiller to create stand-in geometry for casting shadows and for the Massive characters to have something to avoid or, in some cases, walk onto.
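Massive’s agent "brains" are far more sophisticated than anything shown here, but the core avoidance behavior, steering an agent away from a stand-in collision volume such as the scanned Stiller geometry, can be illustrated with a simple repulsion step. The clearance radius and push strength below are arbitrary values chosen only for the sketch.

```python
import numpy as np

def avoidance_step(agent_pos, agent_vel, obstacle_pos, clearance, dt=1.0 / 24.0, push=2.0):
    """One step of a steer-away behavior for a single crowd agent.

    If the agent is inside the obstacle's clearance radius, add a velocity
    component pointing away from it; otherwise it keeps walking. The
    clearance and push values are arbitrary; Massive agents use much
    richer fuzzy-logic brains than this.
    """
    away = agent_pos - obstacle_pos
    dist = np.linalg.norm(away)
    if dist < clearance:
        agent_vel = agent_vel + push * away / max(dist, 1e-6)
    return agent_pos + agent_vel * dt, agent_vel

# An agent walking forward yields to a stand-in collision volume at the origin.
pos, vel = np.array([0.3, 0.0]), np.array([0.0, 1.4])
pos, vel = avoidance_step(pos, vel, obstacle_pos=np.array([0.0, 0.0]), clearance=1.0)
print(pos, vel)
```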


The crews shot the dioramas on huge greenscreen stages, filmed the actors playing the little people separately, and at times added CG people to create such scenes as these with actor Ben Stiller.

Lighting and compositing these little people into the live-action sequences required careful preparation during filming. "One of the hard parts with the miniature people in the dioramas," Rygiel says, "was that we could shoot them on greenscreen, but to make them feel like they were in dioramas, we had to do multiple shots with depth of field."

The same was true for any background in which the little people appeared. The depth of field for a three-inch cowboy is only two inches; everything outside that area is blurred. "We wanted to control that, and control how quickly the focus would fall off," says Rygiel. To gain that control, they shot multiple depth-of-field passes with a still camera any time they were shooting in the miniature world, changing the focus every few inches to get the full range of depth of field. Later, they could mix and match the stills in the virtual world to put everything they wanted into focus.
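The two-inch figure Rygiel cites follows from the standard depth-of-field formula: at close focus distances, the zone of acceptable sharpness shrinks dramatically. The numbers below, a 50mm lens at f/5.6 focused about two feet away with a 0.03mm circle of confusion, are assumptions chosen for illustration rather than the production’s actual camera data, but they land in the same ballpark.

```python
def depth_of_field_mm(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Total depth of field for a lens focused at subject_mm.

    Uses the standard near/far sharpness limits with circle of confusion
    coc_mm. The example values are assumptions for illustration, not the
    production's actual lens data.
    """
    f, N, s, c = focal_mm, f_number, subject_mm, coc_mm
    near = s * f * f / (f * f + N * c * (s - f))
    far = s * f * f / (f * f - N * c * (s - f))
    return far - near

# A 50mm lens at f/5.6 focused roughly two feet away from a three-inch figure:
print(f"{depth_of_field_mm(50, 5.6, 600):.0f} mm of acceptably sharp depth")
```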

"It was like what you do when you’re creating virtual-world tile sets," explains Rygiel. "Except, we had the added problem of needing not only tile sets all around, but for each tile we had to rack-focus 10 times so each set had a depth of field from one inch to infinity."

In Deep

During one sequence, for example, the little people run outside and jump off a loading dock to stop two of the actors from escaping in a van. For this shot, Bruce Woloshyn, visual effects supervisor at Rainmaker, shot 4k digital stills of the backgrounds to vary the depth of field. "A majority of the shots that needed multiple plates were handled by Dan [DeLeeuw], though," he says. "It was fun to be there when he was shooting his plates and have him there when I was shooting mine."

Woloshyn’s years as a compositor served him well as he considered what he would need during principal photography. "You have to know in your head what you will do months later, but you have to make the decision right on set," he says. "We had only a limited time with Owen [Wilson], so we shot backgrounds and greenscreens on the same days. I thought the digital stills would give us more flexibility. I knew that when we got into [Autodesk’s] Inferno later, we could pick which pieces to use and do artificial camera moves."

Rainmaker also created the Egyptian guards—creatures that have the bodies of human men and heads of jackals. The guards were Rainmaker’s first feature-film creatures. "It was a big learning experience," says Woloshyn. "We weren’t doing skinned creatures; we were trying to do rock. But making the statues look big and muscular was a big challenge."

Modelers used scans by Gentle Giant of full-sized set pieces as reference for the digital doubles, which they created with NewTek’s LightWave. In the film, even when the statues appear to be still, Rainmaker added subtle movements—a slight tightening of a fist, a head turn. To rig the jackal-like creatures, Rainmaker used Maya, creating both inverse and forward kinematics so that the digital guards could more easily use the spears they carried. "Sometimes a hand controls the spear," Woloshyn says. "Sometimes the spear controls a hand."
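In forward kinematics the spear is simply parented to the hand and follows it; the reverse case, where the spear’s position drives the arm, is an inverse-kinematics solve. A planar two-bone IK solver, the simplest version of that idea, can be written analytically with the law of cosines. The sketch below is illustrative only; Rainmaker built the actual rig in Maya.

```python
import math

def two_bone_ik(shoulder, target, upper_len, lower_len):
    """Planar two-bone IK: shoulder and elbow angles placing the hand on target.

    The target might be a grip point on the spear when the spear drives
    the arm. Angles are in radians; the elbow always bends the same way.
    Illustrative only; the actual guards were rigged in Maya.
    """
    dx, dy = target[0] - shoulder[0], target[1] - shoulder[1]
    dist = min(math.hypot(dx, dy), upper_len + lower_len - 1e-6)   # clamp to reach
    cos_elbow = (upper_len**2 + lower_len**2 - dist**2) / (2 * upper_len * lower_len)
    elbow_bend = math.pi - math.acos(max(-1.0, min(1.0, cos_elbow)))
    cos_shoulder = (upper_len**2 + dist**2 - lower_len**2) / (2 * upper_len * dist)
    shoulder_angle = math.atan2(dy, dx) - math.acos(max(-1.0, min(1.0, cos_shoulder)))
    return shoulder_angle, elbow_bend

print(two_bone_ik(shoulder=(0.0, 0.0), target=(1.2, 0.5), upper_len=0.8, lower_len=0.8))
```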


Rainmaker brought these Egyptian guards to life using a combination of NewTek’s LightWave and Autodesk’s Maya software, rigging them with inverse and forward kinematics. They rendered the living statues in Mental Ray and composited them in Digital Fusion.

To find a balance between making the creatures look alive yet be convincing as statues, Rainmaker experimented with a muscle system, but rippling muscles made the granite appear rubbery. Instead, they built a rigid-body skin and used custom blendshapes at the knees and elbows for extreme poses. "It was hard to find the right balance of rigid stone and flexible joints," says Woloshyn. "But we couldn’t have the statues look like men in rubber suits."
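A corrective blendshape of the kind Woloshyn describes is typically driven by the joint angle: as a knee or elbow bends past a threshold, a sculpted fix fades in on top of the rigid base mesh. The sketch below, with assumed angle thresholds and toy vertex arrays, shows only that blending step; it is not Rainmaker’s rig.

```python
import numpy as np

def corrective_blend(base_verts, corrective_verts, joint_angle_deg, start=60.0, full=120.0):
    """Fade a corrective shape in as a joint bends toward an extreme pose.

    Below `start` degrees the rigid base mesh is used unchanged; between
    `start` and `full` the sculpted corrective shape fades in linearly.
    The thresholds and vertex arrays here are toy values, not the rig.
    """
    w = np.clip((joint_angle_deg - start) / (full - start), 0.0, 1.0)
    return (1.0 - w) * base_verts + w * corrective_verts

base = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])       # two vertices of the base mesh
fix = np.array([[0.0, 0.1, 0.0], [1.0, -0.05, 0.0]])      # same vertices, sculpted for a deep bend
print(corrective_blend(base, fix, joint_angle_deg=90.0))   # halfway between the two shapes
```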

Using data captured by Rhythm & Hues with the six-camera HDRI device, the Rainmaker crew lit the gold-skirted creatures to match the on-set lighting. "I also had 4k fish-eye images, but taking those photos wasn’t always an option because the production crew moved so fast," Woloshyn says. "So, I shot a color chart on the floor of the set at 4k in the same lighting setup. That way, I could check the color between my system and Rhythm & Hues’ to be sure there were no issues."

The crew rendered the creatures in floating-point EXR format using Mental Images’ Mental Ray, generating separate passes for the granite bodies and the gold costumes. Compositors assembled the layers in Eyeon Software’s Digital Fusion. For removing the miniature people from greenscreens, though, Woloshyn used Inferno. "On weekends, I could sit in an empty suite and play with the depth of field, then save the setups I liked and hand them to Randall Rosa, who supervised all the work on the jackals once the models were built."
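Layering separate render passes comes down to repeated premultiplied "over" operations: the gold-costume pass over the granite-body pass, and the result over the background plate. The numpy sketch below stands in for that node graph with hypothetical float arrays in place of the EXR passes; the actual Digital Fusion setup is, of course, richer.

```python
import numpy as np

def over(fg_rgb, fg_alpha, bg_rgb):
    """Premultiplied 'over': foreground pass on top of whatever is behind it."""
    return fg_rgb + (1.0 - fg_alpha[..., None]) * bg_rgb

# Hypothetical float arrays standing in for the EXR passes and the plate.
h, w = 4, 4
granite_rgb, granite_a = np.random.rand(h, w, 3), np.random.rand(h, w)
gold_rgb, gold_a = np.random.rand(h, w, 3), np.random.rand(h, w)
plate_rgb = np.random.rand(h, w, 3)

# Costume pass over body pass, and the result over the live-action plate.
comp = over(gold_rgb, gold_a, over(granite_rgb, granite_a, plate_rgb))
print(comp.shape)
```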

Blowing Bubbles

Like Rainmaker, The Orphanage had to make a character created from rock move believably: Across its 20 shots, the studio lip-synched an Easter Island head and caused it to blow a bubblegum bubble. The artists worked in Maya for modeling, rigging, and animation, and rendered the results through Mental Ray from within Maya.

"The way we sell the idea that this rock talks is that as his mouth moves, pieces of rock crack and shift around his mouth," says Kevin Baillie, visual effects supervisor at The Orphanage. "So, we still have the stretching that you need for a talking 3D character, but it’s hidden by the plates shifting around his mouth. It ended up being a really successful look. People kept referring to it as a simulation because it looked like we moved geometry."

For its part, Weta Digital brought a whale statue to life. In the sequence, a whale sprays water at guards walking through a door. A simple sequence, but to create it, Weta used two simulation systems: a fluid simulation system for the water and a new skinning system for the whale. "The new skinning system calculates volumetric collisions on the muscles," says Eric Saindon, visual effects supervisor at Weta. "[Night at the Museum] was a good test; we got good results." The team accomplished most of the work in Maya, then rendered the whale in RenderMan and composited the shots in Apple’s Shake.

Although the film has received mixed reviews, the effects have been lauded by the visual effects community: Night at the Museum was one of seven films chosen to compete at the annual visual effects bakeoff for an Oscar nomination. Rygiel believes that may be partly because the film has such variety—including practical effects. "We had the greenscreen rigs, a rig for the kid riding on the back of the T. rex, and a riding rig for Ben [Stiller] during the horse chase, as well as snow effects and explosions," he says.

In fact, you could think of the film as a collage of effects.

"That’s what I find so interesting," says Rygiel. "I love the puzzle work."

Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at BarbaraRR@comcast.net.