For centuries, the topic of religion versus science has spurred heated debate, one that continues today. The typical view, in its raw form, is that science and religion are mutually exclusive concepts, as science is based on tangible evidence, while religious teachings are based on intangible beliefs. Yet there are some who purport that many biblical accounts, the foundation upon which many religions are built, can indeed be proven with science.
For the documentary The Exodus Decoded, set within an entirely virtual environment, artists crafted a wide range of CG imagery used to illustrate various points made by the filmmaker. Using nearly every tool within Maya, the artists re-created artifacts, animations, maps, and more. For instance, Maya’s Ocean shader and fluid effects were used to create the oceanscape in the image above, as well as other simulated water effects in the program.
That was the approach taken by filmmaker/investigative journalist Simcha Jacobovici in a 90-minute documentary titled The Exodus Decoded, which aired recently on the History Channel. In the special, Jacobovici and director James Cameron presented what they believe is archaeological evidence that one particular biblical account—the exodus of the Jews out of Egypt—actually occurred, albeit two centuries earlier than commonly believed. Ironically, Jacobovici’s “real” scientific evidence is presented in a “nonreal” computer-generated museum setting.
At first, examining the ancient past within a futuristic virtual environment seems a bit odd, particularly for an investigator who is determined to prove his theories based on factual evidence. However, the use of CGI enabled Jacobovici to tie together vast amounts of material in a logical fashion and provide a three-dimensional look at numerous historical artifacts, particularly through animations.
The Exodus Decoded is one of the first documentaries to use an all-virtual environment. Tasked with creating this unique digital space for presenting information and making artifacts come alive was Gravity Visual Effects, a VFX boutique with offices in Canada and Israel. “Many documentaries use CG for re-enactments, but Effi Wizen, our company’s CEO/creative director, wanted us to use a different approach,” explains senior art director Hili Tsarfati. “He is an architect, and he provided a blueprint that eventually became the virtual museum, inside of which the material was presented.”
According to Tsarfati, the documentary contained a great deal of diverse information from many sources throughout the world, including numerous video clips of experts in the form of talking heads, or head-and-shoulders shots of the interviewees speaking. Wizen’s idea was to bring the material into a unique, consistent location that added some flavor while the story unfolded. At times, the digital artists augmented the interviews with 3D replicas of actual artifacts and various animations. “All of this added visual strength to the story Jacobovici wanted to tell,” she says. “Jacobovici was missing a lot of visual reference that would help tell his story.”
According to Zviah Eldar, the company’s other CEO/creative director, the group began working on the CG portion of the film before the off-line work was done. “We started before we had the backbone and skeleton of the film; the director was still on location gathering information,” she says. “But the environment we chose gave us the flexibility to constantly bring in new material. Because the script was not locked, that gave us a lot of artistic freedom.”
The film was planned inside the virtual environment—specifically, the camera moves and the shots to be edited—using the voice-overs from the interviews (the group imported the wave files into Autodesk’s Maya, the tool used to model the environment). “The workflow is similar to that of a feature film, but here it was a film that lives inside the VFX, not vice versa,” says Yuval Levy, head of 3D. Initially, the project called for the artists to create four minutes of high-def special effects. But after the first off-line, Eldar says, it became apparent that more CG had to be embedded into the film to achieve the original goal of creating a stimulating backdrop. In the end, the group, which was led by four senior artists, crafted 43 minutes of high-def imagery.
Narrator/filmmaker Simcha Jacobovici and director James Cameron (above) were filmed against bluescreen and composited into the virtual set.
“You can see how addicted the director was to the special effects,” Eldar says. “They solved many of his problems—the diversity of the material, the large amounts of narration, and the talking-head interviews—which could have made the film boring. Because he was talking about the ancient past, there weren’t many actual visuals he could use.”
Even after the artists received all the material, there was still a great deal of collaboration among postproduction, the off-line editors, and the 3D artists as work went in and out of Discreet Flame, an Avid system, and Maya. “We worked within that circle nearly every day,” says Tsarfati.
After the group received the last of the material from the filmmaker, it still had to integrate footage of Cameron and narrator Jacobovici, who were filmed at the end of the off-line work. The two were shot against a greenscreen; since Gravity had used previz extensively for the project, the team knew exactly where to place them and which angles to use in the final cut. “We did the whole process differently than most,” Levy says. “We built the virtual environment, placed the artifacts, integrated the sound for the timing, and gave all that to the off-line. Once the off-line was locked, then we did the greenscreen portion.”
The space inside the virtual museum was vast; the design enabled the artists to continually add to it as new material was acquired and submitted. The basic design uses a high-tech grid of connecting cubicles that seems to extend endlessly inside the space. “That allowed us to easily change and modify the design to coincide with the camera moves,” says Levy. “The museum is huge; we only showed it in snippets in the program.” On the sides of the CG cubicles, the group displayed virtual video monitors, on which film clips of experts—scientists, theologians, archaeologists, and others, all shot head-and-shoulders—played on cue within the steel- and glass-like environment aglow in rich gold hues.
Perhaps most impressive, though, were the 3D artifacts the artists created and placed, seemingly at random but in fact strategically, inside the virtual museum. One such item was an ancient stele whose hieroglyphics recount a story involving a severe storm, a rarity in that region, along with details of catastrophic events (like the plagues) that “mirror those of the biblical tale,” Jacobovici states in the documentary. Today, the real stone tablet remains abandoned in delicate pieces in the basement of the Cairo Museum. Because no known photos of the find exist, the artists re-created the stele inside Maya from copies of sketches made by its discoverer. The team also reassembled the stele digitally so it could be viewed dimensionally and as a whole.
The artists spent a great deal of time carefully re-creating objects like the stele from whatever sources were available, oftentimes photographs or film footage shot by the crew at various museums. “Sometimes we had good-quality images to work from, and other times we didn’t,” says Tsarfati. When possible, the group used textures from the reference material. Or, it used verbal descriptions from experts, the Bible, and similar period objects to craft items that today do not exist. “We had to create visualizations that explained a lot of the archaeological work,” says Levy. “I felt as if we were visual archaeologists; we were extracting images from real artifacts and bringing them to life. Often this required us to connect the dots and do in-betweens—just as we do in animation—when certain archaeological information was missing.”
Using Maya, artists built a time machine that contained various imagery to give viewers a visual reference for events that occurred thousands of years ago, as well as the stone stele in the following image, rendered with caustics and global illumination within Mental Ray.
In all, the artists crafted approximately 20 CG artifacts—hieroglyphics, tombstones, a fire pit, a granite monument, and more—as well as water elements, geographical maps, geological data, a re-creation of a volcanic eruption, and frogs, locusts, and other elements of the plagues. For one scene, the group crafted a mummy of an Egyptian pharaoh. The scene begins with film footage of Ahmose, as the narrator explains why he believes this was the pharaoh responsible for banishing Moses and his people from Egypt. The camera then rises from the actual mummy and seamlessly transitions to a CG replica of the mummy inside the museum.
The artists similarly moved through dimensions in a number of other scenes, such as when the narrator explains the importance of a Greek wall painting depicting what he believes is the Exodus. Using a high-res photo of the actual wall painting as reference, the artists rebuilt the structure in Maya, using photographic textures to surface the model. As the camera pans across the scene, the ancient Greeks’ perception of depth is realized, explains Jacobovici, by translating objects situated farther away along the Z axis to positions higher up along the Y axis. Using a long focal length and a perfectly aligned camera, the CG artists seamlessly blended the 3D into the original painting. In addition, the artists animated the people, dolphins, and other objects in the painting, bringing the entire scene to life.
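The painting’s depth convention—distance along Z drawn as height along Y—amounts to a simple coordinate mapping. The sketch below is illustrative only, with a hypothetical linear depth_scale factor; it is not Gravity’s actual camera math.

```python
def painting_to_scene(x, y_base, y_offset, depth_scale=2.0):
    """Map a figure painted y_offset units above the baseline to a 3D
    position: painted height becomes distance from the viewer along Z.
    depth_scale is an assumed, illustrative constant."""
    return (x, y_base, y_offset * depth_scale)

# A dolphin painted 1.5 units above the baseline sits 3.0 units deep.
print(painting_to_scene(4.0, 0.0, 1.5))  # (4.0, 0.0, 3.0)
```

With the inverse of such a mapping, figures drawn higher in the flat painting can be pushed back in Z, which is what lets a long-lens CG camera blend seamlessly with the original artwork.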
“We had to recheck with Jacobovici and specialists on everything we did to make sure our models and animations were accurate,” says Levy.
In one instance, 3D imagery enabled the filmmaker, for the first time, to place side by side a series of tombstones, some now housed in different locations, and extract from them a cohesive picture. The stones were found in 3500-year-old tombs in Mycenae, Greece, along with numerous Egyptian gold items. As Jacobovici states in the documentary, the images on the stones had never been deciphered until now, and each contains a sliver of a story, which he contends chronicles the parting of the sea. To better illustrate the visual story Jacobovici saw on the stones, one that no one else did, the digital artists made an accurate version of each stone and then, with the stones lying flat, “lifted” the carved images upright, similar to a paper image in a pop-up book.
In a scene, the camera cuts from filmed footage of the Pharaoh Ahmose mummy at an actual museum to a digital version placed inside the virtual museum.
“We extracted carved images from each of the three stones and put them next to each other to combine them into a story, creating in-between animations so we had a fully animated story from those tombstones,” Levy explains. In instances such as this, the group extracted displacement and bump maps from images of the actual stones, converted that displacement information into a polygonal model in Maya, and rendered the imagery using Mental Images’ Mental Ray within Maya. Later, the group used Autodesk’s Discreet Flame to composite the imagery into the live action.
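The displacement workflow Levy describes—turning image-derived height data into geometry—can be sketched in miniature. This is a generic stand-in, assuming a grayscale heightmap and a simple quad grid; it is not Gravity’s actual Maya pipeline.

```python
def heightmap_to_mesh(heights, scale=1.0):
    """Convert a 2D grid of grayscale height samples (0-255), such as one
    derived from a photograph of a carved stone, into displaced mesh
    vertices plus quad faces. A minimal illustrative stand-in for a
    displacement-to-polygons conversion."""
    rows, cols = len(heights), len(heights[0])
    # One vertex per sample; Z carries the displaced relief height.
    vertices = [(x, y, heights[y][x] / 255.0 * scale)
                for y in range(rows) for x in range(cols)]
    # Quads connect each 2x2 neighborhood of samples.
    faces = [(y * cols + x, y * cols + x + 1,
              (y + 1) * cols + x + 1, (y + 1) * cols + x)
             for y in range(rows - 1) for x in range(cols - 1)]
    return vertices, faces

# A 2x2 heightmap yields 4 vertices and a single quad.
verts, quads = heightmap_to_mesh([[0, 255], [128, 64]], scale=2.0)
```

In production, the resulting dense mesh would then be shaded with the matching bump map and rendered, here by Mental Ray, so the carved relief reads correctly under moving light.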
In addition to the museum and models, numerous effects—water, fire, and more—collectively taxed the rendering. Throughout the project, Gravity had a renderfarm of 40 Dell CPUs running around the clock, yet the group still had to render a good portion of the museum in layers. The water, used inside a clear “cubicle” of the museum as well as in a number of animations, was created using Maya’s Ocean shader. The group also used Maya particles and Maya Fluid Effects for the fire and smoke elements, and the particle instancer for a number of objects, such as the locusts in the depiction of the 10 plagues. “We used everything Maya had to offer,” Levy says. For the locusts, the group created a Maya script that dynamically rendered nearby locusts with a high-detail mesh and more distant ones with a low-poly stand-in.
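The distance-based mesh swap behind that locust script can be sketched as follows. The mesh names and threshold here are hypothetical, and the real script ran inside Maya’s particle instancer rather than plain Python.

```python
import math

# Assumed, illustrative mesh identifiers -- not Gravity's actual assets.
HIGH_DETAIL, LOW_POLY = "locust_hi", "locust_lo"

def pick_lod(camera, locusts, threshold=25.0):
    """Return (position, mesh) pairs: locusts within threshold units of
    the camera get the high-detail mesh; the rest get the low-poly
    stand-in, keeping render cost proportional to what is visible."""
    choices = []
    for pos in locusts:
        dist = math.dist(camera, pos)  # Euclidean distance (Python 3.8+)
        mesh = HIGH_DETAIL if dist < threshold else LOW_POLY
        choices.append((pos, mesh))
    return choices

# One locust near the camera, one far away in the swarm.
swarm = [(0, 0, 5), (0, 0, 80)]
print(pick_lod((0, 0, 0), swarm))
```

Evaluated per frame, a swap like this keeps a swarm of thousands renderable while the handful of locusts near the lens still hold up in close-up.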
When possible, real artifacts were used to craft the 3D models. The CG Ark of the Covenant was based on biblical references and a piece of ancient jewelry.
The artists also built a unique time machine. Many of the events have disputed time frames, most falling between 1500 and 1200 BC; so, when an expert began talking about a certain event, the massive time machine beside the video monitors provided a visual date reference for the audience.
For this documentary specifically, “CG became an excellent storytelling tool; it helped the director visually explain a very complicated story,” says Levy. So much of the story depends on the filmmaker’s ability to connect many disparate events, and without the use of CG, many of those connections would have been confusing or not easily understood. Adds Tsarfati: “We discovered while making this film that if you provide information or details differently than the eye and mind are used to, a person will be more accepting of the information. It makes the information easier to understand and more appealing.”
Undoubtedly, that’s because today’s society is a visual one. “People are used to going to the movies and seeing special effects and visual aids,” says Tsarfati. “Today’s audience is ready for documentaries to be served a different way.”
Karen Moltenbrey is the chief editor for Computer Graphics World.