It was the summer of 2000. We had survived Y2K, and I was busy working on David Fincher's Panic Room. We were attempting to previs the entire film before the start of principal photography. This was detailed and meticulous work. Next to my office was the editor, who was responsible for cutting the previs together into an exciting cat-and-mouse suspense thriller. Farther down the hall were the director's office and a small screening room.
It was late. Most people had gone home. I was working furiously to complete a few more shots before wrapping it up for the day. From down the hall I heard David shout for me, "Ronnie!" I found him in the screening room playing Madden NFL. He had just completed a reception and was flying a camera around while watching the instant replay. His question to me: Why couldn't he do this with the previs? Why couldn't he grab his game controller, move the camera around, and design his own shots of Meg, Burnham, Raoul, and Junior?
It was a great question. And now, over 20 years later, it's a great time to answer it.
What Fincher recognized that evening was the power that technology brings to the visualization process. Game engines, like the one he was using, are designed to run in real time, and that is far faster than rendering animation frame-by-frame and then playing it back. Real-time interaction feels more natural and intuitive.
The problem was, in the early 2000s, there was nothing intuitive or natural about using game engines for anything other than developing games.
People were talking about using game engines for previs, and early experiments were attempted using emerging tools like the XSI Viewer. Still, the workflows were too complex for the quick turnaround world of feature-film previs. Developers needed to write code and test features. Then they needed to train artists on the new animation techniques. All of which might be feasible with months to design a pipeline, but our projects were measured in weeks, with a lead time of days.
Fast forward to today and you have a very different technological landscape. Motion-capture technology has advanced and matured. Game engines like Unity and Unreal are common in industries ranging from architecture to automotive design to biomedical research. And graphics cards now carry more memory and computing power than high-end workstations from a decade ago.
So, how has all of this changed previs? What would my work on Panic Room look like today?
The first answer is it would look a whole lot better. Better not because it would look more realistic, but better because I could choose how it would look. Back in 2000, I had very few choices for visual style. I took what I could tease out of my Windows NT workstation running an early version of Softimage XSI, and that was that.
Now, with more powerful hardware and software, the visual style is a choice. And with game engines like Unreal, we can apply different looks in real time based on the creative goals of the project. It might look realistic, with natural lighting and atmospheric effects. It could also look more hand-drawn, like Proof's work for The Blacklist Season 7 finale (see "Hybrid Drama," Issue 2, 2020).
Filming for the episode was cut short due to the pandemic. That left the studio with little more than half the episode completed and no practical way to finish it. Jon Bokenkamp, the show's creator, wanted to push the graphic novel, film-noir feel of the show, so Proof's team dialed in a visual style, and a few weeks later our previs was aired on national television. Better-looking previs means it is tailored to the taste and style of the project and the filmmakers. The "look" is a creative choice in a field of creative choices.
At the other end of the spectrum is our work on Amazon Prime's recently released The Tomorrow War.
Proof joined the VFX team on The Tomorrow War during postproduction as they were building the director's cut. The bulk of the work was postvis – adding CG creatures, effects, and backgrounds to the practical plates. We needed to make the shots as believable as possible, while also ensuring a lightning-fast turnaround so the director, who is also an editor, could experiment with the animation and craft the story he wanted to tell. That meant delivering multiple iterations of shots, all with excellent creature animation and nuanced lighting so the creatures felt integrated with the plates.
Reaching a higher quality was particularly important for the lab sequence where the creature slowly wakes up and begins to struggle against the chains holding it down. This is a quiet, intimate, and tense cinematic moment. Our animators had to dig deep, imbuing the creature with a sense of emotion and purpose as it slowly becomes aware of the danger it's in.
We developed a lightweight rig, giving our animators the subtle control they needed with the creature's limbs and appendages, while also keeping it simple enough to work quickly. The rig was custom-built for the show, although we managed to repurpose an old set of "vines" from another film and transform them into the chain harness that imprisons the creature. These shots go beyond simple temp comps. They are early animation passes that evoke mood, tone, feeling, and story.
While the lab sequence showcased our ability to create a dramatic moment, other sequences were all about the chaos of war. For those, our team used MASH within Autodesk Maya to animate hordes of creatures, then layered in specific keyframed performances to tie the scenes together. We augmented the creature rig with multiple "damage" states to reflect injuries incurred over the course of battle.
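The article doesn't detail how those damage states were driven, but the general idea can be sketched in a few lines of Python: each crowd agent accumulates hits over the course of the battle, and a threshold table maps that count to a discrete rig state, which a keyframed layer can then override. The state names and thresholds below are hypothetical, purely for illustration.

```python
# Hypothetical sketch: maps an agent's accumulated hit count to one of the
# rig's discrete "damage" states. Names and thresholds are invented.
DAMAGE_STATES = ["intact", "wounded", "crippled", "dead"]

def damage_state(hits, thresholds=(1, 3, 5)):
    """Return the rig damage state for a given number of hits taken."""
    for state, limit in zip(DAMAGE_STATES, thresholds):
        if hits < limit:
            return state
    return DAMAGE_STATES[-1]

# Crowd agents accumulate hits as the battle sequence plays out.
agents = {"creature_01": 0, "creature_02": 2, "creature_03": 7}
states = {name: damage_state(h) for name, h in agents.items()}
print(states)
# {'creature_01': 'intact', 'creature_02': 'wounded', 'creature_03': 'dead'}
```

In a production rig this lookup would simply switch blend shapes or geometry variants per agent, so the procedural crowd and the hand-keyed hero performances share one asset.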
We used hardware rendering for the CG elements, combining Autodesk Arnold shaders with the Maya Viewport 2.0 renderer to achieve fast renders with sophisticated lighting. And every shot was finished using Foundry's Nuke, where Proof's compositors could focus on color and integration. The movie was filmed using anamorphic lenses, so great care was devoted to extracting the lens distortion for animation and rendering, then re-applying it in Nuke to match the film's final look.
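As a rough illustration of that distortion round trip (the real solve would come from a lens-calibration tool, and the coefficients here are invented), a simple radial model can be inverted so CG is animated and rendered in undistorted space, then the same distortion is re-applied in comp to match the anamorphic plate:

```python
# Illustrative only: a simple radial (Brown-Conrady style) distortion on
# normalized, center-origin image coordinates. k1/k2 values are made up.
def distort(x, y, k1=-0.05, k2=0.01):
    """Apply radial lens distortion to a normalized point."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

def undistort(x, y, k1=-0.05, k2=0.01, iterations=10):
    """Invert the distortion numerically via fixed-point iteration."""
    xu, yu = x, y
    for _ in range(iterations):
        r2 = xu * xu + yu * yu
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = x / scale, y / scale
    return xu, yu

# Round trip: undistort a tracked plate point, work in undistorted space,
# then re-apply the distortion so the CG lines up with the filmed plate.
px, py = 0.8, 0.45
ux, uy = undistort(px, py)
rx, ry = distort(ux, uy)
print(round(rx, 6), round(ry, 6))  # 0.8 0.45 -- recovers the original point
```

In practice the undistort/redistort is handled by dedicated nodes in the compositing package rather than hand-rolled math, but the principle is the same: animate and light on straightened images, then restore the lens character at the end.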
Achieving a compelling cinematic look was essential because the postvis remained in the edit for months and was used for multiple screenings with the filmmakers, studio executives, and test audiences. The shots had to be believable without sacrificing turnaround time.
Time is another big difference in previs. While it's true that on Panic Room we were attempting to previs the entire feature film, the reality is we ran out of time. And that film was very contained: five characters trapped inside a townhouse over the course of one night, with only a handful of visual effects shots. The films we're working on now involve ensemble casts, multiple locations, and over-the-top visual effects sequences.
Take Proof's recent work on F9. This was a project that spanned every aspect of visualization – from story development with the filmmakers, to very detailed technical visualizations to aid in the filming of complex action shots, to version after version of postvis for editorial, to completing dozens of visual effects shots that are in the final film. Previs alone covered six sequences totaling over 3,000 individual shots ranging from Southern California to Scotland, Eastern Europe, and beyond.
And for postvis, we completed over 700 shots touching nearly every sequence in the film. In some cases, we were replacing bluescreens with CG set extensions. In others, we were generating complex animation and effects to embed actors filmed practically into entirely computer-generated worlds.
Our work on F9 spanned 26 months with a team that averaged 12 artists. Panic Room was ambitious in its attempt to previs an entire film. The visualization work for F9 was epic, its scope touching every aspect of the film – from preproduction, through the shoot, and deep into post. That level of involvement is typical on blockbuster films. Previs is an essential component of the creative decision-making process from beginning to end.
And that gets me to the true answer to the question. The significant change in previs is not how much work we are able to produce, and it's not what the visualization work looks like; it's about how it feels.
My working relationship with Fincher on Panic Room was call-and-response. He would call out changes, and I would respond and get them done. Sometimes, however, that response would take a few minutes, or perhaps a few hours. In rare instances, it might even take a day or more. The goal was to make the feedback loop as small as possible, but it was always a loop. Now the goal is to remove the loop, engaging the filmmakers directly in their creative decision-making process. To hand them the game controller and let them create their own shots.
To accomplish this, we employ an array of real-time technologies that allow us to capture action and record it as animation. We suit up performers in motion-capture suits and have them drive CG characters that are embedded in the story world of the film. The director can step in and block the action as if it's happening for real. We hand the director or cinematographer a virtual camera and let them compose the shots themselves. They can experiment with composition and coverage, and then play back the sequence in real time to evaluate what's working and what isn't.
It's what Fincher saw that night in 2000 – using real-time technologies to engage and interact with the animation as a living document.
Visualization is all of this. It is reaching for higher quality, while doing so much more, and incorporating new technologies that either minimize, or even sidestep, the iterative loop. Interestingly, what hasn't changed is the "why." Previs has always been about creative communication, ideation, and technical problem-solving in the service of telling better stories. How we do this work at Proof has definitely changed, and for the better, but why we do it will always remain the same.
Ron Frankel is the founder and president of Proof Inc., and partner/managing director of Proof London Ltd. Founded in 2002, Proof is the original visualization studio dedicated to providing the highest-quality visualization services for the feature film, broadcast, and immersive entertainment industries.