'Dead Space 3'
February 11, 2013

Digital Domain creates a unique TV spot that combines live action, gameplay, and CGI for the Dead Space 3 title.
When Mothership and Digital Domain created a TV spot to launch the third chapter of EA's acclaimed Dead Space franchise, they wanted to engage the audience. To do so, they used a combination of live action, gameplay, and high-end CGI.

The spot, "Take Down Terror," was directed by Mothership's Neil Huxley for Draftfcb San Francisco. The work was produced by Academy Award-winning VFX studio Digital Domain. 


Huxley, director of numerous game launch spots, took Draftfcb's concept and approached it like a short film, shooting with live action and virtual cameras to bring greater humanity to Isaac Clarke's character and a sense of realism to the piece. Digital Domain, under the direction of VFX Supervisor Aladino Debert, used its sophisticated VFX pipeline to create the snowy world of the Necromorph planet and its cinematic lighting.

Huxley shot Gunner Wright (the actor who plays Isaac Clarke in the Dead Space franchise) performing the role, and also did a traditional motion-capture shoot to record his body action. Together with Digital Domain, he then did a virtual camera shoot to capture the character walking in snow, filming the CG environment as if it were a real location.



All of the shoots took place at Digital Domain's in-house virtual production studio. Digital Domain then applied its expertise to create sophisticated, realistic-looking snow, fog, and heavy particle effects to make it feel as if Clarke were in an icy windstorm, and then seamlessly blended the live-action face with the CG environments.

They took Clarke's suit, a low-resolution game asset, and integrated it into the environment by adding ice and frost to the mask and having the character breathe visible vapor.

Here, Mothership Director Neil Huxley and Digital Domain VFX Supervisor Aladino Debert detail the work.

How did the decision to use a short-film concept affect how the animators approached the project?

Huxley: We try to implement our virtual production pipeline on CG projects whenever it's appropriate, treating them like live-action film productions. This more sophisticated approach means that instead of just producing a bunch of shots that don't really tie together, you end up with a cohesive visual piece that shares assets across the show, which lets us create a better end product.



Creatively, all of the artists have an investment in the project. The story here has three acts, and the characters have story arcs, the same as in a real film, so it engages everyone involved in the creation on a deeper level. I shoot using our virtual camera system, which takes responsibility for the camera away from the animators and gives it to the director. That lets us get the desired shot faster, in real time, with minimal cleanup required, rather than me sitting behind an animator noodling with chopsticks, which can be very tedious.

How much is CG vs. live action? And what, exactly, is CG in the spot?

Debert: The body/suit was completely CG in all shots, so with the exception of Gunner's head in the end shots, everything is digital in some way.

For the environments, we always try to start from photographic references, simply because our end goal is realism. So we created digital matte paintings for the far backgrounds (Shannan Burkley was the artist), which were then projected onto 3D geometry and re-rendered in our lighting environments. The rest of the environments, particularly the mid-ground and close-ups, were fully 3D.
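Conceptually, the projection step Debert describes is standard camera math: each point on the 3D geometry is pushed through the painting camera's view and projection matrices to find where it lands in the matte painting. Here is a minimal sketch in Python with NumPy, purely an illustration of that math rather than Digital Domain's pipeline code; the function name and matrix conventions are assumptions.

```python
import numpy as np

def project_to_painting_uv(points, view, proj):
    """Map world-space points on the geometry to UVs in the matte painting.

    'view' and 'proj' are the 4x4 matrices of the camera the painting was
    projected from (column-vector convention assumed). Returns UVs in [0, 1].
    """
    homogeneous = np.c_[points, np.ones(len(points))]   # N x 4 world points
    clip = homogeneous @ (proj @ view).T                # world -> clip space
    ndc = clip[:, :2] / clip[:, 3:4]                    # perspective divide
    return ndc * 0.5 + 0.5                              # [-1, 1] -> [0, 1] UVs
```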

We utilized a custom shader for the snow and rocks (perfected by lighter Casey Benn) that allowed our artists to art-direct the ratio between snow and bare rock by tweaking the normals of the geometry. It made for a very flexible pipeline. 
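The idea behind a shader like that is easy to see in isolation: how squarely a surface point faces "up" decides whether it reads as snow or bare rock, which is why nudging the geometry's normals shifts the ratio. A minimal sketch in Python with NumPy follows; the controls 'threshold' and 'softness' are hypothetical stand-ins, not the knobs in Casey Benn's actual shader.

```python
import numpy as np

def snow_blend(normals, up=(0.0, 1.0, 0.0), threshold=0.4, softness=0.2):
    """Per-sample blend factor: 0.0 = bare rock, 1.0 = full snow cover.

    Upward-facing surfaces accumulate snow; steep faces stay rocky, so
    art-directing the normals directly changes the snow-to-rock ratio.
    """
    n = np.asarray(normals, dtype=float)
    n = n / np.linalg.norm(n, axis=-1, keepdims=True)    # unit normals
    facing = n @ np.asarray(up, dtype=float)             # cosine of slope
    t = np.clip((facing - threshold) / softness, 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)                       # smoothstep edge
```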



All of our effects were digitally simulated by Erik Ebling and Karl Rogovin.

What tools did you use?

Debert: Our pipeline consisted of Mudbox for modeling and texturing; Photoshop for texturing and matte paintings; Maya for rigging, animation, and effects simulations; MotionBuilder in our virtual production pipeline as a conduit between motion-capture data and Maya; V-Ray for rendering; and Nuke and Flame for compositing and final color correction.

What were your biggest challenges?

Huxley: Creating interesting snow scenes is tough, and the last thing we wanted was white scene after white scene without visual interest. We wanted a monochromatic look; that was a conscious design choice. Raking light across the scenes would create harder shadows and contrast, and the grey was a good lighting reference for us. We didn't want bright blue skies. Tau Volantis is a hostile environment, and we wanted to represent that fully.

Integrating the live-action head was also a challenge, but one we relish. When we have a live-action component, it means we get a chance to push the CG to that photo-real level so everything sits nicely together in our world. These are the shows I love working on the most.

EA wanted the skies toward the end of the piece to be angrier and stormier, more like the game reference we had, but we still wanted to keep it in our realistic, photographic world. The team did a great job of mixing the game reference with our own photographic, real-world reference to create a very believable climax for the environment.



Our usual process consists of creating a series of storyboards and a board-o-matic to guide our motion-capture session, which is done in DD's virtual production studio by Gary Roberts and his team. Once that session is completed, an edit with selects is created. With that edit in hand, we use our virtual camera system to create realistic cameras for each shot.

From there, the production moves into Maya, where the initial model and rig were created. In Maya, our animators layer in all the nuanced animation that motion capture can't produce (weight, hands, fur, and, in this case, helmet animation). Once the animation is approved, all lighting is done with our custom HDR-centric pipeline in V-Ray. Concurrently, all the effects are created using traditional particle systems in Maya. At the end, everything is put together in Nuke, with the final touches added in Flame.
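For a sense of what a traditional Maya particle setup looks like in practice, here is a minimal falling-snow sketch in Maya Python. The object names and field values are illustrative only, not taken from the production scene, and the script is meant to run in Maya's script editor.

```python
import maya.cmds as cmds

# Emit particles from a point; an omni emitter sprays them in all directions.
emitter = cmds.emitter(type='omni', rate=2000, speed=0.5, name='snowEmitter')[0]
particles, particle_shape = cmds.particle(name='snowParticles')
cmds.connectDynamic(particles, emitters=emitter)

# Gravity pulls the flakes down; turbulence breaks up the fall so it reads
# as wind-driven snow rather than a uniform curtain.
gravity = cmds.gravity(name='snowGravity', magnitude=0.8)[0]
wind = cmds.turbulence(name='snowWind', magnitude=4.0, frequency=1.5)[0]
cmds.connectDynamic(particles, fields=[gravity, wind])
```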