An Action-Packed Landing
Karen Moltenbrey
October 22, 2015

A recent project by Axis Animation resulted in three and a half minutes of explosive free-fall madness. The animation, the opening cinematic to Halo 5: Guardians, the next installment in the legendary saga of Master Chief, was produced for Microsoft and 343 Industries.

Here, three members of the Axis team – Debbie Ross, executive producer; Stu Aitken, creative director; and Sergio Caires, CG supervisor – take readers behind the scenes of the piece that helps set the tone for the new game.

Who approached you for the cinematic? 

Ross: We were approached by 343 Industries Cinematic Director Brien Goodrich about the project. We had developed a great relationship with the 343 Industries team over a number of different projects for the Halo franchise, and it was great to get another opportunity to work with the team.

What was the directive? 

Aitken: For the opening cinematic, the brief was a bit different from our usual projects. Brien and his team had been planning the sequence for a while and had already started working on some previs. Our job was to integrate into 343 Industries' layout and animation pipeline, take their finished animation scenes, and make them look amazing. We created the detailed and complex environment assets, shaded all the assets, created all the visual effects, and provided all the lighting and picture finishing.

How long did it take to do? 

Ross: The project was part of a bigger set of work that took around six months. 

How many people worked on this?

Ross: Our team size probably averaged 20 people across the whole timeframe of the project. 

Did you use any game footage or models directly?

Aitken: There is no game footage in the sequence, but we did get access to certain examples of in-game elements, such as effects, to make sure we tied the look of the game into the opening cinematic.

Most of the character models were supplied by the 343 Industries modeling team, and their character pipeline is very similar to ours, so they were fairly straightforward to integrate into our shading process.

Did you rework the models? If so, please detail the process. 

Caires: Our shaders support normal-mapped assets, so we can closely replicate the real-time shading if we want to. Naturally, we take things up a notch depending on how well the assets hold up and what the timescales allow. In this case, they were holding up very well, so we used some procedural shading tools to add detail and nuance where the maps start to run out, making the assets a little more 'lived in,' with a bit of dirt gathering in corners, a few more chipped edges, and fine scratches that come into play in close-up shots.
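
Caires describes this wear pass only at a high level; below is a minimal sketch of how such procedural masks are commonly driven. The inputs (per-point curvature, occlusion, a scratch noise) and the exponents are illustrative assumptions, not Axis' actual tools.

```python
# Hypothetical wear masks driven by geometry signals assumed precomputed
# upstream (curvature and occlusion bakes are standard in lookdev pipelines).
def wear_masks(curvature, occlusion, scratch_noise):
    """curvature: signed, positive on convex edges; occlusion: 0 (open) to
    1 (fully occluded); scratch_noise: high-frequency noise in [0, 1]."""
    chipped_edges = max(curvature, 0.0) ** 2    # convex edges chip first
    dirt = occlusion ** 1.5                     # dirt gathers in corners
    scratches = scratch_noise * (1.0 - dirt)    # scratches on exposed areas
    return chipped_edges, dirt, scratches

# An exposed convex edge picks up chips and scratches but little dirt.
print(wear_masks(curvature=0.8, occlusion=0.1, scratch_noise=0.6))
```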

Of course, these characters need to have hair, and this is an area where typically there can be no direct re-use, but very luckily 343 Industries were using the same techniques we normally use for hairstyles, so we were able to extract key hair strands to input into our highly customized hair tools as usual. 

What software did you use for the content creation process?

Caires: At Axis, we use pretty much any software for modeling. However, the mainstays are The Foundry’s Modo, Autodesk’s Maya, and Pixologic’s ZBrush. We also use Maya for animation/rigging, and Side Effects’ Houdini for shading, lighting, effects, and rendering, with Blackmagic’s (formerly Eyeon’s) Fusion as our compositing tool.

Over the years, we have developed our Maya facial rigging tools to allow animators to deliver the nuance and quality we are constantly striving for. These tools are largely automated and very flexible and scalable – the more information/shapes we put in, the better the result, without modification to the tools. They also hook up perfectly with the various facial mocap suppliers we use.

For modeling, it's basically a free-for-all: use whatever you like, so long as it can export FBX files, which we then pass through a publishing process that converts them into an Alembic file and adds a few attributes our shaders expect to see (material assignments, rest position). The idea is that a Houdini asset can be created immediately, in parallel with other pipeline stages. Of course, we also automate the creation of shaders based on the assignments already present on the mesh. Texture publishing follows a similar process, so that we can largely automate the time-intensive texture-assignment aspect of shading.
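
As a concrete illustration of that auto-shading step, here is a minimal sketch in Houdini's Python API (it only runs inside a Houdini session). The node and attribute names ('alembic', 'material', 'principledshader', 'shop_materialpath') are stock Houdini conventions that vary by version; the structure of Axis' actual publisher is an assumption.

```python
import hou  # available only inside Houdini

def build_asset(abc_path):
    """Load a published Alembic and bind one shader per material assignment."""
    geo = hou.node('/obj').createNode('geo', 'published_asset')
    abc = geo.createNode('alembic')
    abc.parm('fileName').set(abc_path)

    # Read the material-assignment attribute stamped at publish time.
    gdp = abc.geometry()
    if gdp.findPrimAttrib('shop_materialpath') is None:
        return geo
    assignments = sorted(set(gdp.primStringAttribValues('shop_materialpath')))

    # One principled shader per unique assignment, bound via Material SOPs.
    last = abc
    for path in assignments:
        shader = hou.node('/mat').createNode('principledshader',
                                             path.rsplit('/', 1)[-1])
        mat = geo.createNode('material')
        mat.setFirstInput(last)
        mat.parm('group1').set('@shop_materialpath=' + path)
        mat.parm('shop_materialpath1').set(shader.path())
        last = mat
    last.setDisplayFlag(True)
    return geo
```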

If the textures follow physically-based values, then shading can essentially be done as soon as the textures are loaded. Long ago, we came up with a shader parameterization similar to the one since popularized by Disney: the metalness workflow. This is an extremely efficient way to shade, because with as few as three textures (roughness, albedo, metalness), a shader is often completely done as soon as the maps go in.
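
The efficiency Caires describes comes from how little the metalness parameterization leaves to interpretation. A minimal sketch of the standard derivation (not Axis' shader code):

```python
# With three maps, the surface response is fully determined: metals get no
# diffuse and a colored specular; dielectrics get albedo diffuse and ~4%
# neutral specular reflectance at normal incidence.
def metalness_lobes(albedo, metalness, roughness, dielectric_f0=0.04):
    """Derive diffuse color and specular reflectance (F0) per texel.
    albedo is an RGB tuple; metalness and roughness are scalars in [0, 1]."""
    diffuse = tuple(c * (1.0 - metalness) for c in albedo)
    f0 = tuple(dielectric_f0 * (1.0 - metalness) + c * metalness for c in albedo)
    return diffuse, f0, roughness

# A half-worn metal texel (metalness 0.5) blends both behaviors.
print(metalness_lobes(albedo=(0.9, 0.6, 0.3), metalness=0.5, roughness=0.35))
```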

Following a physically based approach is something we have found greatly helps achieve much better quality, and it is faster and easier. The alternative is the exact opposite: artists each carry their own wrong theories as to why the final output looks the way it does, which of course differ among non-technical artists and commonly cause consistency and rendering problems in the process.

Houdini is used for all effects and lighting, with rendering in Houdini's Mantra. We have a history of being very, very early adopters of Houdini technology for creating hyper-real and photoreal fully animated projects – starting as far back as Houdini 8. I firmly believe there is currently no better fully integrated environment in which to render complex projects. Everything about it makes the process very flexible and controlled. Contrary to popular belief, despite the immense power and complexity you can optionally unravel, it is very easy software for lighters and shading artists to pick up. We generally find artists who have never used Houdini are up and running fairly fluently within a week.

Did you have any of the character actors digitally scanned?

Ross: The characters were digitally scanned; however, this was done by the 343 Industries modeling team.

Did you use motion capture?

Ross: The motion capture was shot at Profile Studios (formerly Giant Studios). Both 343 Industries and Axis have developed a great relationship with the team at Profile over the years, and their new studio gives us everything we need.

What were some of the major technical challenges you faced? 

Caires: One of the main challenges was that all animation in this scene was being supplied by 343 Industries at regular intervals, right up to our final stages. This is something we had not done before and, of course, it is not as easy to manage dependency-wise compared to working with our own in-house animation department. Our pipeline TDs did a great job figuring out how to translate data from a completely different pipeline geared toward game export into our own caching pipeline. The process was in constant revision throughout, but it got figured out in the end.

Of course, it was also challenging due to the fact we jump out of a dropship and fly for miles with no camera cuts in an open environment where there are hundreds of characters, vehicles, and ships engaging in a massive battle. 

This level of complexity meant that we could not follow a traditional workflow. It's the kind of project I really like, where you need a radical approach for it to be even possible. I reasoned it would have to rely almost completely on a procedurally generated, integrated environment in order to have coverage over the gigantic area. 

We took delivery of very low-resolution ground geometry from 343 Industries, and I set about converting it into high-resolution VDB volumes, which were then processed with custom procedural noises to add rocky details and overhangs.

Naturally, VDB volumes could never be high resolution enough to do it all, so these procedural noises were also used to apply render-time displacements. In the end, the terrain was 100% procedurally generated and textured, apart from masks used to localize small changes – for example, making sure characters aren't walking through rocks, or moving the ground to meet their feet (since the contacts were animated against low-res, unsubdivided terrain).
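
The production displacement lived in Houdini's shading context, but the underlying idea is easy to show. Here is an illustrative numpy sketch of multi-octave (fBm) noise displacing points along their normals, so detail keeps appearing where the VDB resolution runs out; the noise function and constants are assumptions, not the actual terrain shader.

```python
import numpy as np

def value_noise(p, seed=0):
    """Cheap lattice value noise, smoothly interpolated, for (N, 3) points."""
    def hash3(q):
        r = np.sin(q @ np.array([12.9898, 78.233, 37.719]) + seed) * 43758.5453
        return r - np.floor(r)
    i, f = np.floor(p), p - np.floor(p)
    f = f * f * (3.0 - 2.0 * f)  # smoothstep fade
    n = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                corner = hash3(i + np.array([dx, dy, dz]))
                w = (np.where(dx, f[:, 0], 1 - f[:, 0])
                     * np.where(dy, f[:, 1], 1 - f[:, 1])
                     * np.where(dz, f[:, 2], 1 - f[:, 2]))
                n = n + corner * w
    return n

def fbm_displace(points, normals, octaves=5, amplitude=1.0,
                 lacunarity=2.0, gain=0.5):
    """Displace points along normals by fractal noise: each octave doubles
    the frequency and halves the amplitude, adding ever-finer rock detail."""
    total, freq, amp = np.zeros(len(points)), 1.0, amplitude
    for _ in range(octaves):
        total += amp * (value_noise(points * freq) - 0.5)
        freq *= lacunarity
        amp *= gain
    return points + normals * total[:, None]

# Example: displace a few upward-facing terrain samples.
pts = np.random.rand(4, 3) * 10.0
nrm = np.tile(np.array([0.0, 1.0, 0.0]), (4, 1))
print(fbm_displace(pts, nrm, amplitude=0.5))
```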

It also posed challenges with regard to atmospherics and lighting. Luckily, I had done some R&D on a planetary atmospheric-scattering shader shortly before this project. That R&D became a planet Earth tool/asset, where the camera can go from space, then fly through the atmosphere and detailed clouds down to ground level. The main cloud base follows a similar ethos to the terrain shader – an image is used as input (40k Blue Marble data), then various distortions and composited procedural noises take over where the close-up resolution breaks down. Further down, we used procedurally generated cloud volumes that we could place and shape in a more art-directed manner.
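
The "image drives the big shapes, noise takes over close up" idea reduces to a level-of-detail blend. A small sketch, where 'base' stands in for a Blue Marble lookup and 'detail' for an fBm value like the terrain one; the footprint estimate and constants are assumptions:

```python
def cloud_density(base, detail, camera_distance_km, texel_size_km=0.5):
    """Blend mapped cloud coverage with procedural detail by level of detail.
    base: coverage sampled from the source image, in [0, 1]
    detail: procedural noise value at this point, in [0, 1]"""
    # Roughly how fine a feature the camera can resolve at this distance.
    resolvable_km = max(camera_distance_km * 0.001, 1e-6)
    # 0 while the map out-resolves the view (far); 1 once it runs out (close).
    t = min(max(texel_size_km / resolvable_km - 1.0, 0.0), 1.0)
    # Far away: the map alone. Close up: the map modulates procedural detail.
    return base * (1.0 - t) + (base * detail) * t

# From 1,000 km the map suffices; at 1 km the noise carries the fine detail.
print(cloud_density(0.7, 0.4, 1000.0), cloud_density(0.7, 0.4, 1.0))
```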

All of this – the ground, the animated assets, and the effects – was lit by scattered light from the sun as it passes through the atmosphere. The light being white and the sky dark above the atmosphere, smoothly transitioning to sunrise sky and sun colors at ground level, is a product of the atmospheric-scattering sky shader. That, incidentally, is surprisingly simple to build in an offline-rendering context! It's not a completely purist approach, though: to keep render times manageable, the assets and terrain are lit by an environment sky light rather than gathering indirect light from the volumetric sky.
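
To give a flavor of why a single-scattering sky is "surprisingly simple" offline, here is a toy Rayleigh-only ray-marcher: it reproduces the white-sun/dark-sky look up high and warm colors at grazing sun angles. The constants are textbook Earth values and the structure is generic, not Axis' production shader.

```python
import numpy as np

EARTH_R, ATMO_R = 6371e3, 6471e3               # planet/atmosphere radii (m)
BETA_R = np.array([5.8e-6, 13.5e-6, 33.1e-6])  # Rayleigh coefficients (RGB)
H_R = 8000.0                                   # Rayleigh scale height (m)

def exit_distance(o, d, radius):
    """Distance along ray o + t*d to its far intersection with a sphere."""
    b = np.dot(o, d)
    disc = b * b - (np.dot(o, o) - radius * radius)
    return -b + np.sqrt(max(disc, 0.0))

def optical_depth(o, d, steps=16):
    """Integrated Rayleigh density from o out to the top of the atmosphere."""
    ds = exit_distance(o, d, ATMO_R) / steps
    t = (np.arange(steps) + 0.5) * ds
    h = np.linalg.norm(o + t[:, None] * d, axis=1) - EARTH_R
    return np.sum(np.exp(-h / H_R)) * ds

def sky_color(view_dir, sun_dir, steps=32):
    """Single-scattered radiance toward a viewer standing at sea level."""
    o = np.array([0.0, EARTH_R + 1.0, 0.0])
    ds = exit_distance(o, view_dir, ATMO_R) / steps
    mu = float(np.dot(view_dir, sun_dir))
    phase = 3.0 / (16.0 * np.pi) * (1.0 + mu * mu)  # Rayleigh phase function
    color, depth_view = np.zeros(3), 0.0
    for i in range(steps):
        p = o + (i + 0.5) * ds * view_dir
        dens = np.exp(-(np.linalg.norm(p) - EARTH_R) / H_R) * ds
        depth_view += dens
        trans = np.exp(-BETA_R * (depth_view + optical_depth(p, sun_dir)))
        color += trans * dens
    return color * BETA_R * phase * 20.0            # arbitrary sun intensity

up = np.array([0.0, 1.0, 0.0])
dawn = np.array([0.0, 0.05, 1.0]); dawn /= np.linalg.norm(dawn)
print(sky_color(up, up))    # high sun: blue-dominated sky
print(sky_color(up, dawn))  # grazing sun: blue scattered away, warmer result
```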

Then, of course, there were the effects! Obviously, there are fluid simulations all over the place and a ton of PBD particle work for the snow interaction. We crammed in everything that we could in the timescale we had. This was definitely a case of more is best. I have to congratulate our effects guys – Jayden Patterson, Ola Hamletsen, and Hudson Martin – for the monumental effort they put into this.

What is so unique about this project? 

Aitken: I think the unbroken camera move is what makes this project unique. These types of unbroken takes can make things very challenging indeed. But Axis has done a couple of these monster one-shots before, so I knew in advance that we had our work cut out for us.

You lose the easy structure that normal cuts and shots give you in terms of required frame ranges for all sorts of things, so you end up finding other ways of dividing the work up structurally. For example, a whip pan might take the camera off in a new direction, we might go over a cliff edge, or a vehicle might wipe across the shot. These sorts of moments allow us to bookend various bits and pieces within the overall sequence and to split the work out to various people who can work on those sections in parallel.

The main issue was probably the iterations required to get good enough feedback back to 343 Industries. Generally, it's always ideal to finesse final animation based on properly lit renders, as they tend to lend a weight to the performances that is different from looking at simpler playblasts (even with the more advanced Viewport 2.0 tools in Maya). Repeatedly stitching all the separate previews of various lighting and effects work back together at our end so they could see 'the whole thing together' was quite tough – especially as various elements would be at very different stages of completion in the middle of the production period. Rendering time for such a long sequence is also a factor (8,000 frames, multiple layers), and there were a few cases where making a suitable change at one point in the sequence would have some kind of adverse effect elsewhere, which we wouldn't discover until it came off the renderfarm.

Lastly, I would say the insane amount of action happening all the time was a challenge in itself. We had to make sure the audience could read it properly – balancing the look of what was on screen, which changes every moment, so the important elements stood out while you could still 'delve' into the frame to see other things in the background. You had to be able to fully grasp what was going on at every point, and that was quite a tricky balance. More than once, Brien would point something out to me that wasn't quite where it should have been, and that would be the first time I had even noticed it – some cool little moment that Greg had added in the last animation update. There was simply so much going on. I'd say it's definitely a piece that will reward repeat viewings!

Any other details you would like to point out about the project?

Ross: This project proved to be a great collaboration between the 343 Industries team and everyone on the Axis crew. We love working closely with our clients, either developing creative or working through the challenges on the technical side of things.