Here, Kevin Rafferty, CafeFX visual effects supervisor for Red Cliff, discusses in detail the effects the studio created.
What type of work did CafeFX do for Red Cliff?
For John Woo’s epic film, CafeFX re-created the historic Chinese Battle of Chi Bi (Red Cliffs) on both land and water. We generated vast environments depicting General Cao’s massive fleet mobilizing along the Yangtze River. Only a handful of full-scale ships were used while filming the live-action plates. General Cao’s flagship, though actually built and shot, never left dock.
Our task was to generate his fleet of hundreds, sometimes thousands, of ships, fully crewed, and sail them down the Yangtze River behind his flagship. All live-action water was replaced with a computer-generated river flow, with wakes and oar splashes generated from the battleships’ motion, and flat dockside land was replaced with dramatic computer-generated caverns and cliffs. In many shots, General Cao’s ship and crew were the only things left of the live-action plate, with everything else surrounding them digitally created.
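Procedural placement like this — a handful of hero ships positioned by hand, the rest scattered in loose formation behind the flagship — can be sketched in a few lines. This is purely an illustrative Python sketch; the function name, spacing, and jitter values are invented, not CafeFX's actual pipeline code:

```python
import random

def place_fleet(num_ships, columns, spacing=(60.0, 140.0), jitter=10.0, seed=7):
    """Lay ships out in a staggered grid behind a flagship at the origin.

    Per-ship jitter keeps the rows from looking stamped. All units and
    values are illustrative only.
    """
    rng = random.Random(seed)  # fixed seed so the layout is repeatable
    positions = []
    for i in range(num_ships):
        row, col = divmod(i, columns)
        x = (col - (columns - 1) / 2.0) * spacing[0] + rng.uniform(-jitter, jitter)
        z = -(row + 1) * spacing[1] + rng.uniform(-jitter, jitter)  # behind flagship
        positions.append((x, z))
    return positions
```

A layout pass like this only decides positions; the hero ships nearest camera would still be placed and animated by hand.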
Before the massive sea battle, General Cao’s forces made camp directly across the river from the Allied Forces encampment. To depict this epic scene, and also to show the viewer the incredible disparity in size of the warring fleets, CafeFX created a nearly 2000-frame shot, replete with Woo’s signature white dove. Though this dove starts as live action, as it flies toward camera, it is replaced by a CG dove. The camera then becomes the dove’s ‘wingman’ as it flies across the river to deliver a message to an Allied spy hidden within Cao’s camp, along the way showing how modest the Allied fleet is compared with Cao’s massive one.
There were two different live-action plates lensed for this shot. First, the Allied camp plate was shot as a lockoff. We took that plate and created a 2.5D environment out of it in order to be able to massively pan the camera and track the CG dove. As we pan with the dove, we enter a fully CG environment, revealing the Yangtze River and both fleets. Our CG world then tracks the camera motion of the second live-action plate (an aerial fly-through of Cao’s camp). Once onshore, we are back in the live-action world, with our CG dove still leading the way. The forces below were doubled in size, with CG soldiers added both on foot and on horseback.
CafeFX also created a land-based battle sequence. We animated horses and soldiers on horseback, fighting alongside foot soldiers on the ground. We had three different army types to create, each with distinct battle styles and uniforms. Some of the most interesting parts of the battle scenes are the choreographed movements of the soldiers with shields. They animate from many traditional Chinese battle strategies based on the moves of animals, such as the tortoise. Creating historically accurate and exciting battle scenes that looked violent, yet beautiful, was a fun challenge.
You created some digital water. Describe the scenes and the work involved.
All of the shots of Cao’s fleet mobilizing down the Yangtze River had CG water generated for them. We used Autodesk’s Maya and Mental Images’ Mental Ray to create all the different water surfaces. As CG and live-action boats sailed down the river, we replaced any live-action water with CG water. In addition to the water surface, complete with current, we also added wakes at the head and tail of the boats, and oar splashes as the armies rowed down the river. The oar splashes were generated procedurally in Side Effects’ Houdini and rendered as a matte to apply to Mental Ray water renders.
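A toy version of the splash-timing logic — deciding on which frame each oar blade strikes the water so a splash can be spawned and accumulated into a fading matte — might look like the Python below. The real setup was procedural in Houdini; function names, the stagger, and decay values here are all hypothetical:

```python
def oar_splash_frames(num_oars, stroke_period, num_frames, stagger=3):
    """Return (frame, oar) events, one per oar per stroke cycle.

    stroke_period is frames per full oar stroke; stagger offsets each
    oar slightly so the whole bank doesn't splash on the same frame.
    (Illustrative logic only -- not the production Houdini network.)
    """
    events = []
    for oar in range(num_oars):
        frame = (oar * stagger) % stroke_period
        while frame < num_frames:
            events.append((frame, oar))
            frame += stroke_period
    return events

def matte_intensity(frame, events, decay=0.85):
    """Grayscale matte value at a frame: each past splash fades out."""
    return sum(decay ** (frame - f) for f, _ in events if f <= frame)
```

In production the matte would be rendered per pixel, with each splash placed at its oar's world position; this sketch only captures the timing and decay idea.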
You also created an immense fleet, correct? Tell me about that.
These were all the shots of General Cao’s fleet mobilizing down the river, which I have mentioned above.
We used Autodesk’s Softimage and its behavior/crowd simulation software to generate most of the fleet and its crew. We hand-placed hero ships, while procedurally placing the rest. We hand-animated characters in Maya, converted the animation to XSI, and instanced them with some procedural crowd logic to re-create the actions appropriate to each crew member’s rank on the ship.
In one epic shot, we follow General Cao to the bow of his battleship as he gazes out over his massive fleet while his crew cheers on with a battle cry. All of our CG crew can be seen saluting and chanting in sync with the live-action crew.
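The rank-driven instancing described above — a hand-animated clip library keyed to each crew member's role — reduces at its core to a seeded lookup. This is a hypothetical Python sketch, not the actual XSI/Behavior setup, and the clip names are invented:

```python
import random

# Hypothetical clip library: the real clips were hand-animated in Maya,
# moved to XSI, and instanced via Behavior; these names are invented.
CLIPS_BY_RANK = {
    "officer": ["point_forward", "shout_orders"],
    "drummer": ["beat_drum"],
    "rower":   ["row_stroke"],
    "marine":  ["stand_guard", "raise_spear"],
}

def assign_clips(crew, seed=0):
    """crew: iterable of (crew_id, rank) pairs. Returns {crew_id: clip},
    seeded per agent so the same crew renders identically every time."""
    assignments = {}
    for crew_id, rank in crew:
        rng = random.Random(seed * 1_000_003 + crew_id)  # stable per agent
        assignments[crew_id] = rng.choice(CLIPS_BY_RANK[rank])
    return assignments
```

Deterministic per-agent seeding matters in production: a re-render of the same shot must produce the same crowd, frame for frame.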
How many shots did you work on?
We worked on about 30 shots, all of epic proportion. The dove shot, alone, was about 2000 frames.
Which tools did you use for the project?
We utilized Maya for modeling, camera, and hero animation. Mental Ray and Side Effects’ Mantra were used for rendering. Pixologic’s ZBrush and Autodesk’s Mudbox were used with Maya to create texture maps. We used Softimage for both our lighting interface and our crowd animation and simulation (Behavior). Houdini was used for FX/dust/oar splashes. We used E-on’s Vue for terrain generation in our powers-of-ten shot of Cao’s fleet. Both The Foundry’s Nuke and Eyeon’s Fusion were used for compositing.
Did you have to create any new software to handle this work?
There is always ‘glue’ code that needs to be written to help off-the-shelf packages fit into an existing pipeline. For instance, Behavior can be fairly primitive straight ‘out of the box.’ We had to customize it to fit our needs for fleet and crowd animation.
What were the biggest challenges you faced, technical or otherwise?
While we had assembled an incredibly talented team of artists, and established a workflow that could perform within the established CafeFX pipeline, our challenges at hand were still as epic as the shots we created.
As mentioned earlier, generating most of our fleet shots entailed creating a fully CG environment around a single ship, or a few ships. This pushed our work to the border between live-action VFX and hybrid, largely CG filmmaking.
We dealt with hundreds of CG ships holding thousands of CG soldiers floating along a CG river environment. We also dealt with hundreds to thousands of CG cavalry and foot soldiers interacting with live-action cavalry and foot soldiers in an enhanced, if not replaced, environment. Simulation time, render time, and, therefore, the feedback loop, became some of our biggest issues.
How did you overcome/meet those challenges?
We utilized all of the ‘usual suspects.’ We used low-resolution geometry for animation approvals up to, but not including, final animation approval. We used ‘pawns’ (ultra-low-resolution geometry) for early crowd-simulation approval. We lit partial scenes for primary lighting approval. When it was necessary to render the full shot at high resolution, we started by rendering on 10s or 20s (every 10th or 20th frame). We incorporated level-of-detail management whenever possible. And we hit the problem with brute force by renting more and more render servers.
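The frame-stepping and level-of-detail tactics are simple to sketch in Python. Both functions below are illustrative only; the names and distance thresholds are invented, not production values:

```python
def preview_frames(start, end, step):
    """Render 'on 10s or 20s': every step-th frame plus the final frame,
    so approvals cover the full range without paying for every frame."""
    frames = list(range(start, end + 1, step))
    if frames[-1] != end:
        frames.append(end)
    return frames

def pick_lod(distance_to_camera):
    """Toy level-of-detail pick: hero geometry near camera, mid-res
    farther out, ultra-low-res 'pawns' in the deep background."""
    if distance_to_camera < 50:
        return "hero"
    if distance_to_camera < 500:
        return "mid"
    return "pawn"
```

With thousands of crewed ships in frame, almost everything past the hero vessels can be a pawn, which is where most of the simulation and render savings come from.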
How many people at CafeFX worked on the film?
About 30 artists for three long months.
What are some other interesting bits you’d like to share about the work?
During our final crunch time, we had rented so many servers for rendering that they were showing up in every nook and cranny of the studio, taxing our air-conditioning system. Though it was mid-winter in central California, and quite wet and cold outside, we were showing up every day in shorts and flip-flops!
Is there anything else you want to add?
With any project, I always want to stress that without our team’s hard work and dedication to the craft and artistry of visual effects, no tool, pipeline, or renderfarm could have ever accomplished what we did.