The Power of the Pipeline
Volume 28, Issue 4 (April 2005)


“We’ve reached a time when you can put your money into tape recorders for single users or, for the same price, have multiple users on the same disk storage,” says Shearer. “The kicker is that you can put any resolution on disk, and after it’s been edited, you can push it to any resolution you want.” This transition from resolution-dependent “islands” to resolution-independent, disk-based storage, he contends, represents the future direction of production houses.


Pacific Title & Art Studio has a long tradition of taking bold first steps. Back in the mid-1920s, the Hollywood-based facility performed titling work for the first talking picture, The Jazz Singer. In the late 1980s, the firm was among the pioneers that moved into the emerging computer-generated visual effects market. And since then, the now 86-year-old studio has transformed itself into an all-digital postproduction house that performs everything from titling to digital intermediate (DI) services for major motion picture productions, including, most recently, Elektra, Constantine, and War of the Worlds.

One of the biggest challenges the studio has faced in recent years has been to expand its network and storage systems to keep pace with the explosion of digital data it has been creating. Indeed, doing the math for the storage capacity required to digitize an average feature-length film begins to tell the story. When scanned at 2k resolution, each frame of a film requires some 13MB of storage; and when scanned at 4k, each frame consumes 53MB. When multiplied by the total number of frames in a feature-length film, the storage requirements quickly mount up to more than 2TB for a 2k film or 8TB for a 4k film.
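
As a rough sanity check of that math, the storage estimate can be sketched in a few lines of Python; the 110-minute runtime and 24fps rate are illustrative assumptions, while the per-frame sizes are the figures cited above.

    # Back-of-the-envelope storage estimate for a scanned feature film.
    FRAME_SIZE_MB = {"2k": 13, "4k": 53}  # per-frame scan sizes cited above

    def film_storage_tb(resolution, runtime_minutes=110, fps=24):
        """Rough raw-scan storage, in terabytes, for one version of a film."""
        frames = runtime_minutes * 60 * fps  # ~158,400 frames
        return frames * FRAME_SIZE_MB[resolution] / 1_000_000  # MB -> TB

    print(film_storage_tb("2k"))  # ~2.1TB, in line with the 2TB figure above
    print(film_storage_tb("4k"))  # ~8.4TB, in line with the 8TB figure above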

But that’s not the end of the story. “When you work on a movie, you have multiple versions of it,” notes Andy Tran, Pacific Title’s chief technology officer. “Even at 2k, a movie could end up with 8TB or more when you account for all the versions, and a 4k movie could end up with more than 32TB.”

With that much data, Tran says, the first lesson one learns is to minimize the movement of data around the network. “If you try to move such large quantities from one file server to another,” he explains, “it could take days.”

In fact, minimizing data movement was the main reason the company moved to a clustered storage network architecture, based on SGI’s servers and storage arrays and running SGI’s CXFS distributed file system. With the 200TB storage area network (SAN), Pacific Title can now place scan data directly onto the network, which allows the studio’s artists to work directly from the SAN simultaneously, saving the time they might otherwise spend waiting for data.
Pacific Title & Art uses SAN storage from SGI for DI and effects work on films such as Elektra.

“As an early adopter of a SAN, we allowed people to work directly from it for compositing and 2D and 3D work,” says Tran. “Then about a year ago, when we migrated to a digital intermediate workflow, we just plugged that directly into the SAN as well.”

The key to moving the DI workflow onto the SAN was enabling real-time 2k film playback directly from the SAN, a feat that would require 274MBps of data throughput. To meet the challenge, the studio had to add several storage controllers as well as perform the associated modifications and integration work.
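
That target follows from a simple calculation: per-frame size multiplied by frame rate. A brief sketch, using assumed frame sizes consistent with the figures cited earlier rather than Pacific Title's exact numbers:

    # Sustained rate needed to play an image sequence in real time.
    def realtime_rate_mb_per_s(frame_mb, fps=24):
        """MB per second required to stream frames of a given size at a given rate."""
        return frame_mb * fps

    print(realtime_rate_mb_per_s(11.4))  # ~274 MB/s, the 2k playback target above
    print(realtime_rate_mb_per_s(13))    # ~312 MB/s for the 13MB 2k scans cited earlier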

Yet, despite enabling real-time playback from the SAN, the studio was unable to completely eliminate data movement around the network, as some viewing bays and DI tasks still require data to be transferred to workstations. To accommodate these transfers, Tran and his IT team learned that they had to expand the systems surrounding the storage “multi-dimensionally” to avoid network bottlenecks. “Every time we expand our SAN, we expand our network capacity and our servers at the same time,” Tran explains. “That way we don’t bog down our servers.”

What are the benefits of this SAN-centric approach? “The ability to play back directly from the SAN gives us a leg up in the industry,” says Tran. “Even though we’re a smaller company, the SAN now allows us to finish digital intermediates faster than some larger studios. The SAN is also faster for accessing data. The artists don’t have to spend time looking for their data now. They can just continue working.”


Back when work on the Miramax movie The Aviator was just getting off the ground, Oscar-winning visual effects supervisor Rob Legato realized he would have to use a lot of boutique effects houses with low overhead to stretch his budget. So when he needed a shop to produce some key shots for the film and set up a quality control station to preview work in progress, Legato and his producer, Ron Ames, placed a call to Venice, California-based Digital Neural Axis (DNA), whose president, Darius Fisher, was known for his ability to meet such challenges with off-the-shelf tools.

The bulk of DNA’s work centered on 53 shots recreating the flight of the famous Howard Hughes aircraft, the Spruce Goose, over California’s Long Beach Harbor in 1947. To create the scene, DNA artists took film footage of the actors in a cockpit against greenscreen to simulate the historic flight. They then helped create the appropriate window views of the harbor from inside the airplane, adding sky, water, and boats to the scene outside. These would ultimately be combined with exterior views of the aircraft produced by Sony Pictures Imageworks, the film’s principal visual effects vendor.

Fisher was also tapped early on to help set up a quality control (QC) station at The Aviator Inc.’s VFX headquarters at Imageworks. The station would allow Legato and Ames to evaluate work from different effects vendors in high-definition (HD) format. Having had success in designing DNA’s viewing station using Medéa’s dual-channel, 2Gbps Fibre Channel-based VideoRAID disk arrays, Fisher was asked by the production team to help set up its own QC station with a similar, SCSI-based version of the Medéa array.

In the Imageworks QC suite, the plan was to preview the effects work by loading preview renders from the various VFX vendors onto the storage array connected to an Apple Macintosh G5 workstation. Work-in-progress shots from each vendor would be combined using Apple’s Final Cut Pro HD editing software. The merged shots could then be previewed in HD QuickTime playing off the Medéa storage array via a PCI-based HD/SDI (high-definition/serial digital interface) CineWave card.

DNA used a Medéa VideoRAID array to store its visual effects work for 53 shots in the film The Aviator.

At DNA, the edit room setup was virtually identical to the Imageworks-based QC station, except that a Blackmagic HD card was used instead of the CineWave card. “We wanted to mirror the same system that the production team had so we could see how our shots were working in sequence,” says Fisher. DNA intercut its VFX shots with HD footage of the exterior shots for its own review.

Since both the QC station and DNA’s edit room needed the ability to play back the frames in HD (24 frames per second, 8-bit, uncompressed) format, it was important that the storage system be able to sustain consistent data throughput rates of 85MBps to 95MBps, according to Fisher.

“In the storage realm, if you’re going to play back 10-bit, interlaced HD footage, you need a data rate of at least 140MBps to 150MBps. Our Medéas can read at sustained rates in excess of 350MBps and write in excess of 270MBps. Because we had fewer frames per second, it brought our data-rate requirements down to between 85MBps and 95MBps,” Fisher explains. “So we had no problem working at the lower data rate.”
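
Those figures line up with a bytes-per-frame calculation. A small sketch, assuming 1920x1080 frames with 4:2:2 chroma sampling (roughly 2 bytes per pixel at 8 bits and 2.5 bytes at 10 bits); the chroma-sampling figures are assumptions for illustration, not Fisher's specification:

    # Uncompressed HD playback rate from resolution, bytes per pixel, and frame rate.
    def hd_rate_mb_per_s(width=1920, height=1080, bytes_per_pixel=2.0, fps=24):
        """Sustained MB/s needed to play uncompressed HD frames."""
        return width * height * bytes_per_pixel * fps / 1_000_000

    print(hd_rate_mb_per_s(bytes_per_pixel=2.0, fps=24))  # ~100 MB/s, near the 85-95MBps range
    print(hd_rate_mb_per_s(bytes_per_pixel=2.5, fps=30))  # ~155 MB/s, near the 10-bit rate Fisher quotes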

Each morning, Fisher and his team gathered all the latest renders from the previous day and cut them into the timeline. They then reviewed them in QuickTime format in the DNA edit room.

Combining DNA’s shots with the Imageworks shots provided the creative team with a broader perspective of how the whole scene was shaping up. Says Fisher, “To really check the flow and pacing of the scene, it was invaluable to be able to see the progression of our interior Spruce Goose shots intercut with the external shots from Imageworks, and see it all played back in real time.”




In theory, one of the benefits of producing 35 years of material for the popular PBS series Sesame Street is that segments from so many past shows can be reused in new episodes. But in practice, it’s been a different story, as the show’s editors at Sesame Workshop have often found themselves struggling to locate and preview specific clips from previous seasons.

The problem stemmed from limitations of the Workshop’s legacy video-on-demand system, explains Stephen Miller, IT project manager for the nonprofit education organization. Indeed, Sesame Workshop has been storing the last few years of Sesame Street programming (totaling some 6000 clips) on hundreds of Ampex tapes, which requires a lot of manual work to locate specific segments from past shows.

Using three large Ampex tape libraries, the system is capable of streaming high-resolution clips in real time from tape, which is practical when the editors know exactly where to look for the material. But when they need to peruse the content over several shows or seasons, the system has proven cumbersome. So to facilitate the search and retrieval process, Sesame Workshop is converting the files from tape to disk format.

The motivation for making this transition came from an unexpected quarter. In October 2004, Sesame Workshop signed a joint agreement with Comcast, PBS, and HIT Entertainment to launch a 24-hour children’s TV channel and accompanying video-on-demand programming.

As part of the deal, Sesame Workshop is responsible for transferring its initial 6000 Sesame Street segments (some 8TB of data) onto disk, as well as transcoding each segment into both low-resolution and high-resolution video files for future viewing. According to Miller, accomplishing these tasks required replacing the Ampex tape library, connected to three servers (Irix, Linux, and OpenBSD), with a disk-based repository capable of processing the huge volume of data.
Sesame Workshop uses a disk array from Nexsan to digitize Sesame Street clips from past seasons for rapid viewing and retrieval.

Miller and his team worked with IBM to determine the key elements involved in the new system. And once Sesame Workshop performs the June 2005 hand-off of its disk-based library, IBM will be responsible for managing and integrating the library into a co-located facility that will handle the video-on-demand service.

Storage for the new disk repository came in the form of a low-cost ATABeast disk array from Nexsan Technologies with nearly 10TB of capacity. The array uses Deskstar ATA disk drives from Hitachi Global Storage Technologies. On the server side, the organization uses an IBM eServer BladeCenter to perform key functions in the tape-to-disk conversion process. The ATABeast storage subsystem is connected to each of the blades via a Fibre Channel SAN fabric.

One of the blades runs the Ancept Media Server (AMS) from Stellent, which helps track and update metadata about each segment. The BladeCenter also contains a Telestream FlipFactory blade responsible for transcoding the 6000 clips from tape to disk.

Files are first copied over a Gigabit Ethernet LAN connection from the Ampex tape library to the AMS blade. The AMS blade then sends each file via FTP to the FlipFactory blade, which first makes a high-resolution MPEG-2 video copy of each clip and stores it on the ATABeast disk array. At the same time, the blade transcodes a lower-resolution MPEG-1 video file of the same segment for later viewing in QuickTime via an Internet browser. The low-resolution version is stored in a separate ATABeast partition.
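
A minimal sketch of that hand-off in Python appears below; the hostname, paths, credentials, and file extension are placeholders rather than details of Sesame Workshop's setup, and the transcoding itself happens on the receiving blade.

    # Hypothetical sketch: push clips staged by the AMS blade to the transcoding
    # blade over FTP. Hostname, paths, and credentials are placeholders.
    from ftplib import FTP
    from pathlib import Path

    STAGING = Path("/ams/staging")         # clips already copied off tape over the LAN
    TRANSCODER_HOST = "flipfactory.local"  # placeholder name for the FlipFactory blade

    def push_clip(clip):
        """Send one staged clip to the transcoding blade via FTP."""
        with FTP(TRANSCODER_HOST) as ftp:
            ftp.login(user="ams", passwd="secret")  # placeholder credentials
            with clip.open("rb") as f:
                ftp.storbinary(f"STOR {clip.name}", f)

    for clip in sorted(STAGING.glob("*.mov")):  # assumed container format
        push_clip(clip)
        # The receiving blade then writes an MPEG-2 master to the high-resolution
        # partition and an MPEG-1 proxy to the low-resolution partition.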

Once the new disk-based repository comes online, Miller expects the system to improve Sesame Workshop’s current edit and production workflows by three to four orders of magnitude. “Our teams will be able to create a playlist and perform searches for all Elmo, the Count, and letter ‘W’ segments, for example,” he says. “Since it will all be digitized, we can quickly browse the metadata and low-resolution versions of segments from five years of programs.”

Looking forward, Miller plans to install several more ATABeast storage arrays to help tackle the next big job: digitizing the other 30 years of footage.


The workflow at the Chicago-based studio Optimus, like that at many postproduction houses, would often begin when film footage was transferred to videotape via a telecine machine and then digitized onto disk storage so that digital artists could add content. Then, as creative work progressed, files would be transferred from disk to tape several times for key operations, such as producing digital dailies and performing color-correction work.

While this approach enabled Optimus to produce a number of top TV commercials, including spots for the US Army and Dell, director of operations Knox McCormac decided that to maintain a competitive edge, the studio needed to streamline operations by moving to an all-digital workflow.

Of particular concern to McCormac was the color-correction process, as it required several manual steps that often severely restricted the workflow. Color correction had to be performed on a videotaped version of the footage, manipulated in real time using a da Vinci 2k system. The color-corrected version was then fed back out to videotape, requiring it to be re-digitized if editors and artists needed to perform further work. “If for some reason you needed to do final color correction,” he says, “you had to find the tape that had the scene on it, bring it upstairs to the color-correction suite, put it back on the telecine, do the color correction, lay it back off to tape, and take the tape downstairs and re-digitize it.”

In an effort to eliminate this kind of reliance on tape transfers, McCormac and one of Optimus’s colorists, Craig Leffel, investigated some of the newer digital workflows being used to color-correct feature films. They found that several shops had begun using software-based color-correction tools, such as Discreet’s Lustre.

Optimus had already been using a variety of Discreet’s editing programs, such as Smoke, Flame, and Backdraft, on SGI Tezro workstations. So the studio decided to add Lustre to its suite of tools. With Lustre’s software-based color-correction methods, Leffel and his team can now show clients color-corrected changes to footage in near real time, thanks to the use of lower-resolution images stored on an IBM IntelliStation M Pro desktop system with about 4TB of storage capacity. The M Pro also uses a digital video card from DVS that allows Optimus to digitize footage from tape.

“With Lustre in place, instead of taking the telecine to tape, we can now take the telecine directly to a hard disk over a data pipe,” McCormac explains. “Data can then be copied into Lustre, where we perform color-correction work on it. The data can then be saved out to local hard drives.”

According to McCormac, software-based color correction is just the start of Optimus’s quest for an all-digital workflow. He is also installing a 20TB Discreet SAN that combines the 1.5GBps throughput of Data Direct Networks’ storage controllers and disks with two SGI Origin 350 metadata servers and the CXFS shared file system.

With all SAN components running 2Gbps Fibre Channel, McCormac anticipates the SAN will allow Optimus artists to push three streams of 2k data off the SAN in real time. He expects to have the full system up and running next month.
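
A quick headroom check of that goal, using the roughly 274MBps per-stream rate for real-time 2k playback cited earlier (illustrative figures, not Optimus's measurements):

    # Do three real-time 2k streams fit within the SAN's aggregate throughput?
    STREAM_MB_PER_S = 274  # per-stream 2k playback rate cited earlier
    SAN_MB_PER_S = 1500    # ~1.5GBps of controller throughput
    streams = 3

    required = streams * STREAM_MB_PER_S
    print(f"{required} of {SAN_MB_PER_S} MB/s used "
          f"({required / SAN_MB_PER_S:.0%} of aggregate throughput)")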

“This is something we’ve wanted to do for more than 10 years,” says McCormac, referring to the ability to share data over a digital network. “But back then it would have cost millions of dollars to be able to pass around data like this. Today, that cost has dropped to an affordable level.”


DKP Studios is used to dealing with the challenges of sending large data sets over its network. In 1985, the Toronto-based postproduction house became the first in North America to go to full digital production for its rendering and compositing work, according to DKP vice president of production Terry Dale.

The company’s increasing workload in HD for TV, video, 3D animation, and special effects projects, including IMAX 3D films and animation work for the MTV Movie Awards, recently required DKP to triple its storage capacity just to keep up with the growing data files. “The big projects are the feature films and IMAX projects that chew through huge amounts of data,” Dale explains. “One of those productions can take up 30TB of storage very quickly.”

In the early days, DKP’s efforts to process such huge volumes of data (typically requiring 24Gbps of throughput) often resulted in servers going down, artists waiting for massive data pulls, and frames getting dropped when composited work was sent over the network. “Serving the data to and from users, or to and from the renderfarms, without bottlenecks was a real challenge,” says Dale.

About a year ago, the DKP team set out to find a system that could help them avoid these types of data storage and transmission problems. The equipment they chose included two Titan SiliconServers from BlueArc, which now support the rendering and compositing functions of the digital pipeline, says Dale, even with the renderfarm running at full capacity.

DKP uses a BlueArc system to store its animation work for productions such as the MTV Movie Awards.

One 20TB Titan SiliconServer disk array now stores the renders generated by DKP’s 400 to 500 dedicated CPU render nodes. This data is then pulled from the first array for further compositing, and the composites are written back to the second array, which is approaching 10TB of capacity.

According to Dale, the storage system’s ability to change data flow rates on the fly, which he refers to as “throttling I/O” to different departments, has been crucial, given the company’s growing need for rapid data transfers. I/O pipes on the back of the systems can be aggregated when needed, he adds, to essentially create one larger pipe.

“Within minutes, we can change the aggregation in order to give departments the bandwidth they need to get access to the data,” Dale explains. All of this results in faster iteration times and faster time to completion. For example, he says, load times for large files have dropped from 10 or 20 minutes to two or three.
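
The aggregation-and-reallocation idea can be illustrated with a toy model; the link speed, link count, and department names below are hypothetical, not DKP's actual configuration.

    # Toy model of "throttling I/O": bond identical links into one larger pipe,
    # then apportion it among departments. All numbers are hypothetical.
    LINK_GBPS = 1.0

    def aggregate_gbps(links):
        """Total bandwidth of a bonded group of identical links."""
        return links * LINK_GBPS

    def allocate(total_gbps, weights):
        """Split the aggregate pipe among departments in proportion to their weights."""
        scale = total_gbps / sum(weights.values())
        return {dept: w * scale for dept, w in weights.items()}

    pipe = aggregate_gbps(links=6)
    print(allocate(pipe, {"rendering": 3, "compositing": 2, "editorial": 1}))
    # {'rendering': 3.0, 'compositing': 2.0, 'editorial': 1.0}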




Tippett Studio is no stranger to the storage and network requirements of supporting its creative efforts. During the visual effects facility’s 20-year history, it has had to continually upgrade its systems to accommodate the growing size and complexity of the work it produces for television commercials and feature films.

In fact, work that the studio recently completed for the film The Matrix Revolutions illustrates how much storage just one scene can now consume. According to Dan McNamara, Tippett’s director of operations, the shot required several terabytes of storage just for the rendered shadows in a scene filled with complex machine creatures.

Unfortunately, backing up such large data sets to Tippett’s DLT tape library created bottlenecks on the company’s network. The backups consumed as much as 25 percent of total bandwidth, says McNamara, and began to impede other tasks in the production workflow.

To speed up the flow of data, McNamara decided to change the studio’s underlying storage architecture, which had consisted of a 5TB TP9400 storage array from SGI and a 1.2TB EMC Clariion array. “One requirement was to be able to easily integrate our existing storage into the new system,” explains McNamara. “We also wanted to tie it into our backup system so that our tape library could back up the main file systems off the network. So with these requirements, a SAN architecture was a necessity.”
Tippett Studio uses SGI storage systems for visual effects projects for TV commercials and films including The Matrix Revolutions.

Tippett decided to use SGI’s InfiniteStorage solution and CXFS shared file system. The studio’s SAN architecture initially included a 10TB SGI TP9500 disk array that connects to the studio’s other storage devices and servers via two 16-port Fibre Channel SAN switches from Brocade. Since then, Tippett has added another 2TB of storage and upgraded several of its SGI Origin 350 servers (or CXFS “nodes”) from four processors to eight. The studio is also in the process of adding another TP9500 storage array with an extra 10TB of capacity, along with another eight-processor Origin 350 server.

Using SGI’s FailSafe Cluster HA software, Tippett’s administrators can now migrate data from file system to file system, or from primary to secondary/backup storage systems without affecting users, says McNamara. Backups now run off the Fibre Channel SAN and do not impede the Ethernet network. Tippett uses an Origin 300 server with Legato Networker backup software to back up its files to a 600-slot Sony AIT tape library.

How has the new storage architecture improved workflows? “Before, we had to be frugal about the number of elements we kept around. Because we had a limited amount of disk storage capacity available, we had to quickly pull material offline,” says McNamara. “The added storage now allows the artists to be more creative in terms of the elements they have to choose from to create the final composite. There’s more opportunity to mix and match elements across different rendered outputs.”


FotoKem is a 40-year-old postproduction studio that often works on hundreds of projects simultaneously, ranging from creating digital masters to performing color correction. Recent credits include the digital processing of film trailers for the 2004 Oscar winner Monster and a variety of other motion pictures.

To address the requirements of its DI processes, FotoKem needed to upgrade its existing storage systems, which include a mix of SAN and Network Attached Storage (NAS) configurations. A typical DI project may include creating thousands of digital files, averaging 13MB each, that are accessed, processed, and rapidly moved throughout the workflow at 24 frames per second. FotoKem’s SAN-NAS environment supports Windows, Macintosh, Linux, and Irix servers and includes multiple digital asset management and creation tools.

“While our SAN is good at some tasks and applications, it did not scale well when it was loaded with multiple concurrent jobs,” says Paul Chapman, FotoKem’s senior vice president of technology. “We needed to handle more than 10,000 large files in a single directory.”

FotoKem opted for an Isilon IQ storage system, which Chapman says is “optimized for sequential, linear reads and writes and delivers the high performance that’s required in our digital content environment.”

The studio manages more than 25TB of digital content at any one time, with up to 8TB of content moving in and out of the 8.6TB Isilon IQ cluster in the editing process. The storage cluster handles the daily workflow of creating, processing, and editing film content and manages hundreds of thousands of files in the DI process.

According to Chapman, other advantages of the Isilon IQ storage system include the ability to scale capacity and throughput quickly to accommodate workload spikes, a single view for managing all content in the system regardless of the size of the storage pool, and support for heterogeneous platforms.


Michele Hope is a freelance writer. Her address is mhope@thestoragewriter.com.