Shared File Systems for Digital Postproduction
Volume 28, Issue 9 (September 2005)

As the industry moves toward an all-digital workflow, film studios are breaking free of traditional processes and embracing digital intermediates (DIs). DIs give studios greater creative freedom, increase efficiency, and reduce costs by replacing film labs with digital alternatives that can match, or surpass, the quality of a film intermediate.

DI work is performed at high-definition, 2K, and 4K resolutions; the higher the resolution, the larger the files and the greater the storage and networking cost. An uncompressed HD image, for example, requires about 8 MB of data, while a 2K image requires approximately 12 MB per 10-bit log RGB frame. A 4K image requires about 48 MB of data, quadrupling storage and networking bandwidth requirements over 2K.
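
To put those per-frame figures in context, the arithmetic can be checked directly. The sketch below assumes DPX-style storage, with each 10-bit log RGB pixel packed into a 32-bit word, and common full-aperture scan dimensions for 2K and 4K (2048x1556 and 4096x3112); neither assumption comes from the article, but together they reproduce the quoted sizes.

```python
# Back-of-the-envelope check of the per-frame sizes quoted above.
# Assumed (not stated in the article): DPX-style packing of 10-bit log RGB
# into one 32-bit word per pixel, and full-aperture film-scan dimensions.
RESOLUTIONS = {
    "HD": (1920, 1080),
    "2K": (2048, 1556),   # assumed full-aperture 2K scan
    "4K": (4096, 3112),   # assumed full-aperture 4K scan
}
BYTES_PER_PIXEL = 4       # 3 x 10-bit channels packed into a 32-bit word

for name, (width, height) in RESOLUTIONS.items():
    megabytes = width * height * BYTES_PER_PIXEL / 2**20
    print(f"{name}: {megabytes:.1f} MB per frame")
# Prints roughly 7.9 MB (HD), 12.2 MB (2K), and 48.6 MB (4K),
# in line with the ~8, ~12, and ~48 MB figures cited above.
```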

The main task of a DI infrastructure is to move digital film images between the various pieces of equipment in a DI facility. Because high-resolution image files predominate, film sequences require extremely large amounts of data, from 200 MB to 1.2 GB for every second (24 film frames). A DI facility is typically forced to apply several types of data networking technology to different areas to achieve an efficient workflow and avoid bottlenecks.
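
The per-second figures follow from simple multiplication: at 24 frames per second, each stream needs its per-frame size 24 times over. A minimal check, using the approximate frame sizes quoted above:

```python
# Sustained data rate per playback stream at 24 frames per second.
FRAME_MB = {"HD": 8, "2K": 12, "4K": 48}   # approximate MB per frame (see above)
FPS = 24

for name, frame_mb in FRAME_MB.items():
    print(f"{name}: {frame_mb * FPS} MB/sec per stream")
# HD: 192 MB/sec, 2K: 288 MB/sec, 4K: 1152 MB/sec (about 1.2 GB/sec),
# roughly matching the 200 MB-to-1.2 GB-per-second range quoted above.
```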

To maintain this performance level, applications and storage systems, in addition to sophisticated networking technology, must handle data continuously at the required rate while absorbing the demands other users place on the network. Choosing the correct infrastructure hardware and software components, and using networking technology to best advantage, is therefore imperative.


Storage area networks (SANs) with dedicated Fibre Channel networking are the primary method for providing high-performance shared storage in DI environments, because they give applications direct, fast access to large files. A shared file system is a critical component of a DI SAN infrastructure. Shared file systems are cross-platform software packages that allow clients and applications on different operating systems to access and share the same storage.

By providing a single, centralized point of control for managing DI files and databases, shared file systems can simplify administration, reduce costs, and allow administrators to manage volumes, content replication, and point-in-time copies from the network, across multiple storage subsystems.

Shared file systems can accommodate both SAN and Gigabit Ethernet-based network-attached storage (NAS) clients side by side, to share and transfer content. Although NAS does not perform as well as SAN, it is easier to scale and manage, and is often used for lower-resolution projects.

Metadata servers are required to support the real-time demands of media applications using shared file systems. In large postproduction facilities with many concurrent users, for example, thousands of requests for video and audio files come from the applications; in DI work, requests can number as many as 24 file requests per second per user. Metadata servers and the networks that support shared file systems must be able to sustain these access demands. An out-of-band metadata network can provide a significant advantage over an in-band design, in which metadata shares the same network link as the media content and the two compete for bandwidth.
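
To get a feel for that load, the metadata traffic can be estimated by scaling the 24-requests-per-second-per-user figure; the user counts below are hypothetical.

```python
# Rough metadata-server load estimate for a shared file system,
# scaling the article's figure of 24 file requests per second per user.
REQUESTS_PER_USER = 24            # roughly one file request per frame at 24 fps
HYPOTHETICAL_USER_COUNTS = [1, 10, 35, 100]

for users in HYPOTHETICAL_USER_COUNTS:
    total = users * REQUESTS_PER_USER
    print(f"{users:3d} concurrent users -> {total:5d} metadata requests/sec")
# On an in-band design this traffic shares the link with the content itself;
# an out-of-band metadata network keeps it off the data path.
```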

In a hardware-based RAID storage system, as the number of concurrent users increases, the stripe group must grow to meet the total bandwidth demand without dropping frames. High-resolution files require significant additional bandwidth for each added user, forcing RAID expansion. As stripe groups grow, it becomes increasingly difficult to keep data synchronized, calculate parity, drive all the ports, and maintain data integrity.
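
A rough sizing exercise shows why stripe groups balloon as high-resolution users are added. The per-drive streaming rate below (60 MB/sec) is an assumption for illustration only; the stream rates reuse the earlier 24-fps arithmetic.

```python
import math

# Illustrative stripe-group sizing: drives needed to sustain N concurrent streams.
# The 60 MB/sec per-drive streaming rate is an assumption, not from the article.
PER_DRIVE_MB_SEC = 60
STREAM_MB_SEC = {"2K": 288, "4K": 1152}   # from the 24-fps arithmetic above

def drives_needed(streams: int, stream_rate: int) -> int:
    """Smallest number of drives whose combined rate covers all streams."""
    return math.ceil(streams * stream_rate / PER_DRIVE_MB_SEC)

for resolution, rate in STREAM_MB_SEC.items():
    for streams in (1, 2, 4):
        print(f"{streams} x {resolution} streams: {drives_needed(streams, rate)} drives")
```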

When concurrent high-resolution content users must rely on large RAID arrays and large network switches, performance is difficult to maintain, and infrastructure problems arise. When multiple users request the same content within a stripe group, available bandwidth is often reduced, variable latencies are created, and the file system cannot deliver frame content reliably. If a RAID storage system becomes more than 50 percent full, content data fragments over time, storage performance drops, and users lose bandwidth. These infrastructure issues must be resolved before users can take full advantage of shared file systems in a high-resolution digital environment.

Shared file systems generally address the collaboration requirements of DI environments: they let multiple users access DI content without time-consuming file transfers or the risk of data corruption. Several infrastructure challenges remain, however, chief among them the high-performance, reliable delivery of DI data, which will be the focus of next-generation DI storage networking infrastructures.

At the root of these emerging challenges is the requirement for end-to-end content delivery, from storage to DI application memory. Image frames must be delivered at precisely controlled intervals: 24 frames per second in the case of digital film. If delivery is not precisely controlled, the application can drop frames or overflow its buffers.
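
A small model makes the timing constraint concrete: at 24 fps a frame is due every 1/24th of a second (about 41.7 ms), and a read-ahead buffer can absorb only brief delays. The buffer depth and read latencies below are hypothetical, chosen only to illustrate the effect.

```python
# Sketch of the real-time delivery constraint at 24 fps.
# Buffer depth and read latencies are hypothetical, for illustration only.
FPS = 24
FRAME_INTERVAL_MS = 1000 / FPS    # about 41.7 ms per frame

def frames_dropped(read_times_ms, buffer_frames=4):
    """Count frames that miss their display deadline, given sequential per-frame
    read latencies and a small read-ahead buffer that adds pre-roll slack."""
    ready_at = 0.0
    dropped = 0
    for i, read_ms in enumerate(read_times_ms):
        ready_at += read_ms                                  # reads happen back to back
        deadline = (i + buffer_frames) * FRAME_INTERVAL_MS   # buffer delays the deadline
        if ready_at > deadline:
            dropped += 1
    return dropped

print(frames_dropped([40] * 240))               # steady reads under 41.7 ms: 0 dropped
print(frames_dropped([40] * 120 + [80] * 120))  # sustained slow reads eventually drop frames
```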

The fundamental problem with existing storage architectures deployed in DI environments is that the storage and the delivery of digital video and film images are tightly coupled. To deliver 1.2 GB/sec, every segment of the data path, from the storage through the data link to the workstation adapter and finally to the application's receive buffers, must meet the necessary quality-of-delivery requirement at that same 1.2 GB/sec throughput.
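
In other words, the effective delivery rate is the minimum over every segment of the path. The segment throughputs below are hypothetical numbers chosen only to illustrate the weakest-link effect discussed next.

```python
# End-to-end delivery rate is set by the slowest segment in the path.
# All segment rates below are hypothetical, for illustration only.
PATH_MB_SEC = {
    "storage array": 900,
    "SAN fabric": 1600,
    "workstation adapter": 1500,
    "application buffers": 2000,
}
REQUIRED_4K_MB_SEC = 1152   # one 4K stream at 24 fps (see earlier arithmetic)

bottleneck = min(PATH_MB_SEC, key=PATH_MB_SEC.get)
effective = PATH_MB_SEC[bottleneck]
print(f"Effective rate: {effective} MB/sec, limited by the {bottleneck}")
print("Sustains one 4K stream:", effective >= REQUIRED_4K_MB_SEC)
```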

The weakest link in the data path, most likely the storage system, determines overall system performance. Storage systems today are based on conventional disk drives, whose I/O performance is closely tied to the rotational speed of the platters. Despite the rapid increase in disk-drive capacity and reduction in cost, I/O performance has not improved at the same rate as capacity and density. Disk-based storage systems also suffer performance degradation when multiple read/write requests hit data blocks concurrently, causing the drives' read/write actuators to thrash. Performance can drop by as much as 90 percent when large numbers of concurrent accesses hit the storage system.

Located in Hollywood, EFilm LLC is a cutting-edge digital film laboratory that has been breaking new ground in the DI arena since it created the world's first 100 percent, full-2K digitally mastered feature-length film in 2001: Paramount Pictures' We Were Soldiers, starring Mel Gibson. EFilm's most recent digital mastering breakthrough was its work on Spider-Man 2, the world's first 4K digitally mastered feature film.

EFilm uses an SGI CXFS-based environment to create digital intermediates, a process that includes high-resolution scanning, color correction, laser film recording, and video mastering, producing high-resolution digital distribution masters for film output, digital cinema releases, and home video and DVD.

EFilm's SGI CXFS environment is spread across six color-timing rooms and serves approximately 100 clients using a Fibre Channel SAN and Gigabit Ethernet LAN, with more than 200 TB of storage spread over multiple SGI TP9400 Fibre Channel and TP9500 Serial ATA storage arrays.

In addition to content on the SANs, EFilm has 20 TB to 30 TB of local storage distributed across five color-timing rooms. Cinematographers view projected, digital 1K copies of movie images and work with colorists in these rooms to correct each film sequence digitally. Images are typically at 2K and 4K resolutions.

Rounding out the configuration are four SGI 3800 servers with 16 processors each, and approximately 5 TB of directly attached Fibre Channel storage in each color-timing room. When a film is being scanned into EFilm's systems, the studio uses SGI's CXFS shared file system software to transfer 1K copies of each frame from the SAN to local storage in one of the color-timing rooms. Final reviews are done at 2K resolution before the final film output.

EFilm uses its CXFS SAN for both 1K and 2K playback in its color-timing rooms. However, because of other loads placed on the SAN, EFilm chose to implement both locally attached storage and SAN storage to achieve 100 percent reliable real-time 1K and 2K playback, a must for any DI environment.

Over the next two years, EFilm anticipates adding many color-timing bays, each supporting 2K- and 4K-resolution editing. This expansion will place even greater demands on the company's SAN performance and storage capacity. One option EFilm is considering is moving to an infrastructure that allows editors and colorists in each color-timing bay to access SAN-based 2K- and 4K-resolution content directly through SGI's guaranteed-bandwidth product.

Rainmaker is a world-class postproduction and visual effects company based in Vancouver, BC, that has captured the attention of audiences worldwide with thousands of visual effects in commercial, episodic, telefilm, and feature-film projects. The studio has received many accolades, including Emmy nominations in 1998, 1999, 2000, and 2001, and a 2002 Leo Award for Best Visual Effects in a Dramatic Series.

Employing more than 150 operators, editors, colorists, and coordinators for digital video postproduction projects, Rainmaker offers its clients laboratory, telecine, digital postproduction, HDTV, visual effects, and new media services. With all that data moving in and out of the studio, a reliable storage solution is a critical component of the creative pipeline.

Rainmaker's ADIC StorNext environment is spread across 29 Microsoft Windows 2000 systems and six SGI Origin servers connected via a Fibre Channel SAN to more than 4 TB of media storage capacity.

Four of the Windows 2000 servers and one SGI Origin200 server have Alacritech Gigabit Ethernet TCP/IP offload engine adapters and act as "SAN routers," allowing more than 100 workstations and rendering nodes without Fibre Channel connections to access SAN-based DI content easily.

Rainmaker's team of 3D and 2D artists works with various file formats and resolutions, including HD, 2K, and 4K, depending on the project at hand, creating special effects and animation for motion pictures, television, or HDTV. With 35 artists working simultaneously, large numbers of image files are constantly being pushed to and pulled from the Fibre Channel SAN, and ADIC's StorNext shared file system plays a critical role in enabling transparent file sharing among Rainmaker's artists.

Depending on media resolution and streaming-performance requirements, content sharing may also require administrative processes as well as file transfers from the SAN to direct-attached storage. Specifically, because of SAN bandwidth constraints, informal policies limit the number of concurrent users accessing 2K or 4K content, or artists transfer high-resolution content from the SAN to local storage during off-hours.


Saqib Jang is founder and principal at Margalla Communications, a Woodside, CA-based firm focusing on the storage and server networking markets.