Storage Propels the Creative Process
Volume 28, Issue 9 (September 2005)


For many digital entertainment studios, storage-centric IT networks are the invisible backbone of the creative pipeline. A vital work flow component, storage can help or hinder the efforts of artists and creators who archive, access, and share data for high-profile film, TV, DVD, and video projects, and the right solution must be tailored to the needs of each creative environment.

With high-capacity storage more affordable than ever before (6TB of networked storage can be purchased for approximately $14,000), studios are rapidly moving toward a more streamlined, all-digital work flow that relies on centrally accessible, shared disk storage systems to perform all facets of work in progress, including content creation, rendering, editing, color correction, and review. But the digital approach is not for everyone. Some studios still output to digital videotape, re-ingesting digital video footage back to disk for further editing.

“Studios really want to get out of the world of videotape,” says Tom Shearer, president and CEO of Los Angeles-based Talon Data Systems, a systems integrator that serves the broadcast and entertainment industries. “Everybody is pushing hard to come up with a work flow that lets them stay on disk throughout the production cycle.”

Interest has also grown in centralized, networked storage technologies such as network-attached storage (NAS), storage area networks (SANs), or a combination of the two architectures. Layered on top of these, shared file systems make it easier to distribute the same files among multiple users simultaneously.

Deciding on the right storage technology for production tasks can be a complex process. Studios and postproduction houses today can choose from a wide variety of NAS and SAN solutions, as well as shared file systems from vendors such as ADIC, Isilon, Network Appliance, Panasas, Pillar Data Systems, SGI, and others. Tiger Technology also offers MetaSAN, which emulates a shared file system and works in both Fibre Channel and iSCSI SANs.

Studios must also choose from a wide range of disk-drive technologies that include both high-performance Fibre Channel drives and lower-cost, higher-capacity Serial ATA (SATA) drives.

File-based NAS systems are best suited to projects built from many small files (1MB frames, for example), such as short-form work involving visual effects, compositing, or frame-by-frame rendering. In contrast, block-based SANs work well when you need to move large amounts of non-sequential, uncompressed data quickly, or perform real-time writing to or playback from disk.
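
The distinction largely comes down to arithmetic: sustained bandwidth scales with frame size times frame rate. The sketch below is a rough estimate only, not a figure cited by any of the studios in this article; it borrows the approximate frame sizes quoted later in the story and assumes a 24fps playback rate.

```python
# Back-of-the-envelope bandwidth estimates for streaming uncompressed frames
# in real time. Frame sizes approximate those quoted elsewhere in this story;
# the 24fps playback rate is an assumption.

FPS = 24  # assumed playback rate for film-resolution work

def playback_bandwidth(frame_size_mb: float, fps: int = FPS) -> float:
    """Sustained MB/sec needed to play uncompressed frames in real time."""
    return frame_size_mb * fps

if __name__ == "__main__":
    print(f"1MB frames (small-file, NAS-friendly work): {playback_bandwidth(1.0):.0f} MB/sec")
    print(f"2K frames at about 12MB each:               {playback_bandwidth(12.0):.0f} MB/sec")
    print(f"6K stereo plates at about 100MB per eye:    {playback_bandwidth(100.0) * 2:.0f} MB/sec")
```

At these rates, real-time playback of uncompressed 2K or larger material quickly outstrips what a general-purpose file server can comfortably sustain, which is why streaming workloads tend to land on block-based SANs.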

To meet evolving storage requirements, it’s now common for studios to have a combination of technologies, such as SAN and NAS, as well as Fibre Channel and SATA disk drives. “Often, studios should also have some type of shared file system,” says Shearer.

In the digital entertainment industry, success is measured by how well artists can focus on what they do best: creating and editing content, as opposed to waiting for files to open, frames to render, or lengthy data transfers to complete. Sometimes, the right storage technology for a studio depends on how well it integrates with existing processes.

At Reel FX Creative Studios, a Dallas-based creative group that focuses on film, DVD, and TV projects, including commercials such as “JCPenney Back to School,” creating a successful marriage of technology and process is what the company’s executive vice president Dale Carman calls “working creative at the speed of thought.”

To accommodate exponential growth and expansion of its services, Reel FX upgraded its storage from an initial SGI InfiniteStorage NAS 2000 system to what Carman estimates is now about 24TB of storage capacity on a SAN running SGI’s CXFS shared file system.


Reel FX Creative Studios uses an SGI-based SAN and a CXFS shared file system to facilitate work on commercials such as “JCPenney Back to School.”

Reel FX’s primary reason for the upgrade was to centralize its storage resources and provide seamless, simultaneous access to the same data by multiple users. “What it came down to was finding something with enough horsepower,” explains Carman. “We have 150 people accessing the data, plus 400 processors on a renderfarm accessing the data. The typical way to do that is to segment it out with different servers and storage for different users, but then you run into all sorts of management problems.”

The SGI-based SAN and CXFS shared file system solved Reel FX’s performance, content sharing, and storage management issues, and SGI’s guaranteed rate I/O, or GRIO, feature allows the studio to dedicate I/O to specific tasks, such as rendering.
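
As a conceptual illustration only, and not a reflection of SGI’s actual GRIO interface, the sketch below shows the general idea behind guaranteed-rate I/O: a fixed pool of bandwidth from which critical workloads reserve a sustained rate, leaving the remainder for best-effort clients. The pool size and reservation figures are hypothetical.

```python
# Toy model of guaranteed-rate I/O (conceptual only; this is not SGI's GRIO
# API). A fixed bandwidth pool hands out guaranteed reservations to named
# workloads and reports what is left for best-effort traffic.

class BandwidthPool:
    def __init__(self, total_mb_per_sec: float):
        self.total = total_mb_per_sec
        self.reservations = {}  # task name -> guaranteed MB/sec

    def reserve(self, task: str, mb_per_sec: float) -> bool:
        """Grant a guaranteed rate only if all reservations still fit in the pool."""
        if sum(self.reservations.values()) + mb_per_sec > self.total:
            return False
        self.reservations[task] = mb_per_sec
        return True

    def best_effort_share(self) -> float:
        """Bandwidth remaining for clients without a reservation."""
        return self.total - sum(self.reservations.values())

if __name__ == "__main__":
    pool = BandwidthPool(total_mb_per_sec=800.0)   # hypothetical SAN throughput
    print(pool.reserve("renderfarm", 300.0))       # True: rendering gets a guaranteed slice
    print(pool.reserve("playback", 600.0))         # False: would exceed the pool
    print(pool.best_effort_share())                # 500.0 MB/sec left for everyone else
```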

India-based Pentamedia Graphics Ltd. focuses on feature films, visual effects, and animation features such as The Legend of Buddha, Ali Baba, and Son of Alladin. With four production groups (3D modeling and animation, 3D rendering, special effects, and digital editing and mixing) all delivering a wide array of digital content, Pentamedia moved away from a centralized storage network to create a segmented solution based on the individual needs of the groups.

Pentamedia assigned each of four 5.6TB Nexsan ATABoy2 storage systems to its own subnetwork (one per production group), connected by either Gigabit Ethernet or Fibre Channel. According to Riyaz Sheik, general manager of Pentamedia’s animation and production unit, this arrangement has allowed his teams to avoid much of the resource contention and throughput issues experienced by some other studios.

“To make the pipeline work better [and to avoid previous bottleneck problems], we had to break production groups and networks into a lot of subnetworks,” explains Sheik.


Pentamedia Graphics used four Nexsan ATABoy2 storage systems in the creation of Son of Alladin.

The Nexsan storage subsystems, which are based on ATA disk drives, were selected for a number of reasons, including pricing, support, and reliability, the latter of which has been tested under extreme conditions. “These products can work in any conditions, from freezing temperatures to hot temperatures and air-conditioning failures,” says Sheik. Because of this, Pentamedia plans to add up to 20TB to its existing 22TB-plus of Nexsan storage.

Digital Dimension, based in Montreal, knows what it’s like to almost “top out” your storage. The 3D animation, motion graphics, and visual effects studio recently had to juggle data storage for two projects simultaneously: Zathura, a full-length feature film, and Magnificent Desolation, a 3D stereoscopic IMAX film. Digital Dimension has also recently been involved in other high-profile films, including Monster-In-Law and Mr. and Mrs. Smith.

Joe Boswell, a lead systems administrator for the studio, says that the work for Zathura alone required almost 7TB of storage space to accommodate about 200 shots, many of them miniatures. With each shot consisting of roughly 100 frames, 30 layers per frame, at a standard 2K resolution of 12MB per frame, the storage requirements for the project added up rapidly.

For Magnificent Desolation, the studio had to work with two separate plates (from two cameras shooting slightly offset for stereo), where each 6K frame takes up about 100MB of storage, multiplied by two. As the two projects came together earlier this year, the company anticipated peak usage and quickly moved to the Isilon storage system and Isilon’s OneFS shared file system.
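
The figures Boswell cites can be checked with simple arithmetic. In the sketch below, the Zathura numbers (shots, frames, layers, and megabytes per frame) come from his description above; the 100-frame stereo shot for Magnificent Desolation is a hypothetical example, since the article does not give shot counts for that project.

```python
# Reconstruction of the storage arithmetic described above. Zathura figures
# (shots, frames, layers, MB per frame) are from Boswell's description; the
# 100-frame stereo shot for Magnificent Desolation is a hypothetical example.

def storage_tb(shots: int, frames_per_shot: int, layers: int, mb_per_frame: float) -> float:
    """Total terabytes (1TB = 1,000,000MB here) of layered frame data."""
    return shots * frames_per_shot * layers * mb_per_frame / 1_000_000

if __name__ == "__main__":
    # Zathura: ~200 shots x 100 frames x 30 layers x 12MB per 2K frame
    print(f"Zathura estimate: {storage_tb(200, 100, 30, 12):.1f} TB")  # ~7.2TB, i.e. "almost 7TB"

    # Magnificent Desolation: two 6K plates (left and right eye) at ~100MB each,
    # treated as two layers, for an assumed 100-frame shot
    print(f"Stereo IMAX, per 100-frame shot: {storage_tb(1, 100, 2, 100):.2f} TB")  # 0.02TB per shot
```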

To date, Boswell reports that the studio has been pleased with the system’s speed, as well as the low cost and reliability of the SATA drives compared to the more-expensive Fibre Channel components. The studio stores its content on approximately 16TB of disk capacity provided by an Isilon IQ 1920 clustered storage system that includes 160GB SATA disk drives.


Digital Dimension relied on Isilon’s IQ 1920 clustered storage system and OneFS shared file system to help develop this mountain-climbing scene from the movie Mr. and Mrs. Smith.


The studio’s 2D rendering pipeline requires the most bandwidth. “Our 2D render nodes work on shots the artists have set up and sent to render. The render nodes are going pretty much all day and all night, pulling frames from, and writing frames to, the Isilon system all the time,” says Boswell, noting that Isilon’s clustered design provides automatic node balancing for clients across each of the system’s eight 2TB nodes.

“We can have eight nodes all pushing about 95MB/sec, with an aggregate of more than 700MB/sec. I’ve tested it up to 400MB/sec, where I was actually overrunning our switch trunks, which was phenomenal.” Isilon’s storage servers use high-speed InfiniBand interconnects.
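
Boswell’s aggregate figure follows directly from the per-node rate. The quick check below uses only the node count and per-node throughput he cites; the Gigabit Ethernet comparison is added here for context and is not part of his description.

```python
# Quick check of the aggregate throughput Boswell cites: eight Isilon nodes,
# each sustaining roughly 95MB/sec. Node count and per-node rate are from the
# article; the Gigabit Ethernet comparison is added context.

NODES = 8
PER_NODE_MB_PER_SEC = 95.0
GIGE_RAW_MB_PER_SEC = 125.0  # 1Gb/sec Ethernet, before protocol overhead

aggregate = NODES * PER_NODE_MB_PER_SEC
print(f"Aggregate across the cluster: {aggregate:.0f} MB/sec")   # 760, i.e. "more than 700"
print(f"Single GigE client link, raw: {GIGE_RAW_MB_PER_SEC:.0f} MB/sec")
# Reaching hundreds of MB/sec therefore means spreading clients across nodes
# and trunked switch links, which is what Boswell's 400MB/sec test was pushing.
```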

Storage performance has also improved. The NAS array the studio previously used would often slow to a crawl, creating headaches for the creative team. “It used to get so bogged down that people couldn’t browse directories,” says Boswell. “There would be days when we’d have to send people home or ask artists to delete stuff. Before we installed the Isilon systems, storage was always the bottleneck.”

Meteor Studios, a visual effects studio with offices in Montreal and Los Angeles, knows what it’s like to have to send people home, or split artists into two shifts, to better manage the resource-contention issues that arise when a storage system is close to capacity and working overtime to process thousands of read/write requests per second.

Meteor, which is currently in production with the feature Alien Planet, performed complex visual effects work on one of the longest sequences in the Fantastic Four film. This process involved more than 100 artists working on 240 shots depicting just three to four minutes of the Brooklyn Bridge sequence of the film. With this much digital content to manage, and more on the way, Meteor began to explore its storage system options.

The requirements of the storage system were straightforward: It had to be able to handle very high I/O rates while allowing for rapid expansion in capacity. The studio considered storage systems from vendors such as BlueArc, Isilon, Maximum Throughput, SGI, and Terrascale before opting for BlueArc’s Titan Storage System.

Jami Levesque, Meteor’s director of technology, likes the Titan Storage System’s modular design, which allows the studio to grow quickly, adding bandwidth and capacity as needed, at a relatively low cost.

The system’s performance headroom was another factor. During one job at Meteor, the Titan storage server clocked 140,000 I/Os per second, well above the studio’s typical peak rate of 45,000 to 50,000 I/Os per second. Currently, the studio’s Titan system includes more than 7TB of capacity on Fibre Channel disk drives and almost 3TB on SATA disk drives.


To handle complex visual effects work, Meteor Studios upgraded to a BlueArc Titan Storage System, which features very high I/O rates and more than 7TB of capacity on Fibre Channel disk drives.


The Maine Public Broadcasting Network (MPBN), a nonprofit network that produces a number of TV shows, including the award-winning Quest series, has learned a thing or two about storage in its efforts to transform itself into a videotape-free operation. Under the old tape-based work flow, a video editor often spent up to 10 hours a week archiving video footage out to tape, or waiting to re-ingest a tape at another station before continuing. Editors at the station’s Bangor and Lewiston locations often resorted to “sneakernet,” physically shuttling tapes between sites in order to share work.

According to MPBN systems integrator Kevin Pazera, the station’s use of Avid editing stations with non-shareable, direct-attached storage (DAS) has created inefficiencies across the network. As a result, the economically minded nonprofit station is moving away from proprietary systems with DAS to a more “open” SAN configuration.

MPBN plans to phase in Apple Mac G5s running Final Cut Pro at both its facilities. For back-end storage, MPBN will be using 30TB of storage capacity on two Fibre Channel SANs from Compellent (one in Bangor and one in Lewiston). The plan is to replicate data asynchronously between the two sites.


Video editors and producers at the Maine Public Broadcasting Network use two Compellent SANs at two locations to store raw and working footage used to create local shows like the award-winning science and nature series Quest.

According to Pazera, the Compellent SAN solution will make a huge difference for video editors, not to mention the station’s other business units, whose storage needs will also be served by the SAN. MPBN is using Tiger Technology’s MetaSAN to handle resource contention issues. It allows each editing workstation to bypass the server, connecting directly to the 2Gb/sec Fibre Channel SAN.

Now, MPBN editors can keep all the raw footage for each story on disk and work on the footage from any editing workstation. They can also do away with the sneakernet and, instead, directly access files on the SANs.

“This will be great for our editors because we want them to be editing all the time and not moving data back and forth,” says Pazera. And in the end, that’s the ultimate sign that a studio’s storage is doing its job.



Michele Hope is a freelance writer and can be reached at mhope@thestoragewriter.com.