The Hunger Game
Douglas King
Issue: Volume 35 Issue 5: (Aug/Sept 2012)


Tom Coughlin
Pete Schlatter
Jason Danielson
Many of us who have worked with computers for years can remember a time, not so very long ago, when having 100MB of storage on a disk was considered impressive; when we were amazed by the first 1GB storage devices. Now, however, studios are creating that much data in one frame of animation or one scene of a game. Storage needs have grown exponentially with the advent of more realistic gaming systems and 3D stereoscopic films, not to mention 4K and now 6K single-film-frame resolutions and 48 frame-per-second (fps) film rates.

The storage needs of studios, large and small, are no longer measured in megabytes or even gigabytes. We are now talking about daily production of terabytes and petabytes of data.

How are studios dealing with this increasing need and the various challenges that come with it, such as latency and accessibility? Computer Graphics World asked Tom Coughlin, Pete Schlatter, and Jason Danielson, all leaders in the storage industry, questions that are on everyone’s mind, to see what options studios of all sizes have today, and what they can expect in the future.

With more than 30 years of experience in magnetic recording engineering and flexible tape, floppy disk, and rigid disk storage, Coughlin, founder of Coughlin Associates, has written market and technology analysis reports and articles, including the “Digital Entertainment” series focusing on data storage and the creation, distribution, and reception of entertainment content.

Schlatter is the product marketing executive/brand evangelist at G-Technology by Hitachi. He has been with G-Tech since 2005 and has worked to support the unique needs of the entertainment industry while gathering information for next-generation products.

Danielson, along with his product team at Digital F/X, won an Emmy for Technical Achievement for the development of the video workstation. Prior to leading NetApp’s worldwide marketing for the media and entertainment industry, he launched Omneon’s broadcast-specific Media Grid storage system, and at Silicon Graphics he developed solutions for more than 100 of the leading editing, compositing, animation, and graphics application companies.

Percentage of various recording media in professional video cameras.
Let’s jump right in. In the near future, what will the storage requirements be for studios?

As the resolution and frame rate of video content increase, so do the total content storage and network bandwidth needs. Also, as multi-camera projects increase due to stereoscopic and even free-viewpoint workflows, the total storage and bandwidth requirements will continue to expand. Movie projects can accumulate total content storage of several petabytes in size, and this will increase with time.

Schlatter: When it comes to storage, larger film studios are really looking for capacity, performance, and sharing. The higher-resolution films produced nowadays are increasing the amount of space needed to store them, and in larger production studios, you have multiple people working on the same project at the same time, which means that being able to collaborate and share these files with one another quickly and easily is extremely important. Smaller studios are more about getting one workstation set up with faster storage that then can be shared to other users. [Apple’s] Thunderbolt benefits smaller production studios because it provides quick storage transfer rates using new Apple gear and direct-attached storage.

Danielson: The storage demands for media companies are increasing at an unprecedented rate, with the adoption of digital cameras and on-set, file-based workflows and the proliferation of delivery platforms. On the production side, video formats such as 3D stereoscopic and 48-fps shoots are driving up bandwidth and capacity needs, which is why we have focused on increased bandwidth and non-disruptive scalability for our production storage systems. Bandwidth drives productivity by reducing file transfer times and the number of file transfers needed to move content through the workflow or pipeline.

On the distribution side, with anywhere, anytime access, the strategy for monetizing older as-sets is becoming clearer. This opportunity, along with the evolution of object stores and the cost reduction for terabytes, is building the case for deeper digitization of library assets. As a result, global distributed content repositories are emerging. With this trend, object store technology will become more important. We will also see continuing interest in the analytics of consumer behavior for the Internet, broadcast media, film content, and other formats. Therefore, storage architectures that accelerate big data analytics will be another focus.

You mentioned Apple’s Thunderbolt high-speed I/O technology. What impact do you foresee this technology having on the industry?

Thunderbolt breaks performance bottlenecks and enables laptop expansion by bringing PCI Express (PCIe), the system bus found inside the computer, to the outside. This will allow Thunderbolt-enabled systems, like Mac laptops, iMacs, and Mac minis (and soon, PC systems), to engage in professional, uncompressed workflows by allowing fast RAID storage systems to be directly attached, supplying data rates of up to 1,000MB/sec! Thunderbolt also enables the direct connection of professional video I/O devices to these systems—all this with one small, high-performance connection.

Coughlin: Thunderbolt is the first in a series of PCIe-based storage interfaces that will remake the industry over the next several years and allow aggregate bandwidth of 64Gb/sec or higher.

Danielson: Because Thunderbolt-enabled laptops are getting performance that previously only desksides could get, there will be wide adoption of the Thunderbolt standard, particularly for on-set use. Frankly, on-set, file-based workflows sorely need more bandwidth. The bottleneck between on-set storage and the shared storage in the facility will be streamlined by various solutions, such as ATTO’s Desklink family. With Thunderbolt on-set storage, ingest streams will more fully saturate the SAS, Fibre Channel, and 10-gig Ethernet ingest paths into shared storage.

Which storage devices work best and within which type of video production environments? 

Coughlin: The storage hierarchy for video production and postproduction will include Flash memory, high-performance HDDs (hard disk drives), high-capacity/low-cost HDDs, and optical or magnetic tape storage for archiving. Note that Flash memory is becoming dominant in professional video cameras, and this trend is expected to continue in future years.

Schlatter: RAID subsystems work best in large studio environments. Smaller studios will use direct-attached RAID and Thunderbolt. Thunderbolt will be used in the field, as well.

Danielson: Our customer feedback is that customers would like to get the most bandwidth possible in the least amount of rack space, so we focus on bandwidth per rack unit: for mixed read/write bandwidth, a 2RU enclosure with 1.6GB/sec and a 4RU enclosure with 2.8GB/sec. For production environments, there are many different sizes, so there is no perfect storage device. The best solution is an agile and intelligent architecture that does not burden the production facility with unnecessary costs, yet has the ability to scale infinitely to meet growing production needs without disrupting operations. We focus on modularity to allow scaling down to entry-level systems and to allow granular scaling in easily digestible increments, along with non-disruptive upgrades so that users have constant access to existing data while upgrades are performed.

Mac Guff Studios (Paris, France), a division of Illumination Entertainment, was the primary facility that created Dr. Seuss’ The Lorax, an animated film released in 2012, which, according to Box Office Mojo, generated nearly $200 million in its first 45 days of worldwide release. The Lorax was rendered on HNAS storage by Hitachi Data Systems.

In regard to animation and special effects creation, what are the requirements? 

Coughlin: Modern video rendering has similarities to HPC (high-performance computing) and engineering simulation and modeling, and tends to involve bursts of data transfer that the storage systems and network must accommodate. The use of clustered computing and storage, with many nodes for processing the rendered images, is common. Very-high-speed InfiniBand connectivity for compute nodes and storage is not uncommon for these applications. Of course, as the resolution of the finished product increases—and there are projects now using 6K or higher rendered resolution—both the bandwidth and storage requirements increase. The high costs of building a modern rendering facility have led to a strong market for outsourced rendering. Because much of this work can be done by transferring the initial input and the final result through the Web, rendering could be seen as a good example of the use of cloud resources for professional video production.

Danielson: We have seen our customers [in large film animation and special effects facilities] find great value in non-disruptively scalable storage systems. With effects and animation rendering occurring 24/7, it’s imperative that these companies have a unified infrastructure in place to bring every ounce of efficiency out of their renderfarms. For instance, they need to be able to move their datasets, rebalance their network ports to any node in the cluster, add storage nodes, and apply software patches—all non-disruptively and without downtime—because downtime costs money.

The chart above offers an example of storage and bandwidth requirements from the Coughlin “2012 Digital Storage for Media and Entertainment Report.” Note that stereoscopic content can increase these numbers by 2X for raw content and approximately 1.5X for lossless compressed content.

What are the storage and bandwidth requirements for stereoscopic and high-definition content capture? 

Danielson: The storage and bandwidth requirements are exceptionally high, and, for years, the industry has wanted higher-bandwidth capture capabilities. I do not foresee this trend stopping. Producers and directors will continue to push the bounds of what’s possible—I think that is part of their job. We no longer have one number to depend on for bandwidth per stream. You have to review spreadsheets of formats, frame sizes, frame rates, and color depths to derive megabytes per second. So, 230MB/sec, 380MB/sec, 1.2GB/sec, and even 2.4GB/sec come to mind.

Schlatter: At the high end, many movies are being produced in 4K resolution. There’s an incredible amount of data in a 4K image, especially uncompressed, where data rates are upwards of 1,000MB/sec at 24 fps—that’s 1GB a second! For 3D work, multiply that by a factor of two! That said, the amount of storage needed would be tremendous if you’re working in a 4K, uncompressed environment. In addition, some directors are now experimenting with shooting at higher frame rates, which further increases the amount of storage needed.
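
Schlatter’s 1GB-a-second figure is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below assumes a 4096x2160 frame, 12 bits per channel, and three color channels (uncompressed 4:4:4); these are illustrative assumptions, not any particular camera format:

```python
# Back-of-the-envelope data rate for uncompressed 4K video.
# Assumed parameters (real productions vary): 4096x2160 frames,
# 12 bits per channel, 3 color channels (4:4:4), 24 fps.
def uncompressed_rate_mb_per_sec(width, height, bits_per_channel, channels, fps):
    bytes_per_frame = width * height * channels * bits_per_channel / 8
    return bytes_per_frame * fps / 1e6  # decimal megabytes per second

rate_2d = uncompressed_rate_mb_per_sec(4096, 2160, 12, 3, 24)
rate_3d = 2 * rate_2d  # stereoscopic 3D: one full stream per eye

print(f"4K 2D: {rate_2d:.0f} MB/sec")  # roughly 956 MB/sec, i.e. about 1GB/sec
print(f"4K 3D: {rate_3d:.0f} MB/sec")
```

Lowering the bit depth or chroma sampling drops the rate accordingly, which is why per-stream bandwidth now has to be read off a table of formats rather than quoted as a single number.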

Can you name some of the solutions for dealing with latency of storage systems?

Coughlin: Fast-cache storage is providing ways of dealing with latency issues. Increasingly, systems are using Flash memory in addition to DRAM for these caches. Faster interfaces that can use Flash-memory storage bandwidth are another important element. Note also that fast metadata servers can help with faster access to stored content, and these are increasingly moving to Flash memory as an important storage system component. In modern postproduction workflows, cloud storage may be used for collaborative workflows that span time and space, even though the latency might be too long for direct video streaming (one example is the viewing of proxies and dailies through the Internet).

Danielson: Just below bandwidth and above capacity, latency is one of the most critical aspects of a storage system. There are many strategies for reducing latency. There is a choice of drives (SATA, NL-SAS, SAS, FC, SSD), a choice of storage interfaces (SAS, InfiniBand, Fibre Channel, iSCSI, Ethernet), as well as tiering strategies for moving less-used content off to cheaper drives. There are also staging strategies for bringing copies of most-used content ‘forward,’ closer to the CPU, for processing or delivery of content.
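
The staging idea Danielson describes (copying the most-used content forward to a faster tier) can be illustrated with a toy sketch. The class name, tier names, and promotion threshold below are hypothetical, not any vendor’s implementation:

```python
from collections import Counter

class StagedStore:
    """Toy two-tier staging policy: assets live on a slow, cheap,
    high-capacity tier; copies of frequently accessed assets are
    staged forward to a fast (e.g., SSD-backed) tier."""

    def __init__(self, promote_after=3):
        self.slow_tier = {}    # asset_id -> content (system of record)
        self.fast_tier = {}    # staged copies of hot assets
        self.hits = Counter()  # access count per asset
        self.promote_after = promote_after

    def put(self, asset_id, content):
        self.slow_tier[asset_id] = content  # new content lands on the slow tier

    def get(self, asset_id):
        self.hits[asset_id] += 1
        if asset_id in self.fast_tier:      # fast path: already staged
            return self.fast_tier[asset_id]
        content = self.slow_tier[asset_id]
        if self.hits[asset_id] >= self.promote_after:
            self.fast_tier[asset_id] = content  # stage a copy forward
        return content

store = StagedStore(promote_after=2)
store.put("reel-042", b"...frames...")
store.get("reel-042")  # first read: served from the slow tier
store.get("reel-042")  # second read crosses the threshold and stages a copy
print("staged:", "reel-042" in store.fast_tier)  # staged: True
```

Tiering is the mirror image of this policy: demoting assets whose access counts fall below a threshold back to the cheaper tier.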

Are there trends you foresee in storage devices? 

Coughlin: Studios will need to look at faster direct-attached and storage network connections and speedier storage systems. Higher-speed Fibre Channel (up to 16Gb/sec) might become common for FC-based SAN facilities. Also, lower-cost 1Gb Ethernet and the availability of 40Gb Ethernet will increase file-based NAS connections as well as iSCSI and other protocol SANs. Flash-memory devices, including PCIe, as well as SATA and SAS Express interface devices, will be more common in storage systems as caching layers and system performance enhancements. Direct-attached and internal connections using Thunderbolt, NVMe, SATA, and SAS Express will become common. Basically, PCIe-based storage connections will enable many high-speed media and entertainment applications. HDDs and even magnetic tape (especially LTO tape with the LTFS file system) will remain in the background as near-line and archive storage, respectively, for the large library of accumulated content.

Schlatter: In the large production studio setting, I see the need for more capacity and shareability. For the smaller studio setting, I see the need for technology like Thunderbolt.

Danielson: In addition to the trends I mentioned earlier, other trends I foresee are the digital acquisition of higher bit-rate formats and greater emphasis on media assets across the range of distribution channels/delivery platforms; global distributed content repositories; and analytics to drive better business decisions regarding release windows and release platforms. From a storage perspective, I see a consolidation of production workflows due to the greater scale-up and scale-out bandwidth of storage systems, larger distributed libraries to more efficiently repurpose content, as well as lower-latency and more secure storage systems for big data analytics.

ATTO’s Celerity 8Gb/sec Fibre Channel HBAs allow Arc Productions to screen uncompressed film images in mono or stereo, which requires huge bandwidth and higher-density storage with fast data transfers.

How will cloud storage be used? 

Coughlin: Cloud storage is assuming an important ancillary role in the M&E storage hierarchy.

Danielson: For the past decade, media companies have been figuring out the ‘make-versus-buy’ model for managing and storing media assets. The question for these companies is: Can a cloud provider offer storage that can scale more cheaply and more securely than if it is managed internally? However, the decision about whether to employ a public, private, or hybrid cloud is less important than making sure that the underlying storage architecture supports distributed content repositories and allows everyone in the organization to see a federated view of the object store. A distributed content repository trumps the current capabilities of cloud storage because it can span dozens or hundreds of locations. Eventually, we will have cloud storage offerings supporting CDMI. This will make hybrid-cloud and multi-cloud offerings real.

Do you believe SSD usage will become a trend? 

Coughlin: SSDs and Flash memory in general will grow for usage in speeding content transfers and also for content capture.

Schlatter: While SSDs provide amazing performance, the cost per gigabyte is still significantly more than spinning disks, which makes them cost-prohibitive. In some applications, the advantages of SSDs—ruggedness, for example—may outweigh the cost. You can get some amazing data rates in a very small footprint (portability).

Danielson: SSDs are now a trend with both positive and negative attributes. SSDs were the panacea for a while. Now their value and where the data storage devices sit on the cost/benefit curve are better understood, and SSDs are not going to replace all the hard-drive technologies in play. However, some use SSDs as staged and tiered storage to reduce latency on most-used content, and this trend will continue and grow. Customers use SSDs for latency-critical usage cases, like film animation and effects rendering, to reduce the unused clock cycles in any given 24-hour period. SSDs for cached content are very effective in these instances and, therefore, are worth the cost.

What do you think of new developments, such as the Linear Tape File System (LTFS) in Linear Tape-Open (LTO)? 

Coughlin: File-based tape is assuming an important role in the professional M&E environment. Already cloud-based archive services, such as the Permivault solution from Fujifilm, point the way to creating low-cost and accessible media archives using LTO tapes with LTFS file systems.

Danielson: LTFS supports the tape argument by providing a file system that application vendors can leverage for MAM (media asset management) functionality. So LTFS is great in that regard, since the ecosystem can develop efficiency at the software application layer. But LTFS does not change the user’s perspective of tape in and of itself. The user still has the latency of accessing files, which means that tape remains a tier-two or tier-three storage alternative.

Looking at the latest developments for object-based workflows, what advantages do they present? 

Coughlin: Object-based workflows are the logical extension of file-based workflows as file metadata moves to even more granular levels—each frame can be a separate file. This allows more accurate indexing of the metadata and, as a consequence, can speed up digital workflows and make them more convenient.

Danielson: One advantage is that object stores support billions of objects and hundreds of sites. Global media and entertainment companies will build out true distributed content repositories, which the industry has been discussing since the mid-1990s. Another advantage is the Cloud Data Management Interface (CDMI), which enables object-based workflows to span tiers and brands of storage as well as the cloud. There are over 100 smart vendors, end users, and academic experts working to make this standard a reality. Distributed content repositories will be able to span on-premises infrastructures, privately hosted cloud infrastructures, and public infrastructures with security. Assets can be designated for geographic dispersion, quality of service, security, legal rights, retention, and, of course, next steps in the workflow. The benefits of object stores will tip the cost and benefit scale so that more companies will move to a consolidated enterprise-wide repository for all their media assets.

Is a 40TB hard drive possible?

Coughlin: Within 10 years’ time, we will likely see common HDD capacities of 60TB or greater, SSDs with multiple-terabyte capacities, and magnetic tapes with tens of terabytes of storage capacity.

Danielson: We are talking to a company, NanoScale, that is working on a prototype 10TB platter that will be in a product a few years from now. Their non-magnetic technology has big upside potential, but it will take a while to get into production. So, yes, we will see 40TB drives eventually, and with higher data transfer rates than we have today.

How much storage does our industry expect to use over the next several years? 

Coughlin: In the 2012 Coughlin Associates report, we stated that between 2012 and 2017, we expect about a 5.6x increase in the required digital storage capacity used in the entertainment industry, and about a four-fold increase in storage capacity shipped per year (from 22,425PB to 87,152PB). Total media and entertainment storage revenue will grow more than 1.4x between 2012 and 2017 (from $5.6 billion to $7.8 billion).
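
The multipliers Coughlin cites imply compound annual growth rates that can be worked out directly. The calculation below is just arithmetic on the figures quoted above, assuming the growth compounds over the five years from 2012 to 2017:

```python
# Compound annual growth rates implied by the 2012-2017 multipliers
# cited in the Coughlin Associates report.
years = 5

def cagr(multiplier, years):
    """Annual growth rate implied by an overall multiplier over `years`."""
    return multiplier ** (1 / years) - 1

capacity_used = cagr(5.6, years)               # 5.6x total capacity in use
capacity_shipped = cagr(87152 / 22425, years)  # PB shipped/year, ~3.9x
revenue = cagr(7.8 / 5.6, years)               # $5.6B -> $7.8B, ~1.4x

print(f"capacity in use:  {capacity_used:.0%}/yr")    # ~41%/yr
print(f"capacity shipped: {capacity_shipped:.0%}/yr") # ~31%/yr
print(f"revenue:          {revenue:.0%}/yr")          # ~7%/yr
```

The spread between the numbers tells the story: capacity compounds at roughly 30 to 40 percent per year while revenue grows in single digits, because the cost per terabyte keeps falling.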

Danielson: It’s clear that storage demands in the media and entertainment industry are rising rapidly. According to a 2012 survey of Society of Motion Picture and Television Engineers members, the digital storage needs of the media and entertainment industry will increase 5.6 times within five years. However, all sectors of the industry—film, broadcast, cable/satellite/telco TV, Internet TV, and social media—are growing at varying rates. Each is driving unique demands for storage in bandwidth, latency, topology, reliability, application interoperability, and capacity. Film and broadcast TV have been growing at a steady rate, while cable TV is growing much more rapidly. Social media is expanding at an immeasurable rate. It is evident that the amount of storage we are going to use in the next three years will be more than we’ve used in the past 30 years. It’s estimated that between 2011 and 2016, the media and entertainment industry will see about a 7.7 times increase in digital storage.

Douglas King is a freelance writer and producer based in Dallas. He has worked in the entertainment industry for more than 20 years, including time spent as a creative director for a game developer, product development manager, and writer/director for film and television. He currently writes and produces for the Web comedy series “For Export Only.”