Digital Forecast: Cloudy
By Alex Herrera
Issue: Volume 37, Issue 6 (Nov/Dec 2014)

The cloud. It's an IT term that's quickly grown to ubiquity, but it's a concept as nebulous as the formation that inspires the name.

The term is either widely misused or so broad that it's acceptable to reference just about anything that leverages the network as a "cloud solution." But, looking past the hype and marketing speak, the advent of the cloud truly does hold unique new benefits, ranging from the nice-to-have to the game-changing. And its merits are as applicable in the area of digital media and entertainment (DME) as any other.

Ironically, at its essence, the cloud is not a new computing model at all, but rather a renaissance of the industry's original model. It enables a range of usage models and services, but all exploit the advantages of a centralized computing topology, where the bulk of both data and computations exist in one place, accessible by many.


A look at an emerging trio of cloud solutions serving digital media and entertainment production today.

You don't have to know how the cloud gets the job done, nor should you typically care where it's getting done. You just need effective access and, hopefully, the trust that the work gets done accurately, securely, and reliably. The cloud could be public or private; it could be outsourced to a third-party provider like Amazon or Google, or it could be built and maintained in-house to serve various clients within the enterprise.

So, what can the cloud do for DME professionals? It can provide a means to outsource the most compute-intensive chores to a resource capable of delivering faster, higher-quality results than a deskside machine. It can provide the means to manage modern digital content projects, coordinating the contributions of many - scattered across the globe - in an efficient, streamlined workflow. Or, increasingly, it can go further and actually host the entire desktop - the data, the processing, and the rendering - delivering only the visual representation of that desktop.

These three usage models - call them the Compute Cloud, Workspace Cloud, and Graphics Desktop Hosting Cloud - all boast compelling benefits today. But, we're still only looking at a snapshot of a rapidly evolving space. Looking forward, the synergistic combination of this trio of cloud models - and, most likely, a couple more nobody's even thought of yet - will in all likelihood transform the DME workflow forever.

The Compute Cloud

Call it a revolutionary new application in compute outsourcing. Or, call it simply the latest incarnation of the oldest paradigm in computing history - create a job, upload a job, wait a bit, then download results and review. It's basically the same batch-job processing model from the days of mainframes and minicomputers. In the age of the cloud, it's more transparent and agnostic (in terms of applications and platforms), but the fundamental model really hasn't changed.


Render farms producing frame-to-frame quality imagery are a natural fit for the Compute Cloud model.

What kind of batch-type compute outsourcing is the cloud presently handling for creators? High-quality 3D rendering and video transcoding are two of the most common uses. Given the incredible range of video content consumption devices available, with displays ranging from the palm of a hand to the wall of a living room, quality transcoding isn't just a nicety, it's an absolute necessity. It is also a perfect candidate for the compute cloud, and providers like Amazon Web Services (AWS) have jumped in to fill demand with offerings such as its Elastic Transcoder. Charging by the minute (of transcode time, not content duration), AWS provides flexible and accessible transcoding in the cloud, with tools to manage processing options, security, and delivery, plus a link to AWS cloud storage services.
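As a rough illustration of how a pipeline TD might hand such a job to the compute cloud, the sketch below submits a transcode to Elastic Transcoder using the AWS SDK for Python (boto3) and polls until it finishes. The pipeline ID, object keys, and preset ID are placeholders, not values from any real account.

```python
# Minimal sketch: submit a transcode job to AWS Elastic Transcoder via boto3,
# then poll until the service reports it done. Assumes a pipeline already
# exists, tied to input and output S3 buckets; IDs below are placeholders.
import time

import boto3

transcoder = boto3.client("elastictranscoder", region_name="us-east-1")

response = transcoder.create_job(
    PipelineId="0000000000000-abcdef",            # placeholder pipeline ID
    Input={"Key": "masters/scene_042_master.mov"},
    Outputs=[
        {"Key": "proxies/scene_042_720p.mp4",
         "PresetId": "1351620000001-000010"},      # placeholder preset ID
    ],
)

job_id = response["Job"]["Id"]
while True:
    status = transcoder.read_job(Id=job_id)["Job"]["Status"]
    if status in ("Complete", "Error"):
        print(f"job {job_id} finished with status {status}")
        break
    time.sleep(30)  # check back every half minute
```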

Render farms producing frame-to-frame quality imagery are a natural fit for the Compute Cloud model, a fit that did not go unnoticed by the small group of visual effects veterans who founded Zync, Inc. The company created Zync Render, a seamless, full-featured, and platform-agnostic cloud-based renderer capable of both traditional scanline rendering and raytracing. Creators kick off a render natively, from directly within the application, just as they would if rendering on their local machines.

With Zync Render, creators need to upload assets only once, up front. From there, the application tracks changes to assets in the background, allowing subsequent renders to initiate without delay.

Supported applications include Autodesk's Maya; The Foundry's Nuke, Furnace, and Ocula (with Modo expected); and raytrace renderers Solid Angle's Arnold, Chaos Group's V-Ray for Maya, and Nvidia's Mental Ray. Zync Render has helped power big Hollywood releases, including Star Trek Into Darkness, American Hustle, and Looper. That kind of high-profile work got Zync noticed, leading to the firm's recent acquisition by Google, the IT goliath with designs on becoming the dominant cloud provider of the future.

An effective cloud solution will tend to keep the user blissfully ignorant of the wheres and hows of the processing. Seamless, "black box" operation is one of its draws, if not its primary appeal. But that doesn't mean it's better to hide everything. In the case of the compute cloud, users will often want to specify both constraints and performance goals, as well as manage assets and monitor progress.

Under the gun to get results fast? Select 16 processing cores paired with 60GB of memory. Working at a comfortable pace with other tasks to juggle in parallel? Choose a capable but more modest eight cores and 30GB (both supported options once Zync becomes available on the Google Cloud Platform). The more resources requested, the higher the cost, with Google's Zync Render expected to charge by the minute (with a 10-minute minimum). For completed and in-progress jobs, Zync Render's Web-based front end gives users the controls to track progress, review results, and manage all cloud-based assets.
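To make the trade-off concrete, here is a minimal sketch of that by-the-minute billing model. The dollar rates are invented placeholders for illustration only; actual Zync and Google pricing will differ.

```python
# Back-of-envelope cost model for cloud rendering billed by the machine-minute
# with a 10-minute minimum, per the scheme described above. The rates are
# hypothetical placeholders, not actual Zync/Google pricing.

HYPOTHETICAL_RATE_PER_MINUTE = {
    "8core_30GB": 0.02,    # assumed $ per machine-minute
    "16core_60GB": 0.04,   # assumed $ per machine-minute
}
BILLING_MINIMUM_MINUTES = 10

def render_cost(machine_type, machines, minutes_per_machine):
    """Total job cost: every machine bills at least the 10-minute minimum."""
    billable = max(minutes_per_machine, BILLING_MINIMUM_MINUTES)
    return machines * billable * HYPOTHETICAL_RATE_PER_MINUTE[machine_type]

# Example: a 200-frame sequence at ~6 minutes per frame on the faster
# configuration, spread across 25 machines (48 minutes per machine).
print(render_cost("16core_60GB", machines=25, minutes_per_machine=48))
```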

The Workspace Cloud

In exploring the promised benefits of cloud-centric approaches, it's not difficult to see why the technology piques the interest of so many: access what you want, when you want, and from where you want, tapping and sharing a singular, secure (or at least securable) database. With a centralized computing model, users don't have to be in their offices or even on the same continent.

Storing models and footage in one place and avoiding costly copying makes the "big data" problem far less burdensome. And since the source content never leaves the pre-defined cloud boundaries, it's far more secure.

The fundamental premise of the cloud makes it a natural fit for DME, a space where big data is commonplace and workforces are anything but conventional. A few minutes of a Hollywood-caliber scene shot in 4K can now easily exceed 100GB - far too large a volume to be copying frequently, be it across the office or across the country. Meanwhile, DME workflows are getting more complex and distributed. Productions of all budgets have become the product of not one site, but multiple studios, postproduction houses, and visual effects contractors scattered across the globe, with head counts in constant flux, tracking the peaks and valleys of typical production workflows.
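The back-of-envelope math behind that 100GB figure is easy to check. The sketch below assumes uncompressed DCI 4K frames at 10 bits per channel and 24fps; other capture formats land in the same ballpark.

```python
# Rough arithmetic behind "a few minutes of 4K easily exceeds 100GB."
# Assumes DCI 4K frames (4096 x 2160), uncompressed RGB at 10 bits per
# channel, 24 frames per second, and three minutes of footage.
width, height = 4096, 2160
bits_per_pixel = 3 * 10          # RGB, 10 bits per channel
fps = 24
seconds = 3 * 60                 # "a few minutes"

bytes_per_frame = width * height * bits_per_pixel / 8
total_gb = bytes_per_frame * fps * seconds / 1e9
print(f"{bytes_per_frame / 1e6:.1f} MB per frame, {total_gb:.0f} GB for three minutes")
# -> roughly 33 MB per frame and ~140 GB for three minutes of footage
```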

That double whammy of logistical challenges is where the Workspace Cloud comes in. Its basic premise? Upload files to one central, shareable, cloud-based repository. Then rather than have every staff member and project stakeholder download the source data files - creating delays and superfluous copies - stream the pixels instead. The repository becomes a virtual workspace, where project members can contribute, review, and even mark up and edit others' content.


The Weather Company relied on Ci’s workspace to streamline collaboration among reporters scattered around Sochi, Russia.

A great example of the workspace cloud comes in the form of Ci, an innovative cloud-based solution from Sony Media Cloud Services, a business offshoot leveraging the resources of both Sony Electronics and Sony Pictures. A browser-accessible production and collaboration platform, Ci lets project staff upload and share their content (the dailies, for instance) with a director or other team members who have permission to review. Once the content is uploaded to the cloud, reviewers can stream it, either in its original format or in a smaller, transcoded proxy format.

Particularly in the age of 4K, uploading big files is unavoidable, but Ci keeps transfer time to a minimum. SonyMS built a concurrent multi-part HTTP upload that breaks one big video file into many smaller pieces, uploaded in parallel and distributed across as many servers as necessary. For truly gargantuan projects, Ci also makes Aspera's high-speed transfer plug-in available, along with the bandwidth to make it fly.
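For illustration only - this is not Ci's actual implementation - the sketch below shows the general technique: carve a large file into fixed-size parts and PUT each one to an upload endpoint in parallel, letting the service reassemble them by part index. The endpoint URL is hypothetical.

```python
# Generic concurrent multi-part upload sketch. Each part is read from its
# own offset on disk and PUT to a (hypothetical) endpoint in parallel.
import os
from concurrent.futures import ThreadPoolExecutor

import requests

CHUNK_SIZE = 64 * 1024 * 1024                    # 64MB parts
UPLOAD_URL = "https://upload.example.com/parts"  # hypothetical endpoint

def upload_part(path, index, offset):
    """Read one chunk from disk and PUT it as an individually addressed part."""
    with open(path, "rb") as f:
        f.seek(offset)
        data = f.read(CHUNK_SIZE)
    resp = requests.put(f"{UPLOAD_URL}/{index}", data=data, timeout=300)
    resp.raise_for_status()
    return index

def upload_file(path, parallelism=8):
    """Split the file into CHUNK_SIZE parts and upload them concurrently."""
    size = os.path.getsize(path)
    offsets = list(range(0, size, CHUNK_SIZE))
    with ThreadPoolExecutor(max_workers=parallelism) as pool:
        futures = [pool.submit(upload_part, path, i, off)
                   for i, off in enumerate(offsets)]
        for fut in futures:
            fut.result()  # re-raise any per-part failure
    print(f"uploaded {len(offsets)} parts ({size} bytes total)")
```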

A cloud-based workspace provides a unique and bandwidth-efficient way for production teams to collaborate, whether they are made up of employees or contractors, and regardless of where they're located. The Weather Company exploited Ci's location-agnostic support to unite its geographically dispersed staff at the recent Winter Games in Sochi, Russia. The team's Ci cloud workspace allowed for dailies review and remote access across the globe.

Of course, content data can only stay in the cloud if it doesn't have to be brought down to edit. Ci's got that covered as well, integrating a small suite of applications, such as VideoReview and Roughcut, with features to create cut-lists, allow annotation directly on video frames, and facilitate real-time group discussions. Throw in postproduction publishing and distribution capabilities to handle the essential to-dos - checking, captioning, and language tools among them - and you've got a robust, end-to-end video contribution, review, editing, and authoring platform, fully contained in the cloud.

A workspace in the cloud offers value even when all goes according to plan. But how many productions ever go according to plan? There's always a hiccup, and a cloud-managed workflow is equipped to adapt more quickly, without impacting everything else in the pipeline. Can't find an editor to come on-site, or has someone unexpectedly fallen ill? It's easy to bring another person into the project virtually, no plane ticket necessary.

The Graphics Desktop Hosting Cloud

The workspace cloud works especially well for natural images and video - upload once, then share, edit, and distribute from the cloud. But that model breaks down when it comes to synthetic imagery - rendered 3D graphics - where there's typically going to be more than just one upload. In all likelihood, animators, modelers, and effects artists will update assets fairly often, for example, when incorporating changes in response to a director's review.

Now, given traditional IT environments, multiple uploads sound unavoidable. I mean, how else can content developed on a local desktop or mobile workstation get to the cloud? To answer that question with another: What if creators weren't developing content on local machines at all, but on machines up in that cloud in the first place? That's precisely where the third - and most disruptive - cloud model comes into play: the Graphics Desktop Hosting Cloud.


Graphics desktops hosted in the cloud enable access anywhere, anytime.

In today's conventional workstation- and PC-centric environments, everything is local to the desktop: the visual content, the rendering of that content, and the resulting pixel stream that shows up on the user's display. But in the cloud-hosted model, the user's virtual machine - the content, the processing, and the storage - resides in a server somewhere, in the studio's data center, for example, or outsourced to a cloud provider.

In the last couple of years, leading IT vendors, including Nvidia, Intel, AMD, Teradici, HP, and Dell, have bootstrapped the technology to make that high-performance graphics workstation-in-the-cloud a reality.

Rework an animation sequence, tweak a texture, and re-render. Now go review that updated sequence with a colleague or director who doesn't happen to be sitting in the next office. With a conventional deskside workstation approach, that would likely mean a lengthy delay copying data over the network, or perhaps a shorter delay at the cost of a compromise in quality. With a cloud-hosted virtual workstation, however, the source data is modified in place, in the cloud, ready to stream wherever needed.

The Network: A Make or Break

No doubt, studios like Jellyfish (see "A Case Study: Jellyfish Studios," this page) are at the forefront in the adoption of cloud-based technologies. They're hungry, nimble, and technically savvy, making them more likely to be willing to take the plunge on a fundamentally different computing paradigm. And more importantly, they're more likely to pay attention to the implementation details, details that could make or break the prospects of the entire proposition. Specifically, hopeful adopters need to take the time to assess and carefully engineer the network upon which they expect to create virtually hosted workstations that can match the experience a desktop machine can deliver.

A Case Study: Jellyfish Studios

The dramatic improvement in the performance and benefits of cloud-hosted desktops hasn't been lost on those at the forefront of digital media production. Studios and effects houses have been bumping into problems of unwieldy data sets, physically scattered staff, and security issues for years. Emerging cloud and data center solutions have found a welcome audience with this group, including Jellyfish Studios.

For several years, the London-based media company (Line of Duty, Doctor Who: The Day of the Doctor) had found the provisioning and support of a traditional IT environment - built around client-side PCs and workstations - increasingly problematic. Upon starting a production, the company would have to buy and configure all the machines necessary to support employees and external contractors through all the peaks and valleys of a modern DME workflow.

That's tough enough for a company that can predictably amortize the investment. But a production house like Jellyfish might have to ramp up on a big project on a short schedule, with no guarantees all that investment will be leveraged once the project has been completed.

Factor in, too, the big data problem virtually all creators are now facing. Jellyfish maintains a once-inconceivable 300TB of current and archived content. Storing that on dedicated servers, with subsets scattered across local creators' machines, is a storage and bandwidth burden; keeping it all up-to-date is daunting.

A year and a half ago, Jellyfish CTO Jeremy Smith planted the seeds for an eventual migration away from deskbound machines to those that are virtualized and cloud-hosted across its dual-site enterprise. Smith approached UK cloud provider Exponential-e about bringing its DaaS-GPU (Desktop-as-a-Service for Graphics Processing Units) to Jellyfish, along with a high-performance network to run it on.

Built on a foundation of Nvidia's GRID technology and VMware's Horizon DaaS cloud-hosting software, DaaS-GPU gives Jellyfish a pool of cloud-accessible, GPU-powered servers from which to host a flexible and dynamic number of virtual high-performance graphics machines. Each user gets a desktop client that can drive two full-HD (1920x1080) screens, capable of displaying downscaled 4K content as necessary. And that massive 300TB of company data? It's now all stored in the same cloud, accessible by all contributors - anywhere, anytime.

With workstations served up from the cloud, projects now ramp up and down with minimal delay. Jellyfish can quickly wind down one contractor and provision another. By tapping a dynamic pool of virtual machines that rise and fall with head count, IT budgets aren't wasted on hardware that isn't being used effectively, or even used at all.

The number of deployed virtual workstations is dynamic and configurable, as are the capabilities of each machine, which can be dialed in to suit the user. For example, the IT manager can set an animator's machine to deliver a guaranteed frame rate first, skimping on image quality if throughput is constrained. Conversely, the machine for a texture painter working in The Foundry's Mari can be tuned for maximum image fidelity at the possible sacrifice of frame rate. - Alex Herrera

Clouds that serve up data today tend to be good at two things: latency-insensitive operations, like video streaming, and latency-tolerant operations, in which a half-second delay, for example, won't ruin the experience. Unfortunately, the jobs at the heart of content creation - interactive, iterative processes like modeling, animation, and visual effects - are not at all latency-tolerant. Quite the opposite: interactive, high-resolution 3D graphics demands a network that combines high bandwidth with short round-trip response times.

Excessive latency doesn't just dampen productivity, it's downright counterproductive. We've all experienced the frustration of talking over each other on a phone or videoconference, with long lags between speaking and being heard. Now imagine the same delay between when your cursor rotates a character model and when you see it move on screen. It won't just slow down your day, it will thoroughly waste it.

The burden of all that pixel bandwidth placed on networks only complicates matters. High-resolution, high-complexity CG imagery is a notorious bandwidth hog. Consider that one raw 1080p stream with nominal pixel precision (that is, no HDR) can consume roughly 2Gb/sec of network bandwidth. Compress it, you say? For sure, but do so carelessly with just any lossy codec, and you've immediately made the solution a non-starter for all kinds of content-creation applications.
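That figure is easy to sanity-check. Assuming 24 bits per pixel (8 bits per channel, no HDR), the raw bandwidth of a 1080p stream works out as follows, with frame rate as the main variable:

```python
# Back-of-envelope check on the raw 1080p bandwidth figure.
# Assumes 24 bits per pixel (8 bits per channel, no HDR).
width, height, bits_per_pixel = 1920, 1080, 24

for fps in (24, 30, 60):
    gbit_per_sec = width * height * bits_per_pixel * fps / 1e9
    print(f"{fps} fps: {gbit_per_sec:.2f} Gbit/sec uncompressed")
# -> roughly 1.2 to 3.0 Gbit/sec, in line with the ~2Gb/sec figure above
```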


Jellyfish’s carefully engineered and plentifully resourced cloud infrastructure – not just any old network will do.

The bottom line is that a cloud-hosted computing topology like Jellyfish's can't just be thrown together on any old network. Do that, and the promise of the cloud won't lead to shorter schedules and improved productivity; instead, it will turn into an exercise in frustration. The good news? Implementations like Jellyfish's are a solid proof of concept, showing that while low-latency, high-bandwidth networks are not ubiquitous, they can be secured with confidence and reliability without breaking the bank.

Jellyfish invested in a 100Gb Ethernet LAN at each of its two offices in the UK (Soho and Brixton), along with a dedicated 1Gb line spanning the 30 km between the two. And the company chose Exponential-e's Layer 2 low-latency network to deliver round-trip response times fast enough that users can't tell their computer is not at their desk, but rather miles away on some backroom server. Cloud-hosted desktops can deliver breakthroughs in efficiency and workflows, but only with a network that can handle it. Hand-wave the details, and it will deliver creators an unusable experience they'll never forget.
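A quick latency budget shows why a link like that works. Assuming signal propagation in fiber at roughly 200,000 km/s (the figures below are illustrative, not measurements from Jellyfish or Exponential-e), the 30 km hop contributes only a fraction of a millisecond; the real budget goes to encoding, decoding, and queuing:

```python
# Rough latency budget for a remote desktop over a metro link, assuming
# signal propagation in fiber at ~200,000 km/s. Illustrative numbers only.
distance_km = 30                   # inter-office distance cited above
fiber_speed_km_per_ms = 200.0      # roughly 2/3 the speed of light

propagation_rtt_ms = 2 * distance_km / fiber_speed_km_per_ms
frame_time_ms = 1000 / 60          # budget for a 60Hz interactive display

print(f"propagation round trip: {propagation_rtt_ms:.2f} ms")
print(f"frame budget at 60Hz:   {frame_time_ms:.1f} ms")
# Propagation is a fraction of a millisecond; encode, decode, and any
# queuing along the path consume most of the interactive budget.
```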

Tomorrow's Optimal DME Cloud Solution

Despite the incredible advances in productivity afforded by the digital age, meaningful and efficient collaboration in the digital media and entertainment industry remains a daunting challenge. More elaborate workflows, mushrooming data sizes, and dynamic, global staffing have conspired to build roadblocks that waste time and money, pushing up budgets and pushing back completion dates.


Today’s discrete cloud solutions will inevitably evolve into one.

Should the industry plod down the same beaten IT path it has been on, that challenge will only grow. The new paradigm that is DME creation needs new IT solutions, and the cloud represents several compelling ones. Workspaces and compute acceleration in the cloud are here, and the alternative of hosting creators' machines in the cloud is coming on strong, now supported by a capable infrastructure of products and technology.

But the cloud continues to evolve, and the somewhat disparate models of today aren't likely to remain as distinct moving forward. There's simply too much synergy in workspaces, computation, and creation in the cloud, and eventually all are likely to converge into one. Natural video will stream straight from cameras to the cloud, creators will develop synthetic content in the cloud, and there will be no need to ever bring data back down. On the contrary, there will be plenty of good reasons - terabytes' worth, actually - to keep it all up there.


The not-too-distant future? Upload once, create in the cloud, and stream the final product.

Create CGI and impart visual effects without ever leaving the cloud; leverage more capable and scalable compute and storage resources to speed workflows on projects the same size as those of the past; or use the cloud as a feasible path to deal with the terabytes, and eventual petabytes, that will come with the inexorable growth in resolution (is 4K already passé?) and color precision. Stream natural video directly from cameras up to the cloud, eliminating much of the waiting on uploads.

In a way, the computing norm is coming full circle - from the mainframes and dumb terminals of years ago, to heavy-lifting workstations and PC clients on desks and laps, and now back to another centralized, server-side approach in the form of private data centers and outsourced clouds. No, deskside workstations are not going away anytime soon, and there will likely always be a use for capable horsepower on the desk. But thanks to advancements in silicon and network infrastructure, the digital media cloud is now ready for prime time.