Section 3.1

A NEW CLOUD FOUNDATION

The first group of principles deals with a New Cloud Foundation for the industry. It is important at this point that we define what we mean by cloud as it can have numerous interpretations. We define cloud here as not just the hyperscale cloud providers[1] known today, but as any internet-accessible storage platform that can be used as a common space for collaboration and exchange of data. This may indeed be provided by a hyperscale cloud service, a niche cloud provider serving specific use cases, a corporate data center with firewall access or even a near-set storage system used as a staging point to a larger cloud system. The fundamental point is that the nearly unlimited storage and compute, the pay-as-you-go (opex instead of capex) business model, and the workflow benefits of a “single source of truth” will make the cloud an integral part of our industry in the future and remove the duplication of assets seen today.

PRINCIPLE 1: ALL ASSETS ARE CREATED OR INGESTED STRAIGHT INTO THE CLOUD AND DO NOT NEED TO BE MOVED

OVERVIEW

In our vision of the 2030 creation process, all assets, from the first script to every captured file, every computer-generated asset and all associated metadata, will be stored immediately upon creation in the cloud.

Acquisition devices (cameras, microphones, sensors, script supervisor systems) could directly connect to the cloud, transferring assets seamlessly. These envisioned “cloud-native” acquisition devices would send encrypted, uncompressed files, and simultaneously a proxy, straight to cloud storage. That process would change production workflows at a fundamental level.
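
As a rough illustration of what such an ingest step might look like, here is a minimal Python sketch assuming an S3-compatible object store. The bucket name, key layout and `ingest_take` helper are invented for this example, and the proxy is assumed to be generated on the device itself:

```python
import hashlib

import boto3  # AWS SDK; any S3-compatible object store behaves similarly

s3 = boto3.client("s3")
BUCKET = "production-ingest"  # hypothetical bucket name

def ingest_take(original_path: str, proxy_path: str, take_id: str) -> str:
    """Upload an original camera file and its proxy with integrity metadata."""
    digest = hashlib.sha256()
    with open(original_path, "rb") as f:  # checksum computed at the device,
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)          # so the cloud copy can be validated
    extra = {"ServerSideEncryption": "AES256",  # encrypted at rest
             "Metadata": {"take": take_id, "sha256": digest.hexdigest()}}
    s3.upload_file(original_path, BUCKET, f"masters/{take_id}", ExtraArgs=extra)
    s3.upload_file(proxy_path, BUCKET, f"proxies/{take_id}.mp4", ExtraArgs=extra)
    return digest.hexdigest()
```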

It is probably worth clarifying that the definition of cloud could extend to on-set uses. Many cloud operators offer small-footprint local instances of their storage and compute that could be used on set or on location. Workflows designed as microservices running in containers behave consistently whether they are operating on local infrastructure or thousands of miles away in a large data center. This local instantiation of the cloud can handle services such as fast playback of video for review, IP acceleration and control of upload traffic to the main cloud (especially in areas where bandwidth is unpredictable or unreliable).
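
The “same code on set or in the data center” idea can be made concrete with configuration alone. A sketch, again assuming an S3-compatible store; the environment variable and endpoint URL are hypothetical:

```python
import os

import boto3

# The same microservice targets a small on-set object store or the main
# cloud purely by configuration; none of the workflow logic changes.
ENDPOINT = os.environ.get("OBJECT_STORE_ENDPOINT")  # e.g. "http://onset-node:9000",
                                                    # or unset for the main cloud
s3 = boto3.client("s3", endpoint_url=ENDPOINT)

def review_url(bucket: str, key: str, expires: int = 3600) -> str:
    """Short-lived URL for fast on-set playback of a proxy."""
    return s3.generate_presigned_url(
        "get_object", Params={"Bucket": bucket, "Key": key}, ExpiresIn=expires)
```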

EXAMPLES

Currently production “dailies” are available next-day for directors, producers and executives to review. But if capture devices can stream directly from the set, even in remote locations, a number of processes can start immediately—dailies become “immediates”—with live and remote viewing available on a desktop in real time, without the need to travel to stages or remote shoots.

Editors and colorists could do a first-pass edit or color grade during photography from anywhere in the world and quickly provide feedback to directors on-set before they move on to shooting or setting up the next shot or location.

Uncompressed camera files could be placed in the cloud, where they are available to VFX providers who need the original plate. Later, those files are used for final conforming and compositing. Ultimately, they become part of the archive record. Meanwhile, proxy files, whose smaller size makes them more practical to use for many production processes, can be made available in the cloud, allowing those processes to begin immediately after capture.

The cloud files represent a “single source of truth” for reference purposes and enable everyone in production to see and understand the latest version of an asset, which helps with version control issues. An audit or changelog can track how and by whom that asset has changed over time.
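
The audit trail can be thought of as an append-only changelog keyed by asset. A minimal sketch of that idea; the event fields and action names are illustrative:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditEvent:
    asset_id: str
    actor: str      # who made the change
    action: str     # e.g. "ingest", "edit", "publish"
    version: int
    at: str         # UTC timestamp

class AuditLog:
    """Append-only changelog: events are recorded, never rewritten."""

    def __init__(self) -> None:
        self._events: list[AuditEvent] = []

    def record(self, asset_id: str, actor: str, action: str, version: int) -> None:
        self._events.append(AuditEvent(
            asset_id, actor, action, version,
            datetime.now(timezone.utc).isoformat()))

    def history(self, asset_id: str) -> list[AuditEvent]:
        return [e for e in self._events if e.asset_id == asset_id]

log = AuditLog()
log.record("shot-042", "dp", "ingest", 1)
log.record("shot-042", "colorist", "edit", 2)
print(log.history("shot-042"))  # full who/when/what trail for the asset
```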

IMPLICATIONS

Sufficient bandwidth to enable camera-to-cloud capture is an issue in 2019, especially in remote locations, and there will continue to be a balance going forward between the ever-increasing size of files and the speed of compression innovation and expansion in wired/wireless access technology. And yet the benefits of having all assets stored in the cloud from creation mean that studios will continue to push the boundaries of cloud-ingestion technologies.

For this principle to be realized, the industry needs hardware, software and cloud vendors to work together to design cloud-integrated systems that can securely create, encrypt, validate and store captured assets in cloud object storage over advanced data communications networks.

Although ubiquitous connectivity (wired and wireless) continues to grow, it will be strained to keep up with the enormous on-set data requirements. Forecasts indicate that 5G will allow speeds of up to 1.4 Gbit/s and Wi-Fi will continue to increase in speed with the latest 802.11ax specification at up to 11 Gbit/s – both enough for a 1080p video feed, but insufficient on their own for the expected increase in file sizes. We can expect the volume of data captured to continue to increase with increases in resolution (4K to 8K and beyond) and higher frame rate capture. New on-set capture technologies, including volumetric capture, point clouds and light fields, could see an even more dramatic growth in the volume of data coming from set. For example, a light field generated with an array of ninety-six 2K cameras would generate hundreds of gigabits per second of data, beyond what any foreseeable network can handle. These data would undoubtedly need to be compressed or preprocessed in some way before uploading to the cloud.
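
A back-of-envelope check of the light-field figure above, as a short Python sketch. The per-camera assumptions (2048x1080 pixels, 10-bit RGB, 24 fps) are ours, not the document’s:

```python
# Rough uncompressed data rate for an array of ninety-six 2K cameras.
# Assumptions: 2048x1080 pixels, 10 bits per RGB channel, 24 fps.
pixels = 2048 * 1080
bits_per_frame = pixels * 3 * 10
per_camera_gbps = bits_per_frame * 24 / 1e9   # ~1.6 Gbit/s per camera
array_gbps = per_camera_gbps * 96             # ~153 Gbit/s for the array
print(f"~{array_gbps:.0f} Gbit/s uncompressed")
```

Even at 24 fps this is already well past the forecast network speeds; higher frame rates or bit depths push it further, which is why on-set compression or preprocessing seems unavoidable.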

PRINCIPLE 2: APPLICATIONS COME TO THE MEDIA

OVERVIEW

While today huge digital media files need to be moved between facilities, by 2030 we can envision a high-bandwidth, low-latency, cloud-enabled production world where these files do not move and software tools come to the content instead of the other way around. Ultimately, this requires fast cloud storage and processing, which can serve perhaps hundreds or thousands of virtualized workstations where artists’ software tools can access assets without the need to move them. The artist will need only a screen, an input device and sufficient internet connectivity to stream a virtual desktop; the heavy lifting of compute processes will be handled in the cloud and revisions to files handled remotely.

While cloud technology might move files within its infrastructure (e.g., temporary caching of specific files at the edge closest to the artist to ensure minimal latency), from the user’s perspective, files are not moved. That is, files are moved only for convenience, not necessity.

EXAMPLES

In the future, a globally distributed team of VFX artists can be allocated multiple shots from a major movie, with each artist assigned a step in the process, as if they were all working locally. Instead of each user copying the master plates and the CGI asset libraries, each would be provisioned access to a virtual workstation, already preloaded with the required applications, software licenses and media files. As the compute can be co-located with the storage, there is no egress of the master files; they remain in the cloud, and only streamed desktops leave it.

Likewise, an audio mix could be created, with a mixer receiving streamed access to the proxy video and full resolution audio elements to create a final mix, even though none of those files need to be locally resident. The output of the mixer’s work would be a metadata file that describes the final mix, ready for a final render/composite/packaging step.
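
To make “the output is a metadata file” concrete, here is one hypothetical shape such a mix description could take. The asset URIs, parameter names and overall schema are invented; a real system would use an agreed interchange format:

```python
import json

# The mixer's deliverable is not rendered audio but a description of the
# mix: references to cloud-resident elements plus parameters to apply.
final_mix = {
    "project": "feature-1234",
    "reel": 1,
    "elements": [
        {"asset": "cloud://audio/dialog_r1.wav",  "gain_db": 0.0,  "pan": 0.0},
        {"asset": "cloud://audio/music_r1.wav",   "gain_db": -6.0, "pan": 0.0},
        {"asset": "cloud://audio/effects_r1.wav", "gain_db": -3.0, "pan": 0.2},
    ],
    "render": {"channels": "5.1", "sample_rate": 48000},
}

print(json.dumps(final_mix, indent=2))  # handed to the final render step
```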

Most creative processes could be achieved in a similar fashion, with a remote desktop connection generating descriptive metadata. The most extreme case will be remote color sessions and DIs, since the assets used are typically uncompressed final camera files that need to be displayed on color-critical monitors at native resolution, in 16-bit depth, with a wide color gamut and without any compression artifacts.

IMPLICATIONS

Currently, creative talent often needs to live and work in cities close to the productions and their media files. If the media is cloud-native and can be streamed anywhere, talent is no longer tied to the production locale, which can also be anywhere in the world. Remote production talent in currently inaccessible markets can be called upon, perhaps for extremely specialized tasks, opening new avenues for creativity and a new pool of talent to address current shortfalls in specialist skills. And the move of applications to the cloud greatly expands the set of devices and locations from which a creative can access content and tools.

Studios can have confidence in their data security as workflows will no longer require external vendors to maintain copies of these files. Issuing and revoking access to assets can be done online by producers quickly and with instant effect. There will be no “shadow” or “rogue” copies of files, alleviating concerns about version control, where different versions of files have different edits. In addition, the vendors need not invest in on-site storage equipment and compute infrastructure; they require only large bandwidth pipes to access the internet. Hopefully, these changes will lower the barriers to entry for new vendors and increase competition for services.

PRINCIPLE 3: PROPAGATION AND DISTRIBUTION OF ASSETS IS A “PUBLISH” FUNCTION

OVERVIEW

Traditionally, propagating assets to the next stage in production or distributing finished assets involves packaging and delivering files to another party, often using the internet as a distribution path. In our model, however, the files are already resident in the cloud and therefore do not need to be moved. A core tenet of our approach is that anything from production assets to the final finished content can be shared via the cloud – but the decision to do so rests with the current controller of the asset. During production, it may be the director who decides a shot or cut is ready to be seen by others, or the VFX artist who decides a scene is ready to share with the VFX supervisor. In distribution, it may be the executive who approves a release to consumer distribution. For the system to operate, all users must trust that, although storing files in the cloud means all assets could in principle be visible to others, an asset only becomes visible and accessible to recipients when its controller actively publishes it down the line. Also, the workflow security system could limit access to specific uses, e.g., viewing, editing, etc., based on which cloud applications are allowed access via this “publishing” workflow.
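
In essence, “publish” becomes an access-control operation rather than a file transfer. A minimal sketch of that idea; the permission vocabulary and helper names are ours, not a proposed design:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    asset_id: str
    controller: str                              # current controller of the asset
    grants: dict = field(default_factory=dict)   # recipient -> set of allowed uses

def publish(asset: Asset, actor: str, recipient: str, uses: set[str]) -> None:
    """Grant scoped access; the file itself never moves."""
    if actor != asset.controller:
        raise PermissionError("only the current controller may publish")
    asset.grants.setdefault(recipient, set()).update(uses)

def can(asset: Asset, user: str, use: str) -> bool:
    return use in asset.grants.get(user, set())

shot = Asset("shot-042_v3", controller="director")
publish(shot, "director", "editorial-team", {"view"})
assert can(shot, "editorial-team", "view")
assert not can(shot, "editorial-team", "edit")  # access is use-specific
```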

Content protection systems have been developed over the last 20 years to protect media from internet piracy, but in our future vision, delivery will be predominantly via the internet and even unfinished production masters will already be stored in the cloud. For final delivery to consumers or distributors, the finished production can be made available in the cloud in a distributor staging area and a “dynamic package” created with a manifest containing the media that the distributor is licensed to receive. Future AI applications could even automatically interpret bilateral agreements and prepopulate these manifest files using smart contracts.

EXAMPLES

With content captured directly to the cloud, the director or DP can decide when, to whom and for what purpose (e.g., viewing or editing) the dailies (or what could be called “immediates”) are published.

The “VFX pull,” a process in which VFX providers traditionally “pull” to their local storage the full resolution plates they need to work on, could be redefined in the future workflows. We may see a “VFX push” of permissions, which would enable the VFX provider’s artists and applications to access and work on the master plates stored in the cloud. The plates would not move, but they would be available to that vendor for the length of their agreement.

Content owners could publish all finished video and audio elements to their cloud “staging area” along with a manifest file that references how they all link together. Each distributor can access appropriate parts of the package in accordance with permissions associated with their contracts. For example, a Spanish pay-TV operator may receive permissions for a high bit rate encode with Spanish audio, an edit that meets the local broadcast regulations and an optional English subtitling track. A digital cinema operator may get permissions for a DCP package and specific trailers optimized for that time/date playback license. Meanwhile, a global premium movie retail store could retrieve multiple regional edits from the consumer HDR grade with all audio options and a package of bonus features and interactive menus.
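
A toy sketch of a dynamic package along these lines; the component names, contract table and manifest schema are all invented for illustration:

```python
# One cloud staging area; one manifest per distributor listing only the
# components their contract licenses.
STAGING = {
    "video_hd_high_bitrate": "cloud://title/video_hd.mxf",
    "audio_es": "cloud://title/audio_es.wav",
    "audio_en": "cloud://title/audio_en.wav",
    "subs_en": "cloud://title/subs_en.itt",
    "dcp": "cloud://title/title.dcp",
}

CONTRACTS = {
    "spanish-pay-tv": ["video_hd_high_bitrate", "audio_es", "subs_en"],
    "digital-cinema": ["dcp"],
}

def build_manifest(distributor: str) -> dict:
    """Assemble the manifest of components a distributor may access."""
    licensed = CONTRACTS[distributor]
    return {"distributor": distributor,
            "components": {name: STAGING[name] for name in licensed}}

print(build_manifest("spanish-pay-tv"))
```

Swapping or correcting a component for one distributor then becomes an edit to the contract table or staging map, not a redelivery.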

Content owners could go one step further and provide pre-encoded VBR files, which could be pre-cached on distributors’ CDNs, ready for the initial spike in traffic on release day when the media is “released to the web.”

By providing dynamic packages of content, the studios can mix and match assets and update and augment elements within delivery packages as other related content is made available. If there were ever an issue with rights clearances, for example, on a music asset in a specific territory, it could be instantly fixed by adjusting the manifest file, and an audit log would clearly show which distributors have access to that media.

In addition, an automatic update notification can be sent down the line to recipients in case an item has changed. In the case of production, these could be updated versions of shots or files that need to be automatically rippled down to all users who have access to that variant and need to know it has changed.

IMPLICATIONS

This principle requires producers, executives and content owners to have established a basic level of trust in the cloud and its security protections. We have more work to do, as an industry, to alleviate lingering concerns about placing unreleased content assets in the publicly accessible cloud, and all stakeholders need to work together to ensure those concerns are addressed.

For this principle to be realized, the industry will need to work together to extend the work done in IMF and, potentially, other interchange formats such as DCP. Improved descriptive metadata, such as that found in the MovieLabs Digital Distribution Framework (MDDF), will also be required to enable a much more efficient global digital supply chain. We may need future packaging formats that can contain real-time engine elements to be rendered in the home, light field or point cloud 3D asset packages, and/or standardized XR formats for delivery of immersive narrative pieces.

PRINCIPLE 4: ARCHIVES ARE DEEP LIBRARIES WITH ACCESS POLICIES MATCHING SPEED, AVAILABILITY AND SECURITY TO THE ECONOMICS OF THE CLOUD

OVERVIEW

Content archives contain intellectual property potentially supporting billions in future revenue streams. Therefore, archiving and preservation of media assets is a vital, although often underappreciated, function.

Creating and storing media assets in the cloud obviates the need to maintain legacy archiving hardware (e.g., continually obsoleted tape formats) and to migrate files across storage media formats for preservation. Those issues will fall to the cloud service providers themselves in the future.

This principle recognizes another key advantage of the cloud for storage: subject to economics and cost viability, the media need not “go anywhere” when it is archived. Currently, archiving can be seen as putting assets in a place where unauthorized people cannot access them and nobody can destroy them. The drawback is that it is difficult to retrieve and repurpose those assets. In the future, a cloud-based archive can be indexed and made readily available to authorized users for monetization, cross-referencing for future productions, remastering and education (of both people and algorithms).

Accessibility will no longer be about who physically has access, but who has been assigned access through policy. These policies will vary between content owners based on how they wish the asset to be used. Policy decisions in a cloud future will include factors such as speed of recovery, costs associated with deep asset retrieval and what is kept in fast “online” storage (e.g., proxies and metadata) versus the deep archived master files.

EXAMPLES

At the end of a production, the content owner could archive the media with the push of a button, assigning archive policies to that media. Behind the scenes, the cloud might demote files to a lower-cost, slower-performance storage tier, with all assets remaining indefinitely online and available for future productions, remastering or redistribution.
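
Today’s clouds already express this kind of policy as lifecycle rules. A sketch using AWS S3’s lifecycle API as one concrete example (other providers offer equivalent tiering controls); the bucket name and prefixes are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# Masters sink to colder, cheaper tiers over time; proxies and metadata
# have no rule here, so they stay in fast "online" storage.
s3.put_bucket_lifecycle_configuration(
    Bucket="studio-archive",  # hypothetical archive bucket
    LifecycleConfiguration={"Rules": [{
        "ID": "demote-masters",
        "Status": "Enabled",
        "Filter": {"Prefix": "masters/"},
        "Transitions": [
            {"Days": 30, "StorageClass": "GLACIER"},
            {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
        ],
    }]},
)
```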

Future AI/ML bots could crawl massive data depositories looking for reference material to use for inference – for example, highly specialized data for learning the distortion effects of a legacy lens.

New productions wanting access to reference material used on earlier movies or others in the same canon can quickly search and find original concept designs, actual shots or VFX models used on old titles.

Understanding and controlling which assets are kept in which cloud storage tier can be complex now and will likely get more complex as millions of new files are added weekly. However, cloud storage tiers are likely to evolve to become “self-optimizing,” automatically moving less-used files to deeper and deeper archive tiers to optimize cost versus access. Machine learning tools in the future could use pattern analysis to predictively retrieve and preposition deeply archived files so they are ready for reuse as they are required.
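
The prediction itself would be an ML problem, but the plumbing beneath it already exists. A toy sketch assuming AWS S3’s restore API and a hypothetical bucket, with a simple frequency threshold standing in for real pattern analysis:

```python
from collections import Counter

import boto3

s3 = boto3.client("s3")
ARCHIVE_BUCKET = "studio-archive"  # hypothetical bucket name

def prefetch_related(recent_keys: list[str], threshold: int = 3) -> None:
    """If assets under one title prefix are being touched repeatedly,
    start restoring that title's deep-archived siblings early."""
    counts = Counter(key.split("/")[0] for key in recent_keys)
    for title, hits in counts.items():
        if hits < threshold:
            continue
        listing = s3.list_objects_v2(Bucket=ARCHIVE_BUCKET, Prefix=f"{title}/")
        for obj in listing.get("Contents", []):
            # restore_object only applies to objects in archive tiers;
            # a real system would check the storage class first.
            s3.restore_object(
                Bucket=ARCHIVE_BUCKET, Key=obj["Key"],
                RestoreRequest={"Days": 7,
                                "GlacierJobParameters": {"Tier": "Bulk"}})
```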

IMPLICATIONS

As archive access will likely be slower and more expensive relative to active projects, a robust proxy that can be easily accessed in fast-access storage tiers will be required, not just for video files but for all elements of the archive, including 3D assets, audio elements and volumetrics. For example, a new specification for a 3D asset proxy could specify a high-resolution, high dynamic range turntable video that shows the asset from every angle, along with an exploded view of how its elements, including mesh, textures and rigs, were combined to create the final asset.

Before any studios trust a third party to store more than 100 years of archives, they need to get assurances that their content will be secure; retained in bit-perfect form; protected from hostile action, system failures and natural disasters; and accessible indefinitely. This means trusting foundational elements of cloud infrastructure such as the underlying at-rest encryption, the key management system (so content owners may never lose access to their encryption keys) and protection from future digital attacks (rogue actors, viruses or even electromagnetic pulses) that could corrupt the archives.

In addition, there are still issues with cloud economics: it is hard to predict future costs from current estimates of how often archived files will be retrieved, re-indexed or moved to faster storage tiers. These issues will need to be resolved before mass migrations of data can occur.

PRINCIPLE 5: PRESERVATION OF DIGITAL ASSETS INCLUDES THE FUTURE MEANS TO ACCESS AND EDIT THEM

OVERVIEW

Celluloid/polyester film has a number of enviable traits, not least of which is that it will always be readable by future generations simply by holding it up to a light source. The same is not true of digital storage media created even a short time ago. Reading it might require discontinued devices. The data might be stored in a file system that is no longer supported, use a file format that is no longer maintained, or require an application that is now defunct or orphaned by progress. So, just as Principle 4 addresses future access to archived media, this principle addresses the need to ensure that we can continue to open and use those files in the future.

The essence of an archive is storing for perpetuity that which cannot be recreated ever again. Storage in the cloud (Principle 1) solves many problems of devices and file systems. But it may also be necessary to protect against defunct applications by seeking out interoperable file formats, open standards and potentially open source code to ensure we can continue to open and edit files in the future. Future archiving standards may include not just archiving the assets created by an application, but also archiving the application itself to provide perpetual access, and even archiving the virtual machine that was running the software (with its specific I/O and interface requirements). In such a way, future emulators could “rehydrate” the exact machine and allow a future user to open and edit the media or asset. However, that may still not be enough to upgrade that asset to something that can be used by modern applications. Achieving that goal likely requires industry support for both open file standards and basic open source software to read those files.

EXAMPLES

Camera RAW files are used on some productions as the “master” from principal photography, but those file types are proprietary to each camera manufacturer and need to be de-Bayered using proprietary algorithms. Rather than archive only RAW files that may be hard to read in the future, it may make sense to also archive copies of any proprietary software required to access them, and to adopt an open file format (e.g., OpenEXR or a yet-to-be-created standardized RAW format) as an archival format for all camera files, one that can retain the full resolution, dynamic range, frame rate and color space of the originals.
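
One hypothetical shape for such an archival “bundle,” pairing the proprietary original, an open-format derivative and the software needed to read the RAW. Every field name and path here is illustrative, not a proposed standard:

```python
# Sketch of an archival bundle record for one camera take.
archive_bundle = {
    "take": "A001_C007",
    "master_raw": {
        "file": "cloud://archive/A001_C007.raw",
        "camera": "vendor-x",                 # proprietary de-Bayer required
        "reader": "cloud://archive/tools/vendor-x-sdk.img",  # archived software
    },
    "open_derivative": {
        "file": "cloud://archive/A001_C007.exr",  # e.g. OpenEXR frames
        "resolution": "8192x4320",
        "color_space": "ACES2065-1",
        "bit_depth": 16,
    },
}
```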

In addition to archiving each element in the workflow, the workflow itself could be archived – including every application, file and metadata used throughout. This sort of comprehensive archive could include too much information to be efficient as an archive, but may be useful for future workflow optimizations, as a way to look for opportunities to improve efficiencies in future productions.

IMPLICATIONS

As innovation occurs around new tools, equipment and techniques, some of which use customized data and metadata to enable rapid iteration, it will be important to bear in mind the ongoing need to protect vital media assets for future generations of filmmakers, consumers and studios.

[1] We use the term hyperscale cloud providers; they are often referred to as “public” cloud providers, but there is nothing inherently “public” in their offering, and the word may get in the way of understanding the role they fulfill.