Mathematic Accelerates Productions, Reduces Costs and Goes Green with Hammerspace

How Productions Achieve Multi-site and Hybrid Cloud Workflows with Legacy Applications Using a Unified Global Data Environment
Hammerspace and Mathematic

ZERO

file copies sent between sites to enable multi-site, multi-cloud collaborative workflows

>2X

increase in production capacity using Mathematic’s existing infrastructure

~80%

render decarbonization by repatriating renders to Paris

Summary

Media creation pipelines rely increasingly on collaborative multi-site workflows, which often get bogged down bridging storage silos across multiple locations (public cloud, private cloud, and hybrids of the two). Mathematic Studios is a Paris-based creative VFX, animation, and motion design studio with more than 350 artists across four locations in France, the U.S., and Canada. Mathematic worked with Hammerspace to transform all four sites into a single, unified online production environment. Despite each site having different storage types and infrastructure, Hammerspace software was able to unify all data in its existing storage locations into a cross-platform, multi-site global file system. The Hammerspace deployment enabled Mathematic to implement new collaborative workflows quickly and increase production capacity. In addition, the ability to repatriate render jobs transparently back to Paris created significant financial advantages, enabling Mathematic to decarbonize 80% or more of its renders. These advantages demonstrate the efficiencies to be gained from solutions that apply the principles of the MovieLabs® 2030 Vision to production workflows.[1]

MovieLabs 2030 Vision Principles 1, 2, 8, and 10

[1] Hammerspace has posted an endorsement and assessment of its platform and how it aligns with the 2030 Vision here: Hammerspace 2023 – Committed Progress Towards the MovieLabs 2030 Vision.

Background

With four production offices in France, the U.S., and Canada, Mathematic faced multiple problems caused by siloed, distributed infrastructure and disconnected production teams:

  • Remote teams were isolated from each other, limiting their ability to collaborate effectively.
  • Each team referenced and stored files differently, meaning copies were duplicated with conflicting metadata between offices.
  • Projects might originate in Montreal but need participation from artists in Montpellier and/or Paris as elements were created, edited, and reworked. Rendering needed to be done in Paris, but additional finishing or other tasks were best handled in other locations.

This resulted in delays, additional CAPEX and OPEX costs, and significant complexity, as copies of project data (and file trees) had to be continually shuttled back and forth between sites, either via file transfer over fiber connections or on physical media. More importantly, the friction this added to production workflows limited the number of new projects the teams could take on, constraining Mathematic’s ability to grow its business and react to rapidly changing demand.

Hammerspace Case Study Figure 1

Figure 1: Before deployment of Hammerspace software, Mathematic shipped copies of camera masters and other files between sites either by file transfer applications on fiber connections or via physical media.

Solution

Mathematic selected Hammerspace software to transform all four sites into a single, unified online production environment. Even though each site had different storage types and infrastructure, with Hammerspace all data on any storage across all locations was unified in a cross-platform, multi-site global file system.

According to Clement Germain, VFX Supervisor at Mathematic: “We could just plug all our storage arrays into Hammerspace, and then all that data was immediately available to all our users in other offices in Canada and the US.”

Hammerspace Case Study Figure 2

Figure 2: Mathematic deployed Hammerspace software on commodity servers in each of its locations, assimilating file metadata in place from the existing storage at each site and tying all sites together into a multi-site, high-performance global file system with a unified global namespace.

Of key importance for Mathematic was the ability to create this collaborative environment without altering the existing storage, installing client software, or requiring users and applications to change their workflows or learn new applications.

Hammerspace accomplished this by “assimilating” file system metadata from data in place on existing storage, creating a Parallel Global File System that spans all storage silos and sites. This one-off process happens within minutes, without the need to migrate the underlying data. All standard file metadata is assimilated (i.e., ingested in its existing form) such that Hammerspace becomes the shared global file system seen by all users and their applications, spanning any underlying storage type from any vendor across all sites.
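
To make the assimilation concept concrete, the following minimal sketch walks an existing storage export and catalogs standard POSIX file metadata in place, without copying or moving any data. It is an illustration of the idea only, not Hammerspace’s implementation; the export path is hypothetical.

```python
# Conceptual sketch of "assimilation": cataloging file metadata in place,
# without moving or copying the underlying data. Illustration only, not
# Hammerspace's implementation; the export path below is hypothetical.
import os
from pathlib import Path

def assimilate_metadata(export_root: str) -> dict[str, dict]:
    """Walk an existing storage export and record standard file metadata."""
    catalog = {}
    for dirpath, _dirnames, filenames in os.walk(export_root):
        for name in filenames:
            path = Path(dirpath) / name
            st = path.stat()
            catalog[str(path)] = {
                "size": st.st_size,    # bytes
                "mtime": st.st_mtime,  # last modification time
                "uid": st.st_uid,      # ownership and permissions are
                "gid": st.st_gid,      # ingested exactly as they exist
                "mode": st.st_mode,    # POSIX mode bits
            }
    return catalog

if __name__ == "__main__":
    # Hypothetical path to an existing export; data stays where it is.
    catalog = assimilate_metadata("/mnt/existing-storage")
    print(f"Cataloged {len(catalog)} files without touching the data")
```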

Hammerspace Case Study Figure 3

Figure 3: Upon installation, Hammerspace assimilates file system metadata while leaving data in place on existing storage.

Each of Mathematic’s four sites performed the same assimilation using local instances of Hammerspace running on servers at each site. Even before metadata assimilation was complete, users could mount the Hammerspace file system within minutes and begin their work, with the assimilation continuing transparently in the background.

For multi-site data environments such as at Mathematic, each Hammerspace instance connects to the others, continually synchronizing metadata globally to extend the Parallel Global File System across them all. All existing security and file permissions are maintained globally.

The only change needed was for users to remount the new Hammerspace Parallel Global File System via SMB (Server Message Block), NFS (Network File System), and/or S3 protocols. No client software was needed, so existing creative applications and workflows remained the same. The only difference was that instead of seeing only their local files, users could now see a global namespace with all of the project’s data across all sites, based upon the permissions granted by their existing Active Directory (AD) credentials.
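
From the client’s point of view, that remount is just a standard protocol mount followed by ordinary file I/O. The sketch below illustrates this under assumed names: the NFS server address, export path, and project directories are hypothetical, and the mount step requires administrative privileges.

```python
# Minimal sketch of the client-side view: a standard NFSv4.2 mount plus
# ordinary file I/O. No proprietary client is involved; the server name and
# paths are hypothetical placeholders.
import subprocess
from pathlib import Path

# Mount the global namespace over standard NFS v4.2 (requires root).
subprocess.run(
    ["mount", "-t", "nfs", "-o", "vers=4.2",
     "hammerspace.example.com:/global", "/mnt/global"],
    check=True,
)

# Existing applications simply see one namespace spanning every site.
project = Path("/mnt/global/projects/show_x/shots")
for shot in sorted(project.glob("*/comp/*.exr")):
    print(shot)  # files may physically live in Paris, Montreal, LA, or the cloud
```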

No conversion from old to new was required, nor any alterations to the existing storage or client infrastructure. Hammerspace creates a cross-platform global data environment that breaks down silos, and does so without creating its own proprietary vendor lock-in in the process.

The Hammerspace architecture leverages the parallel file system capabilities of standard Linux pNFS (parallel NFS) v4.2 with Flex Files, which brings performance improvements and other benefits. This enables Hammerspace to create new instantiations of files in real time on another storage type or location, even while a file is actively in use, being read from or written to. This is done as a transparent background operation, without interruption to users or applications.

Hammerspace does not create a new file copy (which would not align with Principle 1 of the 2030 Vision), but instead utilizes a local instantiation of the same logical file that shares the same file metadata globally. All users still see the same logical file, not different file copies, which makes background data orchestration imperceptible to the user even on live files that are actively in use.

This capability means that file instantiations can be automatically orchestrated in the background based upon workflow requirements or as a result of user actions. This may be a workflow automation that places data on high-performance storage close to compute resources to eliminate any latency issues that would cause problems for local users of remote data. All of this is automated in the background.

The file metadata is continually synchronized between all Hammerspace clusters globally, so everyone in all locations is always looking at the latest version of the same logical file. File collisions and other issues that sometimes arise in distributed file architectures are prevented by combining explicit file reservations, workflow automation (to pre-stage large volumes of data), file versioning, and other automation tools that leverage custom metadata tags.
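
As an illustration of how custom metadata tags can drive this kind of automation, the sketch below uses Linux extended attributes as a stand-in tagging mechanism. This is not Hammerspace’s actual tagging interface; the file path and tag names are hypothetical.

```python
# Illustrative only: Linux extended attributes as a stand-in for custom
# metadata tags that could drive workflow automation. Not Hammerspace's
# tagging mechanism; the asset path and tag names are hypothetical.
import os

ASSET = "/mnt/global/projects/show_x/shots/sh010/comp/sh010_v012.exr"

# Tag the file so an automation rule could pre-stage it near the Paris render farm.
os.setxattr(ASSET, b"user.workflow.stage", b"render")
os.setxattr(ASSET, b"user.workflow.site", b"paris")

# A policy engine could later read the tags and orchestrate placement accordingly.
stage = os.getxattr(ASSET, b"user.workflow.stage").decode()
print(f"{ASSET} tagged for stage: {stage}")
```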

This global capability also means that users don’t suffer from latency issues, as might be the case if they were working on a file remotely stored many thousands of miles away. From the user or application perspective, the files appear and behave as local to everyone, regardless of where they are.

ARCHITECTURE

Hammerspace is a fully integrated software-defined solution that is installed as a single software load on commodity bare-metal servers, on type-two hypervisors, and/or on cloud-based machine instances. As such, it can be scaled up or out to accommodate extreme performance requirements. This includes support for as many as 16 Hammerspace locations, either on-premises and/or in the cloud, enabling customers to unify disparate storage types and infrastructures into a multi-site global namespace even at very large scales.

Hammerspace Case Study Figure 4

Figure 4: An architectural view of a multi-site Hammerspace global data environment comprising multiple storage types and compute resources. The common denominator is the Hammerspace Parallel Global File System, which acts as a unified metadata control plane and access point for users and applications.

Hammerspace technology is standards-based, with tight integration into multiple areas within the Linux kernel. This includes pNFS v4.2 with Flex Files, which enables extreme parallel file system performance even on standard Ethernet and commodity storage. In another customer use case, Hammerspace is feeding 24,000 GPUs in an AI Research SuperCluster from 1,000 NVMe (Non-Volatile Memory Express) storage servers at 12.5 TB/s to power large language model (LLM) training, using standard Ethernet and commodity servers.

For customers like Mathematic, this means that rendering and other use cases that require high-performance storage and compute are now unified across sites as part of the same global data environment. The emphasis on open standards means Hammerspace supports existing or new on-prem or cloud-based storage of any type from any vendor without requiring customers to install agents or proprietary clients. Users access the system via standard NFS, SMB, and/or S3 mounts with no alteration to their existing applications or workstations.

This is important, since Mathematic uses a broad cross section of standard applications for VFX, 2D animation, post-production, etc. According to Germain, “At Mathematic we don’t want to limit artistic creativity by choosing just one application or another. So, we give artists all the options they may need.”

From an administrator’s perspective, the entire data environment can be controlled globally across locations with automation tools, including a suite of data services that automate data orchestration for workflows, versioning, copy management, and data protection. Users are granted access based upon their UID/GID in Active Directory. No separate authentication is needed, since Hammerspace passes through the credentials users already have; existing permissions are picked up as part of the metadata assimilation process.

The flexibility of the architecture means that companies can stand up Hammerspace instances rapidly in remote sites or in the cloud and quickly extend the global file system to include them all, even when bursting to the cloud only temporarily. One example is deploying a render cluster that requires access to the same assets and file tree, without replicating the data in every cloud region or instance. This ability also enables renders to begin more quickly, since the system essentially acts as a pre-staged cache. A collateral benefit for cloud-based jobs is reduced cost, since resources have immediate access to the assets they need at a file-granular level. Hammerspace instances across all sites continually synchronize file system metadata between them, so everyone, everywhere, is collaborating on the same project files in the global namespace.

Although applications can rely on Hammerspace without customization, some applications (such as Autodesk Flame, ShotGrid, and others) have integrated directly to extend functionality even further. For example, users in multiple locations can collaborate on Flame finishing workflows in real time as though they were local. Artists at multiple sites each have their own on-premises or cloud-based Flame instances but can work together in real time on the same file in a single online collaborative environment.

Hammerspace Case Study Figure 5

Figure 5: A screenshot showing Autodesk Flame running natively in AWS, with a user on the East Coast collaborating live with a West Coast user.

Render jobs can be invoked by operators using Autodesk ShotGrid, which triggers Hammerspace in the background to orchestrate file locations for rendering, imperceptibly to users.
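
The glue logic below sketches what such a trigger might look like: when a render is dispatched, the shot’s files are pre-staged at the Paris site before the job starts. Both the event payload shape and the place_files() helper are hypothetical stand-ins, not the ShotGrid or Hammerspace APIs.

```python
# Hypothetical glue logic only: how a render-dispatch event might trigger
# pre-staging of a shot's assets at the Paris site. Neither the event payload
# shape nor place_files() is a real API; both are assumptions for illustration.
from pathlib import Path

def place_files(paths: list[str], site: str) -> None:
    """Stand-in for an orchestration call that creates local instantiations
    of the same logical files at the target site (no user-visible copies)."""
    for p in paths:
        print(f"pre-staging {p} -> {site}")

def on_render_dispatch(event: dict) -> None:
    """Called when an operator dispatches a render (hypothetical hook)."""
    shot_root = Path(event["shot_path"])
    assets = [str(p) for p in shot_root.rglob("*") if p.is_file()]
    place_files(assets, site="paris")  # repatriate the render to the Paris farm

on_render_dispatch({"shot_path": "/mnt/global/projects/show_x/shots/sh010"})
```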

In the case of Mathematic, this means renders from any of the offices can be automatically repatriated to its Paris data center, which provides significant financial advantages. In addition to other benefits, Mathematic can decarbonize about 80% or more of its renders: the heat generated by the render farm is captured and fed into a private initiative that uses it to heat swimming pools throughout Paris.

Hammerspace software can be installed as a single software load that includes everything, including the Linux OS. Support for open standards is an essential design principle, and tight integration with the Linux community is a key component of Hammerspace DNA. Trond Myklebust, the Hammerspace CTO, has been the Linux kernel NFS client maintainer for more than 20 years and has ensured that Hammerspace adheres to the standards maintained in the community. Hammerspace Senior Principal Software Engineer Mike Snitzer is also a Linux kernel maintainer, responsible for the upstream kernel’s Device Mapper (DM) subsystem. And Tom Haynes, another Hammerspace software engineer, is a co-author of RFC 8435, which defined the pNFS Flexible File Layout in 2018, a key pNFS capability used by Hammerspace with NFS v4.2.

Because of its close integration with Linux, many of the core elements of Hammerspace have been pushed upstream into standard Linux distributions and are non-proprietary. This is a key reason why Hammerspace does not rely on proprietary agents, clients, or other hooks.

Hammerspace Case Study Figure 6

Figure 6: A logical view of Hammerspace software functionality.

BENEFITS

Mathematic saw immediate improvement after installing Hammerspace, more than doubling its production capacity with the same resources. This enabled it to take on additional projects that would have been impossible before.

According to Clement Germain from Mathematic: “Before Hammerspace, one project was on one main site, and we were just sharing some assets or some shots with the other sites (Montreal or LA). Now, talent at any location can collaborate on the same project wherever they are, leveraging resources that may be anywhere”.

Additionally, because Hammerspace does not require client software and is compatible with existing infrastructure, the collaborative benefits were immediately available to users without installing new software or changing workflows. No changes to applications, tools, or use cases, and no training, were needed for artists to take immediate advantage of the platform.

Financially, Mathematic saw improvements across multiple areas. Enhanced collaboration between sites increased production volume and delivered greater efficiency from existing infrastructure. Hammerspace also enabled Mathematic to automatically repatriate render jobs from any location back to Paris to take advantage of incentives and to offload the heat generated by the render farm into a Paris heating network. This offsets nearly all of the greenhouse gases such renders would otherwise create.

Alignment with MovieLabs 2030 Vision Principles

PRINCIPLE 1

The single global filesystem enables cloud-based workflows without the need to shuttle and manage multiple copies of the same assets. Each logical file provides a single shared source of truth.

PRINCIPLE 2

The architecture supports a range of applications and workflows coming to the assets. Multiple, infrastructure-agnostic storage protocols support the use of a range of application infrastructures. And the ability to integrate existing storage means that existing workflows can still use assets directly from on-prem or cloud storage. Lastly, the ability to scale up metadata and storage caches means that performance can be scaled for a range of workflows and that cached copies can be automatically pre-provisioned to where and when they are needed for performance reasons.

PRINCIPLE 8

The single global namespace supports referencing and access. It provides a single name for each asset, based on a POSIX file path that can be used to maintain references. And any application running on a client with the filesystem mounted can use that name to access the file.

PRINCIPLE 10

The single shared logical file system enables multi-participant workflows without any delays for manually propagating changed files to the next task in a workflow. Modifications are automatically and immediately visible to all participants. Also, the logical name can be used in notifications and access control changes needed to “publish” assets to the next task.

PARTNERS

Close collaboration with partners is essential to the Hammerspace mission of extending its functionality into the existing customer environment with true vendor neutrality. As a case in point, Hammerspace partners with Autodesk to enable Flame and ShotGrid to offer additional streamlined functionality based on the open standards built into the Hammerspace system. In both cases, creative talent use their existing Autodesk tools, with Hammerspace operating in the background to provide global access and automated data orchestration.

The Flame example also includes close partnership with AWS. Hammerspace is unique in enabling live finishing workflows with Flame users in multiple sites collaborating on the same files, particularly with environments that are hybrid or fully based in the public cloud. The system is designed to synchronize in the background, eliminating problems of latency. Each artist works on their own Flame instance, which may be local and/or installed in the cloud. The background Flame integration with the Hammerspace Parallel Global File System enables artists to do their work without risk of stepping on others, even when working on the same clips.

Lessons Learned

Partners and customers rarely have the freedom to start with a blank sheet of paper and design an environment from the ground up as a single unified solution. They have existing infrastructure and need to satisfy ongoing business. Meanwhile, expanding with new hardware, adding new sites, or adding new cloud infrastructure often exacerbates silo problems and causes more interruptions. Change management in an active operating business is extremely difficult, so tackling these problems by shutting down functionality to install one monolithic storage platform can be problematic.

Hammerspace has learned that change needs to be as transparent as possible to users, and that IT administrators are much more successful in implementing workflow optimization when it is done gradually, with little or no interruption to ongoing user workflows.

This realization has informed the development of Hammerspace software, including minimizing changes for day-to-day users, relying on standards, and maintaining compatibility with existing infrastructure of all types. It has also led to disaggregation of backend storage from the frontend user space, allowing backend changes that do not disrupt end users and their applications.

Next Steps

Future development plans for Hammerspace software are focused on continual optimization to increase performance, enabling concurrent support for more sites, adding more intelligence to the data orchestration automation engines (such as expanding use of custom metadata to trigger workflows), and increasing integration with user-space applications, as has been done with Autodesk and others.

Currently, Hammerspace supports up to 16 cloud-based and/or on-prem sites in a single multi-site global file system, each of which can run a Hammerspace cluster of two to 60 nodes. These are not hard limits, however; expansion from 16 concurrent sites in the global file system to 32 is expected in the near future.

Finally, compatibility with open standards is a paramount ongoing consideration for Hammerspace. A significant portion of the Hammerspace near-term roadmap includes more enhancements to the pNFS specification, with numerous features that will be submitted to the Internet Engineering Task Force (IETF) for possible inclusion in future pNFS releases and standard Linux distributions.

MovieLabs Perspective

Principles 1 (assets go straight to the cloud) and 2 (applications come to the assets) of the MovieLabs 2030 Vision sound simple, and yet the considerable complexity underlying both principles makes them perhaps the hardest to deploy at scale. This case study demonstrates one approach to that problem and shows the value of a policy-driven distributed file system providing scalable access to assets without creating unmanaged copies. Because Hammerspace can integrate assets from a range of existing storage types, pre-existing workflows using on-prem or other cloud storage are able to share the same logical assets. This expansion allows legacy workflows using legacy applications to migrate to the benefits of cloud storage without complex redevelopment. It enables a near-term “lift and shift”, even when realizing the full efficiency of cloud-based workflows requires a longer-term effort to assess, analyze, and optimize infrastructure and workflows more broadly.

Hammerspace’s policy-driven approach enables pre-provisioning of cached copies where and when they are needed to improve user performance without file copying. The single global POSIX-based namespace also establishes a shared, logical name for each asset that supports the referencing and access of Principle 8. This approach doesn’t solve the problem of collaboration across all organizations and workflows where files and work need to span security and organizational boundaries and where different creative tools and pipelines often require different organization of file hierarchies. In those cases, separate, unmanaged copies of assets may still be needed. In the long term, we envision more use of identifiers, rather than file paths, to determine asset locations and to track and interrelate assets even between multiple organizations. But that’s a heavier lift. In the meantime, the Hammerspace distributed file system provides a much-needed bridge to that future.
