Adding Lustre Storage to the HPC Equation

For organizations that need extreme scalability in high-performance computing systems, Lustre is often the file system of choice — for a lot of good reasons.

When it comes to high-performance computing applications, there is basically no such thing as too much data storage. Everywhere you look, HPC data sets are ballooning in size.

A few examples:

  • AccuWeather, the world’s largest source of weather forecasts and warnings, responds to more than 30 billion data requests daily.[1]
  • The wave of medical data washing over the global healthcare industry is expected to swell to 2,314 exabytes by 2020.[2]
  • If you were to print out a map of a human genome, the stack of paper would be 300 feet high, which is about as tall as the Statue of Liberty.[2]

The years ahead will only bring more of the same. IDC forecasts that by 2025, the global datasphere will grow to 163 zettabytes a year (a zettabyte is a trillion gigabytes). That’s 10 times the 16 ZB of data generated in 2016.[3]

For organizations running data-intensive HPC and AI applications, the implications are pretty clear: Application performance will increasingly depend on extremely scalable, high-performance storage architectures that can keep pace with an ever-growing deluge of data. And this is where Lustre storage really shines.

The Lustre edge

Lustre is a parallel file system built for the challenges of high-performance computing. For organizations that require extreme storage scalability without performance degradation, the Lustre file system can be a great solution. It lets storage scale up and down to suit the needs of the application, while maintaining the performance required for HPC and other data-intensive workloads.
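To make the “parallel” part concrete: Lustre stripes each file across multiple object storage targets (OSTs), so a single large read or write is served by many storage servers at once, and aggregate bandwidth grows as OSTs are added. The short Python sketch below is illustrative only — it assumes a host that is already a Lustre client, with the standard lfs utility installed and a hypothetical mount point of /mnt/lustre — and shows how a directory’s stripe layout is typically set before writing large files.

    import os
    import subprocess

    # Assumption: this host mounts a Lustre file system at /mnt/lustre
    # and the standard lfs client utility is on the PATH.
    target_dir = "/mnt/lustre/results"
    os.makedirs(target_dir, exist_ok=True)

    # Stripe new files in this directory across 8 OSTs in 4 MiB chunks,
    # so large sequential I/O is spread over 8 storage servers in parallel.
    subprocess.run(["lfs", "setstripe", "-c", "8", "-S", "4M", target_dir], check=True)

    # Show the layout that files created in this directory will inherit.
    subprocess.run(["lfs", "getstripe", target_dir], check=True)

Striping is the mechanism behind the scalability claim above: adding OSTs adds both capacity and parallel bandwidth, without changes to the application.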

While Lustre has been widely deployed for HPC-driven research workloads in academic settings, it has been making steady inroads into enterprise environments. Lustre has been deployed in thousands of data centers in industries ranging from healthcare and energy to manufacturing and financial services, and it is consistently recognized as the file system of choice for the world’s fastest computers.[4]

Ready Solutions for HPC Lustre Storage

Dell offers a wide range of solutions and supported products for organizations that want to leverage the Lustre file system. These offerings include Dell Ready Solutions for HPC Lustre Storage, designed for those who want a fully supported, easy-to-use, high-throughput, cost-effective, scale-out parallel file system.

Using an intelligent, extensive and intuitive management interface — the Integrated Manager for Lustre — Dell Ready Solutions simplify deploying, managing and monitoring hardware and file system components. They’re designed to be easy to scale in terms of both capacity and performance, which equates to a convenient path for future growth.

The updated Ready Solutions for HPC Lustre Storage include Dell’s refreshed PowerEdge servers, Dell Networking and high-density Dell PowerVault ME storage to deliver improved capacity, density and performance compared to the previous generation of storage. In addition, these Ready Solutions are available in more sizing options, with scalable building blocks offering 4, 8, 10 and 12 TB of estimated usable storage. And for a complete package, the solution can be delivered with full hardware and software support from Dell and Whamcloud.

A customer story

Swinburne University of Technology in Australia is among the organizations benefiting from Dell HPC Storage with the Lustre file system. This combination of technologies is on the job today in the university’s OzSTAR supercomputer.

OzSTAR is built on Dell PowerEdge servers, a high-speed, low-latency Dell H-Series networking fabric, and Dell Ready Solutions for HPC storage with the Lustre ZFS file system. With all this goodness under the hood, the OzSTAR system delivers a peak performance of 1.2 petaflops.

OzSTAR is primarily used by the Swinburne-based Australian Research Council Centre of Excellence for Gravitational Wave Discovery (OzGrav) to search for gravitational waves and study the extreme physics of black holes and warped space-time. In a single second, OzSTAR can perform 10,000 calculations for every one of the 100 billion stars in our galaxy, according to OzGrav’s director, Professor Matthew Bailes.[5]
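That figure squares with the peak performance quoted above. As a rough back-of-the-envelope check (assuming one “calculation” corresponds to roughly one floating-point operation):

    # Sanity check of the OzGrav claim against OzSTAR's 1.2-petaflop peak.
    stars = 100e9            # stars in our galaxy, per the claim
    calcs_per_star = 10_000  # calculations per star, per second
    ops_per_second = stars * calcs_per_star
    print(f"{ops_per_second / 1e15:.1f} petaflops")  # -> 1.0, in line with the 1.2 PF peak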

Looking ahead, the university expects the OzSTAR supercomputer to be one of the keys to enabling Swinburne’s Data Science Research Institute to tackle new data science challenges, including those involving machine learning, deep learning, database interrogation and data visualization.

The bottom line

The big data explosion, coupled with ever-faster technology, is enabling new discoveries and new AI algorithms for a range of automation use cases. As data sets continue to grow exponentially, a scalable storage solution is vital. Incorporated into the right architecture, the Lustre file system provides an ideal answer to this need.

To learn more

For a closer look at Dell Ready Solutions for HPC Lustre Storage, visit dellemc.com/hpc. And to learn more about the capabilities of the OzSTAR supercomputer at the Swinburne University of Technology, read the Dell case study.

[1] AccuWeather, “AccuWeather Exceeds Record Milestone in Big Data Demand, Answering More than 30 Billion Requests Daily,” October 2017.

[2] Dell ebook, “Making digital transformation in healthcare a reality,” February 2018.

[3] IDC, “Data Age 2025: The Evolution of Data to Life-Critical,” April 2017.

[4] DataDirect Networks news release, “Storage Leader DDN Acquires Lustre File System Capability From Intel,” June 25, 2018.

[5] Swinburne University news release, “Swinburne supercomputer to be one of the most powerful in Australia,” March 7, 2018.

About the Author: Janet Morss

Janet Morss previously worked at Dell Technologies, specializing in machine learning (ML) and high-performance computing (HPC) product marketing.