What if you could avoid installing a broad range of scientific software from scratch on every supercomputer, cloud instance, or laptop you use or maintain, without compromising on performance?
Installing scientific software is known to be a tedious and time-consuming task. The software stack continues to deepen as computational science expands, the diversity of system architectures grows, and interest in public cloud infrastructure surges. Providing access to optimised software installations in a reliable, user-friendly, and reproducible way is a highly nontrivial task that affects application developers, HPC user support teams, and the users themselves.
Although scientific research on supercomputers is fundamentally software-driven, setting up and managing a software stack remains challenging. Parallel filesystems like GPFS and Lustre are usually ill-suited for hosting software installations that involve a large number of small files, which can lead to slow software startup and may even degrade overall system performance. Workarounds such as container images are prevalent, but they come with caveats of their own: large image sizes, the need for compatibility with the system MPI, and difficulties accessing GPUs.
This tutorial aims to address these challenges by introducing
- CernVM-FS, a distributed read-only filesystem designed to efficiently stream software installations on-demand, and
- the European Environment for Scientific Software Installations (EESSI), a shared repository of optimised scientific software installations (not recipes) that can be used on a variety of systems, regardless of which flavor or version of Linux distribution or which processor architecture is used, or whether it’s a full-size HPC cluster, a cloud environment, or a personal workstation (a brief usage sketch follows this list).
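To give a sense of the end-user experience, the sketch below shows what a typical EESSI session could look like on a machine where the EESSI CernVM-FS repository is already mounted. The repository version and the module name are illustrative assumptions; the exact commands and available software are covered in the tutorial.

```bash
# Minimal sketch of an EESSI session (assumes the software.eessi.io CernVM-FS
# repository is already mounted; the version and module names are illustrative).

# Initialise the EESSI environment: detects the host CPU and points the module
# tool at a matching set of optimised installations.
source /cvmfs/software.eessi.io/versions/2023.06/init/bash

# Browse and load software provided by EESSI via environment modules.
module avail
module load GROMACS
gmx --version
```

Because the initialisation script selects the software stack that matches the host CPU architecture, the same repository can serve laptops, cloud instances, and HPC clusters alike.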
It covers installing and configuring CernVM-FS, using EESSI, installing software into and on top of EESSI, and advanced topics such as GPU support and performance tuning.
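As a preview of the client-side setup, the following is a minimal, hedged sketch of installing and configuring a CernVM-FS client for EESSI on a single RHEL-compatible machine. The package URLs, the standalone client profile, and the cache size are assumptions to be checked against the CernVM-FS and EESSI documentation.

```bash
# Sketch: CernVM-FS client setup for EESSI on a RHEL-like system
# (single machine, no local proxy or Stratum 1; URLs and values are illustrative).

# Install the CernVM-FS client and the EESSI repository configuration.
sudo yum install -y https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest.noarch.rpm
sudo yum install -y cvmfs
sudo yum install -y https://github.com/EESSI/filesystem-layer/releases/download/latest/cvmfs-config-eessi-latest.noarch.rpm

# Minimal client configuration: standalone client profile, 10 GB local cache.
sudo tee /etc/cvmfs/default.local > /dev/null << 'EOF'
CVMFS_CLIENT_PROFILE=single
CVMFS_QUOTA_LIMIT=10000
EOF

# Set up autofs mounts and check that the EESSI repository is reachable.
sudo cvmfs_config setup
cvmfs_config probe software.eessi.io
```

Production sites would typically add a local Squid proxy and, optionally, a private Stratum 1 replica server instead of the standalone client profile used here.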