
Program
Running a single job on an HPC system is typically not hard, but executing a non-trivial workflow that involves both data movement and various computations is considerably more challenging. In this training, you will get an overview of tools that help you run workflows on supercomputers efficiently and conveniently.
Learning Objectives
When you complete this training, you will:
- be able to schedule tasks that run on a login node at given time(s);
- be able to schedule SLURM jobs to start at given time(s), as shown in the first sketch after this list;
- know how to break down a computation that consists of multiple tasks that require specific resources into a set of SLURM jobs using job dependencies, as shown in the second sketch after this list;
- understand how to use job dependencies to restart jobs that checkpoint their state;
- understand how to use SLURM job arrays to run multiple jobs with similar resource requirements;
- be able to use either worker-ng or atools to simplify managing such parallel workflows;
- be able to use a workflow manager to run a workflow that consists of multiple tasks that require specific resources.
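As a first taste of what is covered, the sketch below shows how tasks and jobs can be scheduled to run at a given time. It uses the standard `at` command on a login node and the `--begin` option of `sbatch`; the script names (`data_transfer.sh`, `preprocess.slurm`) are hypothetical and only serve to illustrate the commands.

```
# On a login node, schedule a data transfer at 02:00 with the at command
# (data_transfer.sh is a hypothetical script that, e.g., stages input files).
echo "bash ~/data_transfer.sh" | at 02:00

# Ask SLURM to start a job no earlier than a given time via sbatch --begin
# (preprocess.slurm is a hypothetical batch script).
sbatch --begin=2025-03-01T06:00:00 preprocess.slurm

# Relative times work as well, e.g., start one hour from now.
sbatch --begin=now+1hour preprocess.slurm
```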
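The second sketch illustrates job dependencies and job arrays; again, the batch script names are hypothetical and the snippet only shows the shape of the `sbatch --dependency` and `--array` options.

```
# Submit a preprocessing job and capture its job ID
# (--parsable makes sbatch print only the job ID).
jobid=$(sbatch --parsable preprocess.slurm)

# Run the main computation only if preprocessing completed successfully.
jobid=$(sbatch --parsable --dependency=afterok:${jobid} compute.slurm)

# Run the postprocessing step after the computation succeeds.
sbatch --dependency=afterok:${jobid} postprocess.slurm

# Run 100 similar tasks as a job array; inside the batch script,
# $SLURM_ARRAY_TASK_ID distinguishes the individual tasks.
sbatch --array=1-100 array_task.slurm
```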
Fees
Free
Target audience
KU Leuven staff, Association KU Leuven staff, students, externals (VSC)
Anyone who wants to run non-trivial workflows on HPC systems
Prerequisites
You should be familiar with the basics of running jobs on an HPC system and know your way around the command line. You should also be familiar with the basics of the SLURM job scheduler.