This course will take place as an on-site and in-person event. It is not possible to attend online.
Contents:
An introduction to the parallel programming of supercomputers is given. The focus is on the Message Passing Interface (MPI), the most widely used programming model for systems with distributed memory. In addition, OpenMP, which is commonly used on shared-memory architectures, will be presented.
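To give a flavour of the two programming models, here are two minimal sketches in C (one of the course languages). First, an MPI "hello world" in which every process reports its rank; the compile and launch commands in the comment are typical examples and vary between MPI installations.

```c
/* Minimal MPI sketch: every process reports its rank.
   Compile with an MPI wrapper compiler, e.g. mpicc hello_mpi.c,
   and launch with e.g. mpiexec -n 4 ./a.out (names vary by installation). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);               /* start the MPI runtime */

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* id of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                       /* shut the MPI runtime down */
    return 0;
}
```

Second, an OpenMP loop in which the iterations are divided among the threads of a single process; the reduction clause combines the threads' private partial sums at the end.

```c
/* Minimal OpenMP sketch: loop iterations are shared among threads.
   Compile with e.g. gcc -fopenmp sum_omp.c (flag varies by compiler). */
#include <stdio.h>

int main(void)
{
    const int n = 1000000;
    double sum = 0.0;

    /* Each thread accumulates a private partial sum; the
       reduction clause combines them when the loop ends. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 1; i <= n; i++)
        sum += 1.0 / i;

    printf("harmonic sum = %f\n", sum);
    return 0;
}
```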
The first four days of the course consist of lectures and short exercises. A fifth day is devoted to demonstrating the use of MPI and OpenMP in a larger context. To this end, starting from a simple but representative serial algorithm, a parallel version will be designed and implemented using the techniques presented in the course.
Topics covered:
- Fundamentals of Parallel Computing
- HPC system architectures
- Shared and distributed memory concepts
- OpenMP
  - Basics
  - Parallel construct
  - Data sharing
  - Loop work sharing
  - Task work sharing
- MPI
  - Basics
  - Point-to-point communication
  - Collective communication
  - Blocking and non-blocking communication
  - Data types
  - I/O
  - Communicators
- Hybrid programming (see the sketch after this list)
- Tools
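As a taste of the hybrid programming topic, the following sketch combines the two models: MPI distributes work across processes, and each process runs a team of OpenMP threads. It assumes an MPI library providing at least MPI_THREAD_FUNNELED thread support; the compile command in the comment is illustrative.

```c
/* Hybrid MPI + OpenMP sketch: MPI processes across nodes,
   OpenMP threads within each process.
   Compile with e.g. mpicc -fopenmp hybrid.c (flags vary by compiler). */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    /* Request a thread support level: FUNNELED means only the
       thread that initialized MPI will make MPI calls. */
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each MPI process spawns its own team of OpenMP threads. */
    #pragma omp parallel
    {
        printf("rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```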
The event page can be found here.
Agenda:
The preliminary agenda can be found here.
Prerequisites:
Knowledge of C, C++, Python, or Fortran; basic knowledge of UNIX/Linux, including the command line (shell); and familiarity with a standard UNIX editor (e.g. vi or emacs).