
The focus is on advanced programming with MPI and OpenMP. The course is aimed at participants who already have some experience with C/C++ or Fortran and with MPI and OpenMP, the most popular programming models in High Performance Computing (HPC).

The course will teach the newest methods in MPI-3.0/3.1/4.0/4.1 and OpenMP-4.5 and 5.0, which were developed for the efficient use of current HPC hardware. MPI-related topics are the group and communicator concept, process topologies, derived data types, the new MPI-3.0 Fortran language binding, one-sided communication, and the new MPI-3.0 shared memory programming model within MPI. OpenMP-related topics are the OpenMP-4.0/4.5/5.0 extensions, such as the vectorization directives, thread affinity, and OpenMP places. (GPU programming with OpenMP directives is not part of this course.) The course also covers performance and best-practice considerations.

Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the taught constructs of the Message Passing Interface (MPI) and the shared memory directives of OpenMP. Most MPI exercises will (in addition to C and Fortran) also be available for Python + mpi4py + numpy.

This course provides scientific training in Computational Science and, in addition, fosters scientific exchange among the participants. It is organised by JSC in cooperation with HLRS.

Fees

The course is free of charge.

Prerequisites

Unix; C or Fortran; familiarity with the principles of MPI, e.g., to the extent of the introductory course MPI and OpenMP, i.e., at least the MPI process model, blocking point-to-point message passing, collective communication, and the single-program concept of parallelising applications. For the afternoon session of the last day, familiarity with OpenMP 3.0 is additionally required.

To be able to do the hands-on exercises of this course, you need a computer with an OpenMP-capable C/C++ or Fortran compiler and a corresponding, up-to-date MPI library (in the case of Fortran, the mpi_f08 module is required). Please note that the course organisers will not grant you access to an HPC system or any other compute environment. Therefore, please make sure to have a working environment / access to an HPC cluster prior to the course.

In addition, you can perform most MPI exercises in Python with mpi4py + numpy. In this case, an appropriate Python installation with these packages is required on your system (together with a C/C++ or Fortran installation for the other exercises).