Training Content and Scope
This course gives an introduction to the parallel programming of supercomputers. The focus is on the Message Passing Interface (MPI), the most widely used programming model for systems with distributed memory. In addition, OpenMP is presented, which is commonly used on shared-memory architectures.
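To make the distinction concrete, here is a minimal sketch (illustrative only, not part of the course material) of an MPI program in C: every process runs the same executable and identifies itself by its rank within a communicator.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);                /* start the MPI runtime       */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* rank of this process        */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes   */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                        /* shut down the MPI runtime   */
        return 0;
    }

Such a program is typically built with an MPI compiler wrapper (e.g. mpicc) and started with mpirun or mpiexec, which launches one process per requested rank.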
The first four days of the course consist of lectures and short exercises. The fifth day is devoted to demonstrating the use of MPI and OpenMP in a larger context: starting from a simple but representative serial algorithm, a parallel version is designed and implemented using the techniques presented in the course.
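As an illustration of that final step (the course's actual example algorithm is not specified here; the loop below is purely hypothetical), a serial summation loop can be parallelised with a single OpenMP directive:

    #include <omp.h>
    #include <stdio.h>

    #define N 100000000L

    int main(void)
    {
        double sum = 0.0;

        /* The serial version is the same loop without the pragma.
           The reduction clause gives each thread a private partial sum
           and combines them at the end of the parallel loop. */
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < N; i++) {
            sum += 1.0 / (double)(i + 1);      /* illustrative workload */
        }

        printf("sum = %f (up to %d threads)\n", sum, omp_get_max_threads());
        return 0;
    }

The program is typically compiled with an OpenMP flag (e.g. -fopenmp with GCC), and the number of threads is usually controlled via the OMP_NUM_THREADS environment variable.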
Topics covered:
  Fundamentals of Parallel Computing
    HPC system architectures
    shared and distributed memory concepts
  OpenMP
    basics
    parallel construct
    data sharing
    loop work sharing
    task work sharing
  MPI
    basics
    point-to-point communication
    collective communication
    blocking and non-blocking communication
    data types
    I/O
    communicators
  Hybrid programming (see the sketch after this list)
  Tools
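To give a flavour of the hybrid programming topic listed above, the following sketch (illustrative only) combines both models: MPI processes across nodes and OpenMP threads within each process. MPI_Init_thread replaces MPI_Init to request a thread support level; MPI_THREAD_FUNNELED means only the main thread makes MPI calls.

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank;

        /* Request thread support; the OpenMP threads below make no MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        #pragma omp parallel
        {
            printf("MPI rank %d, OpenMP thread %d of %d\n",
                   rank, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }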