Would you like to learn how to parallelize effectively with MPI and OpenMP and pick up some tricks from the experts?
This advanced MPI/OpenMP course addresses the everyday challenges that developers of parallel code face and provides working solutions for them. You will learn how to profile parallel code and explore the knobs and dials that extract the best possible performance from it, such as domain decomposition techniques and parallel I/O. Each session includes hands-on exercises to reinforce the different constructs. You will also gain insight into useful parallel libraries and routines for scientific code development.
What?
In this course you will:
- Understand how to work with MPI and OpenMP, with many examples drawn from scientific applications
- Learn when and how to apply different parallelization strategies
- Gain experience developing and optimizing code step by step for use on a supercomputer
Who?
Everyone interested in learning how to make efficient use of MPI and OpenMP for different scientific applications
Requirements:
- Basic knowledge of Linux
- Basic knowledge of programming, particularly in C/C++ or Fortran
- Basic knowledge of parallel computing. No specific experience with supercomputing systems is necessary.
- Basic knowledge of MPI and OpenMP constructs (provided in the basic course); the short example after this list illustrates the level assumed
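If you are unsure about this last prerequisite, the following minimal sketch shows the kind of hybrid MPI/OpenMP constructs assumed as background: every OpenMP thread on every MPI rank prints its rank/thread pair. The compile command is only an example; the exact compiler wrapper and flags depend on your system.

    /* Minimal hybrid MPI + OpenMP example.
       Compile with something like: mpicc -fopenmp hello_hybrid.c -o hello_hybrid */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, nranks;

        /* Request thread support, since OpenMP threads run inside each MPI rank. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        /* Each rank opens an OpenMP parallel region; every thread reports itself. */
        #pragma omp parallel
        {
            printf("MPI rank %d of %d, OpenMP thread %d of %d\n",
                   rank, nranks, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }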
You should have:
Your own laptop with an up-to-date browser and a terminal emulator. Linux or macOS is preferred but not mandatory. Windows users are recommended to download MobaXterm (portable version) as a terminal emulator.
Acknowledgments
Some of the materials for this course are kindly provided through the collaboration between PRACE and HLRS.
Basic course
If you are not familiar with MPI/OpenMP, you can acquire the necessary knowledge beforehand in the basic course.