OpenMP Programming Workshop

Course/Event Essentials

Event/Course Start
Event/Course End
Event/Course Format
Online
Live (synchronous)

Venue Information

Country: Germany

Training Content and Scope

Scientific Domain
Level of Instruction
Beginner
Intermediate
Advanced
Sector of the Target Audience
Research and Academia
Industry
Public Sector
Language of Instruction

Other Information

Organiser
Supporting Project(s)
PRACE
Event/Course Description

With the increasing prevalence of multicore processors, shared-memory programming models are essential. OpenMP is a popular, portable, widely supported, and easy-to-use shared-memory model.

Since its advent in 1997, the OpenMP programming model has proved to be a key driver of parallel programming for shared-memory architectures. Its powerful and flexible model has allowed researchers from various domains to enable parallelism in their applications. Over more than two decades, OpenMP has tracked the evolution of hardware and the growing complexity of software to ensure that it stays as relevant to today’s high performance computing community as it was in 1997.

This workshop will cover a wide range of topics, ranging from the basics of OpenMP programming using the "OpenMP Common Core" to advanced features. Each day, lectures are mixed with hands-on sessions.

Day 1

The first day will cover basic parallel programming with OpenMP.

Most OpenMP programmers use only around 21 items from the specification. We call these the “OpenMP Common Core”. By focusing on the common core on the first day, we make OpenMP what it was always meant to be: a simple API for parallel application programmers.

In this hands-on tutorial, students use active learning with a carefully selected set of exercises to master the Common Core and learn to apply it to their own problems.

Days 2 and 3

Days 2 and 3 will cover advanced topics such as:

  • Mastering Tasking with OpenMP, Taskloops, Dependencies and Cancellation
  • Host Performance: SIMD / Vectorization
  • Host Performance: NUMA Aware Programming, Memory Access, Task Affinity, Memory Management
  • Tool Support for Performance and Correctness, VI-HPS Tools
  • Offloading to Accelerators
  • Other Advanced Features of OpenMP 5.1
  • Future Roadmap of OpenMP

Developers usually find OpenMP easy to learn. However, they are often disappointed with the performance and scalability of the resulting code. This disappointment stems not from shortcomings of OpenMP but rather from the lack of depth with which it is employed. The lectures on Days 2 and 3 will address this critical need by exploring the implications of possible OpenMP parallelization strategies, both in terms of correctness and performance.

We cover tasking with OpenMP and host performance, focusing on performance aspects such as data and thread locality on NUMA architectures, false sharing, and exploitation of vector units. Tools for performance and correctness will also be presented.

Current trends in hardware bring co-processors such as GPUs into the fold. A modern platform is often a heterogeneous system with CPU cores, GPU cores, and other specialized accelerators. OpenMP has responded by adding directives that map code and data onto a device, the target directives. We will also explore these directives as they apply to programming GPUs.

OpenMP 5.0 features will be highlighted and the future roadmap of OpenMP will be presented.

All topics are accompanied by extensive case studies, and the corresponding language features are discussed in depth.

Topics may still be subject to change.

The course is organised as a PRACE training event by LRZ in collaboration with the OpenMP ARB and RWTH Aachen.