
Overview
Graphics Processing Units (GPUs) are the workhorses of many high-performance computing (HPC) systems worldwide. Today, the majority of HPC computing power available to researchers and engineers comes from GPUs or other accelerators. As a result, programming GPUs has become increasingly important for developers working on HPC software.
At the same time, the GPU ecosystem is complex. Multiple vendors compete in the high-end GPU market, each offering their own software stack and development tools. Beyond that, there is a wide variety of programming languages, libraries, and frameworks for writing GPU code. This makes it challenging for developers and project leaders to navigate the landscape and select the most appropriate GPU programming approach for a given project and its technical requirements.
This workshop is a follow-up to the webinar series held the previous week. We will provide a comprehensive description of GPU programming concepts and models, including:
- Directive-based models (OpenACC, OpenMP)
- Non-portable kernel-based models (CUDA, HIP)
- Portable kernel-based models (Kokkos, alpaka, OpenCL, SYCL, etc.)
- High-level language support (Python, Julia)
- Multi-GPU programming with MPI
- Hands-on examples implemented using several models
- Notes on preparing code for GPU porting
Who is this workshop for?
This workshop is most relevant to researchers and engineers who already develop software that runs on CPUs in workstations or supercomputers. Familiarity with one or more programming languages such as C/C++, Fortran, Python, or Julia is recommended.
If you are not yet familiar with the basics of GPU programming concepts and models, we recommend attending the introductory webinar series offered the week before. These sessions provide the necessary background to help you get the most out of this workshop.
Key takeaways
By the end of this workshop, participants will:
- Understand the range of GPU programming models and their use cases, from directive-based approaches (OpenACC, OpenMP) to kernel-based models (CUDA, HIP, SYCL, Kokkos, alpaka, and others).
- Gain insight into the trade-offs between non-portable and portable programming models when targeting different GPU vendors and platforms.
- Learn how high-level languages such as Python and Julia provide accessible pathways to GPU programming.
- Understand strategies for scaling applications across multiple GPUs using MPI.
- Apply learned concepts through hands-on exercises to reinforce practical skills.
- Know how to assess and prepare existing CPU-based code for GPU porting.
- Be equipped to make informed decisions on selecting GPU programming models that best fit their project requirements.
Tentative Agenda
Day 1 (Nov. 25)

| Time | Contents |
|---|---|
| 09:00-09:10 | Welcome |
| 09:10-10:30 | Directive-based models (OpenACC, OpenMP) |
| 10:30-10:40 | Break |
| 10:40-12:00 | Non-portable kernel-based models (CUDA, HIP) |
| 12:00-12:30 | Q/A session |

Day 2 (Nov. 26)

| Time | Contents |
|---|---|
| 09:00-09:10 | Welcome and Recap |
| 09:10-11:00 | Portable kernel-based models (Kokkos, OpenCL, SYCL, C++ stdpar, alpaka, etc.) |
| 11:00-11:10 | Break |
| 11:10-12:00 | High-level language support |
| 12:00-12:30 | Q/A session |

Day 3 (Nov. 27)

| Time | Contents |
|---|---|
| 09:00-09:10 | Welcome and Recap |
| 09:10-10:30 | Multi-GPU programming with MPI |
| 10:30-10:40 | Break |
| 10:40-11:30 | Example problem: stencil computation |
| 11:30-11:40 | Break |
| 11:40-12:20 | Preparing code for GPU porting; recommendations, Q/A |
| 12:20-12:30 | Summary |
Disclaimer
Due to EuroCC2 regulations, we CANNOT accept generic or private email addresses. Please use your official university or company email address for registration.
This training is for users who live and work in the European Union or a country associated with Horizon 2020. You can read more about the countries associated with Horizon 2020 HERE.