
Overview

Graphics Processing Units (GPUs) power many of the world’s high-performance computing (HPC) systems. Today, most of the computing capacity available to researchers and engineers in HPC comes from GPUs or similar accelerators. As a result, learning how to program GPUs has become increasingly important for developers working on HPC software.

At the same time, the GPU ecosystem is complex. Several vendors compete in the high-end GPU market, each with its own software stack and development tools. On top of that, there are many programming languages, libraries, and frameworks for writing GPU code. This variety makes it challenging for developers and project leads to choose the right tools and frameworks for their specific projects, especially when balancing technical requirements with existing codebases.

In this webinar series, we provide a practical introduction to GPU programming, designed for developers, researchers, and engineers who are curious about leveraging GPUs for high-performance computing and data-intensive applications. Through three focused webinars, participants will gain a solid understanding of why GPUs matter, the fundamentals of GPU architectures, what kinds of problems they are best suited for, and how to begin using GPU programming models.

  • The first part covers why to use GPUs and introduces the GPU architecture and software ecosystem.
    • We begin by exploring the motivation for GPU programming. Participants will learn how GPUs differ from CPUs in design philosophy and parallel execution.
    • We’ll break down the basics of GPU hardware and the software ecosystem that supports GPU computing (CUDA, ROCm, OpenCL, libraries, and frameworks).
  • The second part covers GPU programming concepts and which problems fit GPU programming.
    • This session introduces the key concepts that underpin GPU programming, such as threads, warps, blocks, and grids.
    • We’ll discuss how data parallelism drives performance on GPUs, and examine real-world examples of problems that benefit from GPU acceleration, along with cases where GPUs are not a good fit.
  • The third part concludes with a general introduction to major GPU programming models.
    • We’ll compare directive-based models, portable and non-portable kernel-based models, and high-level abstractions such as Julia/Python or domain-specific libraries.
    • Participants will see how to structure basic GPU programs and gain an understanding of the programming patterns that unlock GPU performance (a minimal code sketch follows this list).
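
To make these ideas concrete, here is a minimal, hedged sketch of a CUDA vector-addition kernel. It is not part of the webinar material; the kernel name, array size, and block size are arbitrary choices used only to illustrate how a grid of thread blocks maps onto a data-parallel loop.

  // Minimal CUDA sketch (illustrative only): each thread handles one element.
  #include <cstdio>
  #include <cuda_runtime.h>

  __global__ void vector_add(const float *a, const float *b, float *c, int n) {
      // Global index = block offset plus thread offset within the block.
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n) {              // guard threads that fall past the end of the data
          c[i] = a[i] + b[i];
      }
  }

  int main() {
      const int n = 1 << 20;                // placeholder problem size (1M elements)
      size_t bytes = n * sizeof(float);

      float *a, *b, *c;
      cudaMallocManaged(&a, bytes);         // unified memory keeps the sketch short
      cudaMallocManaged(&b, bytes);
      cudaMallocManaged(&c, bytes);
      for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

      int threadsPerBlock = 256;            // arbitrary but common block size
      int blocksPerGrid = (n + threadsPerBlock - 1) / threadsPerBlock;  // enough blocks to cover n
      vector_add<<<blocksPerGrid, threadsPerBlock>>>(a, b, c, n);
      cudaDeviceSynchronize();              // wait for the kernel before reading results

      printf("c[0] = %f\n", c[0]);          // expect 3.0
      cudaFree(a); cudaFree(b); cudaFree(c);
      return 0;
  }

A HIP version for AMD GPUs (ROCm) would look almost identical, with hip-prefixed runtime calls in place of the cuda-prefixed ones.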

By the end of this webinar series, attendees will understand the strengths of GPUs, the fundamentals of GPU programming, and the pathways to begin applying these concepts in their own work.

Who is this webinar for?

This webinar series is especially relevant for early-career researchers and engineers who develop software running on CPUs in workstations or supercomputers, as well as for decision-makers and project leaders who may not write code but oversee software projects in academia, industry, or the public sector.

Key takeaways

By the end of this webinar series, participants will understand:

  • The role of GPUs in modern high-performance computing and why they are widely used.
  • Key differences between CPUs and GPUs, including architecture, memory hierarchy, and execution models.
  • The basics of the GPU software ecosystem (CUDA, ROCm, OpenCL, and higher-level frameworks).
  • Core GPU programming concepts such as threads, warps, blocks, and grids.
  • How to identify problems that are well-suited for GPU acceleration, and those that are not.
  • An overview of major GPU programming models and their trade-offs.
  • How to structure simple GPU programs and understand common parallel programming patterns.
  • Practical pathways for getting started with GPU development in research or industry projects.

Tentative Agenda

We have scheduled a follow-up workshop next week (Nov. 25–27) that will provide a comprehensive overview of GPU programming. The agenda for the follow-up workshop can be found HERE.

For this session, we will use the LUMI supercomputer for hands-on exercises. If you have registered for the workshop, we will guide you through accessing the LUMI machine during the onboarding session.

Day 1 (Nov. 18)

Time          Contents
10:00-10:15   Welcome
10:15-10:30   Why GPUs?
10:30-11:00   GPU hardware and software ecosystem
11:00-11:20   Q/A
11:20-12:00   On-boarding session (login to LUMI)

Day 2 (Nov. 19)

Time          Contents
10:00-10:10   Welcome and Recap
10:10-10:40   GPU programming concepts
10:40-11:00   What problems fit GPU programming?
11:00-11:20   Q/A
11:20-12:00   On-boarding session (login to LUMI)

Day 3 (Nov. 20)

Time          Contents
10:00-10:10   Welcome and Recap
10:10-11:00   Introduction to GPU programming models
11:00-11:20   Q/A
11:20-12:00   On-boarding session (login to LUMI)

More events & contact

Check out more upcoming events from ENCCS and our European network at https://enccs.se/events.

For questions regarding this workshop or general questions about ENCCS training events, please contact training@enccs.se.

Schedules can change!

To ensure that everyone has the opportunity to participate, we kindly request that you let us know as soon as possible if you are unable to attend an event after registering.

Please send us an email at training@enccs.se to cancel your attendance.

We understand things can change, but repeated cancellations without notice may unfortunately result in your name being removed from future event registration lists.