NVIDIA Deep Learning Institute (DLI) offers hands-on training for developers, data scientists, and researchers looking to solve challenging problems with deep learning.
Learn how to train and deploy a neural network to solve real-world problems, and how to effectively parallelize the training of deep neural networks across multiple GPUs.
The workshop combines the following DLI courses:
- Fundamentals of Deep Learning
- Data Parallelism: How To Train Deep Learning Models on Multiple GPUs
- Model Parallelism: Building and Deploying Large Neural Networks
The lectures are interleaved with many hands-on sessions using Jupyter Notebooks. In addition, a short introduction to multi-GPU programming will provide a first look at fundamental concepts for parallelizing machine learning and deep learning across multiple GPUs.
This course is organized in cooperation with Leibniz Supercomputing Centre (Germany). The instructor is an NVIDIA certified University Ambassador.
For more information, see the official course website:
https://www.hlrs.de/training/2026/DL-Multinode
Fees
The course is free of charge but open only to academic participants.
Prerequisites
For day one, you need basic experience with C/C++ or Fortran. A suggested resource to satisfy this prerequisite is the interactive tutorial at https://www.learn-c.org/. Familiarity with MPI is a plus.
On day two, you need an understanding of fundamental programming concepts in Python 3, such as functions, loops, dictionaries, and arrays; familiarity with Pandas data structures; and an understanding of how to compute a regression line.
A suggested resource to satisfy these prerequisites is the Python Beginner’s Guide. Familiarity with PyTorch is a plus, as it will be used in the hands-on sessions.
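To gauge the expected level for day two, the "compute a regression line" prerequisite amounts to something like the following ordinary least-squares calculation (a minimal illustration using only the Python standard library; the function name is our own, not course material):

```python
# Fit the least-squares line y = a*x + b through a set of points,
# using only standard-library Python (no NumPy or Pandas required).

def regression_line(xs, ys):
    """Return slope a and intercept b of the least-squares line y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance(x, y) divided by variance(x)
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    a = cov_xy / var_x
    b = mean_y - a * mean_x
    return a, b

# Points lying exactly on y = 2*x + 1
a, b = regression_line([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # 2.0 1.0
```

If this snippet is comfortable reading, the Python prerequisite for day two should pose no difficulty.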
For days three and four, you need experience with deep learning in Python 3, in particular with gradient descent model training. Experience with PyTorch is also helpful; see https://pytorch.org/tutorials/ for an introduction.
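The "gradient descent model training" prerequisite for days three and four can be sketched at its simplest as fitting a linear model by minimizing mean squared error (a hand-rolled illustration in plain Python; the workshop itself uses PyTorch, whose autograd replaces the manual gradients below):

```python
# Fit y = w*x + b by plain gradient descent on the mean-squared-error loss.
# Gradients are derived by hand here; frameworks like PyTorch compute them
# automatically via backpropagation.

def train(xs, ys, lr=0.05, steps=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of the loss (1/n) * sum((w*x + b - y)^2)
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        # Step against the gradient, scaled by the learning rate
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Noise-free points on y = 2*x + 1: gradient descent recovers the line
w, b = train([0, 1, 2, 3], [1, 3, 5, 7])
print(round(w, 2), round(b, 2))  # 2.0 1.0
```

The same loop structure (forward pass, loss, gradients, parameter update) underlies the PyTorch training code used in the hands-on sessions, only with tensors and autograd in place of the explicit sums.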