Course/Event Essentials
Event/Course Start
Event/Course End
Event/Course Format: Mixed, Live (synchronous)
Primary Event/Course URL
Training Content and Scope
Scientific Domain
Technical Domain
Level of Instruction: Beginner, Intermediate
Sector of the Target Audience: Research and Academia
Language of Instruction
Other Information
Organiser: LRZ, in cooperation with Intel
Event/Course Description
Accelerated Machine Learning with Intel (Morning Sessions)
- Introduction
Welcome & presentation of the day's agenda and speakers.
- Hardware acceleration for AI and Intel® oneAPI AI Analytics Toolkit
In this session, we will first introduce the hardware features that power and accelerate AI on Intel hardware.
We will then take a first look at the software stack that leverages them, namely the Intel® oneAPI AI Analytics Toolkit.
- Intel Developer Cloud (IDC) - A sandbox for AI & Software development
Intel Developer Cloud is a development environment that gives access to cutting-edge Intel hardware and software innovations to build and test AI, machine learning, HPC, and security applications for cloud, enterprise, client, and edge deployments. Learn about Intel's advanced CPUs, GPUs, and accelerators, along with open software tools, to optimize your AI products and solutions. Discovery session with vouchers for all participants.
- How to accelerate Classical Machine Learning on Intel Architecture
In this session, we will cover Intel-optimized libraries for Machine Learning. Python* currently ranks among the most popular programming languages and is widely used for Data Science and Machine Learning. We will first introduce the Intel® Distribution for Python and its optimizations. We will then cover Intel-optimized ML Python packages such as Modin, Intel® Extension for Scikit-learn, and XGBoost.
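To give a flavour of this hands-on part, a minimal sketch of the Intel® Extension for Scikit-learn patching workflow could look as follows; the dataset and estimator are illustrative choices, not the actual course material:

    from sklearnex import patch_sklearn
    patch_sklearn()  # swap supported scikit-learn estimators for Intel-optimized ones

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # synthetic data, purely for illustration
    X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, y)  # runs on the oneDAL-accelerated backend where supported
    print(clf.score(X, y))

The rest of the scikit-learn code stays unchanged; only the patching call at the top is added.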
Accelerated Deep Learning with Intel (Afternoon Sessions)
- Generative AI Powered by Intel
In this session, we present the most recent advancements in Generative AI, covering Large Language Models and Diffusion Models. We will explore how Intel plays a crucial role in powering this technology, from training and fine-tuning to inference across a spectrum of Intel hardware platforms.
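As an illustration of what running such models on Intel hardware can look like, here is a minimal sketch using Intel® Extension for PyTorch and the Hugging Face transformers library; the model name and generation settings are arbitrary examples, not part of the course material:

    import torch
    import intel_extension_for_pytorch as ipex
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # any small causal language model works here; "gpt2" is just an example
    name = "gpt2"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16)
    model.eval()

    # apply IPEX optimizations (operator fusion, bf16 kernels) for CPU inference
    model = ipex.optimize(model, dtype=torch.bfloat16)

    inputs = tokenizer("Generative AI on Intel hardware", return_tensors="pt")
    with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
        output = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))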
- Easily speed up Deep Learning inference – Write once deploy anywhere!
In this session, we will showcase the Intel® Distribution of OpenVINO™ Toolkit, which allows you to optimize models trained with TensorFlow* or PyTorch* for high-performance inference. We will demonstrate how to use it to write once and deploy on multiple Intel hardware platforms.
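By way of illustration, a minimal OpenVINO™ inference sketch could look like the following; the model file name, input shape, and device choice are placeholders rather than course material:

    import numpy as np
    import openvino as ov

    core = ov.Core()
    # "model.xml" stands in for an IR file exported from a TensorFlow or PyTorch model
    model = core.read_model("model.xml")

    # switching devices (e.g. "CPU" -> "GPU") requires no other code changes
    compiled = core.compile_model(model, device_name="CPU")

    # dummy input; the shape must match the model's expected input
    dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
    result = compiled(dummy_input)[compiled.output(0)]
    print(result.shape)

The same script targets different Intel devices simply by changing the device name passed to compile_model, which is the "write once, deploy anywhere" idea behind the session.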
The presentations will be accompanied by demos illustrating the performance improvements.
The course is organised by LRZ in cooperation with Intel.