The emergence of new technologies, particularly Artificial Intelligence, brings significant opportunities for organizations, but also a set of important and non-trivial challenges. For many SMEs, for example, the main obstacle to adopting Artificial Intelligence is not access to GPUs or advanced infrastructure, but the risk of investing time and budget in AI initiatives that never move beyond experimentation.
For this reason, EuroCC Italy is organizing a new training course aimed at introducing a test-before-invest approach to the design and operation of AI workloads in HPC and cloud environments: “A test-before-invest approach for scalable and controllable AI”, scheduled for 26 February at the BI-REX premises in Bologna.
A test-before-invest approach for scalable and controllable AI: a new EuroCC Italy course
When: 26 February 2026, 9.00 a.m. – 1.00 p.m.
Where: BI-REX, Via Paolo Nanni Costa 14 (Bologna)
The objective of the course—designed and developed within the EuroCC Italy project—is not to focus on specific platforms or tools, but rather to provide practical design patterns and operational principles. In doing so, organizations will be able to:
- validate AI workloads at an early stage
- keep execution costs under control
- make informed decisions before scaling their investments
Participants will learn how to structure AI workloads so they can be tested under real conditions – long-running jobs, growing datasets, and limited computational budgets – while remaining reproducible and observable. A guided design exercise will help participants translate these concepts into their own use cases, identifying technical and operational trade-offs and defining clear criteria to decide when an AI workload is ready to scale—and when it is not.
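To give a flavour of what "tested under real conditions" can mean in practice, here is a minimal sketch, not taken from the course material (names such as `train_one_epoch` and `checkpoint.json` are purely illustrative), of a long-running training loop that checkpoints its state after every epoch and stops cleanly when a wall-clock budget is exhausted, so the same run can be resumed later or promoted to a larger allocation:

```python
import json
import time
from pathlib import Path

CHECKPOINT = Path("checkpoint.json")  # illustrative state file
BUDGET_SECONDS = 2 * 3600             # wall-clock budget for this test run
MAX_EPOCHS = 100

def train_one_epoch(epoch: int) -> float:
    """Stand-in for one unit of real training work."""
    time.sleep(1)             # placeholder for actual computation
    return 1.0 / (epoch + 1)  # dummy, decreasing loss

def load_state() -> dict:
    """Resume from the last checkpoint if one exists."""
    if CHECKPOINT.exists():
        return json.loads(CHECKPOINT.read_text())
    return {"epoch": 0, "best_loss": float("inf")}

def main() -> None:
    state = load_state()
    start = time.monotonic()
    while state["epoch"] < MAX_EPOCHS:
        # Stop before the budget runs out, leaving a resumable checkpoint
        # instead of being killed mid-epoch by the scheduler.
        if time.monotonic() - start > BUDGET_SECONDS:
            print(f"Budget reached at epoch {state['epoch']}; state saved.")
            break
        loss = train_one_epoch(state["epoch"])
        state["epoch"] += 1
        state["best_loss"] = min(state["best_loss"], loss)
        CHECKPOINT.write_text(json.dumps(state))  # persist after every epoch

if __name__ == "__main__":
    main()
```

Because the loop persists its state, an interrupted or budget-capped test run loses at most one epoch of work; this is the kind of fault-tolerance and cost-control property the course addresses.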
Why attend
By the end of the training, participants will be able to:
- Understand why many AI workloads fail to scale beyond the proof-of-concept phase in HPC and cloud environments
- Distinguish between different types of AI workloads (exploratory, batch, production-like) and select appropriate execution environments
- Design hybrid HPC–cloud architectures using reproducible and container-based execution models
- Execute and manage long-running AI jobs, including fault tolerance and resource management considerations
- Identify key observability signals for AI workloads, covering jobs, data, models, and outputs (a brief sketch follows this list)
- Understand the principles of AI model lifecycle management, including versioning, monitoring, and re-training strategies
- Evaluate architectural and operational trade-offs in real-world AI workflows
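As an illustration of the observability point above, the following minimal, hypothetical sketch (the `emit` helper and all field names are our assumptions, not part of the course syllabus) writes structured signals covering jobs, data, models, and outputs as JSON lines, a format most log pipelines can ingest:

```python
import json
import sys
import time

def emit(signal_type: str, **fields) -> None:
    """Write one structured observability record to stdout (or a log shipper)."""
    record = {"ts": time.time(), "type": signal_type, **fields}
    json.dump(record, sys.stdout)
    sys.stdout.write("\n")

# Hypothetical signals for one evaluation step of an AI workload:
emit("job", job_id="exp-042", status="running", gpu_hours_used=1.5)
emit("data", dataset="train-v3", rows=120_000, schema_hash="ab12cd")
emit("model", name="classifier", version="0.3.1", val_accuracy=0.87)
emit("output", predictions=512, flagged_low_confidence=14)
```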
Topics
The training programme focuses on the following key topics:
- Hybrid HPC–cloud architectures
- Execution strategies for GPU-intensive workloads
- The fundamentals of observability and model lifecycle management, enabling teams to understand not only how models run, but how well they perform over time (a brief sketch follows this list)
- A guided design exercise to translate these concepts into use cases
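As a sketch of what lifecycle management can look like at its simplest, the hypothetical rule below (the `ModelVersion` record and the threshold are illustrative assumptions, not the course's method) compares a deployed model's live accuracy against its benchmark at release time and flags when re-training should be scheduled:

```python
from dataclasses import dataclass

@dataclass
class ModelVersion:
    version: str
    val_accuracy: float  # accuracy measured at release time

def needs_retraining(released: ModelVersion,
                     live_accuracy: float,
                     tolerance: float = 0.05) -> bool:
    """Flag re-training when live accuracy drifts below the release benchmark."""
    return live_accuracy < released.val_accuracy - tolerance

current = ModelVersion(version="1.2.0", val_accuracy=0.91)
print(needs_retraining(current, live_accuracy=0.84))  # True: schedule re-training
print(needs_retraining(current, live_accuracy=0.89))  # False: keep serving
```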
Target audience
This training is designed for a mixed technical and applied audience, including:
- Researchers and PhD students working on AI-driven scientific workflows on HPC infrastructures
- AI and data science practitioners who need to operationalize models beyond experimentation
- Technical staff and system engineers involved in supporting AI workloads on HPC or hybrid infrastructures
- Innovation managers and R&D coordinators seeking to better understand architectural and operational implications of AI projects
- SMEs and applied research teams using EuroHPC or national HPC resources for AI and data-intensive workloads
Prerequisites
A basic familiarity with AI or data-driven workflows is recommended; deep expertise in HPC systems is not required.
Instructor
The course will be delivered by Alessandro Chiarini, Senior Consultant – Life Science.
Registration
Register now: complete the registration form at the dedicated link.