1. Parallel architectures and parallel programming
2. Characteristics of parallel algorithms and parallel problems
3. Conditions of parallelizability, Flynn's taxonomy, Amdahl's and Gustafson's laws (both laws are stated after this list)
4. Methodology for designing parallel programs – decomposition, communication, synchronization, data dependency
5. Parallel programming models, threading model, message-passing model
6. Threading methods – explicit (POSIX Threads, Java, C++) and implicit (OpenMP); a short OpenMP sketch follows the list
7. Distributed memory systems – MPI (a minimal MPI sketch follows the list)
8. MPI (MPICH) – data types, communicators, barriers, semaphores
9. Managing MPI group communication
10. Analytical modelling of parallel systems, analysis of complexity and performance, complexity classes Polylog and P-complete
11. Parallel programming patterns (sorting, searching, graph algorithms, divide and conquer method)
12. Programming multicore graphics processing units
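
Topic 3 covers the two classical scaling laws. In their standard textbook form, with S the speedup, p the parallelizable fraction of the work and N the number of processors (notation chosen here, not taken from the course materials):

```latex
% Amdahl's law: fixed problem size; the serial fraction (1 - p) bounds the speedup.
S_{\mathrm{Amdahl}}(N) = \frac{1}{(1 - p) + \frac{p}{N}}

% Gustafson's law: the problem size grows with N, giving the scaled speedup.
S_{\mathrm{Gustafson}}(N) = (1 - p) + pN
```

For example, with p = 0.9 and N = 16, Amdahl's law gives a speedup of about 6.4, while Gustafson's scaled speedup is 14.5.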
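A minimal sketch of the implicit threading model from topic 6, written in C with OpenMP; the array size, variable names and the sum being computed are illustrative assumptions only, not course material:

```c
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void) {
    static double a[N];
    double sum = 0.0;

    /* OpenMP splits the loop iterations across the available threads. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] = (double)i;

    /* The reduction clause combines per-thread partial sums without explicit locks. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %f (threads available: %d)\n", sum, omp_get_max_threads());
    return 0;
}
```

Typically compiled with an OpenMP-capable compiler, e.g. gcc -fopenmp example.c.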
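A minimal sketch for topics 7 to 9, also in C, showing an MPI communicator, a collective reduction and a barrier; it runs under MPICH or any other MPI implementation, and the values being summed are purely illustrative:

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* rank of this process in the communicator */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* number of processes in the communicator */

    int local = rank + 1;   /* each process contributes its own value */
    int total = 0;

    /* Group communication: sum the local values onto rank 0. */
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    /* Barrier: all processes synchronize before the result is printed. */
    MPI_Barrier(MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %d\n", size, total);

    MPI_Finalize();
    return 0;
}
```

With MPICH this would usually be built and launched as mpicc example.c and mpiexec -n 4 ./a.out.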
Type of methodology: Combination of lectures and hands-on exercises
Participants receive the certificate of attendance: Yes
Paid training activity for participants: Yes, for all
Participants' prerequisite knowledge: Numerical methods (linear algebra, statistics); domain-specific background knowledge