21-25 October 2019
UTC timezone

Training Course: Parallel Computing


Teacher: Fabio Pitari


The aim of this course is to give an introduction to parallel programming for distributed- and shared-memory system architectures. The basic functionality of two widely used parallel programming models will be presented: the MPI (Message Passing Interface) library for distributed architectures and the OpenMP API for shared-memory and multicore architectures. Combining these two parallel paradigms makes it possible to write programs that run efficiently on a wide range of high-performance systems, from up-to-date workstations to supercomputers equipped with thousands of processors.

MPI is a library specification that provides a powerful and portable way to write parallel programs that exploit distributed computing nodes through the system interconnect. OpenMP is a portable, scalable model that gives programmers a simple and flexible interface for developing multithreaded applications based on a fork/join mechanism.
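The fork/join mechanism can be sketched in a few lines of C. This is only an illustrative sketch, not course material: the function name and data are invented, and the code assumes a compiler with OpenMP support (e.g. gcc with -fopenmp); without that flag the pragma is simply ignored and the loop runs serially, producing the same result.

```c
#include <stddef.h>

/* Sum the elements of an array. With OpenMP enabled (-fopenmp), the
 * parallel directive forks a team of threads, the for worksharing
 * construct splits the iterations among them, and the reduction
 * clause combines each thread's partial sum at the implicit join.
 * Without OpenMP the pragma is ignored and the loop runs serially. */
double array_sum(const double *a, size_t n) {
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < (long)n; i++)
        sum += a[i];
    return sum;
}
```

When built with -fopenmp, the number of threads in the forked team can be controlled with the OMP_NUM_THREADS environment variable, one of the runtime controls covered in the course.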

Implementations of both MPI and OpenMP are available for all modern computing architectures. Programs can be written in C/C++ or Fortran. Part of the course will be devoted to practical sessions in which students apply the concepts just learned.


By the end of the course the student will be able to:

  • understand distributed parallel programming
    • manage Point-to-Point communications in MPI
    • manage Collective communications in MPI
  • understand shared memory parallel programming
    • understand the OpenMP fork/join execution model
    • use OpenMP compiler directives: parallel regions, data scope, worksharing
    • use OpenMP environment variables and the runtime library
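The MPI outcomes above can be sketched in a minimal C program. This is an illustrative sketch, not course material: it pairs a point-to-point exchange (MPI_Send/MPI_Recv) with a collective reduction (MPI_Reduce), and it must be built with an MPI implementation (e.g. via mpicc) and launched with a process launcher such as mpirun.

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total process count */

    /* Point-to-point: rank 0 sends one integer to rank 1 (if present). */
    if (size > 1) {
        if (rank == 0) {
            int payload = 42;
            MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int payload;
            MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d from rank 0\n", payload);
        }
    }

    /* Collective: sum one value per rank onto rank 0. */
    int local = rank + 1, total = 0;
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum over %d ranks = %d\n", size, total);

    MPI_Finalize();
    return 0;
}
```

A typical invocation would be `mpicc example.c -o example` followed by `mpirun -n 4 ./example`; the guard on `size` lets the same binary also run correctly with a single process.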

Target audience:
Students and researchers interested in developing or optimizing parallel programs in either shared- or distributed-memory computing environments.

Prerequisites:
Good knowledge of and experience with C or Fortran. Good experience with UNIX operating systems.