MPI is here to stay: future developments and novel features

Prof. George Bosilca
Innovative Computing Laboratory at the University of Tennessee, Knoxville, USA


Abstract:

For the last three decades, MPI has been the communication layer used by most HPC simulation codes, often described as the 'assembly language of parallel programming'. The MPI standard has evolved alongside successive hardware generations, trying to provide users with a stable yet performant API for efficient access to most hardware capabilities. However, the increased heterogeneity and complexity of systems and applications have created a fracture between the fast-evolving hardware (many-cores, accelerators, smart networks) and what becomes available to users through the MPI API. The MPI Forum focuses explicitly on updating the MPI standard and on defining what the next-generation MPI API will provide. This lecture will go over a few topics (interoperability with other programming standards, architectural capabilities, new collective communications, sessions, and resilience) that are under investigation for inclusion in the next MPI standard, and the impact these extensions will have on how we program parallel applications, as well as on their efficiency on current and future platforms.