Institute for Digital Research and Education
MPI (Message Passing Interface) is a standardized interface for portable distributed-memory scientific parallel computing. This portability ensures that a properly written, standard-conforming MPI program works the same way on platforms ranging from laptop computers to massively parallel supercomputers. MPI has been widely used in advanced simulation, data analysis and visualization over the last two decades. A typical MPI program launches as a set of processes distributed across multiple CPU cores or compute nodes. Each process performs a part of the computation, and the processes communicate with each other as needed by making MPI function calls. The communication is explicitly controlled by the user code, while the processes are managed by the MPI runtime system, which can also be controlled by the user. This workshop series introduces MPI for scientific computing from a user’s perspective:
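To make the process model concrete, below is a minimal sketch (illustrative only, not workshop material) of an MPI program in C: each process discovers its rank, the total number of processes, and the node it is running on.

/* hello_mpi.c -- a minimal sketch of the MPI process model described above.
 * Each process learns its rank, the total number of processes, and the
 * name of the node it is running on. File and variable names are illustrative. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, name_len;
    char node_name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                       /* start the MPI runtime    */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);         /* this process's id        */
    MPI_Comm_size(MPI_COMM_WORLD, &size);         /* total number of procs    */
    MPI_Get_processor_name(node_name, &name_len); /* which node we landed on  */

    printf("Process %d of %d running on %s\n", rank, size, node_name);

    MPI_Finalize();                               /* shut down the runtime    */
    return 0;
}

Compiled with an MPI wrapper compiler (e.g. mpicc) and launched with mpirun/mpiexec, for example mpirun -n 4 ./hello_mpi, it prints one line per process; how those processes are placed across cores and nodes is exactly the kind of question Part 1 examines.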
– Part 1 (January 21, 2022), running MPI programs, explores various aspects of the MPI runtime/process management system and how it interacts with the job scheduler of an HPC cluster. We will cover both the Intel MPI/MPICH and OpenMPI libraries, and use the Hoffman2 cluster as the target machine. Given an MPI program (either your own code or a community/research code), what can you adjust or control in order to run the code “optimally” on the target machine?
– Part 2 (January 28, 2022), MPI programming, focuses on how to write (basic) MPI programs. We will discuss the basic send/receive MPI communication mechanisms (a minimal sketch appears after this list) and explore their connections to selected problems in scientific computing. We will show examples of calling MPI from different languages, including Fortran, C/C++, Python and Julia.
– Part 3 (February 11, 2022), Introduction to PETSc, will discuss the PETSc library, which is built on top of MPI, among other libraries, to simplify MPI programming for scientific computing. We will explore PETSc’s built-in data structures and solvers and show how to build MPI/PETSc programs that are easier to maintain and develop than “plain” MPI programs; a small sketch appears below.
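As a preview of Part 2, here is a minimal send/receive sketch in C (illustrative only; it needs at least two processes): rank 0 passes a small array of doubles to rank 1.

/* sendrecv_sketch.c -- a sketch of basic point-to-point communication:
 * rank 0 sends an array of doubles to rank 1 with MPI_Send, and rank 1
 * receives it with MPI_Recv. Names and sizes are illustrative. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    double data[4] = {1.0, 2.0, 3.0, 4.0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* send 4 doubles to rank 1, using message tag 0 */
        MPI_Send(data, 4, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        double recv_buf[4];
        MPI_Status status;
        /* receive 4 doubles from rank 0, matching tag 0 */
        MPI_Recv(recv_buf, 4, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);
        printf("Rank 1 received %g %g %g %g\n",
               recv_buf[0], recv_buf[1], recv_buf[2], recv_buf[3]);
    }

    MPI_Finalize();
    return 0;
}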
This is Part 1 of a 3-part series.
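As a preview of Part 3, the following is a small PETSc sketch in C (illustrative, assuming a working PETSc installation): the library creates and distributes a parallel vector across the MPI processes, so the user code never calls MPI_Send/MPI_Recv directly.

/* petsc_vec_sketch.c -- an illustrative sketch, not workshop code:
 * PETSc handles the layout of a parallel vector across MPI processes. */
#include <petscvec.h>

int main(int argc, char **argv)
{
    Vec       x;
    PetscReal norm;

    PetscInitialize(&argc, &argv, NULL, NULL); /* also initializes MPI */

    VecCreate(PETSC_COMM_WORLD, &x);
    VecSetSizes(x, PETSC_DECIDE, 100);  /* 100 global entries, layout chosen by PETSc */
    VecSetFromOptions(x);
    VecSet(x, 1.0);                     /* fill the distributed vector with 1.0 */

    VecNorm(x, NORM_2, &norm);          /* a collective reduction over all processes */
    PetscPrintf(PETSC_COMM_WORLD, "2-norm of x: %g\n", (double)norm);

    VecDestroy(&x);
    PetscFinalize();
    return 0;
}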
Registration: https://ucla.zoom.us/meeting/register/tJArdOivrTwvG9fq8niOtdDXf3xQqXvT3sVi