Institute for Digital Research and Education
MPI (Message Passing Interface) is a standardized interface for writing portable distributed-memory parallel scientific code. This portability ensures that the same MPI program behaves consistently across platforms, from laptop computers to massively parallel supercomputers. MPI has been widely used in advanced simulation, data analysis, and visualization over the last three decades. An MPI program typically launches a set of processes distributed across multiple CPU cores or compute nodes. Each process performs a part of the computation, and the processes communicate with each other as needed. Communication is explicitly controlled by the user code through MPI calls, while the processes themselves are managed by the MPI runtime system, which the user can also control. This workshop series introduces MPI for scientific computing from a user's perspective:
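As a sketch of this process model, the following minimal C program (an illustrative hello-world, not material from the workshop) launches as a set of identical processes; each queries its own rank and the total process count from the MPI runtime:

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id (0..size-1) */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut down the runtime */
    return 0;
}
```

Every process executes the same code; work is divided by branching on the rank, and data is exchanged with explicit MPI calls such as MPI_Send and MPI_Recv.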
1. Running MPI programs: This session covers the practical aspects of using the MPI runtime and process-management system, and how it interacts with the job scheduler of an HPC cluster, using the Hoffman2 Cluster as an example.
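As an illustration of the typical launch workflow (generic commands; the module names and scheduler directives used on Hoffman2 may differ), an MPI program is compiled with a wrapper compiler and started through the runtime's process launcher:

```shell
# Compile with the MPI wrapper compiler provided by the MPI installation
mpicc hello_mpi.c -o hello_mpi

# Launch 4 processes; on a cluster this command usually runs inside a
# scheduler job, which tells the MPI runtime which nodes and cores to use
mpirun -np 4 ./hello_mpi
```

On a managed cluster these commands are normally placed in a batch job script, and the scheduler's resource request (number of cores/nodes) determines how many processes the MPI runtime may start.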
Any questions about this workshop can be emailed to Shao-Ching Huang at sch@ucla.edu.
Presented by the Office of Advanced Research Computing (OARC).