Institute for Digital Research and Education
MPI (Message Passing Interface) is a standardized interface for writing portable distributed-memory parallel scientific code. Portability means that the same MPI program runs on platforms ranging from laptop computers to massively parallel supercomputers. MPI has been widely used in advanced simulation, data analysis and visualization for the past three decades. An MPI program typically launches a set of processes distributed across multiple CPU cores or compute nodes. Each process performs a part of the computation, and the processes communicate with each other as needed. Communication is controlled explicitly by the user code (through MPI calls), while the processes themselves are managed by the MPI runtime system, which the user can also configure. This workshop series introduces MPI for scientific computing from a user’s perspective:
– 2. MPI programming: This session demonstrates the basic use of MPI in a program. The examples include calling MPI from Fortran, C/C++, Python and Julia to send and receive data across multiple processes.
To register: https://ucla.zoom.us/meeting/register/tJwqcOGupzgrEt3QJUPLfOV-IerYwH9X_DDD
Any questions about this workshop can be emailed to Shao-Ching Huang at sch@ucla.edu.
Presented by the Office of Advanced Research Computing (OARC).