MPI (Message Passing Interface) is a standardized interface for writing portable distributed-memory parallel scientific code. MPI's portability means the same program runs on platforms ranging from laptop computers to massively parallel supercomputers. MPI has been widely used in advanced simulations, data analysis and visualization for the past three decades. An MPI program typically launches a set of processes distributed across multiple CPU cores or compute nodes. Each process performs part of the computation, and the processes communicate with each other as needed. Communication is expressed explicitly in the user code through MPI calls, while the processes themselves are managed by the MPI runtime system, which the user can also configure. This workshop series introduces MPI for scientific computing from a user's perspective (a minimal code sketch follows the session list below):
3. Introduction to PETSc: This session explains the use of the PETSc library, built on top of MPI, as a way to simplify MPI programming for scientific computing. The examples include constructing various types of parallel vectors and matrices for common use cases in scientific computing.
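To make the programming model above concrete, here is a minimal sketch of an MPI program in C. It is an illustration, not part of the workshop materials: each process computes a partial value, and a collective reduction combines the values on one process.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);                  /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes */

        /* Each process contributes a partial result (a stand-in for real work). */
        int local = rank + 1;
        int total = 0;

        /* Combine the partial results on rank 0 with a collective reduction. */
        MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d processes = %d\n", size, total);

        MPI_Finalize();
        return 0;
    }

With a typical MPI installation this would be compiled with mpicc and launched with a command such as mpirun -np 4 ./a.out, though the exact commands vary between MPI implementations.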
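For session 3, a similarly minimal PETSc sketch (assuming a recent PETSc release that provides the PetscCall error-checking macro) creates a parallel vector and lets PETSc decide how its entries are distributed across the MPI processes:

    #include <petscvec.h>

    int main(int argc, char **argv)
    {
        Vec       x;
        PetscReal nrm;

        PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

        /* Create a parallel vector with 100 global entries; PETSc chooses
           how many entries each MPI process owns (PETSC_DECIDE). */
        PetscCall(VecCreate(PETSC_COMM_WORLD, &x));
        PetscCall(VecSetSizes(x, PETSC_DECIDE, 100));
        PetscCall(VecSetFromOptions(x));

        PetscCall(VecSet(x, 1.0));               /* set every entry to 1.0 */
        PetscCall(VecNorm(x, NORM_2, &nrm));     /* collective 2-norm */
        PetscCall(PetscPrintf(PETSC_COMM_WORLD, "||x|| = %g\n", (double)nrm));

        PetscCall(VecDestroy(&x));
        PetscCall(PetscFinalize());
        return 0;
    }

Note that the MPI calls are hidden behind the Vec interface; the same source runs unchanged on one process or many.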
Questions about this workshop series can be emailed to Shao-Ching Huang at sch@ucla.edu.
Presented by the Office of Advanced Research Computing (OARC).