5628 Math Sciences
May 9, 2013, 2:00pm – 4:00pm
We will discuss how to run existing parallel programs (also known as “code” or “apps”) efficiently on parallel computers, such as the Hoffman2 cluster. We will explain in detail how to map threads (shared-memory programming) or processes (distributed-memory programming) to physical CPU resources, and how to apply these techniques when working with a job scheduler. Proper mapping of hardware resources to computational tasks can significantly improve performance “for free”. Specific examples of running MPI and OpenMP programs (either a user’s own code or pre-built packages) on the Hoffman2 cluster will be given and explained; the same principles may be applied to other computer clusters. This class is complementary to “Introduction to MPI” and “Introduction to OpenMP”.
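As a rough illustration of the kind of resource mapping the class covers, the sketch below shows an SGE-style job script that matches the number of OpenMP threads (or MPI processes) to the number of slots granted by the scheduler. The parallel-environment name and program names here are placeholders, not the actual Hoffman2 settings; the class will cover the correct options for the cluster.

    #!/bin/bash
    # Sketch of an SGE-style job script (directive syntax and PE name are
    # assumptions; consult your cluster's documentation for the real values).
    #$ -cwd
    #$ -l h_rt=01:00:00
    #$ -pe shared 8            # request 8 slots on one node (PE name is hypothetical)

    # OpenMP: run one thread per granted slot so threads map onto the
    # CPU cores the scheduler actually reserved for this job.
    export OMP_NUM_THREADS=$NSLOTS
    ./my_openmp_program        # placeholder executable name

    # MPI alternative: launch one process per granted slot instead.
    # mpirun -np $NSLOTS ./my_mpi_program

The key idea is that the thread or process count comes from the scheduler (here via the standard SGE variable $NSLOTS) rather than being hard-coded, so the job never oversubscribes or underuses the cores it was assigned.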