Institute for Digital Research and Education
Optimization and parallel scalability of software are essential for high-performance computing. The Department of Energy, the National Science Foundation, and the National Aeronautics and Space Administration support an ecosystem of “leadership class” computing facilities that includes several of the world’s most advanced supercomputers as well as high-end visualization and data analysis resources. Access to these resources is free to researchers who can demonstrate that their research topic requires them and that their software can use them effectively.
The IDRE Pipeline program aims to help UCLA researchers transition from local resources such as Hoffman2 to these freely available “leadership class” computing facilities. A first step in optimizing software and improving its parallel scalability, so that it can run effectively on leadership class facilities, is profiling: identifying how much computing time is spent in each subroutine, how well the code performs on a single computing core, and how well it scales across many cores.
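As a rough illustration of what subroutine-level profiling looks like (this sketch is not part of the workshop materials, and the specific tools covered in the session may differ), the C program below times a single hot routine with a wall-clock timer; the comments also note how a compiler-assisted profiler such as gprof can produce a per-function breakdown without hand instrumentation.

/*
 * Illustrative profiling sketch (assumed example, not official course code).
 * Two common ways to see where time goes:
 *   1. Compile with -pg and run gprof for a per-function breakdown:
 *        gcc -pg example.c -o example && ./example && gprof example gmon.out
 *   2. Instrument hot routines by hand with a wall-clock timer, as below.
 */
#include <stdio.h>
#include <time.h>

/* A deliberately expensive routine standing in for a real compute kernel. */
static double heavy_kernel(long n)
{
    double sum = 0.0;
    for (long i = 1; i <= n; i++)
        sum += 1.0 / (double)i;   /* partial harmonic series */
    return sum;
}

int main(void)
{
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    double result = heavy_kernel(100000000L);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double elapsed = (t1.tv_sec - t0.tv_sec)
                   + (t1.tv_nsec - t0.tv_nsec) / 1e9;

    printf("result = %f, kernel time = %.3f s\n", result, elapsed);
    return 0;
}

Manual timing like this answers "how long does this one routine take on one core"; repeating the measurement with the same problem size on increasing core counts is the simplest way to start assessing parallel scalability.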
The purpose of this session is to describe a few basic profiling techniques and to help UCLA researchers gain more insight into how well their application code runs on Hoffman2 or on even larger systems.
In particular, we focus on:
Learning profiling techniques that are available on Hoffman2.
Providing hands-on examples – bring your own software.
Prerequisites: a user account on Hoffman2 (apply here if you are not already a user), your own laptop to access the Hoffman2 cluster, and your software on Hoffman2.
RSVP: http://cfapps.ats.ucla.edu/cfapps/events/rsvp/RSVPNow.cfm?EveID=3414&SecID=3403