Overview
Advanced research has moved beyond the capacity of a single computer: detailed multi-level simulations, data analysis, and large-scale computations now demand more. Because of this increasing complexity and the requirement for sheer computational horsepower, these needs are increasingly met by tightly integrated cluster systems, consuming megawatts of power, that consist of hundreds to hundreds of thousands of processors, terabytes to petabytes of high-performance storage, and high-bandwidth, low-latency interconnects. In addition to powerful hardware, the software that runs on these systems must be written to take advantage of the computational power available on a particular system. This is the domain of High-Performance Computing (HPC).
IDRE-HPC Group
The IDRE-HPC group is a strong team of experienced researchers in High-Performance Computing. The group provides expertise and support that enable scholars to compute effectively on high-end computer systems. IDRE-HPC also supports the Hoffman2 shared cluster and manages the IDRE Cluster Hosting Program for UCLA researchers. These resources meet campus needs for small- to medium-sized cluster computing and can serve as a starting point toward resources at national computing centers.
The IDRE-HPC group offers consulting to members of the UCLA community. To learn more, please visit this page.
Hoffman2 Cluster
UCLA’s shared Hoffman2 Cluster currently comprises 774 64-bit nodes and 7,508 cores, connected by an Ethernet network and an InfiniBand interconnect. The cluster provides a job scheduler; GCC and the best-performing compilers for C, C++, and Fortran 77, 90, and 95 on the current cluster architecture; and applications and software libraries specific to Chemistry, Chemical Engineering, Engineering, Mathematics, Visualization, and Programming, along with an array of miscellaneous software. The cluster’s current peak performance is on the order of 75 trillion double-precision floating-point operations per second (75 TFLOPS). Hoffman2 is currently the largest and most powerful cluster in the University of California system.
Additional Hoffman2 resources for researchers include complete system administration for contributed cores; cluster access through a 10 Gb network interconnect to the campus backbone; high-performance home and scratch storage; the capability to run large parallel jobs that take advantage of the cluster’s InfiniBand interconnect (sketched below); web access to the Hoffman2 Cluster through the UCLA Grid Portal; and access to the BlueArc and Panasas storage systems.
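To illustrate the kind of parallel job that benefits from the InfiniBand interconnect, here is a minimal MPI sketch in Python. It assumes the mpi4py package is available, which is an assumption made for illustration rather than a statement about Hoffman2’s installed software; the same pattern applies to C or Fortran MPI codes built with the cluster’s compilers.

    # minimal_mpi.py -- a minimal MPI sketch (assumes mpi4py is installed;
    # its availability on Hoffman2 is an assumption made for illustration).
    from mpi4py import MPI

    comm = MPI.COMM_WORLD    # communicator spanning all ranks in the job
    rank = comm.Get_rank()   # this process's rank (0 .. size-1)
    size = comm.Get_size()   # total number of ranks

    # Each rank computes a partial sum of 0..999; rank 0 collects the total
    # via a reduction that travels over the cluster's interconnect.
    partial = sum(range(rank, 1000, size))
    total = comm.reduce(partial, op=MPI.SUM, root=0)

    if rank == 0:
        print(f"sum over {size} ranks: {total}")

Such a script would typically be launched through the cluster’s scheduler with an MPI launcher (for example, mpirun -n 32 python minimal_mpi.py inside a batch job); the exact commands depend on the site configuration.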
The cluster is also an endpoint on the Globus online service, using the 10 Gb network connection to the campus backbone, thus providing researchers with a facility for fast and reliable data movement between Hoffman2 and most leadership-class facilities across the US.
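As an illustration, the sketch below scripts a transfer between Hoffman2 and another Globus endpoint using the Globus Python SDK (the globus_sdk package). The client ID, endpoint IDs, and paths are hypothetical placeholders, and the exact authentication flow depends on your Globus account and application registration.

    # Sketch of a Globus transfer using the globus_sdk Python package.
    # The client ID, endpoint IDs, and paths are hypothetical placeholders.
    import globus_sdk

    CLIENT_ID = "YOUR-GLOBUS-APP-CLIENT-ID"  # hypothetical; register your own app

    # Interactive native-app login to obtain a transfer token.
    auth_client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
    auth_client.oauth2_start_flow()
    print("Log in at:", auth_client.oauth2_get_authorize_url())
    code = input("Paste authorization code: ").strip()
    tokens = auth_client.oauth2_exchange_code_for_tokens(code)
    token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

    tc = globus_sdk.TransferClient(
        authorizer=globus_sdk.AccessTokenAuthorizer(token)
    )

    SRC = "hoffman2-endpoint-uuid"  # hypothetical Hoffman2 endpoint ID
    DST = "remote-endpoint-uuid"    # hypothetical remote facility endpoint ID

    tdata = globus_sdk.TransferData(tc, SRC, DST, label="Hoffman2 data move")
    tdata.add_item("/u/home/me/results.tar", "/project/me/results.tar")
    task = tc.submit_transfer(tdata)
    print("Submitted transfer task:", task["task_id"])

Once submitted, Globus manages the transfer asynchronously and retries failed files, which is what makes it well suited to moving large datasets between sites.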
To learn more about the Hoffman2 Cluster, please visit the Hoffman2 Cluster website, where you will find a wealth of information.
For general assistance with any HPC-related topic, please send an email to hpc@ucla.edu.
Dawson2 Cluster
UCLA’s Dawson2 GPU Cluster, ranked 148 on the Top500 list, comprises 96 HP ProLiant SL390 G7 systems, each with dual-socket Intel Xeon X5650 processors, three NVIDIA M2070 graphics processors, and 48 GB of main memory, giving a peak performance of 1.66 double-precision TFLOPS per node. The cluster uses QDR InfiniBand networking for communication and 160 terabytes of high-performance shared Panasas disk space for storage.
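The per-node figure is consistent with commonly cited vendor peak rates (roughly 515 double-precision GFLOPS per M2070 and about 64 GFLOPS per 2.66 GHz six-core X5650 socket; these per-device numbers are assumptions drawn from vendor specifications, not from the description above):

    # Back-of-envelope peak-FLOPS check for one Dawson2 node.
    # Per-device peaks are assumed from vendor specs, not from the text above.
    gpu_dp_gflops = 515.0         # NVIDIA M2070, double precision
    cpu_dp_gflops = 2.66 * 6 * 4  # X5650: 2.66 GHz x 6 cores x 4 DP FLOPs/cycle
    node_gflops = 3 * gpu_dp_gflops + 2 * cpu_dp_gflops
    print(f"per node: {node_gflops / 1000:.2f} TFLOPS")       # ~1.67 TFLOPS
    print(f"96 nodes: {96 * node_gflops / 1000:.1f} TFLOPS")  # ~160 TFLOPS

Across all 96 nodes this works out to roughly 160 TFLOPS of aggregate peak performance.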
Pipeline to National Leadership Class Facilities
IDRE is part of the NSF XSEDE Campus Champions Program, which provides information about national high-performance computing opportunities and resources and assists researchers by:
• Providing information about high performance computing and XSEDE resources
• Assisting in getting researchers access to allocations of high performance computing resources
• Facilitating workshops about the use of high performance computing resources and services
• Providing contacts within the high performance computing community for quick problem resolution
IDRE is part of the San Diego Supercomputer Center’s Triton Affiliates and Partners Program (TAPP) and can assist with scaling issues and provide student help in predicting run times on large computing resources. IDRE also has strong relationships with national centers and programs, including NERSC, NASA, ALCC, and INCITE.
Researchers may join the UCLA IDRE Pipeline group or contact hpc@ucla.edu.