Shared Cluster Hardware System – Standards for Compute Nodes
To manage the Shared Cluster System effectively and to provide the highest level of computing service to Shared Cluster customers, compute nodes added to the Hoffman2 Cluster must meet minimum standards and must be purchased from our current preferred vendor. The current node configuration is described on the Service Pricing & Ordering Information page.
These standards will be evaluated periodically and updated based on best price/performance.
If you are interested in adding more memory, additional cores, larger hard drives/SSDs, or GPUs to your compute nodes, please let us know.
Campus General Purpose Cluster Nodes
Nodes in the Campus General Purpose Cluster are configured to the same specifications as the rest of the Shared Cluster so that, by prior special agreement, large compute runs can use the entire cluster.
Network and Interconnect
The Hoffman2 Cluster has both an InfiniBand interconnect and a gigabit Ethernet network. The Ethernet network carries traffic to and from the storage system and handles various administrative functions, while InfiniBand is used for inter-node, MPI-type communication. Separating these kinds of traffic across the two interconnects maximizes performance within the cluster.
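The following is a minimal sketch of the kind of inter-node, MPI-type communication that travels over the InfiniBand fabric: each rank reports the node it is running on, and rank 0 collects a value from every other rank. The build and launch commands (mpicc, mpirun) are typical MPI tooling shown only for illustration; they are assumptions here, not Hoffman2-specific instructions.

/* Minimal MPI sketch (illustrative, not a Hoffman2-specific recipe).
 * Point-to-point messages like these are routed over the InfiniBand
 * interconnect; traffic to the storage system stays on Ethernet.
 *
 * Example build/run with a generic MPI toolchain (assumed):
 *   mpicc -O2 ring.c -o ring
 *   mpirun -np 4 ./ring
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, name_len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank   */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */
    MPI_Get_processor_name(host, &name_len);

    printf("rank %d of %d running on %s\n", rank, size, host);

    /* Every non-zero rank sends its rank number to rank 0,
     * which receives and prints each value. */
    if (rank == 0) {
        for (int src = 1; src < size; src++) {
            int value;
            MPI_Recv(&value, 1, MPI_INT, src, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 0 received %d from rank %d\n", value, src);
        }
    } else {
        MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}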
Multiple Cisco 6509-E 288-port GigE switches with a redundant 10Gb uplink to the campus backbone.
Multiple Cisco and QLogic InfiniBand switches.
Storage
A high-performance, fault-tolerant NetApp storage system, using multiple V6290 heads with ES400 storage arrays, provides storage for the Hoffman2 Cluster. On the Hoffman2 Cluster, home directories are served from the NetApp system. Users of the General Purpose Cluster Nodes currently have a 20GB quota on their home directories. The current storage rate is provided on the Service Pricing & Ordering Information page.