Shared Cluster Hardware System – Standards for Compute Nodes
To manage the Shared Cluster System effectively and to provide the highest level of computing service to Shared Cluster customers, compute nodes added to the Hoffman2 Cluster must meet minimum standards and be purchased from our current preferred vendor. As of September 2016 the standards are:
- Current preferred vendor and model: Silicon Mechanics SYS-F618R3-FT+
- Half-width “dual” nodes mounted in an 8-node chassis
- Dual twelve-core 2.2GHz Intel Xeon E5-2650 v4 CPUs (24 cores per node)
- 128GB of memory per node (8 x 16GB DDR4-2400 ECC)
- One Intel 800GB DC S3510 Series MLC (6Gb/s, 0.3 DWPD) 2.5″ SATA SSD per node
- Gigabit Ethernet port
- FDR InfiniBand interconnect card available at extra cost if required.
- 5-year warranty
- For a quote please contact Bill Labate, email@example.com, X67323.
These standards will be evaluated periodically and updated based on best price/performance.
If you are interested in adding memory, cores, larger hard drives/SSDs, or GPUs to your compute nodes, please let us know.
Campus General Purpose Cluster Nodes
The Campus General Purpose Cluster consists of 256 cores on nodes configured to the same specifications as the rest of the Shared Cluster. This uniform configuration allows, by prior special agreement, large compute runs to use the entire cluster.
Network and Interconnect
The Hoffman2 Cluster has both an InfiniBand interconnect and a gigabit Ethernet network. The Ethernet network is dedicated to traffic in and out of the storage system and also handles various administrative functions; InfiniBand carries inter-node, MPI-type communication. Separating traffic across these two networks maximizes performance within the cluster.
- Multiple Cisco 6509-E 288-port GigE switches with a redundant 10Gb uplink to the campus backbone
- Multiple Cisco and QLogic InfiniBand switches
On the Hoffman2 Cluster, home directories are served from a NetApp storage system. Users of the General Purpose Cluster Nodes currently have a 20GB quota on their home directories. Users of the Shared Cluster System can purchase additional storage at the current (April 2014) rate of $350 per year, per terabyte.
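As a quick sketch of the storage pricing above, the following short Python snippet estimates the total cost of purchased storage; the $350 per terabyte, per year figure is the April 2014 rate quoted above, and the function name is illustrative, not part of any cluster tooling.

```python
# Estimate the cost of additional Shared Cluster storage.
# Rate from the text above: $350 per terabyte, per year (April 2014 rate).
RATE_PER_TB_PER_YEAR = 350  # USD

def storage_cost(terabytes, years=1):
    """Return the total cost in USD for `terabytes` of storage over `years`."""
    return terabytes * years * RATE_PER_TB_PER_YEAR

# For example, 4 TB purchased for a 3-year project:
print(storage_cost(4, years=3))  # 4 * 3 * 350 = 4200
```

Storage is billed per terabyte per year, so the cost scales linearly in both the amount of storage and the duration of the purchase.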
See Panasas Storage System: redundancy and usable space for additional hardware details.