The HPC² provides substantial high-performance computing resources for use by its member centers.
Orion is a Dell EMC PowerEdge C6420 cluster composed of 1,800 compute nodes, each with dual 20-core Xeon Gold 6148 processors (2.4 GHz base frequency), and a total of 345 TB of RAM. Eight of these nodes are large-memory nodes with 384 GB of RAM each; the remaining 1,792 nodes contain 192 GB each. The nodes are interconnected via Mellanox HDR100 InfiniBand (100 Gb/s). Orion has a peak performance of nearly 5.5 petaFLOPS and debuted on the June 2019 TOP500 Supercomputer Sites list as the 62nd most powerful computer in the world and the 4th fastest in U.S. academia. However, for technical reasons the initial ranking was achieved using only 85% of the available nodes. A later benchmarking run using nearly all of the nodes placed Orion on the November 2019 TOP500 Supercomputer Sites list as the 60th most powerful computer in the world and the 5th fastest in U.S. academia.
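The quoted peak is consistent with a simple back-of-envelope estimate (nodes × cores per node × clock × FLOPs per cycle). The sketch below, in Python, assumes 32 double-precision FLOPs per core per cycle, i.e. two AVX-512 FMA units per Skylake-SP core, and uses the 2.4 GHz base clock; neither figure is stated in the text.

    # Back-of-envelope check of Orion's quoted peak performance.
    # Assumption: 32 double-precision FLOPs per core per cycle (two
    # AVX-512 FMA units on the Skylake-SP Xeon Gold 6148), at the
    # 2.4 GHz base clock rather than a derated AVX-512 clock.
    nodes = 1800
    cores_per_node = 2 * 20          # dual 20-core Xeon Gold 6148
    clock_hz = 2.4e9                 # base frequency
    flops_per_cycle = 32             # assumption, see above

    peak = nodes * cores_per_node * clock_hz * flops_per_cycle
    print(f"{peak / 1e15:.2f} petaFLOPS")    # -> 5.53 petaFLOPS ("nearly 5.5")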
Shadow is a Cray CS300-LC cluster with 4,800 Intel Ivy Bridge processor cores and 28,800 Intel Xeon Phi cores. Each node has 512 GB of RAM (45% of nodes), 128 GB (45%), or 64 GB (10%), for a system total of 70 TB of RAM. The nodes are interconnected via FDR InfiniBand (56 Gb/s). Shadow has a peak performance of 593 teraFLOPS (trillion calculations per second). Shadow is the first production system of its kind and uses an innovative direct, warm-water cooling system, which allows it to be cooled with water as warm as 104 degrees Fahrenheit (40 degrees Celsius). On the June 2015 TOP500 Supercomputer Sites list, Shadow was ranked as the 143rd fastest computer in the world and the 11th most powerful computer at any academic site in the United States. It was also the 16th most energy-efficient supercomputer in the world according to the June 2015 Green500 list.
Scout is a Dell C8220 cluster with 2,688 Intel Sandy Bridge processor cores. Each of the 168 nodes contains dual Xeon E5-2680 processors (2.7 GHz base, 3.5 GHz turbo, 8 cores each) and 32 GB of memory, for a system total of roughly 5 TB. The nodes are interconnected via FDR InfiniBand (56 Gb/s). Eight of the nodes also contain one Nvidia K20 GPU each.
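For reference, Scout's system-wide totals follow directly from the per-node figures above; a quick check in Python (numbers taken from the text, with a decimal terabyte conversion):

    # Scout's core and memory totals from the per-node specifications.
    nodes = 168
    cores_per_node = 2 * 8                   # dual 8-core Xeon E5-2680
    ram_per_node_gb = 32

    print(nodes * cores_per_node)            # -> 2688 cores
    print(nodes * ram_per_node_gb / 1000)    # -> 5.376 TB, quoted as roughly 5 TB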
Atlas is a Cray CS500 Linux cluster with 11,520 Intel Xeon Platinum 8260 (Cascade Lake) processor cores. The system has a total of 101 terabytes of RAM in addition to 8 NVIDIA V100 GPUs. The nodes are interconnected via Mellanox HDR100 InfiniBand (100 Gb/s). The peak performance is 565 teraFLOPS. Atlas is a collaboration between Mississippi State University and the U.S. Department of Agriculture's Agricultural Research Service (ARS).
Hercules is a Dell PowerEdge C6520 Linux cluster with 512 compute nodes, totaling 40,960 2.3 GHz Intel Xeon Platinum 8380 processor cores and 256 terabytes of RAM. The nodes are connected via NVIDIA Mellanox NDR InfiniBand (400 Gb/s). Hercules has a peak performance of 3.01 petaFLOPS and debuted at #369 overall on the November 2022 TOP500 list.
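Hercules's quoted peak matches the same back-of-envelope estimate used for Orion above, again under the assumption of 32 double-precision FLOPs per core per cycle (two AVX-512 FMA units, here on the Ice Lake-SP Xeon Platinum 8380) at the base clock:

    # Same peak-FLOPS estimate for Hercules. Assumption: 32
    # double-precision FLOPs per core per cycle (two AVX-512 FMA
    # units on the Ice Lake-SP Xeon Platinum 8380), at 2.3 GHz base.
    cores = 40960
    clock_hz = 2.3e9
    flops_per_cycle = 32                     # assumption, see above

    peak = cores * clock_hz * flops_per_cycle
    print(f"{peak / 1e15:.2f} petaFLOPS")    # -> 3.01 petaFLOPS

In addition to these systems, the HPC² has many smaller x86 Windows- and Linux-based servers that provide computational services for various groups and individual projects.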