◉ 73-node heterogeneous HPC cluster
◉ ~200 TFlops performance for scientific applications
◉ 2 GPU (NVIDIA Tesla V100) development nodes as an experimental platform for GPU computing
◉ 3 DGX-1 nodes (NVIDIA Tesla V100) for AI and ML applications
◉ 750 TB of parallel storage
◉ High-performance 100 Gbps InfiniBand interconnect
Specs | TARA Compute node | TARA High-memory node | TARA GPU node | TARA DGX node |
---|---|---|---|---|
Model: | Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz | Intel(R) Xeon(R) Platinum 8160 CPU @ 2.10GHz | Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz | Intel Xeon E5-2698 v4 2.20 GHz |
Number of Nodes: | 60 | 10 | 2 | 1 |
Sockets: | 2 | 8 | 2 | 2 |
Cores per socket: | 20 | 24 | 20 | 20 |
Total cores per node: | 2 x 20 = 40 | 8 x 24 = 192 | 2 x 20 = 40 | 2 x 20 = 40 |
Hardware threads per core: | 1 | 1 | 1 | 2 |
Hardware threads per node: | 40 x 1 = 40 | 192 x 1 = 192 | 40 x 1 = 40 | 40 x 2 = 80 |
RAM: | 192 GB DDR4 | 3 TB DDR4 | 384 GB DDR4 | 512 GB DDR4 |
L3 Cache: | 27.5 MB | 33 MB | 27.5 MB | 50 MB Smart cache |
GPU: | - | - | 2x NVIDIA Tesla V100 for PCIe | 8x NVIDIA Tesla V100 for NVLink |
Mellanox InfiniBand EDR 100 Gbps low-latency interconnect
IBM Spectrum Scale parallel filesystem: 750 TB of fast SSD for scratch space plus high-capacity SAS disks for data storage
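As a quick sanity check, the per-node figures in the table above can be rolled up into cluster-wide totals. A minimal Python sketch (the node-type labels are illustrative shorthand for the four table columns):

```python
# Per-node-type figures taken directly from the spec table:
# (number of nodes, sockets, cores per socket, hardware threads per core)
node_types = {
    "compute": (60, 2, 20, 1),
    "himem":   (10, 8, 24, 1),
    "gpu":     (2,  2, 20, 1),
    "dgx":     (1,  2, 20, 2),
}

# Total physical cores: nodes x sockets x cores-per-socket, summed over types.
total_cores = sum(n * s * c for n, s, c, _ in node_types.values())

# Total hardware threads: cores x threads-per-core (only the DGX uses 2).
total_threads = sum(n * s * c * t for n, s, c, t in node_types.values())

print(total_cores, total_threads)  # 4440 cores, 4480 hardware threads
```

The 73 nodes in the table (60 + 10 + 2 + 1) thus provide 4,440 physical cores; only the DGX node exposes two hardware threads per core.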