Tetralith is NSC’s largest HPC cluster. It replaced NSC’s previous HPC cluster Triolith in 2018. Tetralith was funded by SNIC and is used for research by Swedish research groups.
On January 1st, 2023, SNIC ended and funding of Tetralith was taken over by NAISS.
Access to Tetralith is granted by NAISS.
Tetralith consists of 1908 compute nodes, each with two Intel Xeon Gold 6130 CPUs with 16 CPU cores each, giving a total of 61056 CPU cores. The performance of the complete system is around 3 Pflop/s (LINPACK Rmax).
In June 2019, Tetralith was placed 74th on the TOP500 List.
Number of nodes | CPU type | CPU cores | Memory (RAM) | GPU | Local disk, type | Local disk, usable space for jobs
---|---|---|---|---|---|---
1674 | 2x Intel Xeon Gold 6130 | 32 | 96 GiB | - | SSD 240 GB | 210 GiB
170 | 2x Intel Xeon Gold 6130 | 32 | 96 GiB | NVIDIA® T4 | NVMe 2 TB | 1844 GiB
60 | 2x Intel Xeon Gold 6130 | 32 | 384 GiB | - | SSD 960 GB | 874 GiB
4 | 2x Intel Xeon Gold 6130 | 32 | 384 GiB | - | SSD 240 GB | 210 GiB
All nodes have a local disk where applications can store temporary files. The size of this disk (available to jobs as `/scratch/local`) is shown in the table above. This disk space is shared between all jobs running on the node. If you want to ensure you have access to all the local disk space in a node, you need to use the Slurm option `--exclusive`.
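As a minimal sketch of how this is typically used, the following batch script stages its work on the node-local disk and copies the results back to shared storage before the job ends. The job name, account, file names and application are hypothetical placeholders.

```bash
#!/bin/bash
#SBATCH -J scratch-demo    # job name (placeholder)
#SBATCH -A naiss-xxx-yy    # hypothetical project/account name
#SBATCH -t 00:30:00
#SBATCH -n 1

# Use a job-specific subdirectory on the node-local scratch disk,
# since /scratch/local is shared between all jobs on the node.
WORKDIR=/scratch/local/$SLURM_JOB_ID
mkdir -p "$WORKDIR"
cd "$WORKDIR"

# Stage input, run a hypothetical solver, then copy the results back
# to permanent storage before the job finishes.
cp "$SLURM_SUBMIT_DIR/input.dat" .
./my_solver input.dat > output.dat
cp output.dat "$SLURM_SUBMIT_DIR/"
```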
To request a node with a specific disk size for your job, use the Slurm option `-C diskS` (for the 240 GB SSD), `-C diskM` (for the 960 GB SSD) or `-C diskL` (for the 2 TB NVMe).
To request compute nodes with a certain amount of RAM, you can use `-C thin --exclusive` (96 GiB) or `-C fat --exclusive` (384 GiB).
These options can also be combined, e.g. `-C 'fat&diskM'` to request a node with 384 GiB RAM and a 960 GB local disk.
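For illustration, here is a minimal batch script requesting a whole fat node with the medium-sized local disk, combining the options above. The job name, account and application are hypothetical placeholders.

```bash
#!/bin/bash
#SBATCH -J fat-node-demo    # job name (placeholder)
#SBATCH -A naiss-xxx-yy     # hypothetical project/account name
#SBATCH -t 01:00:00
#SBATCH -N 1
#SBATCH --exclusive         # reserve the whole node (all RAM and local disk)
#SBATCH -C "fat&diskM"      # 384 GiB RAM and a 960 GB local SSD

# Hypothetical memory-hungry application using the whole node.
./my_large_memory_app
```

The feature tags each node advertises can be inspected with standard Slurm tooling, e.g. `sinfo -o '%N %f'`.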
All Tetralith nodes are interconnected with a 100 Gbps Intel Omni-Path network, which is also used to connect to the existing disk storage. The Omni-Path network is similar to the FDR InfiniBand network in Triolith (e.g. it still uses a fat-tree topology).
The hardware was delivered by ClusterVision B.V.
The servers used are Intel HNS2600BPB compute nodes, hosted in the 2U Intel H2204XXLRE chassis and each equipped with two Intel Xeon Gold 6130 CPUs for a total of 32 CPU cores per compute node.
There are 170 nodes in Tetralith equipped with one NVIDIA Tesla T4 GPU each, as well as an upgraded, high-performance 2 TB NVMe SSD scratch disk. The nodes are regular Tetralith thin nodes that have been retrofitted with the GPUs and disks, and they are accessible to all of Tetralith’s users. For details of this addition, see the Tetralith GPU User Guide.
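As a rough sketch, a job wanting one of the T4 nodes might use a generic Slurm GPU request such as the one below; the exact options NSC recommends on Tetralith are documented in the Tetralith GPU User Guide, and the job name, account and application here are hypothetical placeholders.

```bash
#!/bin/bash
#SBATCH -J gpu-demo        # job name (placeholder)
#SBATCH -A naiss-xxx-yy    # hypothetical project/account name
#SBATCH -t 00:30:00
#SBATCH -n 1
#SBATCH --gpus=1           # generic Slurm GPU request; check the Tetralith
                           # GPU User Guide for the recommended options

# Hypothetical GPU application; the 2 TB NVMe scratch disk on these
# nodes is available as /scratch/local, as described above.
./my_cuda_app
```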
GPU nodes are also available on other NAISS and NSC systems, e.g. Berzelius, Sigma, Dardel-GPU and Alvis.