NSC's director Patrick Norman in front of Triolith. Photo by Göran Billeson.
Triolith was NSC’s flagship system from 2012 to 2018. It was named after two previous systems with similar architectures: Monolith and Neolith.
Triolith was equipped with a fast interconnect (Mellanox InfiniBand FDR) for high performance when running massively parallel applications, but its cost-efficient design also made it suitable for smaller jobs.
When retired in 2018, Triolith had a combined peak performance of 260 Teraflop/s, 16,368 compute cores, 35 Terabytes of memory, and 56 Terabit/s of aggregate network bandwidth. The core count is consistent with the node inventory in the hardware table below; see the check after this introduction.
The system was based on HP’s Cluster Platform 3000 with SL230s Gen8 compute nodes, and was delivered by GoVirtual AB in 2012 (1200 nodes) and 2013 (expanded to 1600 nodes). In 2017 it was reduced in size to 1017 nodes.
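As a quick consistency check, derived purely from the node counts listed in the hardware table below, the quoted core count adds up: the five dual-socket node types each have 16 cores, and the two “huge” nodes have 64 cores each:

$$
\underbrace{(896 + 48 + 7 + 12 + 52)}_{\text{dual-socket nodes}} \times 16 \;+\; \underbrace{2 \times 64}_{\text{huge nodes}} = 16\,240 + 128 = 16\,368\ \text{cores}.
$$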
| Component | Specification |
| --- | --- |
| Hardware | HP Cluster Platform 3000 with SL230s Gen8 compute nodes and DL980 Gen7 “huge” nodes |
| Processors | 8-core Intel Xeon E5-2660 “Sandy Bridge” at 2.2 GHz; 8-core Intel Xeon E7-2830 at 2.13 GHz (“huge” nodes) |
| Number of compute nodes | 1017 |
| Compute node (thin) | 2 sockets (16 cores) with 32 GB DDR3 1600 MHz memory (896 nodes) |
| Compute node (fat) | 2 sockets (16 cores) with 128 GB DDR3 1600 MHz memory (48 nodes) |
| Compute node (gpu/phi) | 2 sockets (16 cores) with 64 GB DDR3 1600 MHz memory (7 nodes: 4 with GPUs, 3 with Intel Xeon Phi) |
| Compute node (huge) | 8 sockets (64 cores) with 1 TB memory (2 nodes) |
| Analysis nodes (DCS) | 2 sockets (16 cores) with 256 GB DDR3 1600 MHz memory (12 nodes) |
| WLCG grid nodes | 2 sockets (16 cores) with 64 or 32 GB DDR3 1600 MHz memory (52 nodes) |
| Login nodes | 2 HP ProLiant DL380p Gen8 servers, accessible via SSH and ThinLinc |
| High-speed interconnect | Mellanox InfiniBand FDR (~1 µs MPI latency, ~7 GB/s MPI bandwidth; see the sketch below the table) |
| Node scratch storage | 500 TB in total (500 GB per node) |
| Global file system | NSC’s Centre Storage system (shared between Triolith and Gamma) |
| Operating system | CentOS Linux 6 |
| Batch queue system | Slurm |
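To illustrate how interconnect figures like the ~7 GB/s MPI bandwidth above were typically measured, here is a minimal MPI ping-pong sketch. It is a generic illustration, not an NSC-provided benchmark; the message size and iteration count are arbitrary choices, and real measurements (for instance with the OSU micro-benchmarks) sweep over many message sizes.

```c
/* Minimal MPI ping-pong bandwidth sketch (hypothetical example, not an
 * NSC benchmark). Measures point-to-point bandwidth between rank 0 and
 * rank 1 -- the kind of test behind figures such as the ~7 GB/s MPI
 * bandwidth quoted for Triolith's FDR InfiniBand fabric. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int msg_bytes = 1 << 22;   /* 4 MiB message (arbitrary choice) */
    const int iterations = 100;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    char *buf = malloc(msg_bytes);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iterations; i++) {
        if (rank == 0) {
            MPI_Send(buf, msg_bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, msg_bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, msg_bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, msg_bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0) {
        /* Each iteration moves the message twice (there and back). */
        double gb = 2.0 * iterations * (double)msg_bytes / 1e9;
        printf("Bandwidth: %.2f GB/s\n", gb / (t1 - t0));
    }

    free(buf);
    MPI_Finalize();
    return 0;
}
```

To exercise the InfiniBand fabric rather than shared memory, the two MPI ranks must be placed on different nodes; on a system like Triolith such a job would have been launched through Slurm.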
Triolith was open to users within Swedish academia; compute time was applied for through SNIC. Most Triolith use was for simulations within materials science, fluid dynamics, and quantum chemistry.
The total budget for the Triolith project (Phases I-III) was approximately 60 MSEK for the first four years.
The average power consumption of Triolith at normal utilization (95% of nodes in use) and with an average application mix was around 266 kW, or about 260 W per compute node (16.25 Wh per core-hour; see the arithmetic below).
The maximum power consumption, only ever reached during the TOP500 Linpack run using all 1600 nodes, was 519 kW.
At idle (which did not happen very often), a single compute node used around 80 W.
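The per-core energy figure follows directly from the per-node figure:

$$
\frac{260\ \text{W/node}}{16\ \text{cores/node}} = 16.25\ \text{W/core} \quad\Longrightarrow\quad 16.25\ \text{Wh per core-hour}.
$$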