
From ALICE Documentation


Revision as of 14:57, 23 February 2021

Overview of the cluster

[Figure: Conceptual view of ALICE]

The ALICE cluster is a hybrid cluster consisting of

  • 2 login nodes (4 TFlops)
  • 20 CPU nodes (40 TFlops)
  • 10 GPU nodes (40 GPU, 20 TFlops CPU + 536 TFlops GPU)
  • 1 High Memory CPU node (4 TFlops)
  • 1 storage device (31 × 15 TB + 70 TB = 535 TB)

In summary: 604 TFlops, 816 cores (1632 threads), 14.4 TB RAM.
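The headline TFlops and storage figures follow directly from the per-component numbers in the list above; a minimal sketch of the arithmetic (the per-type core and RAM breakdown is not given in the text, so only TFlops and storage are checked here):

```python
# Per-component peak performance, as listed above (TFlops).
tflops = {
    "login nodes (2x)": 4,
    "CPU nodes (20x)": 40,
    "GPU nodes, CPU part (10x)": 20,
    "GPU nodes, GPU part (40 GPUs)": 536,
    "high-memory CPU node (1x)": 4,
}
total_tflops = sum(tflops.values())
print(total_tflops)  # 604

# Storage: 31 units of 15 TB plus a 70 TB volume, as written in the list.
storage_tb = 31 * 15 + 70
print(storage_tb)  # 535
```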

ALICE has a second high-memory node. It is not included in the figures above because it is available only to the research group that purchased it.

Below you will find a more comprehensive description of the individual components. Also see a photo gallery of the hardware.

ALICE is a pre-configuration system that lets the university gain experience with managing, supporting, and operating a university-wide HPC system. Once the system and its governance have proven to be a functional research asset, it will be extended and continued in the coming years.

The descriptions cover the configuration, which is housed partly in the data centre at LMUY and partly in the data centre at Leiden University Medical Center (LUMC).