==Overview of the cluster==
 
[[File:CFER-fase-2-concept-A-1.jpg|thumb|600x600px|Conceptual View of ALICE]]
 
The ALICE cluster is a hybrid cluster consisting of

*2 login nodes (4 TFlops)
*20 CPU nodes (40 TFlops)
*10 GPU nodes (40 GPUs, 20 TFlops CPU + 536 TFlops GPU)
*1 high memory CPU node (4 TFlops)
*Storage device (31 × 15 + 70 = 535 TB)

In summary: 604 TFlops, 816 cores (1632 threads), 14.4 TB RAM.
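
As a quick check, the headline figure of 604 TFlops is simply the sum of the per-node-type figures in the list above, and the 535 TB follows from the expression quoted for the storage device (this page does not spell out what the individual storage terms refer to). The core and memory totals are not broken down per node type here.

<math>4 + 40 + (20 + 536) + 4 = 604\ \text{TFlops}</math>

<math>31 \times 15 + 70 = 535\ \text{TB}</math>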

ALICE has a second high memory node. This node is not included in the figures above because it is only available to the research group that purchased it.

Below you will find a more comprehensive description of the individual components. Also [[Hardware photo gallery|see a photo gallery of the hardware]].

ALICE is a pre-configuration system for the university to gain experience with managing, supporting and operating a university-wide HPC system. Once the system and its governance have proven to be a functional research asset, it will be extended and continued in the coming years.

The descriptions are for the configuration that is housed partly in the data centre at LMUY and partly in the data centre at the Leiden University Medical Center (LUMC).