ALICE User Documentation Wiki

From ALICE Documentation

Revision as of 18:19, 8 September 2020 by Schulzrf (talk | contribs)
Off to research computing Wonderland

ALICE is the computing facility for excellent research of Leiden University. With ALICE you have the world of computing at your fingertips. On this wiki, you can find the information you'll need to get started and become more skilled in using computing to support your research.

For background information, read the About ALICE page.

Please note that this wiki is currently a work in progress. We appreciate any questions or comments on the content so that we can improve the range of information we provide here.

IMPORTANT NOTE: ALICE is still in a build-up phase. Configurations are still subject to change. You may, therefore, experience unexpected behaviour for the time being.

Use of the ALICE cluster must be acknowledged in any and all publications. For more info see: About ALICE


This section is used to announce upcoming maintenance and to provide information before, during and after it. For general information, please see the Maintenance policy page.

Next Maintenance

System Maintenance on ALICE will take place on 22 Aug 2022 between 09:00 and 18:00 CEST (See the Maintenance Announcement)

We will perform system maintenance on the ALICE HPC cluster on Monday 22 August 2022 between 09:00 and 18:00.

On this day, it will not be possible to run jobs or access data on ALICE. Until the maintenance starts, you can continue to use ALICE as usual and submit jobs. Slurm will also continue to start your jobs if the requested running time allows them to finish before the maintenance begins.
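For example, a job can be given a time limit short enough to finish before the maintenance window opens; a minimal sketch of such a batch script follows (the partition name and resource values are placeholders, not ALICE-specific settings — adjust them to the partitions granted to your account):

```shell
#!/bin/bash
# Sketch of a Slurm batch script; partition and resources are placeholders.
#SBATCH --job-name=pre-maintenance-run
#SBATCH --partition=cpu-short     # placeholder partition name
#SBATCH --time=02:00:00           # short enough to end before the maintenance starts
#SBATCH --ntasks=1
#SBATCH --mem=4G

# Slurm will only start this job if the requested --time fits entirely
# into the window remaining before the maintenance begins.
echo "Running on $(hostname)"
```

Submit it with `sbatch jobscript.sh`; jobs whose requested time does not fit before the maintenance will stay queued until afterwards.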

Our primary focus will be the high-availability setup of ALICE, in addition to other maintenance tasks.

We understand that this represents an inconvenience for you. If you have any questions, please contact the ALICE Helpdesk.

Previous Maintenance days

ALICE node status

Gateway: UP
Head node: UP
Login nodes: UP
GPU nodes: UP
CPU nodes: UP
High memory nodes: UP
Storage: UP
Network: UP

Current Issues

  • Copying data to the shared scratch via sftp:
    • There is currently an issue on the sftp gateway that prevents users from copying data to their shared scratch directory, i.e., /home/<username>/data
    • A current workaround is to use scp or sftp via the ssh gateway and the login nodes.
    • Status: Work in Progress
    • Last Updated: 30 Nov 2021, 14:56 CET
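The workaround above can be sketched as follows; the gateway and login hostnames here are placeholders, not the actual ALICE addresses (use the hostnames given on the access pages):

```shell
# Copy a local file to shared scratch by jumping through the ssh gateway
# to a login node. Hostnames below are placeholders.
scp -o ProxyJump=<username>@ssh-gateway.example.nl \
    mydata.tar.gz \
    <username>@login1.example.nl:/home/<username>/data/

# Or transfer interactively with sftp via the same jump host:
sftp -o ProxyJump=<username>@ssh-gateway.example.nl \
    <username>@login1.example.nl
```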

See here for other recently solved issues: Solved Issues

Getting Started

If you're new to ALICE, please check out the Getting Started page.

Gaining access

Access to the cluster and file transfer are done via SSH and SCP/SFTP. Select one of the links below for more detail, or click on the heading of this paragraph for a full overview.
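As a minimal sketch of an SSH login (the hostnames are illustrative placeholders; the actual gateway and login-node addresses are listed on the access pages):

```shell
# Log in to a login node through the SSH gateway (hostnames are placeholders).
ssh -J <username>@ssh-gateway.example.nl <username>@login1.example.nl
```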

Access Policy

Access needs to be granted actively (by the creation of an account on the cluster by the ALICE Cluster workgroup). Use of resources is limited by the scheduler. Depending on the queues ('partitions') granted to a user, priority to the system's resources is regulated at the faculty/institute/PI level.


Cluster Monitoring Software and Scheduler

ALICE uses Bright Cluster Manager software for overall cluster management and Slurm as the scheduler.
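A few standard Slurm commands for inspecting the cluster and your jobs (the partitions shown by `sinfo` depend on what has been granted to your account):

```shell
sinfo                  # list partitions and the state of their nodes
squeue -u $USER        # show your queued and running jobs
sbatch jobscript.sh    # submit a batch job script
scancel <jobid>        # cancel a queued or running job
```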

Installed software

Globally installed software, modules
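Globally installed software is typically made available through environment modules; as a sketch (the module name and version are illustrative, not a guarantee of what is installed on ALICE):

```shell
module avail              # list the globally installed software modules
module load GCC/10.2.0    # load a module (name/version are illustrative)
module list               # show the currently loaded modules
module purge              # unload all modules
```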

Cluster configuration

A hardware description of the ALICE cluster can be found here.

Frequently Asked Questions

A list of frequently asked questions about the ALICE cluster can be found here.