Site structure
From ALICE Documentation
Latest revision as of 11:59, 6 October 2020
This wiki is the main documentation source for information about the Academic Leiden Interdisciplinary Cluster Environment (ALICE) cluster. ALICE is the research computing facility of the partnership Leiden University and Leiden University Medical Center. It is available to any researcher from both partners. In this wiki, we will introduce the ALICE cluster in detail.
Below you will find a collapsible tree structure of our documentation set, which gives you a quick overview of the full navigation tree. The root tree items are also directly accessible from the top navigation bar.
!!!! New location for ALICE wiki !!!!
The wiki of the ALICE HPC cluster has moved to a new location: https://pubappslu.atlassian.net/wiki/spaces/HPCWIKI/.
So far, there have been separate user wikis for the ALICE HPC cluster and the SHARK HPC cluster at LUMC. However, there is a great deal of overlap in the information that you as a user need to work on ALICE or SHARK. Therefore, the support teams of both clusters have launched a new joint HPC user wiki. The new wiki provides information specific to each cluster in addition to a user guide and tutorials that apply to both clusters. There is also a news section, a calendar where we publish events, and information about user meetings and workshops.
This wiki is frozen and should no longer be used. If you encounter any issues, please contact the ALICE Helpdesk.
This user guide will help you get started if you are new to ALICE or to working on an HPC cluster in general.
- General documentation
- About ALICE
- System description
- HPC Software
- Policies
- More in-depth
- Security and privacy
- More on login nodes
- Using Node802
- Running jobs
- Other
- Software
- Available software
- Develop your own code
- Running in parallel
- Software packages
News
Latest News
- 06 Oct 2022 - New user wiki: So far, there have been separate user wikis for the ALICE HPC cluster and the SHARK HPC cluster at LUMC. However, there is a great deal of overlap in the information that you as a user need to work on ALICE or SHARK. Therefore, the support teams of both clusters are starting to move to a new joint HPC user wiki. The new wiki is live and can be found here: https://pubappslu.atlassian.net/wiki/spaces/HPCWIKI/. The old wikis are now frozen and no new content will be added to them. The new wiki provides information specific to each cluster in addition to a user guide and tutorials that apply to both clusters. There is also a news section, a calendar where we publish events, and information about user meetings and workshops.
- 21 Sep 2022 - Access to ALICE: On 26 Sept 2022 between 18:00 and 18:30, access to ALICE will not be possible due to maintenance on the University cloud platform.
- 24 Aug 2022 - ALICE available again: Maintenance on ALICE is over. The cluster is online again and available to all users. We apologize for the delay.
- 23 Aug 2022 - ALICE system maintenance not finished and continues tomorrow: We managed to solve many of the issues that we faced yesterday. We are waiting for the completion of the synchronization processes that are part of the high-availability setup procedure. If all goes well, we only need to run a few tests to verify that the new high-availability setup is working properly and to bring all the nodes back. Unfortunately, this was no longer possible today. In case the setup fails after all, we are prepared to revert all the changes and bring ALICE online again. In any case, we expect ALICE to be online again sometime tomorrow afternoon. We are sorry for the delay, but the new high-availability setup is vital for ALICE, which is why we have been working hard to get it done.
- 22 Aug 2022 - ALICE is offline due to system maintenance - Continues tomorrow: We encountered unexpected technical issues during our highest priority task for this maintenance day, the high-availability setup. Because this is a critical component for the continuing stability of ALICE and we require the cluster to be offline, we decided to continue solving the issues tomorrow and keep the cluster offline.
- 17 Aug 2022 - REMINDER - ALICE system maintenance on 22 Aug 2022: We will perform system maintenance on ALICE on 22 Aug 2022 between 09:00 and 18:00 CEST. Our primary focus will be the high-availability setup of ALICE, in addition to other maintenance tasks. This will require us to take all compute and login nodes of the cluster offline. It will not be possible to run jobs or to access data on ALICE. The login nodes will be rebooted and all active terminal or X2Go sessions will be terminated. Until maintenance starts, you can continue to use ALICE as usual and submit jobs. Slurm will also continue to run your job if its requested running time allows it to finish before the maintenance starts. If you have any questions, please contact the ALICE Helpdesk.
- 01 Aug 2022 - ALICE system maintenance on 22 Aug 2022 - First announcement: We will perform system maintenance on ALICE on 22 Aug 2022 between 09:00 and 18:00 CEST. Our primary focus will be the high-availability setup of ALICE, in addition to other maintenance tasks. This will require us to take all compute and login nodes of the cluster offline. It will not be possible to run jobs or to access data on ALICE. Until maintenance starts, you can continue to use ALICE as usual and submit jobs. Slurm will also continue to run your job if its requested running time allows it to finish before the maintenance starts. If you have any questions, please contact the ALICE Helpdesk.
- 01 Jun 2022 - Disabled access to old scratch storage: As previously announced, we have disabled access to the old scratch storage. We will keep the data available until 30 June 2022. Afterwards, we will start to delete data so that we can repurpose the storage within ALICE. You can request temporary access by contacting the ALICE Helpdesk. See also the wiki page: Data Storage.
Older News
For older news, please have a look at the news archive: News Archive
Events
Here you can find information about upcoming events related to ALICE.
Maintenance
This section is used to announce upcoming maintenance and to provide information before, during, and after it. For general information about our maintenance policy, please see: Maintenance policy
Next Maintenance
System Maintenance on ALICE will take place on 22 Aug 2022 between 09:00 and 18:00 CEST (See the Maintenance Announcement)
We will perform system maintenance on the ALICE HPC cluster on Monday 22 August 2022 between 09:00 and 18:00.
On this day, it will not be possible to run jobs or to access data on ALICE. Until maintenance starts, you can continue to use ALICE as usual and submit jobs. Slurm will also continue to run your job if its requested running time allows it to finish before the maintenance starts.
Our primary focus will be the high-availability setup of ALICE, in addition to other maintenance tasks.
We understand that this represents an inconvenience for you. If you have any questions, please contact the ALICE Helpdesk.
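As a rough illustration of the scheduling note above (Slurm will only run a job whose requested walltime lets it finish before the maintenance window opens), the following shell sketch computes how many hours remain before the 22 Aug 2022 09:00 CEST start; the job script name in the comment is a placeholder, and GNU `date` is assumed:

```shell
#!/bin/sh
# Sketch: compute the walltime remaining before the maintenance window opens,
# so a job's --time limit can be chosen to let Slurm still schedule it.
# (Assumes GNU date; "myjob.slurm" below is a placeholder script name.)
MAINT_START=$(date -d "2022-08-22 09:00 CEST" +%s)
NOW=$(date +%s)
REMAIN_H=$(( (MAINT_START - NOW) / 3600 ))
echo "Hours until maintenance starts: $REMAIN_H"

# If, say, 5 hours remain, a job requesting less than that can still run:
#   sbatch --time=04:00:00 myjob.slurm
```

If the computed value is negative, the window has already opened and new jobs will be held until maintenance is over.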
Previous Maintenance days
ALICE node status
- Gateway: UP
- Head node: UP
- Login nodes: UP
- GPU nodes: UP
- CPU nodes: UP
- High memory nodes: UP
- Storage: UP
- Network: UP
Current Issues
- Copying data to the shared scratch via sftp:
  - There is currently an issue on the sftp gateway that prevents users from copying data to their shared scratch directory, i.e., /home/<username>/data
  - A current work-around is to use scp or sftp via the ssh gateway and the login nodes.
  - Status: Work in Progress
  - Last Updated: 30 Nov 2021, 14:56 CET
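The work-around above can be sketched as follows. The gateway and login hostnames are placeholders (substitute the actual ALICE hostnames from the documentation), and the sketch only builds and prints the command rather than executing it:

```shell
#!/bin/sh
# Sketch of the work-around: copy data through the ssh gateway and a login
# node instead of the broken sftp gateway. Hostnames are placeholders --
# substitute the real ALICE gateway/login hostnames.
GATEWAY="ssh-gateway.example.org"   # placeholder ssh gateway
LOGIN="login1.example.org"          # placeholder login node
DEST="/home/$USER/data"             # shared scratch path from this issue

# Use the gateway as an ssh jump host so scp reaches the login node directly:
CMD="scp -o ProxyJump=$GATEWAY ./results.tar.gz $LOGIN:$DEST/"
echo "$CMD"
```

Running the printed command copies a local file to the shared scratch directory via the login node; `sftp` accepts the same `-o ProxyJump=...` option.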
See here for other recently solved issues: Solved Issues
Publications
Articles acknowledging the use of ALICE
Astronomy and Astrophysics
- High-level ab initio quartic force fields and spectroscopic characterization of C2N−, Rocha, C. M. R. and Linnartz, H., Phys. Chem. Chem. Phys., November 2021, DOI: https://doi.org/10.1039/D1CP03505C
- Effects of stellar density on the photoevaporation of circumstellar discs, Concha-Ramirez, F. et al., MNRAS, 501, 1782 (February 2021), DOI: https://doi.org/10.1093/mnras/staa3669
- Lucky planets: how circum-binary planets survive the supernova in one of the inner-binary components, Fagginger Auer, F. & Portegies Zwart, S., eprint arXiv:2101.08033, Submitted to SciPost Astronomy (January 2021), https://ui.adsabs.harvard.edu/link_gateway/2021arXiv210108033F/arxiv:2101.08033
- Trimodal structure of Hercules stream explained by originating from bar resonances, Asano, T. et al., MNRAS, 499, 2416 (December 2020), DOI: https://doi.org/10.1093/mnras/staa2849
- Oort cloud Ecology II: Extra-solar Oort clouds and the origin of asteroidal interlopers, Portegies Zwart, S., eprint arXiv:2011.08257, accepted for publication by A&A, (November 2020), https://ui.adsabs.harvard.edu/link_gateway/2020arXiv201108257P/arxiv:2011.08257
- The ecological impact of high-performance computing in astrophysics. Portegies Zwart, S., Nature Astronomy, 4, 819–822 (September 2020), DOI: https://doi.org/10.1038/s41550-020-1208-y.
Computer Sciences
- Better Distractions: Transformer-based Distractor Generation and Multiple Choice Question Filtering, Offerijns, J., Verberne, S., Verhoef, T., eprint arXiv:2010.09598, (October 2020), https://arxiv.org/abs/2010.09598
Ecology
- Improving estimations of life history parameters of small animals in mesocosm experiments: A case study on mosquitoes, Dellar, M., Sam, B.P., Holmes, D., Methods in Ecology and Evolution, (February 2022), https://besjournals.onlinelibrary.wiley.com/doi/10.1111/2041-210X.13814
Leiden researchers and their use of HPC
- Identifying Earth-impacting asteroids using an artificial neural network. John D. Hefele, Francesco Bortolussi and Simon Portegies Zwart. Astronomy & Astrophysics, February 2020.
News articles featuring ALICE
- Hazardous Object Identifier: Supercomputer Helps to Identify Dangerous Asteroids, Oliver Peckman, HPC Wire, 04 March 2020, link
- Elf reuzestenen op ramkoers met de aarde?, Annelies Bes, 13 February 2020, Kijk Magazine, link
- Leidse sterrenkundigen ontdekken aardscheerders-in-spé, NOVA, 12 February 2020, link