=== Latest News ===
*'''10 May 2022 - Security update of Slurm:''' Because of recently disclosed critical vulnerabilities in Slurm 20.11.7, we had to update Slurm to 20.11.8 today. The vulnerabilities were severe enough that they required immediate action from us.
*'''02 May 2022 - Old shared scratch space:''' We have extended the availability of the old shared scratch space on <code>/data</code> until 31 May 2022. After this date, we will disable access to it. If you have not done so yet, please move your data to the new scratch space; a sketch of a possible migration is included below this list. If you need assistance, please contact the ALICE Helpdesk. See also the wiki page: [[Data storage|Data Storage]].
*'''21 Apr 2022 - ALICE-SHARK User Meeting 2022 - Second Announcement and reminder about contributions:''' It is still possible to register for the first joint meeting of the user communities of the ALICE HPC cluster (Leiden University) and the SHARK HPC cluster (LUMC). The deadline for submitting a title/abstract for a talk is 25 Apr 2022 at 23:59 CEST. For more information, please see here: [[:ALICE-SHARK User Meeting 2022]]
*'''29 Mar 2022 - ALICE-SHARK User Meeting 2022 - Announcement and Registration open:''' The first joint meeting of users of the ALICE HPC cluster at Leiden University and the SHARK HPC cluster at the Leiden University Medical Center will take place on 18 May 2022 from 09:00 - 13:00. The meeting will provide an opportunity for users to connect with each other and with the support teams behind the clusters. It will feature an overview and update for both clusters, a selection of talks from users on past, ongoing or upcoming projects, and a Q&A session with the support teams of both clusters. Registration is now open and mandatory. You can find more information here: [[:ALICE-SHARK User Meeting 2022]]
*'''24 Mar 2022 - New scratch storage available to all users:''' We are excited to announce that the new scratch storage on ALICE is available for you to use from now on. It is a BeeGFS-powered parallel file system with a total capacity of 370TB. We have created a user directory for all ALICE users on the new scratch storage, <code>/data1/$USER</code>, with a link in your home directory at <code>/home/$USER/data1</code>. By default, you have a quota of 5TB, which can be extended upon request. We ask all users to migrate their data to the new storage and adjust their workflows accordingly; an example of how to check your new directory and quota is included below this list. See also the wiki page: [[Data storage|Data Storage]]. We will keep the old scratch storage available for you to use '''until 30 April 2022'''. Then, we will disable access to it and you will have to contact us to gain access. Another two months later, we will start to remove any remaining data on the old scratch storage. Project directories on the old shared scratch have also been set up on the new scratch storage in <code>/data1/projects/</code>, but links in the home directories of project team members have not been changed in order to avoid breaking existing workflows. We ask PIs to also start migrating the data in their project directories. After the migration has been completed, we will change the links in the home directories of team members. If you have any questions or need assistance with migrating your data and workflow, please do not hesitate to contact the ALICE helpdesk.
*'''09 Mar 2022 - New short partition amd-short for all users:''' So far, node802 has been exclusive to researchers of MI. In agreement with the PI of node802, we are now making part of the resources of this node available to all users. This will be facilitated through a specific partition called "amd-short" that can run jobs of up to 4h using up to 64 cores and up to 1TB of memory; an example job script is included below this list. Node802 is somewhat different from all other nodes on ALICE, which is why you should go through the section "[https://wiki.alice.universiteitleiden.nl/index.php?title=Running_jobs_on_ALICE#Important_information_about_partition_amd-short Important information about amd-short]" before you start using the new partition.
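The following is a minimal sketch of how data could be copied from the old shared scratch to the new scratch storage from a login node. It assumes your old data lives in <code>/data/$USER</code> and your new directory is <code>/data1/$USER</code>; verify both paths for your own account and adjust them before running anything.

<pre>
#!/bin/bash
# Sketch only: copy data from the old shared scratch to the new BeeGFS scratch.
# Assumes the old directory is /data/$USER and the new one is /data1/$USER;
# adjust both paths to match your own account.

OLD_SCRATCH="/data/$USER"
NEW_SCRATCH="/data1/$USER"

# Dry run first: lists what would be transferred without copying anything.
rsync -avhn "$OLD_SCRATCH/" "$NEW_SCRATCH/"

# If the dry run looks right, run the actual transfer.
# -a preserves permissions and timestamps, -P shows progress and resumes partial files.
rsync -avhP "$OLD_SCRATCH/" "$NEW_SCRATCH/"

# Rough sanity check before deleting anything from the old location.
du -sh "$OLD_SCRATCH" "$NEW_SCRATCH"
</pre>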
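To confirm that your directory and home-directory link on the new scratch storage are in place, and to keep an eye on the default 5TB quota, something along these lines can be used. The <code>beegfs-ctl</code> call is an assumption about how BeeGFS quota reporting is exposed on ALICE; if the tool is not available to regular users, please ask the ALICE Helpdesk how to check your usage.

<pre>
# Check that the link in your home directory points at the new scratch space.
ls -ld "$HOME/data1" "/data1/$USER"

# Rough view of how much space your data currently occupies there.
du -sh "/data1/$USER"

# BeeGFS quota report for your own user (assumes the BeeGFS client tools
# are installed and usable from the login nodes).
beegfs-ctl --getquota --uid "$USER"
</pre>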
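As an illustration of the limits on the new partition, a batch script along the following lines could be submitted with <code>sbatch</code>. The resource requests are example values within the stated maximums of 4 hours, 64 cores and 1TB of memory, and the workload is a placeholder; replace it with your own application.

<pre>
#!/bin/bash
#SBATCH --job-name=amd_short_test    # example job name
#SBATCH --partition=amd-short        # the new partition on node802
#SBATCH --time=04:00:00              # this partition allows at most 4 hours
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16           # up to 64 cores may be requested
#SBATCH --mem=64G                    # up to 1TB of memory may be requested
#SBATCH --output=%x_%j.out

# Placeholder workload: print where the job ran and with how many cores.
echo "Running on $(hostname) with ${SLURM_CPUS_PER_TASK} cores"
</pre>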