Sapphire Rapids Nodes

These new nodes entered general service in July 2023.

Slurm partitions

  • The Sapphire Rapids nodes are named according to the scheme cpu-r-[1-112].
  • Some of the Sapphire Rapids nodes are in the sapphire Slurm partition. Your existing -CPU projects will be able to submit jobs to these (see the example query commands after this list).
  • The Sapphire Rapids nodes have 112 cpus (1 cpu = 1 core) and 4580 MiB RAM per cpu, for a total of 512 GB RAM per node.
  • The Sapphire Rapids nodes are interconnected by Mellanox NDR200 InfiniBand.
  • The Sapphire Rapids nodes are running Rocky Linux 8, which is a rebuild of Red Hat Enterprise Linux 8 (RHEL8).
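
To check the current state of these nodes before submitting, the standard Slurm query commands work as usual (the node name below is just an example):

sinfo -p sapphire            # overview of the sapphire partition
scontrol show node cpu-r-1   # detailed information for a single node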

Recommendations for running on Sapphire Rapids

Since the cpu-r nodes are running Rocky 8, you may need to recompile your applications.

A login node with a compatible OS/software environment is available at login-icelake.hpc.cam.ac.uk; logging in there will land you on one of the login-q-* nodes.
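
For example (replace the username with your own):

ssh your-username@login-icelake.hpc.cam.ac.uk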

Alternatively, request an interactive node using sintr:

sintr -t 4:0:0 -N1 -n38 -A YOURPROJECT-CPU -p sapphire
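
Here, -t 4:0:0 requests a 4 hour wallclock limit, -N1 a single node, -n38 38 tasks (one cpu each by default), -A the project account to charge, and -p the sapphire partition.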

The per-job wallclock time limits are 36 hours and 12 hours for SL1/2 and SL3 respectively.

The per-job, per-user cpu limits are now 4256 and 448 cpus for SL1/2 and SL3 respectively.

These limits should be regarded as provisional and may be revised.

Default submission script for Sapphire Rapids

There is no default job submission template dedicated to Sapphire Rapids nodes, but you might be able to make one for yourself by tweaking an Icelake template.

In your home directory you should find a symbolic link to a default job submission script set up for the Icelake nodes, called:

slurm_submit.peta4-icelake

This is set up for non-MPI jobs on the Icelake nodes, but can be modified for other types of job. If you prefer to modify your existing job scripts, please see the following sections for guidance.
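
As a starting point, you could take a copy of the Icelake template and point it at the sapphire partition (the new filename is only a suggestion):

cp slurm_submit.peta4-icelake slurm_submit.sapphire
# edit slurm_submit.sapphire and change the partition directive to:
#   #SBATCH -p sapphire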

Jobs not using MPI

In this case you should be able to simply specify the sapphire partition via the -p sbatch directive, e.g.:

#SBATCH -p sapphire

will submit a job able to run on the first nodes available in the sapphire partition. If you need more than the default 4580 MiB of memory, you will need to specify the amount with an sbatch directive like this:

#SBATCH --mem=8000

to ask for 8000 MiB memory.
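
Putting these directives together, a minimal non-MPI submission script for the sapphire partition might look like the sketch below; the job name, account and application are placeholders for your own:

#!/bin/bash
#SBATCH -J myjob                # job name (placeholder)
#SBATCH -A YOURPROJECT-CPU      # project account to charge
#SBATCH -p sapphire             # run on the Sapphire Rapids nodes
#SBATCH -N 1                    # one node
#SBATCH -n 1                    # a single task (non-MPI)
#SBATCH -t 01:00:00             # wallclock limit of 1 hour
#SBATCH --mem=8000              # 8000 MiB of memory, if the default is not enough

./myapplication                 # placeholder for your program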

Jobs requiring MPI

We currently recommend using Intel MPI 2021.6.0 on Sapphire Rapids nodes. There are other, related changes to the default environment seen by jobs running on the sapphire nodes. If you wish to recompile or test against the Sapphire Rapids environment, the simplest option is to use a login node as shown above. Alternatively, request an interactive node and work on a sapphire partition node directly.

For reference, the default environment on the sapphire (cpu-r) nodes is provided by loading a module as follows:

module purge
module load rhel8/default-sar
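
As a rough sketch, an MPI submission script for the sapphire partition could combine this environment with the usual MPI launch. The account, node/task counts and application below are placeholders, and it is assumed that the default module provides the recommended Intel MPI:

#!/bin/bash
#SBATCH -J mympijob              # job name (placeholder)
#SBATCH -A YOURPROJECT-CPU       # project account to charge
#SBATCH -p sapphire              # Sapphire Rapids partition
#SBATCH -N 2                     # two nodes (example)
#SBATCH --ntasks-per-node=112    # one MPI rank per cpu (example)
#SBATCH -t 04:00:00              # wallclock limit of 4 hours

module purge
module load rhel8/default-sar    # default sapphire environment (assumed to include Intel MPI)

mpirun -np $SLURM_NTASKS ./mympiapplication   # placeholder MPI program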

It is not recommended to build software intended to run on Sapphire Rapids on other types of node (cclake, icelake), because the CPU type is different.
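
If you are unsure which type of node you are building on, one simple check is to inspect the CPU model name (standard Linux tooling, nothing sapphire-specific is assumed):

lscpu | grep "Model name"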