GROMACS¶
GROMACS describes itself at http://www.gromacs.org as follows:
GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles.
Running GROMACS on CSD3¶
GROMACS is supported on each of the hardware partitions on CSD3.
To load the most recent build of GROMACS, first load the default environment for the partition, e.g. for the ampere partition:
module purge && module load rhel8/default-amp
and then do:
module load gromacs
which will make available the gmx_mpi front-end as well as various mdrun binaries.
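To see which GROMACS builds are installed for the loaded environment, or to confirm which version the default module points to, the standard module and GROMACS commands can be used, for example:
module avail gromacs   # list the GROMACS modules available in the current environment
gmx_mpi --version      # report the version and build options of the loaded gmx_mpi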
A sample job script to run GROMACS:
#!/bin/bash
#SBATCH --account CHANGEME
#SBATCH --partition cclake
#SBATCH --nodes 2
#SBATCH --ntasks 28
#SBATCH --cpus-per-task 4 # 2 full cclake nodes == 112 CPUs, split into 28 MPI tasks with 4 OpenMP threads per task (cpus-per-task)
#SBATCH --time 02:00:00
#SBATCH --output gromacs.out
#SBATCH --error gromacs.err
module purge
module load rhel8/cclake/base gromacs/2024.4/gcc/intel-oneapi-mpi/wnncn7o4
# Running Intro tutorial (https://tutorials.gromacs.org/docs/md-intro-tutorial.html)
# Comment out the following (except the last line) to use your own setup.
wget https://gitlab.com/gromacs/online-tutorials/md-intro-tutorial/-/archive/main/md-intro-tutorial-main.zip
unzip md-intro-tutorial-main.zip
cd md-intro-tutorial-main/data
grep -v HETATM input/1fjs.pdb > 1fjs_protein_tmp.pdb
grep -v CONECT 1fjs_protein_tmp.pdb > 1fjs_protein.pdb
# All following GROMACS preparation commands are run on a single MPI rank (-np 1)
# to prevent errors arising from multiple ranks writing to the same output files
mpirun -np 1 gmx_mpi pdb2gmx -f 1fjs_protein.pdb -o 1fjs_processed.gro -water tip3p -ff "charmm27"
mpirun -np 1 gmx_mpi editconf -f 1fjs_processed.gro -o 1fjs_newbox.gro -c -d 1.0 -bt dodecahedron
mpirun -np 1 gmx_mpi solvate -cp 1fjs_newbox.gro -cs spc216.gro -o 1fjs_solv.gro -p topol.top
touch ions.mdp
mpirun -np 1 gmx_mpi grompp -f ions.mdp -c 1fjs_solv.gro -p topol.top -o ions.tpr
printf "SOL\n" | mpirun -np 1 gmx_mpi genion -s ions.tpr -o 1fjs_solv_ions.gro -conc 0.15 -p topol.top -pname NA -nname CL -neutral
mpirun -np 1 gmx_mpi grompp -f input/emin-charmm.mdp -c 1fjs_solv_ions.gro -p topol.top -o em.tpr
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
mpirun gmx_mpi mdrun -v -deffnm em -ntomp $SLURM_CPUS_PER_TASK # this is the only gmx command to be run with all the CPUs
Here the loaded module provides the correct mdrun binary for the Slurm partition. The final mdrun step reads the input file em.tpr, which in this example is produced by the preceding grompp command; for your own system it can be prepared with the other GROMACS tools.
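Assuming the script above has been saved as gromacs_cpu.slurm (a placeholder name, choose your own), it can be submitted and monitored with the usual Slurm commands:
sbatch gromacs_cpu.slurm   # submit the job to the cclake partition
squeue -u $USER            # check the state of your queued and running jobs
tail -f gromacs.out        # follow the job output once it is running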
To run on GPU the job script is almost identical:
#!/bin/bash
#SBATCH --account CHANGEME
#SBATCH --partition ampere
#SBATCH --nodes 1
#SBATCH --ntasks 4
#SBATCH --cpus-per-task 8
#SBATCH --gres=gpu:1
#SBATCH --time 00:30:00
#SBATCH --output gromacs.out
#SBATCH --error gromacs.err
module purge
module load rhel8/default-amp gromacs/2021.3/openmpi-4.1.1/gcc-9.4.0-hzwzjqx
# Running Intro tutorial (https://tutorials.gromacs.org/docs/md-intro-tutorial.html)
# Comment out the following (except the last line) to use your own setup.
wget https://gitlab.com/gromacs/online-tutorials/md-intro-tutorial/-/archive/main/md-intro-tutorial-main.zip
unzip md-intro-tutorial-main.zip
cd md-intro-tutorial-main/data
grep -v HETATM input/1fjs.pdb > 1fjs_protein_tmp.pdb
grep -v CONECT 1fjs_protein_tmp.pdb > 1fjs_protein.pdb
# All following GROMACS preparation commands are run on a single MPI rank (-np 1)
# to prevent errors arising from multiple ranks writing to the same output files
mpirun -np 1 gmx_mpi pdb2gmx -f 1fjs_protein.pdb -o 1fjs_processed.gro -water tip3p -ff "charmm27"
mpirun -np 1 gmx_mpi editconf -f 1fjs_processed.gro -o 1fjs_newbox.gro -c -d 1.0 -bt dodecahedron
mpirun -np 1 gmx_mpi solvate -cp 1fjs_newbox.gro -cs spc216.gro -o 1fjs_solv.gro -p topol.top
touch ions.mdp
mpirun -np 1 gmx_mpi grompp -f ions.mdp -c 1fjs_solv.gro -p topol.top -o ions.tpr
printf "SOL\n" | mpirun -np 1 gmx_mpi genion -s ions.tpr -o 1fjs_solv_ions.gro -conc 0.15 -p topol.top -pname NA -nname CL -neutral
mpirun -np 1 gmx_mpi grompp -f input/emin-charmm.mdp -c 1fjs_solv_ions.gro -p topol.top -o em.tpr
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
mpirun -np ${SLURM_NTASKS} gmx_mpi mdrun -v -deffnm em -ntomp $OMP_NUM_THREADS # this is the only gmx command to be run with all the CPUs.
Here we have modified the Slurm directives to request 1 GPU per node and 4 MPI tasks sharing that GPU, and loaded the default GPU (ampere) environment.
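Recent GROMACS versions offload work to the GPU automatically where possible, but the offload of individual interaction types can also be requested explicitly on the mdrun command line. A minimal sketch, assuming a GPU-enabled build (the flags supported depend on the GROMACS version; check gmx_mpi mdrun -h):
# explicitly offload short-range non-bonded and PME work to the GPU;
# with several MPI ranks, PME on the GPU needs a single dedicated PME rank (-npme 1)
mpirun -np ${SLURM_NTASKS} gmx_mpi mdrun -v -deffnm em -ntomp $OMP_NUM_THREADS -nb gpu -pme gpu -npme 1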
Checkpointing GROMACS jobs¶
If a simulation needs more time than the job time limits allowed by our policies, GROMACS supports checkpointing and restarting. Please refer to the GROMACS documentation at https://manual.gromacs.org/current/user-guide/managing-simulations.html.
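As a minimal sketch based on the jobs above: mdrun writes a checkpoint file at regular intervals (em.cpt here, because of -deffnm em), and a follow-on job can restart from it and stop cleanly before the wall-clock limit:
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
# restart from the last checkpoint and stop cleanly after about 1.9 hours,
# comfortably inside the 2 hour limit requested in the CPU job above
mpirun gmx_mpi mdrun -v -deffnm em -cpi em.cpt -maxh 1.9 -ntomp $SLURM_CPUS_PER_TASK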