This is a short tutorial to help you set up a computing account on the University of Helsinki's computing cluster. The guide is only relevant for UH students and staff.

As a very first step, you need to ask your group leader to add you to the cluster user group. After that is done, you can follow these steps to log in and install runko.

Preliminary access

First, test that you can log in to the turso cluster with

ssh -YA username@turso.cs.helsinki.fi

where you need to replace username with your university account name. The connection only works from within the university's eduroam network, i.e., you have to be physically on campus. If you are greeted with the turso terminal, you can continue to the next step.

If you want to connect from outside the university, you first need to jump through a gateway machine, e.g., melkinpaasi.cs.helsinki.fi. Tips for how to automate this are given below.
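
If your OpenSSH client supports the -J (ProxyJump) flag, a one-off connection from outside the network can also be made directly, without any extra configuration:

ssh -J username@melkinpaasi.cs.helsinki.fi -YA username@turso.cs.helsinki.fi

The shortcut described in the SSH connection section below makes this permanent via ~/.ssh/config.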

SSH connection

SSH keys for easier login

A regular ssh connection requires you to type your (university) password on every login. You can make your life a bit easier by adding your SSH public key to the list of accepted keys on turso.

The following steps need to be done only once per machine that you will use to log in.

First, we need to generate an SSH key pair (if you don't already have one). In your own machine's home directory (i.e., ~/), execute

mkdir -p ~/.ssh
ssh-keygen -t rsa

and press enter to accept the default suggested directory and an empty passphrase (i.e., leave it empty).

The command generates

  • .ssh/id_rsa private ssh key
  • .ssh/id_rsa.pub public ssh key (for sharing)

In order to whitelist your computer, you need to add the id_rsa.pub key to the host machine's list of authorized keys. In practice, print out the public key on your own local machine and copy its content to the clipboard with

cat ~/.ssh/id_rsa.pub

Then, connect via ssh to turso (command above) and append the content of id_rsa.pub (from your own machine) to ~/.ssh/authorized_keys (on the host machine). Then, repeat the same process on melkinpaasi by opening an ssh connection to username@melkinpaasi.cs.helsinki.fi and pasting the same key there as well.
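
If your machine has the ssh-copy-id utility, it can append the key to ~/.ssh/authorized_keys on the remote hosts for you and saves the manual copy-pasting; a minimal sketch, assuming the default key location:

ssh-copy-id -i ~/.ssh/id_rsa.pub username@melkinpaasi.cs.helsinki.fi
ssh-copy-id -i ~/.ssh/id_rsa.pub username@turso.cs.helsinki.fi

Run the second command from within the university network, or after setting up the ProxyJump shortcut described below.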

SSH shortcut to your .ssh/config

One final touch is to add turso as a host alias to your own SSH configuration. The following steps need to be done only once per machine that you will use to log in.

Add to your own machine’s ~/.ssh/config

Host turso
    HostName turso.cs.helsinki.fi
    User your_username
    IdentityFile ~/.ssh/id_rsa
    ProxyJump your_username@melkinpaasi.cs.helsinki.fi

and replace your_username with your university account name (note that it appears in two places here). The indentation of the option lines is only for readability; ssh accepts both spaces and tabs.

After this, you should be able to connect to turso from anywhere with

ssh turso
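
The shortcut also works for tools that use ssh under the hood, such as scp and rsync. For example, to copy a local file to your work disk space (myscript.py is just a placeholder file name):

scp myscript.py turso:/wrk-vakka/users/your_username/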

Runko installation

Modules

Next, we will automate the loading of the necessary HPC modules on turso. SSH to turso, move to the vakka work disk space, and clone runko there for later access:

ssh turso
cd /wrk-vakka/users/$USER
git clone --recursive https://github.com/natj/runko.git

Then, move back to the turso home directory, create a modules/runko folder and copy the module file template there with

cd ~
mkdir -p modules/runko
cp /wrk-vakka/users/$USER/runko/archs/modules/5.0.0.lua ~/modules/runko/5.0.0.lua

Next, we need to introduce our own module directory to Lmod. It's best to automate this by adding the following line to your ~/.bash_profile on turso (create the file if it does not exist):

module use --append /home/your_username/modules

where your_username needs to be replaced with your real account name. Now, when you log in to turso, the available module list will be updated automatically.
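
After your next login (or after running the module use line once by hand in the current shell), you can verify that Lmod sees the new module tree with

module avail runko

which should list the runko/5.0.0 module file we copied above.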

Virtual Python environment

Finally, we need to set up the Python virtual environment. To do this, first make the (still incomplete) module tree we just created visible to Lmod in the current session with

module use --append /home/$USER/modules

and create a Python virtual environment in the turso home directory with

mkdir ~/venvs
cd ~/venvs
virtualenv runko

Then activate the environment with

source ~/venvs/runko/bin/activate

after which you should see the terminal prompt change to

(runko) username@turso2:~$ 

or similar.

Then, we can install the Python requirements (they are installed into the venv and are available whenever it is activated) with

pip install mpi4py h5py scipy matplotlib numpy
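
As a quick sanity check, you can try importing the packages inside the active venv; note that mpi4py may need the MPI libraries provided by the runko module to be loaded first:

python -c "import numpy, scipy, h5py, matplotlib, mpi4py; print('python packages ok')"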

The computing environment is now ready for compilation.

Installation

Runko installation is now easy. We log in to turso, load the runko module,

ssh turso
module load runko

and can compile runko in /wrk-vakka (where we cloned the code in the module setup stage) with

cd /wrk-vakka/users/$USER/runko
mkdir build
cd build
cmake ..
make -j8

After this, you should see the compilation take place and the tests being run.
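
If the build registers its tests with CTest (an assumption about runko's CMake setup), you can also re-run them later from the build directory without recompiling:

ctest --output-on-failure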

Runko and SLURM usage

Submitting an example job

The code can be run by, e.g., submitting an example SLURM job. Move to the shock project's job directory

cd $RUNKODIR
cd projects/pic-shocks
cd jobs

and submit the example job with

sbatch 1ds3.turso

with the content of 1ds3.turso being something like

#!/bin/bash
#SBATCH -J 1ds3              # user-given SLURM job name
#SBATCH -M ukko              # machine name [ukko, kale, carrington, hile]
#SBATCH -p short             # partition; use "sinfo -M all" for options
#SBATCH --output=%J.out      # output file name
#SBATCH --error=%J.err       # output error file name
#SBATCH -t 0-05:00:00        # maximum job duration
#SBATCH --nodes=1            # number of computing nodes
#SBATCH --ntasks-per-node=16 # MPI tasks launched per node
#SBATCH --constraint=amd     # target specific nodes; [amd, intel]
#SBATCH --exclusive          # reserve the full node for the job

# INFO: ukko AMD nodes have 128 EPYC Rome cores

# modules
module use /home/jnattila/modules
module load runko

# HPC configurations
export OMP_NUM_THREADS=1
export PYTHONDONTWRITEBYTECODE=true
export HDF5_USE_FILE_LOCKING=FALSE

# go to working directory
cd $RUNKODIR/projects/pic-shocks/

mpirun -n 16 python pic.py --conf 2dsig3.ini

This uses the ukko cluster (-M ukko) to run a job in the short queue (-p short) on one node (--nodes=1) with 16 MPI tasks (--ntasks-per-node=16).
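
Because of the --output=%J.out and --error=%J.err lines, the job writes its output to files named after the SLURM job ID (printed by sbatch). You can, for example, follow the progress of a running job or cancel it with

tail -f 12345678.out        # follow the job output; 12345678 is an example job ID
scancel -M ukko 12345678    # cancel the job if needed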

Basic SLURM commands

You can check the status of the SLURM queue with

squeue

and the status of your own jobs with

sacct

Sometimes you might also need information about the available partitions, which can be accessed with

sinfo -M all
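
On a multi-cluster setup like this, it is often useful to restrict these commands to your own jobs and to a specific cluster, for example

squeue -M ukko -u $USER     # your own jobs on the ukko cluster
sinfo -M ukko -p short      # state of the short partition on ukko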
