From 3f16abf1f3195f90970320383851bed203d479cf Mon Sep 17 00:00:00 2001
From: Kasper Skytte Andersen
Date: Wed, 8 Nov 2023 13:35:25 +0100
Subject: [PATCH] external link

---
 docs/slurm.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/slurm.md b/docs/slurm.md
index 6c31354..3717e95 100644
--- a/docs/slurm.md
+++ b/docs/slurm.md
@@ -140,7 +140,7 @@ There are plenty of options with the SLURM job submission commands, but below ar
 
 Most options are self-explanatory, but for our setup and common use cases you almost always want to set `--nodes` to `1`, meaning your job will only run on a single compute node at a time. For multithreaded applications you usually only need to set `--ntasks` to `1`, because threads are spawned from a single process/task, and then increase `--cpus-per-task` instead. For jobs that will run many things in parallel, e.g. when using GNU `parallel` or `xargs`, you must instead increase `--ntasks` accordingly and set `--cpus-per-task` to `1`, because multiple independent processes will be launched that likely don't use any multithreading.
 
-Should it be needed, the BioCloud is properly set up with the `OpenMPI` and `PMIx` message interfaces for distributed work across multiple compute nodes, but this requires you to tailor your scripts and commands specifically for distributed work and is a topic for another time. You can run "brute-force parallel" jobs, however, using for example [GNU parallel](https://curc.readthedocs.io/en/latest/software/GNUParallel.html) and distribute them across nodes, but this is only for experienced users, who must figure that out for themselves for now.
+Should it be needed, the BioCloud is properly set up with the `OpenMPI` and `PMIx` message interfaces for distributed work across multiple compute nodes, but this requires you to [tailor your scripts and commands](https://curc.readthedocs.io/en/latest/programming/parallel-programming-fundamentals.html) specifically for distributed work and is a topic for another time. You can run "brute-force parallel" jobs, however, using for example [GNU parallel](https://curc.readthedocs.io/en/latest/software/GNUParallel.html) and distribute them across nodes, but this is only for experienced users, who must figure that out for themselves for now.
 
 If you need to use one or more GPUs, you need to specify `--partition=biocloud-gpu` and set `--gres=gpu:x`, where `x` refers to the number of GPUs you need. Please don't do CPU-only work on the `biocloud-gpu` partition unless you also need a GPU.
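
For illustration, a minimal single-node job script combining the options discussed in this hunk might look like the sketch below. The job name, thread count, and `mytool` are hypothetical placeholders, and memory and time limits are omitted; only `--partition=biocloud-gpu` and `--gres=gpu:x` come directly from the docs text.

```bash
#!/usr/bin/env bash
#SBATCH --job-name=gpu-example    # placeholder name
#SBATCH --nodes=1                 # single compute node, as recommended above
#SBATCH --ntasks=1                # one task; threads spawn from this single process
#SBATCH --cpus-per-task=8         # threads for a multithreaded application
#SBATCH --partition=biocloud-gpu  # GPU partition named in the docs
#SBATCH --gres=gpu:1              # gpu:x, where x is the number of GPUs needed

# "mytool" stands in for an actual multithreaded, GPU-capable program
mytool --threads "${SLURM_CPUS_PER_TASK}"
```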
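A corresponding sketch of the "many independent tasks" case described above, with `--ntasks` raised and `--cpus-per-task` fixed at `1`; `process_sample.sh` and the `samples/` input directory are hypothetical placeholders:

```bash
#!/usr/bin/env bash
#SBATCH --job-name=manytasks   # placeholder name
#SBATCH --nodes=1              # still a single node
#SBATCH --ntasks=16            # 16 independent single-threaded processes
#SBATCH --cpus-per-task=1      # one CPU per process

# Run one single-threaded command per input file, 16 at a time
find samples/ -name '*.fastq' | parallel -j "${SLURM_NTASKS}" ./process_sample.sh {}
```

Either script would be submitted with `sbatch script.sh`.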