From e6961e76d84ea5cf8c825242c9e02b6bdd2709d8 Mon Sep 17 00:00:00 2001
From: Foivos Gypas
Date: Sun, 22 Dec 2024 19:51:00 +0100
Subject: [PATCH] Fix small rendering issues

---
 docs/README.md              |  4 ++--
 docs/guides/installation.md | 21 ++++++++++++---------
 2 files changed, 14 insertions(+), 11 deletions(-)

diff --git a/docs/README.md b/docs/README.md
index 479a383..f34c3f7 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -14,11 +14,11 @@ The workflow is developed in [Snakemake][snakemake], a widely used workflow mana
 
 ## How does it work?
 
-ZARP requires conda or mamba to install the basic dependencies. Each individual step of the workflow run either in its own Apptainer (Singularity) container or in its own Conda virtual environment.
+ZARP requires Conda or Mamba to install the basic dependencies. Each individual step of the workflow runs either in its own Apptainer (Singularity) container or in its own Conda virtual environment.
 Once the installation is complete, you fill in a [config.yaml](https://github.com/zavolanlab/zarp/blob/dev/tests/input_files/config.yaml) file with parameters and a [samples.tsv](https://github.com/zavolanlab/zarp/blob/dev/tests/input_files/samples.tsv) file with sample specific information.
 You can easily trigger ZARP by making a call to snakemake with the appropriate parameters.
-The pipeline can be executed in different systems or HPC clusters. ZARP generates multiple output files that help you Quality Control (QC) your data and proceed with downstream analyses. Apart from running the main ZARP workflow, you can also run a second pipeline that pulls sequencing sample data from the Sequence Read Archive (SRA), and a third pipeline that populates a file with the samples and infers missing metadata.
+The pipeline can be executed in different systems or High Performance Computing (HPC) clusters. ZARP generates multiple output files that help you Quality Control (QC) your data and proceed with downstream analyses.
+Apart from running the main ZARP workflow, you can also run a second pipeline that pulls sequencing sample data from the Sequence Read Archive (SRA), and a third pipeline that populates a file with the samples and infers missing metadata.
 
 ## How to cite
diff --git a/docs/guides/installation.md b/docs/guides/installation.md
index d336cc1..cf86892 100644
--- a/docs/guides/installation.md
+++ b/docs/guides/installation.md
@@ -125,7 +125,7 @@ After installing Singularity, install the remaining dependencies with:
 
 mamba env create -f install/environment.yml
 ```
-### As root user on Linux
+**As root user on Linux**
 
 If you have a Linux machine, as well as root privileges, (e.g., if you plan
 to run the workflow on your own computer), you can execute the following command
@@ -135,7 +135,7 @@ to include Singularity in the Conda environment:
 
 ```
 mamba env update -f install/environment.root.yml
 ```
-## 5. Activate ZARP environment
+### 5. Activate ZARP environment
 
 Activate the Conda environment with:
@@ -143,9 +143,9 @@ Activate the Conda environment with:
 
 ```
 conda activate zarp
 ```
-## 6. Optional installation steps
+### 6. Optional installation steps
 
-### Install test dependencies
+#### Install test dependencies
 
 Most tests have additional dependencies. If you are planning to run tests, you
 will need to install these by executing the following command _in your active
@@ -155,7 +155,7 @@ Conda environment_:
 ```
 mamba env update -f install/environment.dev.yml
 ```
-### Run installation tests
+#### Run installation tests
 
 We have prepared several tests to check the integrity of the workflow and its
 components. These can be found in subdirectories of the `tests/` directory.
@@ -167,24 +167,27 @@ Execute one of the following commands to run the test workflow on your local
 machine:
 
-#### Test workflow on local machine with **Singularity**:
+##### Test workflow on local machine with **Singularity**:
+
 ```bash
 bash tests/test_integration_workflow/test.local.sh
 ```
-#### Test workflow on local machine with **Conda**:
+
+##### Test workflow on local machine with **Conda**:
+
 ```bash
 bash tests/test_integration_workflow_with_conda/test.local.sh
 ```
 
 Execute one of the following commands to run the test workflow on a
 [Slurm][slurm]-managed high-performance computing (HPC) cluster:
 
- #### Test workflow with **Singularity**:
+##### Test workflow with **Singularity**:
 
 ```bash
 bash tests/test_integration_workflow/test.slurm.sh
 ```
 
-#### Test workflow with **Conda**:
+##### Test workflow with **Conda**:
 
 ```bash
 bash tests/test_integration_workflow_with_conda/test.slurm.sh
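A patch in this mail format can be applied with ordinary git tooling. A minimal, self-contained sketch of that round trip, assuming git is available; the demo repository, file contents, and `fix.patch` name below are hypothetical illustrations, not part of the ZARP patch itself:

```shell
set -e
# Hypothetical throwaway repository to demonstrate the workflow
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
g() { git -c user.email=demo@example.org -c user.name=demo "$@"; }
mkdir docs && printf 'conda or mamba\n' > docs/README.md
git add docs && g commit -q -m 'add readme'
# Make an edit and export it in the same "From ... [PATCH] ..." mail format
printf 'Conda or Mamba\n' > docs/README.md
g commit -q -am 'Fix small rendering issues'
git format-patch -1 --stdout > ../fix.patch
git reset -q --hard HEAD~1        # pretend we never had the fix
# Dry-run first, then apply while preserving author, date, and message
git apply --check ../fix.patch
g am -q ../fix.patch
cat docs/README.md                # prints: Conda or Mamba
```

`git apply --check` validates that every hunk still matches its context lines before anything is committed, which is why flattened or whitespace-damaged patches like the one reconstructed above fail to apply until their line structure is restored.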