
Commit

Fix small rendering issues
fgypas committed Dec 22, 2024
1 parent d7af25e commit e6961e7
Showing 2 changed files with 14 additions and 11 deletions.
4 changes: 2 additions & 2 deletions docs/README.md
@@ -14,11 +14,11 @@ The workflow is developed in [Snakemake][snakemake], a widely used workflow mana

## How does it work?

-ZARP requires conda or mamba to install the basic dependencies. Each individual step of the workflow run either in its own Apptainer (Singularity) container or in its own Conda virtual environment.
+ZARP requires Conda or Mamba to install the basic dependencies. Each individual step of the workflow runs either in its own Apptainer (Singularity) container or in its own Conda virtual environment.

Once the installation is complete, you fill in a [config.yaml](https://github.com/zavolanlab/zarp/blob/dev/tests/input_files/config.yaml) file with parameters and a [samples.tsv](https://github.com/zavolanlab/zarp/blob/dev/tests/input_files/samples.tsv) file with sample specific information. You can easily trigger ZARP by making a call to snakemake with the appropriate parameters.

-The pipeline can be executed in different systems or HPC clusters. ZARP generates multiple output files that help you Quality Control (QC) your data and proceed with downstream analyses. Apart from running the main ZARP workflow, you can also run a second pipeline that pulls sequencing sample data from the Sequence Read Archive (SRA), and a third pipeline that populates a file with the samples and infers missing metadata.
+The pipeline can be executed in different systems or High Performance Computing (HPC) clusters. ZARP generates multiple output files that help you Quality Control (QC) your data and proceed with downstream analyses. Apart from running the main ZARP workflow, you can also run a second pipeline that pulls sequencing sample data from the Sequence Read Archive (SRA), and a third pipeline that populates a file with the samples and infers missing metadata.
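
For orientation, triggering a ZARP run as described above amounts to activating the installed environment and calling Snakemake with the filled-in configuration file. The sketch below is an illustrative assumption rather than the verbatim command from the ZARP documentation: the config path, the core count, and the choice of `--use-singularity` (versus `--use-conda`) are placeholders to adapt to your setup.

```bash
# Illustrative sketch only -- adapt paths and flags to your own setup.
# Activate the environment created during installation, then launch
# Snakemake from the ZARP repository root with your filled-in config file.
conda activate zarp
snakemake \
    --configfile path/to/config.yaml \
    --use-singularity \
    --cores 4
```

On a Slurm-managed cluster the same call would typically be dispatched through an execution profile or the scheduler rather than run locally.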

## How to cite

21 changes: 12 additions & 9 deletions docs/guides/installation.md
@@ -125,7 +125,7 @@ After installing Singularity, install the remaining dependencies with:
mamba env create -f install/environment.yml
```

-### As root user on Linux
+**As root user on Linux**

If you have a Linux machine, as well as root privileges, (e.g., if you plan to
run the workflow on your own computer), you can execute the following command
@@ -135,17 +135,17 @@ to include Singularity in the Conda environment:
mamba env update -f install/environment.root.yml
```

-## 5. Activate ZARP environment
+### 5. Activate ZARP environment

Activate the Conda environment with:

```bash
conda activate zarp
```

-## 6. Optional installation steps
+### 6. Optional installation steps

-### Install test dependencies
+#### Install test dependencies

Most tests have additional dependencies. If you are planning to run tests, you
will need to install these by executing the following command _in your active
@@ -155,7 +155,7 @@ Conda environment_:
mamba env update -f install/environment.dev.yml
```

-### Run installation tests
+#### Run installation tests

We have prepared several tests to check the integrity of the workflow and its
components. These can be found in subdirectories of the `tests/` directory.
@@ -167,24 +167,27 @@ Execute one of the following commands to run the test workflow
on your local machine:


-#### Test workflow on local machine with **Singularity**:
+##### Test workflow on local machine with **Singularity**:

```bash
bash tests/test_integration_workflow/test.local.sh
```
-#### Test workflow on local machine with **Conda**:

+##### Test workflow on local machine with **Conda**:

```bash
bash tests/test_integration_workflow_with_conda/test.local.sh
```
Execute one of the following commands to run the test workflow
on a [Slurm][slurm]-managed high-performance computing (HPC) cluster:

-#### Test workflow with **Singularity**:
+##### Test workflow with **Singularity**:

```bash
bash tests/test_integration_workflow/test.slurm.sh
```

-#### Test workflow with **Conda**:
+##### Test workflow with **Conda**:

```bash
bash tests/test_integration_workflow_with_conda/test.slurm.sh
```
