Bactopia Tool - ariba

The ariba module uses ARIBA (Antimicrobial Resistance Identification By Assembly) to rapidly identify genes in a reference database by creating local assemblies.

Example Usage

bactopia --wf ariba \
  --bactopia /path/to/your/bactopia/results \
  --include includes.txt

Output Overview

Below is the default output structure for the ariba tool. Where possible, the file descriptions below were modified from the tool's own description.

<BACTOPIA_DIR>
├── <SAMPLE_NAME>
│   └── tools
│       └── ariba
│           └── card
│               ├── <SAMPLE_NAME>-report.tsv
│               ├── <SAMPLE_NAME>-summary.csv
│               ├── assembled_genes.fa.gz
│               ├── assembled_seqs.fa.gz
│               ├── assemblies.fa.gz
│               ├── debug.report.tsv
│               ├── log.clusters.gz
│               ├── logs
│               │   ├── nf-ariba.{begin,err,log,out,run,sh,trace}
│               │   └── versions.yml
│               └── version_info.txt
└── bactopia-runs
    └── ariba-<TIMESTAMP>
        ├── merged-results
        │   ├── card-report.tsv
        │   ├── card-summary.csv
        │   └── logs
        │       ├── card-report
        │       │   ├── nf-merged-results.{begin,err,log,out,run,sh,trace}
        │       │   └── versions.yml
        │       └── card-summary
        │           ├── nf-merged-results.{begin,err,log,out,run,sh,trace}
        │           └── versions.yml
        └── nf-reports
            ├── ariba-dag.dot
            ├── ariba-report.html
            ├── ariba-timeline.html
            └── ariba-trace.txt

Directory structure might be different

ariba is available as a standalone Bactopia Tool, as well as from the main Bactopia workflow (e.g. through Staphopia or Merlin). If executed from Bactopia, the ariba directory structure might be different, but the output descriptions below still apply.

Results

Merged Results

Below are results that are concatenated into a single file.

| Extension | Description |
| --- | --- |
| `-report.tsv` | A merged TSV file with ARIBA results from all samples |
| `-summary.csv` | A merged CSV file created with `ariba summary` |
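As a quick sketch of working with the merged report, the snippet below tallies hits per sample with standard shell tools. The column layout and sample names here are illustrative only; check the header line of your own `card-report.tsv` for the real column order.

```shell
# Build a tiny mock of a merged report (tab-separated; columns are hypothetical)
printf 'sample\tref_name\tgene\tpc_ident\n' > card-report.tsv
printf 'sample01\ttetM\t1\t99.8\n' >> card-report.tsv
printf 'sample01\termB\t1\t100.0\n' >> card-report.tsv
printf 'sample02\ttetM\t1\t98.5\n' >> card-report.tsv

# Count reported hits per sample, skipping the header line
# (here: 2 for sample01, 1 for sample02)
tail -n +2 card-report.tsv | cut -f1 | sort | uniq -c
```

The same `cut`/`sort`/`uniq` pattern works on the real merged report once you substitute the column holding the sample name.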

Ariba

Below is a description of the per-sample results from ARIBA.

| Filename | Description |
| --- | --- |
| `<SAMPLE_NAME>-report.tsv` | A report of the ARIBA analysis results |
| `<SAMPLE_NAME>-summary.csv` | A summary of the report created using `ariba summary` |
| `assembled_genes.fa.gz` | All of the assembled genes |
| `assembled_seqs.fa.gz` | All of the assembled sequences that match the reference |
| `assemblies.fa.gz` | All of the raw local assemblies |
| `debug.report.tsv` | Contains the results from `report.tsv` in addition to synonymous mutations |
| `log.clusters.gz` | A log of the ARIBA analysis |
| `version_info.txt` | Contains info on the versions of ARIBA and its dependencies |

Audit Trail

Below are files that can assist you in understanding which parameters and program versions were used.

Logs

Each process that is executed will have a folder named logs. In this folder are helpful files for you to review if the need ever arises.

| Extension | Description |
| --- | --- |
| `.begin` | An empty file used to designate that the process started |
| `.err` | Contains STDERR outputs from the process |
| `.log` | Contains both STDERR and STDOUT outputs from the process |
| `.out` | Contains STDOUT outputs from the process |
| `.run` | The script Nextflow uses to stage/unstage files and queue processes based on the given profile |
| `.sh` | The script executed by Bash for the process |
| `.trace` | The Nextflow trace report for the process |
| `versions.yml` | A YAML-formatted file with program versions |
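As a sketch, tool versions can be pulled out of a `versions.yml` with standard shell tools. The layout and process name below are illustrative; the real files live under each process's `logs/` folder and use the process and tool names from your run.

```shell
# Illustrative versions.yml (process name "ARIBA_RUN" is an assumption)
cat > versions.yml <<'EOF'
"ARIBA_RUN":
    ariba: 2.14.6
EOF

# List the indented "tool: version" pairs
grep -E '^[[:space:]]+[A-Za-z0-9_.-]+:' versions.yml | sed 's/^[[:space:]]*//'
# → ariba: 2.14.6
```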

Nextflow Reports

These Nextflow reports provide a great summary of your run. They can be used to optimize resource usage and estimate expected costs when using cloud platforms.

| Filename | Description |
| --- | --- |
| `ariba-dag.dot` | The Nextflow DAG visualisation |
| `ariba-report.html` | The Nextflow Execution Report |
| `ariba-timeline.html` | The Nextflow Timeline Report |
| `ariba-trace.txt` | The Nextflow Trace report |

Program Versions

At the end of each run, the versions.yml files are merged into the files below.

| Filename | Description |
| --- | --- |
| `software_versions.yml` | A complete list of programs and versions used by each process |
| `software_versions_mqc.yml` | A complete list of programs and versions formatted for MultiQC |

Parameters

Required Parameters

Define where the pipeline should find input data and save output data.

| Parameter | Description | Type |
| --- | --- | --- |
| `--bactopia` | The path to bactopia results to use as inputs | string |

Filtering Parameters

Use these parameters to specify which samples to include or exclude.

| Parameter | Description | Type |
| --- | --- | --- |
| `--include` | A text file containing sample names (one per line) to include in the analysis | string |
| `--exclude` | A text file containing sample names (one per line) to exclude from the analysis | string |
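For example, an include file is just one sample name per line. The sample names below are hypothetical; use the names from your own Bactopia run.

```shell
# Hypothetical sample names, one per line
printf 'sample01\nsample02\nsample03\n' > includes.txt

# Sanity-check the file: should report 3 lines, one per sample
wc -l < includes.txt
```

You would then pass the file with `--include includes.txt`, as in the usage example above; `--exclude` takes a file in the same format.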

Ariba Run Parameters

| Parameter | Description | Type |
| --- | --- | --- |
| `--ariba_db` | A database to query; if unavailable, it will be downloaded to the path given by `--datasets_cache` | string |
| `--nucmer_min_id` | Minimum alignment identity (`delta-filter -i`) | integer (Default: 90) |
| `--nucmer_min_len` | Minimum alignment length (`delta-filter -l`) | integer (Default: 20) |
| `--nucmer_breaklen` | Value to use for `-breaklen` when running nucmer | integer (Default: 200) |
| `--assembly_cov` | Target read coverage when sampling reads for assembly | integer (Default: 50) |
| `--min_scaff_depth` | Minimum number of read pairs needed as evidence for a scaffold link between two contigs | integer (Default: 10) |
| `--spades_options` | Extra options to pass to the SPAdes assembler | string |
| `--assembled_threshold` | If the proportion of the gene assembled (regardless of into how many contigs) is at least this value, the flag `gene_assembled` is set | number (Default: 0.95) |
| `--gene_nt_extend` | Maximum number of nucleotides to extend the ends of gene matches to look for start/stop codons | integer (Default: 30) |
| `--unique_threshold` | If the proportion of bases in the gene assembled more than once is <= this value, the flag `unique_contig` is set | number (Default: 0.03) |
| `--ariba_no_clean` | Do not clean up intermediate files created by ARIBA | boolean |
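Putting a few of these together, a run against the CARD database with a stricter nucmer identity cutoff and deeper assembly sampling might look like the sketch below. The path and values are illustrative, not recommendations, and `--only-assembler` is just one example of an option SPAdes accepts.

```shell
bactopia --wf ariba \
  --bactopia /path/to/your/bactopia/results \
  --ariba_db card \
  --nucmer_min_id 95 \
  --assembly_cov 100 \
  --spades_options "--only-assembler"
```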

Optional Parameters

These optional parameters can be useful in certain settings.

| Parameter | Description | Type |
| --- | --- | --- |
| `--outdir` | Base directory to write results to | string (Default: ./) |
| `--run_name` | Name of the directory to hold results | string (Default: bactopia) |
| `--skip_compression` | Output files will not be compressed | boolean |
| `--datasets` | The path to cache datasets to | string |
| `--keep_all_files` | Keeps all analysis files created | boolean |

Max Job Request Parameters

Set the top limit for requested resources for any single job.

| Parameter | Description | Type |
| --- | --- | --- |
| `--max_retry` | Maximum times to retry a process before allowing it to fail | integer (Default: 3) |
| `--max_cpus` | Maximum number of CPUs that can be requested for any single job | integer (Default: 4) |
| `--max_memory` | Maximum amount of memory (in GB) that can be requested for any single job | integer (Default: 32) |
| `--max_time` | Maximum amount of time (in minutes) that can be requested for any single job | integer (Default: 120) |
| `--max_downloads` | Maximum number of samples to download at a time | integer (Default: 3) |

Nextflow Configuration Parameters

Parameters to fine-tune your Nextflow setup.

| Parameter | Description | Type |
| --- | --- | --- |
| `--nfconfig` | A Nextflow-compatible config file for custom profiles; loaded last, so it will overwrite existing variables if set | string |
| `--publish_dir_mode` | Method used to save pipeline results to the output directory | string (Default: copy) |
| `--infodir` | Directory to keep pipeline Nextflow logs and reports | string (Default: `${params.outdir}/pipeline_info`) |
| `--force` | Nextflow will overwrite existing output files | boolean |
| `--cleanup_workdir` | After Bactopia is successfully executed, the work directory will be deleted | boolean |
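For instance, a minimal custom config passed via `--nfconfig` might raise the resources for a single process. This is only a sketch: the `withName` selector shown is an assumption, so use the process names that appear in your own run's logs.

```
// custom.config -- loaded last, so these values win over the defaults
process {
    withName: 'ARIBA_ANALYSIS' {
        cpus   = 4
        memory = '16 GB'
    }
}
```

It would then be supplied as `bactopia --wf ariba ... --nfconfig custom.config`.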

Nextflow Profile Parameters

Parameters to fine-tune your Nextflow setup.

| Parameter | Description | Type |
| --- | --- | --- |
| `--condadir` | Directory Nextflow should use for Conda environments | string |
| `--registry` | Docker registry to pull containers from | string (Default: dockerhub) |
| `--datasets_cache` | Directory where downloaded datasets should be stored | string (Default: `<BACTOPIA_DIR>/data/datasets`) |
| `--singularity_cache` | Directory where remote Singularity images are stored | string |
| `--singularity_pull_docker_container` | Instead of directly downloading Singularity images, force the workflow to pull and convert Docker containers | boolean |
| `--force_rebuild` | Force overwrite of existing pre-built environments | boolean |
| `--queue` | Comma-separated name of the queue(s) to be used by a job scheduler (e.g. AWS Batch or SLURM) | string (Default: general,high-memory) |
| `--cluster_opts` | Additional options to pass to the executor (e.g. SLURM: `--account=my_acct_name`) | string |
| `--disable_scratch` | All intermediate files created on worker nodes will be transferred to the head node | boolean |

Helpful Parameters

Uncommonly used parameters that might be useful.

| Parameter | Description | Type |
| --- | --- | --- |
| `--monochrome_logs` | Do not use coloured log outputs | boolean |
| `--nfdir` | Print the directory Nextflow has pulled Bactopia to | boolean |
| `--sleep_time` | The amount of time (in seconds) Nextflow will wait after setting up datasets before execution | integer (Default: 5) |
| `--validate_params` | Whether to validate parameters against the schema at runtime | boolean (Default: true) |
| `--help` | Display help text | boolean |
| `--wf` | Specify which workflow or Bactopia Tool to execute | string (Default: bactopia) |
| `--list_wfs` | List the available workflows and Bactopia Tools to use with `--wf` | boolean |
| `--show_hidden_params` | Show all params when using `--help` | boolean |
| `--help_all` | An alias for `--help --show_hidden_params` | boolean |
| `--version` | Display version text | boolean |

Citations

If you use Bactopia and ariba in your analysis, please cite the following.