
SAMBA: Standardized and Automated MetaBarcoding Analyses workflow.

Badges: SAMBA version | Nextflow | run with Conda | run with Docker | run with Singularity | SeBiMER Docker

[DEPRECATED] Please use samba v4 instead: https://gitlab.ifremer.fr/bioinfo/workflows/samba

Introduction

SAMBA is a FAIR, scalable Nextflow workflow that integrates state-of-the-art bioinformatics and statistical methods into a single tool for reproducible eDNA analyses. SAMBA starts by verifying the integrity of the raw reads and metadata. All bioinformatics processing then follows commonly used procedures (QIIME 2 and DADA2), with additional steps relying on dbOTU3 and microDecon to build high-quality ASV count tables. Extended statistical analyses are also performed. Finally, SAMBA produces a complete, dynamic HTML report including the resources used, the commands executed, intermediate results, statistical analyses and figures.

The SAMBA pipeline can run tasks across multiple compute infrastructures in a very portable manner. It comes with Singularity containers, making installation trivial and results highly reproducible.

Quick Start

i. Install Nextflow

ii. Install either Docker or Singularity for full pipeline reproducibility (please only use Conda as a last resort)

iii. Download the pipeline and test it on a minimal dataset with a single command

  • for the short-read test:
nextflow run main.nf -profile shortreadstest,<docker/singularity/conda>
  • for the long-read test:
nextflow run main.nf -profile longreadstest,<docker/singularity/conda>

To use samba on a computing cluster, you need to provide a configuration file for your system. For some institutes, such a file already exists and is referenced on nf-core/configs. If so, simply download your institute's custom config file and add -c <institute_config_file> to your command. This will enable either Docker or Singularity and set the appropriate execution settings for your local compute environment.

iv. Start running your own analysis!

nextflow run main.nf -profile <docker/singularity/conda>,custom [-c <institute_config_file>]
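Putting the quick-start steps together, here is a minimal end-to-end sketch. It assumes the nextflow launcher ends up on your PATH; the my_institute.config file name and the --input/--outdir parameters are illustrative placeholders, not necessarily the pipeline's real options (see the usage docs below), while -profile, -c and -resume are standard Nextflow flags.

# Install Nextflow (official installer; requires Java)
curl -s https://get.nextflow.io | bash

# Fetch the pipeline from this mirror repository
git clone https://github.com/ifremer-bioinformatics/samba.git
cd samba

# Sanity check with the bundled short-read test profile
nextflow run main.nf -profile shortreadstest,singularity

# Run your own analysis; -resume reuses cached results if a run is interrupted
nextflow run main.nf \
  -profile singularity,custom \
  -c my_institute.config \
  --input my_manifest.tsv \
  --outdir results \
  -resume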

See usage docs for a complete description of all of the options available when running the pipeline.

Documentation

The samba workflow comes with documentation about the pipeline, found in the docs/ directory:

  1. Introduction
  2. Pipeline installation
  3. Running the pipeline
  4. Output and how to interpret the results
  5. Troubleshooting

Here is an overview of the steps available in the samba pipeline:

SAMBA Workflow

At the end of the samba pipeline execution, you get an interactive HTML report that looks like this one:

SAMBA report

A full description of the report is available in the samba pipeline documentation.

Credits

samba is written by SeBiMER, the Bioinformatics Core Facility of IFREMER.

Contributions

We welcome contributions to the pipeline. If you would like to contribute, you can do one of the following:

  • Use issues to submit your questions
  • Fork the project, do your developments and submit a pull request
  • Contact us (see email below)

Support

For further information or help, don't hesitate to get in touch with the samba developers:

samba email

Citation

References

Reference databases (SILVA v132, PR2, UNITE) are available on the IFREMER FTP at ftp://ftp.ifremer.fr/ifremer/dataref/bioinfo/sebimer/sequence-set/SAMBA/2019.10.
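As a minimal sketch (the exact archive names on the FTP are not listed here, so the command simply mirrors the whole directory), the databases can be retrieved with:

# -r: recursive download, -np: do not ascend to the parent directory
wget -r -np ftp://ftp.ifremer.fr/ifremer/dataref/bioinfo/sebimer/sequence-set/SAMBA/2019.10/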

Training dataset from the QIIME 2 Atacama soils tutorial (https://docs.qiime2.org/2019.7/tutorials/atacama-soils) and its associated publication.