Nextflow pipeline to combine nanoseq and ANNEXA for long-read RNASeq data

IGDRion/ANNEXseq

Introduction

ANNEXseq is a bioinformatics analysis pipeline for long-read RNA-seq (LR-RNAseq) data. It performs basecalling, demultiplexing, QC and alignment of Nanopore RNA data with the nf-core/nanoseq pipeline, then reconstructs and quantifies both known and novel genes and isoforms with the ANNEXA pipeline.

Pipeline Summary

Nanoseq

The nanoseq pipeline takes raw Nanopore sequencing data as input and performs various optional pre-processing steps including demultiplexing and cleaning, followed by quality control checks and alignment using either GraphMap2 or minimap2 aligners.

Mapping metrics are obtained and bigWig and bigBed coverage tracks are created for visualization. For DNA samples, short and structural variant calling is performed using medaka, deepvariant, pepper_margin_deepvariant, sniffles or cutesv.

For RNA samples, transcript reconstruction and quantification is performed using either bambu or StringTie2, followed by differential expression analysis using DESeq2 and/or DEXSeq.

Additional analyses including RNA modification detection and RNA fusion detection are performed using xpore, m6anet and JAFFAL.

MultiQC is used to present QC for raw read and alignment results.

Annexa

The ANNEXA pipeline takes a reference genome, a reference annotation, and mapping files as input, and produces an extended annotation that distinguishes novel protein-coding (mRNA) genes from novel long non-coding RNA (lncRNA) genes.

The pipeline performs transcriptome reconstruction and quantification, novel classification, and filtering based on multiple features.

The output is a final gtf file with a 3-level structure (gene, transcript, exon), and graphical outputs containing information about known and novel gene/transcript models such as length, number of spliced transcripts, and normalized expression levels.

The pipeline also performs quality control and an optional gene body coverage check.

Pipeline Outline

  1. Nanoseq

    1. Demultiplexing (qcat; optional)
    2. Raw read cleaning (NanoLyse; optional)
    3. Raw read QC (NanoPlot, FastQC)
    4. Alignment (GraphMap2 or minimap2)
      • Both aligners are capable of performing unspliced and spliced alignment. Sensible defaults will be applied automatically based on a combination of the input data and user-specified parameters
      • Each sample can be mapped to its own reference genome if multiplexed in this way
      • Convert SAM to co-ordinate sorted BAM and obtain mapping metrics (samtools)
    5. Create bigWig (BEDTools, bedGraphToBigWig) and bigBed (BEDTools, bedToBigBed) coverage tracks for visualisation
    6. DNA specific downstream analysis:
      • Short and structural variant calling (medaka, deepvariant, pepper_margin_deepvariant, sniffles or cutesv)
    7. RNA specific downstream analysis:
      • Transcript reconstruction and quantification (bambu or StringTie2)
      • bambu performs both transcript reconstruction and quantification
      • When StringTie2 is chosen, each sample is processed individually and the results are combined; featureCounts is then used for both gene and transcript quantification.
      • Differential expression analysis (DESeq2 and/or DEXSeq)
      • RNA modification detection (xpore and/or m6anet)
      • RNA fusion detection (JAFFAL)
    8. Present QC for raw read and alignment results (MultiQC)
  2. Annexa

    1. Check if the input annotation contains all the information needed.
    2. Transcriptome reconstruction and quantification with bambu.
    3. Novel classification with FEELnc.
    4. Retrieve information from input annotation and format final gtf with 3 structure levels:
      • gene
      • transcript
      • exon.
    5. Filter novel transcripts based on bambu and/or TransforKmers Novel Discovery Rates.
    6. Perform a quality control (see qc) of both the full and filtered extended annotations (see example).
    7. Optional: Check gene body coverage with RSeQC.

Functionality Overview

A graphical overview of suggested routes through the pipeline depending on the desired output can be seen below.

nf-core/nanoseq metro map

Quick Start

  1. Install Nextflow (>=22.10.1)

  2. Install any of Docker, Singularity (you can follow this tutorial), Podman, Shifter or Charliecloud for full pipeline reproducibility. Conda can be used both to install Nextflow itself and to manage software within pipelines, but please only use it within pipelines as a last resort (see docs).

  3. Test the pipelines on a minimal dataset with a couple of commands:

    nextflow run nf-core/nanoseq -profile test,YOURPROFILE
    nextflow run IGDRion/ANNEXA -profile test,conda

    Note that some form of configuration will be needed so that Nextflow knows how to fetch the required software. This is usually done in the form of a config profile (YOURPROFILE in the example command above). You can chain multiple config profiles in a comma-separated string.

    • The pipeline comes with config profiles called docker, singularity, podman, shifter, charliecloud and conda which instruct the pipeline to use the named tool for software management. For example, -profile test,docker.
    • Please check nf-core/configs to see if a custom config file to run nf-core pipelines already exists for your Institute. If so, you can simply use -profile <institute> in your command. This will enable either docker or singularity and set the appropriate execution settings for your local compute environment.
    • If you are using singularity and are persistently observing issues downloading Singularity images directly due to timeout or network issues, then you can use the --singularity_pull_docker_container parameter to pull and convert the Docker image instead. Alternatively, you can use the nf-core download command to download images first, before running the pipeline. Setting the NXF_SINGULARITY_CACHEDIR or singularity.cacheDir Nextflow options enables you to store and re-use the images from a central location for future pipeline runs.
    • If you are using conda, it is highly recommended to use the NXF_CONDA_CACHEDIR or conda.cacheDir settings to store the environments in a central location for future pipeline runs.
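For example, the cache locations mentioned above can be set once in your shell profile; the paths below are placeholders for a shared location at your site:

```shell
# Re-use Singularity images and Conda environments across pipeline runs
# by caching them in a central location (example paths, adjust to your site).
export NXF_SINGULARITY_CACHEDIR=/shared/cache/singularity
export NXF_CONDA_CACHEDIR=/shared/cache/conda
```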
  4. Start running your own analysis!

Documentation

Nanoseq

The nf-core/nanoseq pipeline comes with documentation about the pipeline usage, parameters and output.

nextflow run nf-core/nanoseq \
    --input samplesheet.csv \
    --protocol DNA \
    --barcode_kit SQK-PBK004 \
    -profile <docker/singularity/podman/institute>

See usage docs for all of the available options when running the pipeline.

An example input samplesheet for performing both basecalling and demultiplexing can be found here.
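As an illustration only, a basecalling + demultiplexing samplesheet might look like the sketch below. The column set (group, replicate, barcode, input_file, fasta, gtf) is an assumption based on nanoseq's usage docs and varies between nanoseq versions, so treat the linked example as authoritative:

```csv
group,replicate,barcode,input_file,fasta,gtf
WT,1,1,./fast5/,hg38,
WT,2,2,./fast5/,hg38,
```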

Annexa

Run ANNEXA on your own data (replace input, gtf and fa with the paths to your files).

nextflow run IGDRion/ANNEXA \
    -profile {test,docker,singularity,conda,slurm} \
    --input samples.txt \
    --gtf /path/to/ref.gtf \
    --fa /path/to/ref.fa

The input parameter takes a file listing the paths of the BAM files to analyze, one path per line (see example below):

/path/to/1.bam
/path/to/2.bam
/path/to/3.bam
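One way to build this file is simply to list the BAMs in a directory. The snippet below is a self-contained sketch: the mkdir/touch lines only create empty placeholder files standing in for your real alignment output.

```shell
# Create placeholder BAM files for illustration only.
mkdir -p demo_bams
touch demo_bams/1.bam demo_bams/2.bam demo_bams/3.bam

# List one absolute BAM path per line, as ANNEXA's --input expects.
ls "$PWD"/demo_bams/*.bam > samples.txt
cat samples.txt
```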

Options

Required:
--input             : Path to file listing paths to bam files.
--fa                : Path to reference genome.
--gtf               : Path to reference annotation.


Optional:
-profile test       : Run annexa on toy dataset.
-profile slurm      : Run annexa on slurm executor.
-profile singularity: Run annexa in singularity container.
-profile conda      : Run annexa in conda environment.
-profile docker     : Run annexa in docker container.

--filter            : Whether to perform the filtering step (false by default).
--tfkmers_tokenizer : Path to TransforKmers tokenizer. Required if --filter is activated.
--tfkmers_model     : Path to TransforKmers model. Required if --filter is activated.
--bambu_threshold   : bambu NDR threshold below which novel transcripts are retained.
--tfkmers_threshold : TransforKmers NDR threshold below which novel transcripts are retained.
--operation         : Operation used to retain novel transcripts: "union" retains transcripts validated by either bambu or TransforKmers, "intersection" retains those validated by both.

--withGeneCoverage  : Run RSeQC gene body coverage (can be slow depending on annotation and BAM sizes; false by default).

--maxCpu            : Maximum number of CPU threads used by ANNEXA (8 by default).
--maxMemory         : Maximum memory used by ANNEXA (40GB by default).

If the filter argument is set to true, the TransforKmers model and tokenizer paths have to be provided. They can either be downloaded from the TransforKmers official repository or trained in advance on your own data.
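As a sketch, a filtering-enabled run combining the options above might look like this (all paths are placeholders):

```
nextflow run IGDRion/ANNEXA \
    -profile singularity \
    --input samples.txt \
    --gtf /path/to/ref.gtf \
    --fa /path/to/ref.fa \
    --filter \
    --tfkmers_model /path/to/model/ \
    --tfkmers_tokenizer /path/to/tokenizer/ \
    --operation union
```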

Filtering step

By activating the filtering step (--filter), ANNEXA filters the generated extended annotation using two methods:

  1. By using the NDR proposed by bambu. This threshold integrates several sources of information, such as sequence profile, structure (mono-exonic, etc.) and quantification (number of samples, expression levels). Each transcript with an NDR below the classification threshold is retained by ANNEXA.

  2. By analysing the TSS of each new transcript using the TransforKmers (deep-learning) tool. Each TSS validated below a certain threshold will be retained. We already provide 2 trained models for filtering TSS with TransforKmers.

To use them, extract the zip, and point --tfkmers_model and --tfkmers_tokenizer to the subdirectories.

The filtered annotation can be the union of these 2 tools, i.e. all the transcripts validated by one or both of these tools; or the intersection, i.e. the transcripts validated by both tools.
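The two operations can be illustrated with a toy example using plain shell set operations; the transcript IDs and file names below are made up for the demo:

```shell
# Toy lists of novel transcripts validated by each tool (kept sorted).
printf 'tx1\ntx2\ntx3\n' > bambu_ok.txt
printf 'tx2\ntx3\ntx4\n' > tfkmers_ok.txt

# union: transcripts validated by at least one tool
sort -u bambu_ok.txt tfkmers_ok.txt > union.txt

# intersection: transcripts validated by both tools (comm needs sorted input)
comm -12 bambu_ok.txt tfkmers_ok.txt > intersection.txt
```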

At the end, the QC steps are performed both on the full and filtered extended annotations.

Testing

The nanoseq pipeline is tested through automated continuous integration. See the nf-core/nanoseq repository.

The ANNEXA pipeline has been tested with reference annotation from Ensembl and NCBI-RefSeq.

Credits

ANNEXseq is written by @enoracrl, @vlebars, @atoffano, @Aurore-B, @tderrien from the Institute of Genetics and Development of Rennes.

nf-core/nanoseq was originally written by Chelsea Sawyer and Harshil Patel from The Bioinformatics & Biostatistics Group for use at The Francis Crick Institute, London. Other primary contributors include Laura Wratten, Ying Chen, Yuk Kei Wan and Jonathan Goeke from the Genome Institute of Singapore, Christopher Hakkaart from Institute of Medical Genetics and Applied Genomics, Germany, and Johannes Alneberg and Franziska Bonath from SciLifeLab, Sweden. Many thanks to others who have helped out along the way too, including (but not limited to): @crickbabs, @AnnaSyme, @ekushele.

ANNEXA was originally written by @mlorthiois, @tderrien, @Aurore-B from the Institute of Genetics and Development of Rennes.

Citations

An extensive list of references for the tools used by the nanoseq pipeline can be found in the CITATIONS.md file.

You can cite the nf-core publication as follows:

The nf-core framework for community-curated bioinformatics pipelines.

Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.

Nat Biotechnol. 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.