Introduction

The tximport package imports and summarizes transcript-level abundance estimates for transcript- and gene-level analysis with Bioconductor packages, such as edgeR, DESeq2, and limma-voom. The motivation and methods for the functions provided by tximport are described in the following article (Soneson, Love, and Robinson 2015):

Charlotte Soneson, Michael I. Love, Mark D. Robinson (2015): Differential analyses for RNA-seq: transcript-level estimates improve gene-level inferences. F1000Research http://dx.doi.org/10.12688/f1000research.7563.1

In particular, the tximport pipeline offers the following benefits: (i) this approach corrects for potential changes in gene length across samples (e.g. from differential isoform usage) (Trapnell et al. 2013), (ii) some of the upstream quantification methods (Salmon, Sailfish, kallisto) are substantially faster and require less memory and disk space than alignment-based methods that require creation and storage of BAM files, and (iii) it is possible to avoid discarding those fragments that can align to multiple genes with homologous sequence, thus increasing sensitivity (Robert and Watson 2015).

Note: another Bioconductor package, tximeta (Love et al. 2020), extends tximport, offering the same functionality, plus the additional benefit of automatic addition of annotation metadata for commonly used transcriptomes (GENCODE, Ensembl, RefSeq for human and mouse). See the tximeta package vignette for more details. Whereas tximport outputs a simple list of matrices, tximeta will output a SummarizedExperiment object with appropriate GRanges added if the transcriptome is from one of the sources above for human and mouse. tximeta also offers easy conversion to data objects used by edgeR and limma with the makeDGEList function.

Import transcript-level estimates

We begin by locating some prepared files that contain transcript abundance estimates for six samples, from the tximportData package. The tximport pipeline will be nearly identical for the various quantification tools, usually requiring only one change, the type argument. We begin with quantification files generated by the Salmon software, and later show the use of tximport with any of:

  • Salmon (Patro et al. 2017)
  • Alevin (Srivastava et al. 2019)
  • Sailfish (Patro, Mount, and Kingsford 2014)
  • kallisto (Bray et al. 2016)
  • RSEM (Li and Dewey 2011)
  • StringTie (Pertea et al. 2015)

First, we locate the directory containing the files. (Here we use system.file to locate the package directory, but for a typical use, we would just provide a path, e.g. "/path/to/dir".)

library(tximportData)
dir <- system.file("extdata", package = "tximportData")
list.files(dir)
##  [1] "alevin"                  "cufflinks"              
##  [3] "kallisto"                "kallisto_boot"          
##  [5] "refseq"                  "rsem"                   
##  [7] "sailfish"                "salmon"                 
##  [9] "salmon_dm"               "salmon_ec"              
## [11] "salmon_gibbs"            "samples.txt"            
## [13] "samples_extended.txt"    "tx2gene.csv"            
## [15] "tx2gene.ensembl.v87.csv" "tx2gene.gencode.v27.csv"
## [17] "tx2gene_alevin.tsv"

Next, we create a named vector pointing to the quantification files. We will create a vector of filenames first by reading in a table that contains the sample IDs, and then combining this with dir and "quant.sf.gz". (We gzipped the quantification files to make the data package smaller; this is not a problem for the R functions we use to import the files.)

samples <- read.table(file.path(dir, "samples.txt"), header = TRUE)
samples
##   pop center                assay    sample experiment       run
## 1 TSI  UNIGE NA20503.1.M_111124_5 ERS185497  ERX163094 ERR188297
## 2 TSI  UNIGE NA20504.1.M_111124_7 ERS185242  ERX162972 ERR188088
## 3 TSI  UNIGE NA20505.1.M_111124_6 ERS185048  ERX163009 ERR188329
## 4 TSI  UNIGE NA20507.1.M_111124_7 ERS185412  ERX163158 ERR188288
## 5 TSI  UNIGE NA20508.1.M_111124_2 ERS185362  ERX163159 ERR188021
## 6 TSI  UNIGE NA20514.1.M_111124_4 ERS185217  ERX163062 ERR188356
files <- file.path(dir, "salmon", samples$run, "quant.sf.gz")
names(files) <- paste0("sample", 1:6)
all(file.exists(files))
## [1] TRUE

Transcripts need to be associated with gene IDs for gene-level summarization. If that information is present in the files, we can skip this step. For Salmon, Sailfish, and kallisto, the files only provide the transcript ID. We therefore first make a data.frame called tx2gene with two columns: 1) transcript ID and 2) gene ID. The column names do not matter, but this column order must be used. The transcript ID must be the same one used in the abundance files.

Creating this tx2gene data.frame can be accomplished from a TxDb object and the select function from the AnnotationDbi package. The following code could be used to construct such a table:

library(TxDb.Hsapiens.UCSC.hg19.knownGene)
txdb <- TxDb.Hsapiens.UCSC.hg19.knownGene
k <- keys(txdb, keytype = "TXNAME")
tx2gene <- select(txdb, k, "GENEID", "TXNAME")

Note: if you are using an Ensembl transcriptome, the easiest way to create the tx2gene data.frame is to use the ensembldb package. The annotation packages can be found by version number, and use the pattern EnsDb.Hsapiens.vXX. The transcripts function can be used with return.type="DataFrame", in order to obtain something like the tx2gene object constructed in the code chunk above. See the ensembldb package vignette for more details.
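
For example, a minimal sketch of this approach (assuming the EnsDb.Hsapiens.v86 annotation package is installed; the column names are the standard ensembldb identifiers):

library(EnsDb.Hsapiens.v86)
txs <- transcripts(EnsDb.Hsapiens.v86, return.type = "DataFrame")
# keep the transcript ID and gene ID columns, in the order required by tximport
tx2gene <- as.data.frame(txs[, c("tx_id", "gene_id")])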

In this case, we’ve used the Gencode v27 CHR transcripts to build our index, and we used makeTxDbFromGFF and code similar to the chunk above to build the tx2gene table. We then read in a pre-constructed tx2gene table:

library(readr)
tx2gene <- read_csv(file.path(dir, "tx2gene.gencode.v27.csv"))
head(tx2gene)
## # A tibble: 6 × 2
##   TXNAME            GENEID           
##   <chr>             <chr>            
## 1 ENST00000456328.2 ENSG00000223972.5
## 2 ENST00000450305.2 ENSG00000223972.5
## 3 ENST00000473358.1 ENSG00000243485.5
## 4 ENST00000469289.1 ENSG00000243485.5
## 5 ENST00000607096.1 ENSG00000284332.1
## 6 ENST00000606857.1 ENSG00000268020.3

The tximport package has a single function for importing transcript-level estimates. The type argument is used to specify what software was used for estimation. A simple list of matrices, "abundance", "counts", and "length", is returned, where the transcript-level information is summarized to the gene level. Typically, abundance is provided by the quantification tools as TPM (transcripts per million), while the counts are estimated counts (possibly fractional), and the "length" matrix contains the effective gene lengths. The "length" matrix can be used to generate an offset matrix for downstream gene-level differential analysis of count matrices, as shown below.

Note: While tximport works without any dependencies, it is significantly faster to read in files using the readr package. If tximport detects that readr is installed, it will use the readr::read_tsv function by default. Since version 1.4, the reader is no longer specified by the user but is chosen automatically based on the availability of the readr package. Advanced users can still customize the import of files using the importer argument.

library(tximport)
txi <- tximport(files, type = "salmon", tx2gene = tx2gene)
names(txi)
## [1] "abundance"           "counts"              "length"             
## [4] "countsFromAbundance"
head(txi$counts)
##                       sample1   sample2    sample3    sample4    sample5
## ENSG00000000003.14    2.58012    2.0000   27.09648    8.48076    5.11217
## ENSG00000000005.5     0.00000    0.0000    0.00000    0.00000    0.00000
## ENSG00000000419.12 1056.99960 1337.9970 1452.99497 1289.00390  920.99960
## ENSG00000000457.13  462.88490  498.8622  560.75380  386.28665  531.57867
## ENSG00000000460.16  633.52972  418.4429 1170.33387  610.58867  915.70085
## ENSG00000000938.12 2616.00250 3697.9989 3110.00380 2690.00120 1897.00030
##                       sample6
## ENSG00000000003.14    5.80674
## ENSG00000000005.5     0.00000
## ENSG00000000419.12 1331.99780
## ENSG00000000457.13  542.45410
## ENSG00000000460.16  637.30845
## ENSG00000000938.12 1911.00020

We could alternatively generate counts from abundances using the argument countsFromAbundance, either scaled up to library size ("scaledTPM") or additionally scaled using the average transcript length over samples and the library size ("lengthScaledTPM"). Using either of these approaches, the counts are not correlated with length, and so the length matrix should not be provided as an offset for downstream analysis packages. As of tximport version 1.10, we have added a new countsFromAbundance option, "dtuScaledTPM". This scaling option is designed for use with txOut=TRUE for differential transcript usage analyses. See ?tximport for details on the various countsFromAbundance options.
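
For example, a minimal sketch of these alternatives (option names as described above; see ?tximport for the precise behavior of each):

# gene-level counts generated from abundances, scaled using the average
# transcript length over samples and the library size
txi.ls <- tximport(files, type = "salmon", tx2gene = tx2gene,
    countsFromAbundance = "lengthScaledTPM")
# transcript-level counts for DTU analyses; tx2gene is still supplied,
# as the scaling uses information about each gene's isoforms (see ?tximport)
txi.dtu <- tximport(files, type = "salmon", tx2gene = tx2gene, txOut = TRUE,
    countsFromAbundance = "dtuScaledTPM")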

We can avoid gene-level summarization by setting txOut=TRUE, giving the original transcript level estimates as a list of matrices.

txi.tx <- tximport(files, type = "salmon", txOut = TRUE)

These matrices can then be summarized afterwards using the function summarizeToGene. This gives the same list of matrices as using txOut=FALSE (the default) in the first tximport call.

txi.sum <- summarizeToGene(txi.tx, tx2gene)
all.equal(txi$counts, txi.sum$counts)
## [1] TRUE

Salmon

Salmon or Sailfish quant.sf files can be imported by setting type to "salmon" or "sailfish".

files <- file.path(dir, "salmon", samples$run, "quant.sf.gz")
names(files) <- paste0("sample", 1:6)
txi.salmon <- tximport(files, type = "salmon", tx2gene = tx2gene)
head(txi.salmon$counts)
##                       sample1   sample2    sample3    sample4    sample5
## ENSG00000000003.14    2.58012    2.0000   27.09648    8.48076    5.11217
## ENSG00000000005.5     0.00000    0.0000    0.00000    0.00000    0.00000
## ENSG00000000419.12 1056.99960 1337.9970 1452.99497 1289.00390  920.99960
## ENSG00000000457.13  462.88490  498.8622  560.75380  386.28665  531.57867
## ENSG00000000460.16  633.52972  418.4429 1170.33387  610.58867  915.70085
## ENSG00000000938.12 2616.00250 3697.9989 3110.00380 2690.00120 1897.00030
##                       sample6
## ENSG00000000003.14    5.80674
## ENSG00000000005.5     0.00000
## ENSG00000000419.12 1331.99780
## ENSG00000000457.13  542.45410
## ENSG00000000460.16  637.30845
## ENSG00000000938.12 1911.00020

We quantified with Sailfish against a different transcriptome, so we need to read in a different tx2gene for this next code chunk.

tx2knownGene <- read_csv(file.path(dir, "tx2gene.csv"))
files <- file.path(dir, "sailfish", samples$run, "quant.sf")
names(files) <- paste0("sample", 1:6)
txi.sailfish <- tximport(files, type = "sailfish", tx2gene = tx2knownGene)
head(txi.sailfish$counts)
##             sample1   sample2    sample3   sample4   sample5   sample6
## A1BG     109.165000 316.13800 110.525000 116.00000  86.26030  76.75400
## A1BG-AS1  85.582100 141.15800 120.811000 153.49000 127.03400 105.32600
## A1CF       9.034322  10.02056   5.020884  13.02060  25.23152  25.07919
## A2M       24.000000   2.00000  21.000000   6.00000  38.00000   8.00000
## A2M-AS1    1.000000   1.00000   1.000000   1.00000   0.00000   0.00000
## A2ML1      3.046160   1.02936   4.074320   1.04871   3.07782   5.11940

Note: for previous versions of Salmon or Sailfish, in which the quant.sf files start with comment lines, it is recommended to specify the importer argument as a function that reads in the lines beginning with the header. For example, using the following code chunk (un-evaluated):

txi <- tximport("quant.sf", type = "none", txOut = TRUE, txIdCol = "Name", abundanceCol = "TPM",
    countsCol = "NumReads", lengthCol = "Length", importer = function(x) read_tsv(x,
        skip = 8))

Salmon with inferential replicates

If inferential replicates (Gibbs or bootstrap samples) are present in expected locations relative to the quant.sf file, tximport will import these as well. Here we demonstrate using Salmon, run with only 5 Gibbs replicates (usually more Gibbs samples, such as 20 or 30, would be useful for capturing inferential uncertainty).

files <- file.path(dir, "salmon_gibbs", samples$run, "quant.sf.gz")
names(files) <- paste0("sample", 1:6)
txi.inf.rep <- tximport(files, type = "salmon", txOut = TRUE)
names(txi.inf.rep)
## [1] "abundance"           "counts"              "infReps"            
## [4] "length"              "countsFromAbundance"
names(txi.inf.rep$infReps)
## [1] "sample1" "sample2" "sample3" "sample4" "sample5" "sample6"
dim(txi.inf.rep$infReps$sample1)
## [1] 178136      5

The tximport arguments varReduce and dropInfReps can be used to summarize the inferential replicates into a single variance per transcript/gene and per sample, or to not import inferential replicates, respectively.
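
A hedged sketch of these two options, using the same files as above:

# summarize inferential replicates to a single variance per feature and sample
txi.var <- tximport(files, type = "salmon", txOut = TRUE, varReduce = TRUE)
# do not import the inferential replicates at all
txi.novar <- tximport(files, type = "salmon", txOut = TRUE, dropInfReps = TRUE)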

kallisto

kallisto abundance.h5 files can be imported by setting type to "kallisto". Note that this requires that you have the Bioconductor package rhdf5 installed. (Here we only demonstrate reading in transcript-level information.)

files <- file.path(dir, "kallisto_boot", samples$run, "abundance.h5")
names(files) <- paste0("sample", 1:6)
txi.kallisto <- tximport(files, type = "kallisto", txOut = TRUE)
head(txi.kallisto$counts)
##                   sample1 sample2 sample3 sample4 sample5 sample6
## ENST00000448914.1       0       0       0       0       0       0
## ENST00000631435.1       0       0       0       0       0       0
## ENST00000632684.1       0       0       0       0       0       0
## ENST00000434970.2       0       0       0       0       0       0
## ENST00000415118.1       0       0       0       0       0       0
## ENST00000633010.1       0       0       0       0       0       0

kallisto with inferential replicates

Because the kallisto_boot directory also has inferential replicate information, it was imported as well.

names(txi.kallisto)
## [1] "abundance"           "counts"              "infReps"            
## [4] "length"              "countsFromAbundance"
names(txi.kallisto$infReps)
## [1] "sample1" "sample2" "sample3" "sample4" "sample5" "sample6"
dim(txi.kallisto$infReps$sample1)
## [1] 178136      5

kallisto with TSV files

kallisto abundance.tsv files can be imported as well, but this is typically slower than the approach above. Note that we add an additional argument in this code chunk, ignoreAfterBar=TRUE. This is because the Gencode transcripts have names like “ENST00000456328.2|ENSG00000223972.5|…”, though our tx2gene table only includes the first “ENST” identifier. We therefore want to split the incoming quantification matrix rownames at the first bar “|”, and use only the first part as the identifier. We didn’t use this option earlier with Salmon, because we used the argument --gencode when running Salmon, which itself does the splitting upstream of tximport. Note that ignoreTxVersion and ignoreAfterBar are only for facilitating the summarization to the gene level.

files <- file.path(dir, "kallisto", samples$run, "abundance.tsv.gz")
names(files) <- paste0("sample", 1:6)
txi.kallisto.tsv <- tximport(files, type = "kallisto", tx2gene = tx2gene, ignoreAfterBar = TRUE)
head(txi.kallisto.tsv$counts)
##                       sample1   sample2    sample3    sample4    sample5
## ENSG00000000003.14    2.59745    2.0000   27.15883    8.40623    5.06463
## ENSG00000000005.5     0.00000    0.0000    0.00000    0.00000    0.00000
## ENSG00000000419.12 1057.00040 1338.0006 1453.00134 1289.00080  921.00030
## ENSG00000000457.13  462.52870  495.4173  564.18460  385.98791  532.84843
## ENSG00000000460.16  630.39723  418.5453 1166.26643  611.51433  915.49327
## ENSG00000000938.12 2618.00130 3697.9998 3110.00650 2691.99670 1896.99980
##                       sample6
## ENSG00000000003.14    5.74125
## ENSG00000000005.5     0.00000
## ENSG00000000419.12 1332.00240
## ENSG00000000457.13  543.53370
## ENSG00000000460.16  636.25649
## ENSG00000000938.12 1909.99870

RSEM

RSEM sample.genes.results files can be imported by setting type to "rsem", and txIn and txOut to FALSE.

files <- file.path(dir, "rsem", samples$run, paste0(samples$run, ".genes.results.gz"))
names(files) <- paste0("sample", 1:6)
txi.rsem <- tximport(files, type = "rsem", txIn = FALSE, txOut = FALSE)
head(txi.rsem$counts)
##                    sample1 sample2 sample3 sample4 sample5 sample6
## ENSG00000000003.14    0.00    2.00   21.00    3.00       0    1.00
## ENSG00000000005.5     0.00    0.00    0.00    0.00       0    0.00
## ENSG00000000419.12 1000.00 1250.00 1377.00 1197.00       0 1254.00
## ENSG00000000457.13  401.48  457.55  511.17  337.52       1  474.38
## ENSG00000000460.16  613.72  407.84 1119.94  556.23       2  603.25
## ENSG00000000938.12 2387.00 3466.00 2904.00 2431.00       3 1720.00

RSEM sample.isoforms.results files can be imported by setting type to "rsem", and txIn and txOut to TRUE.

files <- file.path(dir, "rsem", samples$run, paste0(samples$run, ".isoforms.results.gz"))
names(files) <- paste0("sample", 1:6)
txi.rsem <- tximport(files, type = "rsem", txIn = TRUE, txOut = TRUE)
head(txi.rsem$counts)
##                   sample1 sample2 sample3 sample4 sample5 sample6
## ENST00000373020.8       0       0   19.29     2.4       0       1
## ENST00000494424.1       0       0    0.00     0.0       0       0
## ENST00000496771.5       0       0    0.00     0.0       0       0
## ENST00000612152.4       0       0    1.71     0.6       0       0
## ENST00000614008.4       0       2    0.00     0.0       0       0
## ENST00000373031.4       0       0    0.00     0.0       0       0

StringTie

StringTie t_data.ctab files giving the coverage and abundances for transcripts can be imported by setting type to "stringtie". These files can be generated with the following command line call:

stringtie -eB -G transcripts.gff <source_file.bam> 

tximport will compute counts from the coverage information, by reversing the formula that StringTie uses to calculate coverage (see ?tximport). The read length is used in this formula, so if the read length of your experiment differs from the default assumed by tximport, you can provide it with the readLength argument. The tx2gene table should connect transcripts to genes, and can be pulled out of one of the t_data.ctab files. The tximport call would look like the following (here not evaluated):

tmp <- read_tsv(files[1])
tx2gene <- tmp[, c("t_name", "gene_name")]
txi <- tximport(files, type = "stringtie", tx2gene = tx2gene)
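
If the reads in your experiment differ in length from the tximport default, a hedged sketch of passing the read length along (the value 100 here is purely illustrative; see ?tximport for the default):

txi <- tximport(files, type = "stringtie", tx2gene = tx2gene, readLength = 100)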

Alevin

scRNA-seq data quantified with Alevin can be easily imported using tximport. The following unevaluated example shows import of the quants matrix (for a live example, see the unit test file test_alevin.R). A single file should be specified, and tximport will import a gene-by-cell matrix of data.

files <- "path/to/alevin/quants_mat.gz"
txi <- tximport(files, type = "alevin")

Downstream DGE in Bioconductor

Note: there are two suggested ways of importing estimates for use with differential gene expression (DGE) methods. The first method, which we show below for edgeR and for DESeq2, is to use the gene-level estimated counts from the quantification tools, and additionally to use the transcript-level abundance estimates to calculate a gene-level offset that corrects for changes to the average transcript length across samples. The code examples below accomplish these steps for you, keeping track of the appropriate matrices and calculating these offsets. For edgeR you need to assign a matrix to y$offset, but the function DESeqDataSetFromTximport takes care of creating the offset for you. Let’s call this method “original counts and offset”.

The second method is to use the tximport argument countsFromAbundance="lengthScaledTPM" or "scaledTPM", and then to use the gene-level count matrix txi$counts directly as you would a regular count matrix with these software. Let’s call this method “bias corrected counts without an offset”.

Note: Do not manually pass the original gene-level counts to downstream methods without an offset. The only case where this would make sense is if there is no length bias to the counts, as happens in 3’ tagged RNA-seq data (see section below). The original gene-level counts are in txi$counts when tximport was run with countsFromAbundance="no". This is simply passing the summed estimated transcript counts, and does not correct for potential differential isoform usage (the offset), which is the point of the tximport methods (Soneson, Love, and Robinson 2015) for gene-level analysis. Passing uncorrected gene-level counts without an offset is not recommended by the tximport package authors. The two methods we provide here are: “original counts and offset” or “bias corrected counts without an offset”. Passing txi to DESeqDataSetFromTximport as outlined below is correct: the function creates the appropriate offset for you to perform gene-level differential expression.

3’ tagged RNA-seq

If you have 3’ tagged RNA-seq data, then correcting the counts for gene length will induce a bias in your analysis, because the counts do not have length bias. Instead of using the default full-transcript-length pipeline, we recommend using the original counts, e.g. txi$counts, as a count matrix, providing them to DESeqDataSetFromMatrix or to the edgeR or limma functions without calculating an offset and without using countsFromAbundance.
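
A hedged sketch of this approach for DESeq2 (assuming a sampleTable data.frame with a condition column, as constructed in the DESeq2 section below; counts are rounded because DESeq2 expects integer values):

library(DESeq2)
dds <- DESeqDataSetFromMatrix(round(txi$counts), sampleTable, ~condition)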

DESeq2

An example of creating a DESeqDataSet for use with DESeq2 (Love, Huber, and Anders 2014):

library(DESeq2)

The user should make sure the rownames of sampleTable align with the colnames of txi$counts, if there are colnames. The best practice is to read sampleTable from a CSV file, and to construct files from a column of sampleTable, as was shown in the tximport examples above.

sampleTable <- data.frame(condition = factor(rep(c("A", "B"), each = 3)))
rownames(sampleTable) <- colnames(txi$counts)
dds <- DESeqDataSetFromTximport(txi, sampleTable, ~condition)
# dds is now ready for DESeq(); see the DESeq2 vignette
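
From here, a typical continuation would be (a minimal sketch; see the DESeq2 vignette for the full workflow):

dds <- DESeq(dds)
res <- results(dds)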

edgeR

An example of creating a DGEList for use with edgeR (Robinson, McCarthy, and Smyth 2010) follows. Note that the alternative package tximeta, described above, has a convenience function makeDGEList for creating a data object for use with edgeR and limma.

library(edgeR)
cts <- txi$counts
normMat <- txi$length

# Obtaining per-observation scaling factors for length, adjusted to avoid
# changing the magnitude of the counts.
normMat <- normMat/exp(rowMeans(log(normMat)))
normCts <- cts/normMat

# Computing effective library sizes from scaled counts, to account for
# composition biases between samples.
eff.lib <- calcNormFactors(normCts) * colSums(normCts)

# Combining effective library sizes with the length factors, and calculating
# offsets for a log-link GLM.
normMat <- sweep(normMat, 2, eff.lib, "*")
normMat <- log(normMat)

# Creating a DGEList object for use in edgeR.
y <- DGEList(cts)
y <- scaleOffset(y, normMat)

# filtering using the design information
design <- model.matrix(~condition, data = sampleTable)
keep <- filterByExpr(y, design)
y <- y[keep, ]
# y is now ready for estimating dispersions; see the edgeR User's Guide
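
From here, a typical quasi-likelihood continuation in edgeR would be (a hedged sketch; coef = 2 assumes the two-group design constructed above):

y <- estimateDisp(y, design)
fit <- glmQLFit(y, design)
qlf <- glmQLFTest(fit, coef = 2)
topTags(qlf)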

For creating a matrix of CPMs within edgeR, the following code chunk can be used:

cpms <- edgeR::cpm(y, offset = y$offset, log = FALSE)

limma-voom

An example of creating a data object for use with limma-voom (Law et al. 2014). Because limma-voom does not use the offset matrix stored in y$offset, we recommend using the scaled counts generated from abundances, either "scaledTPM" or "lengthScaledTPM":

files <- file.path(dir, "salmon", samples$run, "quant.sf.gz")
names(files) <- paste0("sample", 1:6)
txi <- tximport(files, type = "salmon", tx2gene = tx2gene, countsFromAbundance = "lengthScaledTPM")
library(limma)
y <- DGEList(txi$counts)

# filtering using the design information:
design <- model.matrix(~condition, data = sampleTable)
keep <- filterByExpr(y, design)
y <- y[keep, ]

# normalize and run voom transformation
y <- calcNormFactors(y)
v <- voom(y, design)
# v is now ready for lmFit(); see the limma User's Guide
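
From here, a typical limma continuation would be (a hedged sketch; coef = 2 assumes the two-group design constructed above):

fit <- lmFit(v, design)
fit <- eBayes(fit)
topTable(fit, coef = 2)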

Acknowledgments

The development of tximport has benefited from contributions and suggestions from:

Rob Patro (inferential replicates import), Andrew Parker Morgan (RHDF5 support), Ryan C. Thompson (RHDF5 support), Matt Shirley (ignoreTxVersion), Avi Srivastava (alevin import), Scott Van Buren (infReps testing), Stephen Turner, Richard Smith-Unna, Rory Kirchner, Martin Morgan, Jenny Drnevich, Patrick Kimes, Leon Fodoulian, Koen Van den Berge, Aaron Lun, Alexander Toenges

Session info

sessionInfo()
## R version 4.3.1 (2023-06-16)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Ubuntu 22.04.3 LTS
## 
## Matrix products: default
## BLAS:   /home/biocbuild/bbs-3.18-bioc/R/lib/libRblas.so 
## LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.10.0
## 
## locale:
##  [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
##  [3] LC_TIME=en_GB              LC_COLLATE=C              
##  [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
##  [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                 
##  [9] LC_ADDRESS=C               LC_TELEPHONE=C            
## [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       
## 
## time zone: America/New_York
## tzcode source: system (glibc)
## 
## attached base packages:
## [1] stats4    stats     graphics  grDevices utils     datasets  methods  
## [8] base     
## 
## other attached packages:
##  [1] edgeR_4.0.0                            
##  [2] limma_3.58.0                           
##  [3] DESeq2_1.42.0                          
##  [4] SummarizedExperiment_1.32.0            
##  [5] MatrixGenerics_1.14.0                  
##  [6] matrixStats_1.0.0                      
##  [7] tximport_1.30.0                        
##  [8] readr_2.1.4                            
##  [9] TxDb.Hsapiens.UCSC.hg19.knownGene_3.2.2
## [10] GenomicFeatures_1.54.0                 
## [11] AnnotationDbi_1.64.0                   
## [12] Biobase_2.62.0                         
## [13] GenomicRanges_1.54.0                   
## [14] GenomeInfoDb_1.38.0                    
## [15] IRanges_2.36.0                         
## [16] S4Vectors_0.40.0                       
## [17] BiocGenerics_0.48.0                    
## [18] tximportData_1.29.0                    
## [19] knitr_1.44                             
## 
## loaded via a namespace (and not attached):
##  [1] DBI_1.1.3                bitops_1.0-7             formatR_1.14            
##  [4] biomaRt_2.58.0           rlang_1.1.1              magrittr_2.0.3          
##  [7] compiler_4.3.1           RSQLite_2.3.1            png_0.1-8               
## [10] vctrs_0.6.4              stringr_1.5.0            pkgconfig_2.0.3         
## [13] crayon_1.5.2             fastmap_1.1.1            dbplyr_2.3.4            
## [16] XVector_0.42.0           utf8_1.2.4               Rsamtools_2.18.0        
## [19] rmarkdown_2.25           tzdb_0.4.0               bit_4.0.5               
## [22] xfun_0.40                zlibbioc_1.48.0          cachem_1.0.8            
## [25] jsonlite_1.8.7           progress_1.2.2           blob_1.2.4              
## [28] rhdf5filters_1.14.0      DelayedArray_0.28.0      Rhdf5lib_1.24.0         
## [31] BiocParallel_1.36.0      parallel_4.3.1           prettyunits_1.2.0       
## [34] R6_2.5.1                 bslib_0.5.1              stringi_1.7.12          
## [37] rtracklayer_1.62.0       jquerylib_0.1.4          Rcpp_1.0.11             
## [40] Matrix_1.6-1.1           tidyselect_1.2.0         abind_1.4-5             
## [43] yaml_2.3.7               codetools_0.2-19         curl_5.1.0              
## [46] lattice_0.22-5           tibble_3.2.1             KEGGREST_1.42.0         
## [49] evaluate_0.22            archive_1.1.6            BiocFileCache_2.10.0    
## [52] xml2_1.3.5               Biostrings_2.70.0        pillar_1.9.0            
## [55] filelock_1.0.2           generics_0.1.3           vroom_1.6.4             
## [58] RCurl_1.98-1.12          hms_1.1.3                ggplot2_3.4.4           
## [61] munsell_0.5.0            scales_1.2.1             glue_1.6.2              
## [64] tools_4.3.1              BiocIO_1.12.0            locfit_1.5-9.8          
## [67] GenomicAlignments_1.38.0 XML_3.99-0.14            rhdf5_2.46.0            
## [70] grid_4.3.1               colorspace_2.1-0         GenomeInfoDbData_1.2.11 
## [73] restfulr_0.0.15          cli_3.6.1                rappdirs_0.3.3          
## [76] fansi_1.0.5              S4Arrays_1.2.0           dplyr_1.1.3             
## [79] gtable_0.3.4             sass_0.4.7               digest_0.6.33           
## [82] SparseArray_1.2.0        rjson_0.2.21             memoise_2.0.1           
## [85] htmltools_0.5.6.1        lifecycle_1.0.3          httr_1.4.7              
## [88] statmod_1.5.0            bit64_4.0.5

References

Bray, Nicolas, Harold Pimentel, Pall Melsted, and Lior Pachter. 2016. “Near-Optimal Probabilistic RNA-seq Quantification.” Nature Biotechnology 34: 525–27. http://dx.doi.org/10.1038/nbt.3519.

Law, Charity W., Yunshun Chen, Wei Shi, and Gordon K. Smyth. 2014. “voom: precision weights unlock linear model analysis tools for RNA-seq read counts.” Genome Biology 15 (2): R29. http://dx.doi.org/10.1186/gb-2014-15-2-r29.

Li, Bo, and Colin N. Dewey. 2011. “RSEM: accurate transcript quantification from RNA-Seq data with or without a reference genome.” BMC Bioinformatics 12: 323. https://doi.org/10.1186/1471-2105-12-323.

Love, Michael I., Wolfgang Huber, and Simon Anders. 2014. “Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2.” Genome Biology 15 (12): 550. http://dx.doi.org/10.1186/s13059-014-0550-8.

Love, Michael I., Charlotte Soneson, Peter F. Hickey, Lisa K. Johnson, N. Tessa Pierce, Lori Shepherd, Martin Morgan, and Rob Patro. 2020. “Tximeta: Reference sequence checksums for provenance identification in RNA-seq.” PLOS Computational Biology. https://doi.org/10.1371/journal.pcbi.1007664.

Patro, Rob, Geet Duggal, Michael I. Love, Rafael A. Irizarry, and Carl Kingsford. 2017. “Salmon Provides Fast and Bias-Aware Quantification of Transcript Expression.” Nature Methods. http://dx.doi.org/10.1038/nmeth.4197.

Patro, Rob, Stephen M. Mount, and Carl Kingsford. 2014. “Sailfish enables alignment-free isoform quantification from RNA-seq reads using lightweight algorithms.” Nature Biotechnology 32: 462–64. http://dx.doi.org/10.1038/nbt.2862.

Pertea, Mihaela, Geo M Pertea, Corina M Antonescu, Tsung-Cheng Chang, Joshua T Mendell, and Steven L Salzberg. 2015. “StringTie enables improved reconstruction of a transcriptome from RNA-seq reads.” Nature Biotechnology 33 (3): 290–95. https://doi.org/10.1038/nbt.3122.

Robert, Christelle, and Mick Watson. 2015. “Errors in RNA-Seq quantification affect genes of relevance to human disease.” Genome Biology. https://doi.org/10.1186/s13059-015-0734-x.

Robinson, Mark D., Davis J. McCarthy, and Gordon K. Smyth. 2010. “edgeR: a Bioconductor package for differential expression analysis of digital gene expression data.” Bioinformatics 26 (1): 139. http://dx.doi.org/10.1093/bioinformatics/btp616.

Soneson, Charlotte, Michael I. Love, and Mark Robinson. 2015. “Differential analyses for RNA-seq: transcript-level estimates improve gene-level inferences.” F1000Research 4 (1521). http://dx.doi.org/10.12688/f1000research.7563.1.

Srivastava, Avi, Laraib Malik, Tom Sean Smith, Ian Sudbery, and Rob Patro. 2019. “Alevin efficiently estimates accurate gene abundances from dscRNA-seq data.” Genome Biology 20 (65). https://doi.org/10.1186/s13059-019-1670-y.

Trapnell, Cole, David G Hendrickson, Martin Sauvageau, Loyal Goff, John L Rinn, and Lior Pachter. 2013. “Differential analysis of gene regulation at transcript resolution with RNA-seq.” Nature Biotechnology. https://doi.org/10.1038/nbt.2450.