This page outlines a generic plan for analysis of a whole exome sequencing project. The idea is that the points listed here might serve as a starting point for discussion of the analyses needed in a specific project.

The initial version of this document was prepared with input from Shamil Sunyaev, Ron Do and Goncalo Abecasis.

= Read Mapping and Variant Calling =
The first step in any analysis is to map sequence reads, calibrate base qualities, and call variants. Even at this stage, some simple quality metrics can be evaluated and will help identify potentially problematic samples.

== Prior to Mapping ==
; Evaluate Base Composition Along Reads
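Dedicated QC tools such as FastQC report this directly; as an illustration only, per-position base composition can also be tabulated from a FASTQ file with a few lines of Python (the file name here is a placeholder):

<source lang="python">
# Sketch: base composition by read position (cycle) from a FASTQ file,
# using only the standard library. Sharp swings in composition along the
# read can indicate library or sequencing problems.
from collections import Counter

def base_composition(fastq_path):
    counts = []                                   # one Counter per read position
    with open(fastq_path) as handle:
        for i, line in enumerate(handle):
            if i % 4 != 1:                        # keep only the sequence lines
                continue
            for pos, base in enumerate(line.strip()):
                while len(counts) <= pos:
                    counts.append(Counter())
                counts[pos][base] += 1
    return counts

for pos, counter in enumerate(base_composition("sample.fastq")):   # placeholder file name
    total = sum(counter.values())
    print(pos + 1, {base: round(n / total, 3) for base, n in counter.items()})
</source>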
; Map Reads with Appropriate Read Mapper
: Currently, [http://bio-bwa.sourceforge.net/bwa.shtm BWA] is a convenient, widely used read mapper.
; Mark Duplicate Reads
: Duplicate reads, whether generated as PCR artifacts during library preparation or as optical duplicates during image analysis and base-calling, can mislead variant calling algorithms. To avoid problems, one typically removes from consideration all reads that appear to map to exactly the same location (for paired ends, only reads for which both ends map to the same locations are excluded).
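In practice this step is usually handled by an existing tool such as Picard's MarkDuplicates; the sketch below, which assumes read pairs have already been reduced to simple records of their mapping coordinates and a summed base quality, only illustrates the underlying idea.

<source lang="python">
# Minimal sketch of duplicate marking for paired-end reads.
from collections import namedtuple, defaultdict

# A hypothetical, simplified representation of a mapped read pair.
ReadPair = namedtuple("ReadPair", "name chrom pos1 strand1 pos2 strand2 sum_base_qual")

def mark_duplicates(pairs):
    """Return the names of read pairs flagged as duplicates."""
    groups = defaultdict(list)
    for p in pairs:
        # Pairs are duplicates when both ends map to exactly the same
        # positions (and strands).
        groups[(p.chrom, p.pos1, p.strand1, p.pos2, p.strand2)].append(p)

    duplicates = set()
    for group in groups.values():
        group.sort(key=lambda p: p.sum_base_qual, reverse=True)
        duplicates.update(p.name for p in group[1:])   # keep the best pair, flag the rest
    return duplicates
</source>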
    
; Basic Mapping Statistics
; Recalibrate Base Quality Scores
: Base quality scores can be updated by comparing observed bases with the reference at sites that are unlikely to vary (such as those not currently reported as variants in dbSNP or in the most recent [http://www.1000genomes.org 1000 Genomes Project] analyses).
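Full recalibration (as implemented, for example, in GATK) models several covariates; the toy sketch below only shows the core idea of replacing reported qualities with Phred-scaled empirical mismatch rates observed at putatively invariant sites.

<source lang="python">
# Sketch of empirical base quality recalibration. Assumes per-quality-bin
# counts of bases and mismatches at sites believed to be invariant
# (e.g. sites absent from dbSNP / 1000 Genomes).
import math

def recalibrated_quality(n_bases, n_mismatches):
    """Convert an observed mismatch rate into a Phred-scaled quality."""
    # Small pseudo-counts keep a bin with zero mismatches finite.
    error_rate = (n_mismatches + 1) / (n_bases + 2)
    return -10.0 * math.log10(error_rate)

# Example: reads claimed Q30 (error rate 0.001), but 1 in 200 bases
# mismatches the reference at invariant sites -> closer to Q23.
print(round(recalibrated_quality(n_bases=100000, n_mismatches=500)))
</source>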
    
; Update Base Quality Score Metrics
: Generate new curves with base quality scores per position.
: Calculate the number of mapped bases that reach at least Q20. Potentially, calculate Q20 ''equivalent'' bases by summing the quality scores for bases with base quality >Q20 and dividing the total by 20.
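A minimal sketch of both counts, assuming per-base qualities have already been extracted from mapped reads:

<source lang="python">
# Sketch: count Q20 bases and Q20-equivalent bases from a plain list of
# Phred-scaled base qualities (in practice these would come from a BAM file).
def q20_metrics(base_qualities, threshold=20):
    q20_bases = sum(1 for q in base_qualities if q >= threshold)
    # "Q20 equivalent" bases: sum the qualities of bases above the
    # threshold and divide the total by 20.
    q20_equivalent = sum(q for q in base_qualities if q >= threshold) / threshold
    return q20_bases, q20_equivalent

print(q20_metrics([35, 30, 18, 22, 40, 10]))   # -> (4, 6.35)
</source>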
; Evaluate Coverage as a Function of GC Content
: For each target region, calculate read depth and also the proportion of GC bases in the reference genome. Flag samples where coverage varies strongly as a function of GC content.
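One simple way to flag such samples is to correlate per-region depth with per-region GC fraction; in the sketch below the 0.5 cutoff is an arbitrary placeholder.

<source lang="python">
# Sketch: flag a sample whose coverage depends strongly on GC content.
# Assumes per-target-region mean depth and GC fraction were already
# computed (e.g. from a BAM file and the reference sequence).
def gc_depth_correlation(gc_fractions, depths):
    """Plain Pearson correlation between GC content and depth."""
    n = len(gc_fractions)
    mean_gc = sum(gc_fractions) / n
    mean_dp = sum(depths) / n
    cov = sum((g - mean_gc) * (d - mean_dp) for g, d in zip(gc_fractions, depths))
    var_gc = sum((g - mean_gc) ** 2 for g in gc_fractions)
    var_dp = sum((d - mean_dp) ** 2 for d in depths)
    return cov / (var_gc * var_dp) ** 0.5

def flag_sample(gc_fractions, depths, cutoff=0.5):
    # A strong positive or negative correlation suggests GC-dependent
    # coverage worth investigating; the cutoff is a placeholder.
    return abs(gc_depth_correlation(gc_fractions, depths)) > cutoff
</source>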
    
== Verify Sample Identities ==
; Generate Linkage Disequilibrium Aware Set of Variant Calls
: A typical exome call set might include many sites where no variant is called due to low coverage. If we can integrate samples with previous [[GWAS]] data, we should be able to generate an updated and much improved set of variant calls for each individual. Due to limitations in current calling methods, the quality of variant call sets is expected to increase substantially if multiple variant callers are used and their results are merged democratically.
; Annotate Functional Impact of Each Variant
; Annotate Overall Variant Characteristics
: Calculate the overall ratio of transitions to transversions, separately for coding and non-coding variants. Within coding variants, analyse synonymous and non-synonymous variants separately.
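As a rough guide, genome-wide call sets typically show Ts/Tv around 2.0–2.1 and coding (exome) call sets around 3.0; a markedly lower ratio suggests an excess of false positive calls. A minimal sketch of the calculation:

<source lang="python">
# Sketch: transition/transversion (Ts/Tv) ratio from a list of
# (reference, alternate) single nucleotide variants.
TRANSITIONS = {("A", "G"), ("G", "A"), ("C", "T"), ("T", "C")}

def ts_tv_ratio(variants):
    ts = sum(1 for ref, alt in variants if (ref, alt) in TRANSITIONS)
    tv = len(variants) - ts
    return ts / tv if tv else float("inf")

print(ts_tv_ratio([("A", "G"), ("C", "T"), ("A", "C"), ("G", "T")]))   # -> 1.0
</source>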
; CpG Sites
: Calculate the rate of per base pair heterozygosity at potential CpG sites and compare this to other sites.
; Tabulate for Each Sample
:* The number of transitions and transversions
== Evaluate Variant Calls for Reference Samples ==
; If Previously Sequenced Samples are Available, Assess Variant Calls There
: Assess concordance rate at non-reference sites.
; If Duplicate Samples are Available, Assess Concordance
: Assess concordance rate at non-reference sites.
; If Nuclear Families are Available, Assess Mendelian Consistency
: When reporting rates of Mendelian inconsistencies, report these not as a fraction of all sites, but as a fraction of sites with at least one non-reference call in the trio.
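A minimal sketch of this calculation for a single trio, with genotypes coded as the number of non-reference alleles (0, 1 or 2):

<source lang="python">
# Sketch: Mendelian inconsistency rate in a trio, reported as a fraction
# of sites with at least one non-reference call. Each site is a tuple of
# (child, father, mother) genotypes.
def transmissible(genotype):
    """Alleles (0 = reference, 1 = alternate) a parent can transmit."""
    return {0: {0}, 1: {0, 1}, 2: {1}}[genotype]

def is_consistent(child, father, mother):
    return any(a + b == child
               for a in transmissible(father)
               for b in transmissible(mother))

def mendelian_inconsistency_rate(trio_genotypes):
    # Denominator: only sites with at least one non-reference call in the trio.
    informative = [g for g in trio_genotypes if any(x > 0 for x in g)]
    errors = sum(1 for child, father, mother in informative
                 if not is_consistent(child, father, mother))
    return errors / len(informative) if informative else 0.0

# Example: the second site (het child, both parents homozygous reference)
# is an error -> 1 error / 3 informative sites.
sites = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (2, 1, 1)]
print(mendelian_inconsistency_rate(sites))
</source>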
== Variant Filters ==

Initial sets of SNP calls will invariably include many false positives. The fraction of false positives among all variants called is likely to increase as more and more samples are sequenced. To keep the fraction of false positives under control, it is important both to apply an increasingly strict set of quality control filters and to experimentally validate some newly discovered variants. Many of these filters are currently implemented in [[GATK]].
; Mapping Quality Filter
: Consider removing variants at sites with low mapping quality scores. Even if the average mapping quality score for a site is high, consider removing variants at sites where a noticeable fraction of reads have low mapping quality scores.
; Allele Balance Filter
: Among individuals who are assigned a heterozygous genotype, check the proportion of reads supporting each allele. Consider filtering out variants where one of the alleles accounts for <30% of reads.
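A minimal sketch of this check for one individual at one site, given counts of reads supporting each allele:

<source lang="python">
# Sketch: allele balance at a putative heterozygous site.
def allele_balance(ref_reads, alt_reads):
    total = ref_reads + alt_reads
    return min(ref_reads, alt_reads) / total if total else 0.0

def fails_allele_balance(ref_reads, alt_reads, minimum=0.30):
    """Flag sites where the rarer allele is supported by <30% of reads."""
    return allele_balance(ref_reads, alt_reads) < minimum

print(fails_allele_balance(18, 2))    # True  (alt allele at 10%)
print(fails_allele_balance(12, 10))   # False (alt allele at ~45%)
</source>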
; Local Realignment Filter
: Consider removing single nucleotide variants at sites where local realignment of all covering sequence reads suggests that an [[indel]] polymorphism is present in the population.
; Read Depth
: Consider filtering out variants at sites where total read depth is unusually low. Unfortunately, capture protocols introduce very large amounts of variation in sequencing depth and it is usually not possible to accurately filter out sites sequenced at very high depth.
; Strand Bias
: Check whether each variant allele is supported by reads that map to both strands. Variants that are supported by reads on only one strand are almost always false positives.
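The simplest version of this check just requires alternate-allele support on both strands (production pipelines typically use a statistical test, such as Fisher's exact test, on the strand-by-allele counts):

<source lang="python">
# Sketch: simple strand support check for a variant allele, given counts
# of alt-supporting reads on the forward and reverse strands.
def fails_strand_bias(alt_forward, alt_reverse, min_per_strand=1):
    """Flag variants whose alternate allele is seen on only one strand."""
    return alt_forward < min_per_strand or alt_reverse < min_per_strand

print(fails_strand_bias(14, 0))   # True  - all alt reads on one strand
print(fails_strand_bias(8, 6))    # False - supported on both strands
</source>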
= Special Considerations for Admixed Samples =

; Estimate Local Ancestry Using GWAS Data
: For studies that include admixed samples, we should estimate local ancestry using GWAS data. If GWAS data are not available, it is strongly recommended that these data be generated. In principle, local ancestry estimates can be generated even before exome sequencing is complete.

; Estimate Global Ancestry Covariates Using PCA or MDS Analysis
= Initial Association Analyses =

We anticipate that, at least early on, the initial association analysis of whole exome datasets in the context of complex trait association studies will focus on identifying and resolving quality control issues that might result in unexpected artifacts.
== Initial Single SNP Tests ==

In principle, these tests only have power for common variants. In practice, particularly when permutation based methods are used to assess significance, there should be little loss in power by testing all variants using single SNP tests. These tests should include:
; Logistic Regression Based Tests for Discrete Traits
: Discrete outcomes should be evaluated using logistic regression. It is important to include appropriate covariates. For most traits, these might include age and sex and, potentially, principal components of ancestry.
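A minimal sketch of a single-SNP test using the statsmodels package; the covariates, sample size and simulated data are placeholders:

<source lang="python">
# Sketch of a single-SNP logistic regression test with statsmodels,
# using made-up example data. The genotype (coded 0/1/2) and covariates
# (age, sex) are combined into one design matrix.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
genotype = rng.integers(0, 3, size=n)     # copies of the alternate allele
age = rng.normal(55, 10, size=n)
sex = rng.integers(0, 2, size=n)
status = rng.integers(0, 2, size=n)       # case/control outcome

X = sm.add_constant(np.column_stack([genotype, age, sex]))
fit = sm.Logit(status, X).fit(disp=False)

# P-value for the genotype term (first column after the intercept).
print(fit.pvalues[1])
</source>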
; Linear Regression Based Tests for Quantitative Traits
: For quantitative traits that are not strongly selected, linear regression based tests can also be used. Again, it is important to include an appropriate set of covariates. For many quantitative traits, it may be a very good idea to normalize traits to minimize the impact of outliers on association results.
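One common choice is a rank-based inverse normal transformation of the trait before regression; a minimal sketch using scipy:

<source lang="python">
# Sketch: rank-based inverse normal transformation of a quantitative
# trait, which limits the influence of extreme outliers.
import numpy as np
from scipy.stats import rankdata, norm

def inverse_normal_transform(values):
    ranks = rankdata(values)                      # ranks 1..n, ties averaged
    return norm.ppf((ranks - 0.5) / len(values))  # map ranks onto N(0, 1)

trait = np.array([1.2, 0.7, 3.5, 0.9, 120.0])     # note the extreme outlier
print(inverse_normal_transform(trait))
</source>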
; Using Genotypes as Outcomes for Selected Quantitative Traits
: For both discrete and quantitative traits, analysis can be repeated using genotypes (scored as 0, 1 and 2) as outcomes and phenotypes as predictors.
== Q-Q Plots ==

After carrying out initial single SNP tests, generate Q-Q plots for each analysis. Verify that the Q-Q plots are reasonable and that the genomic control value is close to 1.0. If not, refine sample and variant filters as needed.
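The genomic control value can be computed directly from the single-SNP p-values; a minimal sketch:

<source lang="python">
# Sketch: genomic control value (lambda) from a list of single-SNP
# p-values. Lambda near 1.0 suggests well-behaved test statistics;
# values noticeably above 1.0 suggest residual stratification or
# quality-control artifacts.
import numpy as np
from scipy.stats import chi2

def genomic_control_lambda(p_values):
    observed = chi2.isf(np.asarray(p_values), df=1)   # p-value -> 1-df chi-square
    return np.median(observed) / chi2.ppf(0.5, df=1)  # expected median ~0.4549

# Uniform p-values (the null) give a lambda close to 1.0.
print(genomic_control_lambda(np.random.default_rng(0).uniform(size=10000)))
</source>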
== Burden Tests ==

The same analyses that were originally carried out for single variants should be carried out for groups of rare variants. In principle, one could simply use the presence of a rare variant (or a particular class of rare variant, such as a non-synonymous variant or a newly discovered variant) as a predictor and repeat the logistic regression, linear regression or genotype regression described above. For an initial pass, I think the precise form of this analysis is not critical, because the next step is to...
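A minimal sketch of the collapsing step, producing a 0/1 burden indicator per sample that can then be plugged into the regression framework above:

<source lang="python">
# Sketch: simple burden predictor. For each sample, indicate whether it
# carries at least one rare variant in a gene; this indicator replaces
# the single-SNP genotype in the regressions described earlier.
def burden_indicator(genotypes_by_variant):
    """genotypes_by_variant: one list of per-sample genotypes (0/1/2)
    per rare variant in the gene. Returns a 0/1 indicator per sample."""
    n_samples = len(genotypes_by_variant[0])
    return [int(any(variant[i] > 0 for variant in genotypes_by_variant))
            for i in range(n_samples)]

# Two rare variants, four samples: samples 0 and 3 carry at least one.
print(burden_indicator([[1, 0, 0, 0],
                        [0, 0, 0, 2]]))   # -> [1, 0, 0, 1]
</source>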
== More Q-Q Plots ==

After carrying out initial burden tests, generate Q-Q plots for each analysis. Verify that the Q-Q plots are reasonable and that the genomic control value is close to 1.0. If not, refine sample and variant filters as needed.
= Visualize Results =

A number of displays will likely be useful. These should probably include:
* Manhattan Plots
* [[LocusZoom]] Plots
* Q-Q Plots
= Think You Are Done? =

'''No way!!!'''
== Indels and Structural Variants ==

You still need a plan to call and evaluate short insertions and deletions as well as larger structural variants.
== Pathway Based Analyses ==

Carry out analyses that include groups of genes with similar biological function (for example, according to [[Gene Ontology]] or [[Kyoto Encyclopedia of Genes and Genomes]] annotations).