Sequencing Workshop Analysis of Indels

From Genome Analysis Wiki


This wiki page details some standard indel analyses for the sequencing workshop, using the example indel data set.

Viewing the BCF file

The file generated by the indel calling is a binary version (BCFv2.1) of the Variant Call Format (VCF). BCFv2.1 is more efficient to process because the data is stored on disk in a machine-readable binary layout. It is, however, not necessarily more compact than VCFv4.2, especially when the FORMAT fields are rich in detail.


You can access the header by running the command:

 vt view -H all.genotypes.bcf

The header is as follows:

 ##FILTER=<ID=PASS,Description="All filters passed">
 ##FORMAT=<ID=PL,Number=G,Type=Integer,Description="Normalized, Phred-scaled likelihoods for genotypes">
 ##FORMAT=<ID=AD,Number=3,Type=Integer,Description="Allele Depth">
 ##FORMAT=<ID=GQ,Number=1,Type=Integer,Description="Genotype Quality">
 ##INFO=<ID=AC,Number=A,Type=Integer,Description="Alternate Allele Counts">
 ##INFO=<ID=AN,Number=1,Type=Integer,Description="Total Number Allele Counts">
 ##INFO=<ID=NS,Number=1,Type=Integer,Description="Number of Samples With Data">
 ##INFO=<ID=AF,Number=A,Type=Float,Description="Alternate Allele Frequency">
 ##INFO=<ID=GC,Number=G,Type=Integer,Description="Genotype Counts">
 ##INFO=<ID=GN,Number=1,Type=Integer,Description="Total Number of Genotypes Counts">
 ##INFO=<ID=GF,Number=G,Type=Float,Description="Genotype Frequency">
 ##INFO=<ID=HWEAF,Number=A,Type=Float,Description="Genotype likelihood based MLE Allele Frequency assuming HWE">
 ##INFO=<ID=HWEGF,Number=G,Type=Float,Description="Genotype likelihood based MLE Genotype Frequency assuming HWE">
 ##INFO=<ID=MLEAF,Number=A,Type=Float,Description="Genotype likelihood based MLE Allele Frequency">
 ##INFO=<ID=MLEGF,Number=G,Type=Float,Description="Genotype likelihood based MLE Genotype Frequency">
 ##INFO=<ID=HWE_LLR,Number=1,Type=Float,Description="Genotype likelihood based Hardy Weinberg ln(Likelihood Ratio)">
 ##INFO=<ID=HWE_LPVAL,Number=1,Type=Float,Description="Genotype likelihood based Hardy Weinberg Likelihood Ratio Test Statistic ln(p-value)">
 ##INFO=<ID=HWE_DF,Number=1,Type=Integer,Description="Degrees of freedom for Genotype likelihood based Hardy Weinberg Likelihood Ratio Test Statistic">
 ##INFO=<ID=FIC,Number=1,Type=Float,Description="Genotype likelihood based Inbreeding Coefficient">
 ##INFO=<ID=AB,Number=1,Type=Float,Description="Genotype likelihood based Allele Balance">
 ##FILTER=<ID=PASS,Description="Temporary pass">
 ##FILTER=<ID=overlap,Description="Overlapping variant">


To view the records:

 vt view all.genotypes.bcf

The columns are CHROM, POS, ID, REF, ALT, QUAL, FILTER, INFO, FORMAT, followed by the genotype columns, one per sample, labelled by sample name.

 22	36990878	.	GGT	G	455	PASS	AC=32;AN=116;AF=0.275862;GC=32,20,6;GN=58;MLEAF=0.27366;MLEGF=0.494275,0.464129,0.0415952	GT:PL:DP:AD:GQ	0/0:0,9,108:9:3,0,6:10

Let's look at the record's fields.

 22             : chromosome (CHROM)
 36990878       : genomic position (POS)
 .              : the ID field, left blank here
 GGT            : the reference allele that is replaced by the alternate allele below
 G              : the alternate allele; this record is effectively a deletion of GT
 455            : the QUAL field, a Phred-scaled measure of confidence in this variant; higher is better
 PASS           : the FILTER field; this variant passed all filters
 INFO           : semicolon-separated annotations describing the variant
 FORMAT         : labels for the per-sample genotype fields
 0/0:0,9,108:9:3,0,6:10 : genotype information for one sample

INFO field

The INFO fields are as follows:

 AC=32                : alternate allele count
 AN=116               : total number of alleles
 AF=0.28              : allele frequency based on AC/AN
 GC=32,20,6           : genotype counts for 0/0, 0/1, 1/1
 GN=58                : total number of genotypes
 GF=0.55,0.34,0.10    : genotype frequencies based on GC
 NS=58                : no. of samples
 HWEAF=0.28           : genotype likelihood based MLE of the allele frequency assuming Hardy-Weinberg equilibrium
 HWEGF=0.52,0.40,0.08 : genotype frequencies derived from HWEAF
 MLEAF=0.27           : genotype likelihood based MLE of the allele frequency
 MLEGF=0.49,0.46,0.04 : genotype likelihood based MLE of the genotype frequencies
 HWE_LLR=-0.45        : log likelihood ratio of HWE test
 HWE_LPVAL=-1.08      : log p value of HWE test
 HWE_DF=1             : degrees of freedom of the HWE test
 FIC=-0.07            : genotype likelihood based inbreeding coefficient
 AB=0.61              : genotype likelihood based allele balance
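
Several of these annotations follow directly from one another. The sketch below (plain Python, not part of vt) recomputes AF from AC/AN, the genotype frequencies from GC, and the Hardy-Weinberg genotype frequencies implied by HWEAF, using the numbers from the record above:

```python
# Recompute some count-based annotations for the example record.
AC, AN = 32, 116                 # alternate allele count, total allele count
GC = (32, 20, 6)                 # genotype counts for 0/0, 0/1, 1/1

AF = AC / AN                     # allele frequency from counts
GN = sum(GC)                     # total number of genotypes
GF = tuple(c / GN for c in GC)   # genotype frequencies from counts

# Under Hardy-Weinberg equilibrium, an ALT allele frequency p implies
# genotype frequencies ((1-p)^2, 2p(1-p), p^2) for 0/0, 0/1, 1/1.
p = 0.28                         # HWEAF from the record
HWEGF = ((1 - p) ** 2, 2 * p * (1 - p), p ** 2)

print(f"AF={AF:.2f}")                                  # 0.28
print("GF=" + ",".join(f"{g:.2f}" for g in GF))        # 0.55,0.34,0.10
print("HWEGF=" + ",".join(f"{g:.2f}" for g in HWEGF))  # 0.52,0.40,0.08
```

Note that the observed genotype frequencies (0.55, 0.34, 0.10) and the HWE-implied ones (0.52, 0.40, 0.08) differ slightly; that difference is what the HWE test statistics below quantify.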


The genotype fields are described as follows:

 0/0      : homozygous reference chosen based on PL
 0,9,108  : PHRED scaled genotype likelihoods
 9        : depth (DP), the number of reads covering this variant
 3,0,6    : allele depth (AD);
            counts of reads supporting the reference allele,
            the alternative allele, and neither allele, respectively.
            Reads in the last category may not cover the locus
            sufficiently, or may carry an allele that is not
            represented in this record.
 10       : genotype quality
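
To see how the PL values determine the genotype call, the sketch below (plain Python, not part of vt) converts the PHRED-scaled likelihoods back to relative likelihoods, where L(g) is proportional to 10^(-PL(g)/10):

```python
# Convert PHRED-scaled genotype likelihoods back to probabilities.
PL = (0, 9, 108)                       # PLs for 0/0, 0/1, 1/1 from the record

lik = [10 ** (-pl / 10) for pl in PL]  # relative likelihoods
post = [l / sum(lik) for l in lik]     # normalised, i.e. a flat genotype prior

for g, p in zip(("0/0", "0/1", "1/1"), post):
    print(f"P({g}) = {p:.4f}")
# 0/0 has the smallest PL and hence the highest likelihood,
# which is why this sample is called 0/0.
```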


The following section details some simple analyses we can perform.


First, you want to know what is in the BCF file.

 vt peek all.genotypes.bcf
 stats: no. of samples                     :         62
        no. of chromosomes                 :          1

        no. Indels                         :        720
            2 alleles (ins/del)            :        720 (0.84) [328/392]  #(insertion deletion ratio) [#insertions, #deletions]
            >=3 alleles (ins/del)          :          0 (-nan) [0/0]

        no. of observed variants           :        720

The variants carry one of two filter labels: PASS, meaning a temporary pass, and overlap, meaning the variant overlaps another variant, which implies multiallelicity. We can count the number of variants in each category with the following commands.

 vt peek all.genotypes.bcf -f "FILTER.PASS"
 stats: no. of samples                     :         62
        no. of chromosomes                 :          1

        no. Indels                         :        584
            2 alleles (ins/del)            :        584 (0.69) [239/345]
            >=3 alleles (ins/del)          :          0 (-nan) [0/0]

 vt peek all.genotypes.bcf -f "FILTER.overlap"
 stats: no. of samples                     :         62
        no. of chromosomes                 :          1

        no. Indels                         :        136
            2 alleles (ins/del)            :        136 (1.89) [89/47]   #notice the difference in insertion deletion ratios
            >=3 alleles (ins/del)          :          0 (-nan) [0/0]
 #passed singletons only
 vt peek all.genotypes.bcf -f "FILTER.PASS&&INFO.AC==1"

 #passed indels of length 1 only
 vt peek all.genotypes.bcf -f "FILTER.PASS&&LEN==1"

 #passed indels of length >1
 vt peek all.genotypes.bcf -f "FILTER.PASS&&LEN>1"

 #passed indels of length 4 or insertions of length 3
 vt peek all.genotypes.bcf -f "FILTER.PASS&&(LEN==4||DLEN==3)"

Comparison with other data sets

It is usually useful to compare the passed variants in a call set against known data sets.

 vt profile_indels -g indel.reference.txt -r hs37d5.fa all.genotypes.bcf -i 22:36000000-37000000 -f "PASS"

   data set
     No Indels         :        613 [0.72]    #613 passed variants with an insertion deletion ratio of 0.72
        FS/NFS         :       0.50 (2/2)     #frameshift / non-frameshift indel proportion; the brackets give the counts
        Low complexity :       0.46 (283/613) #fraction of indels in low complexity regions; the brackets give the counts

   1000G                                      #1000 Genomes Phase 1 data set
     A-B               :        371 [0.76]    #variants found in the call set only; square brackets contain the ins/del ratio
     A&B               :        242 [0.66]    #variants found in both data sets
     B-A               :        276 [0.89]    #variants found in the 1000G Phase 1 data set only
     Precision         :       39.5%          #39.5% of the call set is previously known, so 60.5% are novel variants
     Sensitivity       :       46.7%          #46.7% of the known variants from 1000 Genomes were rediscovered

   mills                                      #the gold standard Mills et al. indel set
     A-B               :        542 [0.68]
     A&B               :         71 [1.03]
     B-A               :         31 [1.07]
     Precision         :       11.6%
     Sensitivity       :       69.6%

   dbsnp                                      #indels from dbSNP
     A-B               :        405 [0.68]
     A&B               :        208 [0.79]
     B-A               :        494 [2.03]
     Precision         :       33.9%
     Sensitivity       :       29.6%
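
The precision and sensitivity figures follow directly from the overlap counts. The sketch below (plain Python, not part of vt) makes the relationship explicit, where A is the evaluated call set and B the reference set:

```python
# Recompute precision and sensitivity from profile_indels overlap counts.
def precision_sensitivity(a_only, shared, b_only):
    precision = shared / (a_only + shared)    # fraction of calls already known
    sensitivity = shared / (shared + b_only)  # fraction of known variants found
    return precision, sensitivity

# Counts from the 1000G comparison above: A-B=371, A&B=242, B-A=276.
p, s = precision_sensitivity(371, 242, 276)
print(f"precision={p:.1%} sensitivity={s:.1%}")  # precision=39.5% sensitivity=46.7%
```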

Ins/Del ratios: reference alignment based methods tend to be biased towards the detection of deletions, so this ratio provides a useful measure of the degree of bias in a discovery indel set. It also appears that as coverage increases, the ins/del ratio tends towards 1.

Coding region analysis: coding region indels may be categorised as frameshift and non-frameshift indels. A lower proportion of frameshift indels may indicate a better quality data set, although this also depends on the individuals sequenced.

Complexity region analysis: the fraction of indels falling in regions marked by DUST, a low complexity region masker used in the NCBI pipeline.

Overlap analysis: overlap analysis with other data sets is an indicator of sensitivity.

  • 1000G: contains indels from the 1000 Genomes Project, representing a wide spectrum of variants from many different populations. Variants here have an allele frequency above 0.005.
  • Mills: contains double-hit common indels from the Mills et al. paper and is a relatively good measure of sensitivity for common variants. Because not all indels in this set are expected to be present in your sample, this actually gives you an underestimate of sensitivity.
  • dbsnp: contains indels submitted from many sources, so it is not clear exactly what this set represents. But assuming most of its entries are real, precision against this reference set is a useful estimate.

We perform the same analysis for the failed variants; the relatively low overlap with known data sets implies a reasonable tradeoff between sensitivity and specificity.

 vt profile_indels -g indel.reference.txt -r hs37d5.fa all.genotypes.bcf -i 22:36000000-37000000 -f "~PASS"

   data set
     No Indels         :        107 [2.06]
        FS/NFS         :       -nan (0/0)
        Low complexity :       0.79 (85/107)

   1000G
     A-B               :        107 [2.06]
     A&B               :          0 [-nan]
     B-A               :        518 [0.77]
     Precision         :        0.0%
     Sensitivity       :        0.0%

   mills
     A-B               :        105 [2.09]
     A&B               :          2 [1.00]
     B-A               :        100 [1.04]
     Precision         :        1.9%
     Sensitivity       :        2.0%

   dbsnp
     A-B               :        102 [2.00]
     A&B               :          5 [4.00]
     B-A               :        697 [1.51]
     Precision         :        4.7%
     Sensitivity       :        0.7%

As with peek, this analysis supports filter expressions.


A slight digression: when analyzing indels, it is important to normalize them. While normalization is a simple concept, it is hardly standardized. The call set here had already been normalized, but since this is an important concept, we discuss it briefly here.

Indel representations are not unique; you should normalize them and remove duplicates.

  Variant normalization is implemented in vt and this page explains the algorithm 
  and also provides a simple proof of correctness - Variant Normalization
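
As a rough illustration of the idea, the sketch below implements a simplified form of normalization for a single biallelic indel: trim shared trailing bases (extending to the left with reference sequence when an allele would become empty), then trim shared leading bases. This is a toy Python sketch of the concept, not vt's actual implementation, and the example sequence is made up:

```python
def normalize(pos, ref, alt, seq):
    """Left-align and trim one biallelic indel (1-based pos in seq)."""
    # While both alleles end in the same base, trim it; if an allele
    # would become empty, first extend both alleles to the left with
    # the preceding reference base.
    while ref[-1] == alt[-1]:
        if len(ref) == 1 or len(alt) == 1:
            if pos == 1:
                break
            pos -= 1
            base = seq[pos - 1]
            ref, alt = base + ref, base + alt
        ref, alt = ref[:-1], alt[:-1]
    # While both alleles share a leading base and have length >= 2,
    # trim it and shift the position right.
    while len(ref) >= 2 and len(alt) >= 2 and ref[0] == alt[0]:
        ref, alt = ref[1:], alt[1:]
        pos += 1
    return pos, ref, alt

# A deletion of CA inside the repeat CACACAC, given in a right-shifted
# form, is moved to its leftmost representation.
seq = "GCACACACG"                       # toy reference sequence
print(normalize(5, "ACA", "A", seq))    # (1, 'GCA', 'G')
```

Because every representation of the same indel in the repeat normalizes to the same leftmost form, duplicates become detectable by simple position and allele comparison.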

The following table shows, for some of the 1000 Genomes Trio High Coverage call sets, the number of variants that had to be normalized, the type of normalization performed, and the number of duplicate variants found afterwards. Although left alignment seems to be a trivial concept, it is easily overlooked and remains a common mistake.

 Dataset              Freebayes  HaplotypeCaller  PINDEL  Platypus    RTG  Samtools    SGA
 Left trim                27069                1       0         0      0         0  15047
 Left aligned                 3                1       1         0  12262         2   1892
 Left trim                40782                0       0         0    374         0      0
 Left aligned              1892                0       0         0   1329         1      0
 Right trimmed                0                0       0     25393      0        11      0
 Duplicate variants           0                1     155      3143    286         8   7541

Another example is the Mills et al. data set, in which 10004 indels were followed up for validation. Out of the 9996 passed variants, it was found that after normalization only 8904 distinct indels remained: a loss of about 11% of the variants thought to be distinct.
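
The quoted loss is simple arithmetic on the counts above, as this one-liner-style check shows:

```python
# Duplicate loss in the Mills et al. example, from the counts quoted above.
observed, distinct = 9996, 8904
removed = observed - distinct
print(removed, f"{removed / observed:.1%}")   # 1092 10.9%
```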

To normalize and remove duplicate variants:

 vt normalize  mills.genotypes.bcf -r hs37d5.fa  | vt mergedups - -o mills.normalized.genotypes.bcf 

and you will observe that 3994 variants had to be left aligned and 1092 duplicate variants were removed.

 stats: biallelic
          no. left trimmed                      : 0
          no. left trimmed and left aligned     : 0
          no. left trimmed and right trimmed    : 0
          no. left aligned                      : 3994
          no. right trimmed                     : 0

        multiallelic
          no. left trimmed                      : 0
          no. left trimmed and left aligned     : 0
          no. left trimmed and right trimmed    : 0
          no. left aligned                      : 0
          no. right trimmed                     : 0

        no. variants observed                   : 9996

 stats: Total number of observed variants       : 9996
        Total number of unique variants         : 8904

UMICH's algorithm for normalization has been adopted by Petr Danecek in bcftools and is also used in GKNO.