From Genome Analysis Wiki
This page documents how to impute 1000 Genomes SNPs using [[Minimac]], which is typically the preferred approach for imputation using large reference panels such as the 1000 Genomes data. For information on how to carry out 1000 Genomes Imputation using [[MaCH]], see [[MaCH: 1000 Genomes Imputation Cookbook]].
== Should I use MaCH or minimac? ==
The imputation quality of both approaches is comparable.
If you are carrying out only a one-time imputation, using a relatively small reference panel (with <200 haplotypes), and focusing only on autosomes, then go with MaCH. If you are using a large reference panel, planning to re-impute your data as new reference panels become available, or interested in X-chromosome imputation, then minimac is the right choice for you.
As of June 2011, the 1000 Genomes panel includes >2,000 reference haplotypes, and you are likely to be better off using [[minimac]].
== Getting Started ==
=== Quality Control ===
You should apply standard quality control filters to the set of SNPs that you use as input to the imputation procedure. These filters are typically study specific but usually include a series of per marker and per individual quality checks.
Per marker quality checks typically evaluate:
* Genotype completeness (markers with relatively high missingness rates are typically excluded)
* [[Hardy Weinberg equilibrium]] (markers with clear deviations from Hardy Weinberg proportions are typically excluded)
* Duplicate concordance (markers with high discordance rates are excluded)
* Mendelian inconsistencies (when parent-offspring pairs are available, markers with an excess of Mendelian inconsistencies are typically excluded)
* Polymorphism check (monomorphic markers can represent assay failures and are typically excluded)
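For illustration, the Hardy Weinberg check above can be sketched as a simple chi-square test on observed genotype counts. This is a minimal sketch, not the code of any particular QC tool; real pipelines often use an exact test instead, and the function name and counts here are made up:

```python
def hwe_chisq(n_aa, n_ab, n_bb):
    """Chi-square statistic for deviation from Hardy-Weinberg proportions.

    n_aa, n_ab, n_bb are the observed genotype counts for one marker.
    Larger values indicate a stronger deviation (1 degree of freedom).
    """
    n = n_aa + n_ab + n_bb
    if n == 0:
        return 0.0
    p = (2 * n_aa + n_ab) / (2 * n)  # frequency of the A allele
    q = 1.0 - p
    observed = [n_aa, n_ab, n_bb]
    expected = [n * p * p, 2 * n * p * q, n * q * q]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected) if e > 0)

# A marker whose genotype counts match Hardy-Weinberg proportions exactly
print(hwe_chisq(250, 500, 250))          # 0.0

# An excess of homozygotes produces a large statistic
print(hwe_chisq(400, 200, 400) > 3.84)   # True (significant at the 5% level, 1 df)
```

Markers with a statistic far beyond the chosen significance threshold would be flagged for exclusion.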
Per individual quality checks typically include:
* Check for per sample completeness (samples with high missingness rates are typically excluded)
* Check for per sample heterozygosity (samples with unusual heterozygosity are typically excluded)
* Inspection of principal component or multi-dimensional scaling plots (genetic ancestry outliers are usually excluded)
* Check for related individuals (if related samples are identified, the analysis plan must be adjusted or they must be removed)
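The first two per sample checks can be sketched as follows. The 0/1/2 minor-allele coding, the `None` marker for missing calls, and the 5% missingness cutoff are illustrative assumptions, not fixed conventions:

```python
def sample_qc(genotypes, max_missing=0.05):
    """Per-sample call rate and heterozygosity for one individual.

    genotypes: genotype calls coded as 0, 1 or 2 copies of the minor allele,
               with None marking a missing call (an illustrative encoding).
    Returns (call_rate, het_rate, passes_completeness_filter).
    """
    called = [g for g in genotypes if g is not None]
    call_rate = len(called) / len(genotypes)
    het_rate = sum(1 for g in called if g == 1) / len(called) if called else 0.0
    return call_rate, het_rate, (1.0 - call_rate) <= max_missing

# Toy sample: one missing call out of eight markers
call_rate, het_rate, ok = sample_qc([0, 1, 2, 1, None, 0, 1, 2])
print(call_rate)   # 0.875 -> 12.5% missingness
print(ok)          # False: fails a 5% missingness cutoff
```

Heterozygosity outliers are usually identified by comparing het_rate across all samples in the study rather than against a fixed threshold.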
With older genotyping platforms, low frequency SNPs are also often excluded because they are hard to genotype accurately. With more modern genotyping arrays, the accuracy of genotype calls for low frequency SNPs is less of a concern.
Two good ways to verify that you have used appropriate quality control steps are to generate a Q-Q plot for your dataset and to calculate a genomic control parameter. If these are not satisfactory, it may be useful to investigate before proceeding to imputation.
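The genomic control parameter can be computed as the median of the observed 1-df association chi-square statistics divided by the median expected under the null (about 0.455); a value close to 1 suggests the statistics are well calibrated. A minimal sketch with made-up statistics:

```python
import statistics

def genomic_control_lambda(chisq_stats):
    """Genomic control parameter for a collection of 1-df chi-square
    association statistics: median observed / median expected under the null."""
    CHI2_1DF_MEDIAN = 0.4549364  # median of the chi-square(1) distribution
    return statistics.median(chisq_stats) / CHI2_1DF_MEDIAN

# Statistics whose median matches the null expectation give lambda = 1
stats = [0.05, 0.21, 0.4549364, 1.07, 2.96]
print(round(genomic_control_lambda(stats), 2))   # 1.0
```

A genomic control parameter well above 1 suggests residual stratification or QC problems that are worth resolving before imputation.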
=== Reference Haplotypes ===
 
Reference haplotypes generated by the 1000 Genomes project and formatted so that they are ready for analysis are available from the [http://www.sph.umich.edu/csg/abecasis/MaCH/download/ MaCH download page]. In our hands, it is best to always use the most recent release, since the generation of additional sequence data and improvements in variant discovery, genotyping and haplotyping strategies typically result in noticeable improvements in data quality.
 
== Minimac Imputation ==
[[Minimac]] relies on a two-step approach. First, the samples that are to be analyzed must be phased into a series of estimated haplotypes. Second, imputation is carried out directly into these phased haplotypes. As newer reference panels become available, only the second step must be repeated.
    
=== Pre-phasing - MaCH ===

A convenient way to haplotype your sample is to use MaCH. A typical MaCH command line to estimate phased haplotypes might look like this:
    
   mach1 -d chr1.dat -p chr1.ped --rounds 20 --states 200 --phase --interim 5 --sample 5
This will request that MaCH estimate haplotypes for your sample, using 20 iterations of its Markov sampler and conditioning each update on up to 200 haplotypes. A summary description of these parameters follows (but for a more complete description, you should go to the MaCH website):
      
{| class="wikitable" border="1" cellpadding="2"
|-
| <code>--states 200</code>
| Number of haplotypes to consider during each update. Increasing this value will typically lead to better haplotypes, but can dramatically increase computing time and memory use. A value of 200 - 400 is typical. If computing time is a constraint, you may consider decreasing this parameter to 100. If you have substantial computing resources, consider increasing this value to 600 or even 800.
|-
| <code>--rounds 20</code>
| Number of iterations of the Markov sampler to use when estimating haplotypes (the example above requests 20 rounds).
|-
| <code>--sample 5</code>
| Request that random (but plausible) sets of haplotypes for each individual should be drawn every 5 iterations. This parameter is optional, but for some rare variant analyses, these alternative haplotypes can be very useful. If you are not short of disk space, you should consider enabling this parameter.
|-
| <code>--phase</code>
| Request that the estimated haplotypes for each individual be saved.
|}
=== Imputation into Phased Haplotypes - minimac ===

Imputing genotypes using '''minimac''' is a straightforward process: after selecting a set of reference haplotypes, plugging in the target haplotypes from the previous step, and setting the number of rounds to use for estimating model parameters (which describe the length and conservation of haplotype stretches shared between the reference panel and your study samples), imputation should proceed rapidly.
=== Imputation quality evaluation ===
Minimac hides each of the genotyped SNPs in turn and then calculates 3 statistics:
 
* looRSQ - this is the estimated rsq for that SNP (as if the SNP weren't typed).
* empR - this is the empirical correlation between true and imputed genotypes for the SNP. If this is negative, the SNP alleles are probably flipped.
 
* empRSQ - this is the actual R2 value, comparing imputed and true genotypes.
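To make the empR and empRSQ definitions concrete: the empirical correlation between the hidden true genotypes and the imputed dosages is just a Pearson correlation, and empRSQ is its square. This is a sketch with made-up numbers, not minimac's internal code:

```python
def pearson_r(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

true_genotypes = [0, 1, 2, 1, 0, 2]               # hidden, known genotype calls
imputed_dosages = [0.1, 0.9, 1.8, 1.2, 0.2, 1.9]  # imputed allele dosages
emp_r = pearson_r(true_genotypes, imputed_dosages)   # empR analogue
emp_rsq = emp_r ** 2                                 # empRSQ analogue
print(emp_r > 0.9)   # True: imputation tracks the true genotypes closely
# A strongly negative emp_r would suggest the marker's alleles are flipped
```

Comparing looRSQ with empRSQ across markers is a useful sanity check: large discrepancies can point to strand or labeling problems at specific SNPs.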
  
