Minimac: 1000 Genomes Imputation Cookbook

Revision as of 03:29, 28 July 2011

This page documents how to impute 1000 Genome SNPs using Minimac, which is typically the preferred approach for imputation using large reference panels such as the 1000 Genomes data. For information on how to carry out 1000 Genomes Imputation using MaCH, see MaCH: 1000 Genomes Imputation Cookbook.

Should I use MaCH or minimac?

The imputation quality of both approaches is comparable.

If you are carrying out only a one-time imputation, using a relatively small reference panel (fewer than 200 haplotypes), and focusing only on autosomes, then go with MaCH. If you are using a large reference panel, planning to re-impute your data as new reference panels become available, or interested in X-chromosome imputation, then minimac is the right choice for you.

As of June 2011, the 1000 Genomes reference panel includes >2000 haplotypes, so you are likely to be better off using minimac.

Getting Started

Your Own Data

To get started, you will need to store your data in Merlin format pedigree and data files, one per chromosome. For details of the Merlin file format, see the Merlin tutorial.

Within each file, markers should be sorted by chromosome position. Alleles should be stored on the forward strand and can be encoded as 'A', 'C', 'G' or 'T' (there is no need to use numeric identifiers for each allele).
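
As a toy illustration, a minimal Merlin data/pedigree file pair might look like the following (marker names and genotypes here are made up for the example), written out with shell here-documents:

```shell
# Hypothetical toy example of a Merlin file pair. The .dat file lists one
# marker per line ("M" followed by the marker name); each .ped row gives
# family, person, father, mother, sex, then one genotype column per marker,
# in the same order as the .dat file.
cat > chr1.dat <<'EOF'
M rs1234
M rs5678
EOF

cat > chr1.ped <<'EOF'
FAM1 IND1 0 0 M A/C G/G
FAM1 IND2 0 0 F C/C A/G
EOF
```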

The latest reference panel generated by the 1000 Genomes project uses NCBI Build 37 (hg19). Make sure that your data are on Build 37; otherwise, lift your data over first.

Quality Control

You should apply standard quality control filters to the set of SNPs that you use as input to the imputation procedure. These filters are typically study specific but usually include a series of per marker and per individual quality checks.

Per marker quality checks typically evaluate:

  • Genotype completeness (markers with relatively high missingness rates are typically excluded)
  • Hardy Weinberg equilibrium (markers with clear deviations from Hardy Weinberg proportions are typically excluded)
  • Duplicate concordance (markers with high discordance rates are excluded)
  • Mendelian inconsistencies (when parent-offspring pairs are available, markers with an excess of Mendelian inconsistencies are typically excluded).
  • Polymorphism check (monomorphic markers can represent assay failures and are typically excluded).

Per individual quality checks typically include:

  • Check for per sample completeness (samples with high missingness rates are typically excluded)
  • Check for per sample heterozygosity (samples with unusual heterozygosity are typically excluded)
  • Inspection of principal component or multi-dimensional scaling plots (genetic ancestry outliers are usually excluded)
  • Check for related individuals (if related samples are identified, the analysis plan must be adjusted or they must be removed)

With older genotyping platforms, low frequency SNPs are also often excluded because they are hard to genotype accurately. With more modern genotyping arrays, the accuracy of genotype calls for low frequency SNPs is less of a concern.

Two good ways to verify that you have used appropriate quality control steps are to generate a Q-Q plot for your dataset and to calculate a genomic control parameter. If these are not satisfactory, it may be useful to investigate before proceeding to imputation.
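
As a rough illustration of the completeness check, per-marker missingness in a Merlin pedigree file can be tallied with a few lines of awk. This is only a sketch: it assumes missing genotypes are coded "0/0", assumes a rectangular file, and uses a hypothetical 5% exclusion cutoff (your study's threshold may differ).

```shell
# Sketch: per-marker missingness from a Merlin .ped file.
# Assumes missing genotypes are coded "0/0"; flags markers above 5% missing.
marker_missingness() {
  awk '{
    n++                                    # count samples (one per row)
    for (i = 6; i <= NF; i++)              # genotype columns start at field 6
      if ($i == "0/0") miss[i - 5]++
    nmark = NF - 5                         # assumes every row has all markers
  }
  END {
    for (m = 1; m <= nmark; m++)
      printf "marker%d %.3f%s\n", m, miss[m] / n, \
             (miss[m] / n > 0.05 ? " EXCLUDE" : "")
  }' "$1"
}
```

In practice most studies use an established toolkit for these checks; the point here is just to make the per-marker bookkeeping concrete.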

Reference Haplotypes

Reference haplotypes generated by the 1000 Genomes project and formatted so that they are ready for analysis are available from the MaCH download page. In our hands, it is best to always use the most recent release, since additional sequence data and improvements in variant discovery, genotyping and haplotyping strategies typically result in noticeable improvements in data quality.

Minimac Imputation

Minimac relies on a two step approach. First, the samples that are to be analyzed must be phased into a series of estimated haplotypes. Second, imputation is carried out directly into these phased haplotypes. As newer reference panels become available, only the second step must be repeated.

Pre-phasing - MaCH

A convenient way to haplotype your sample is to use MaCH. A typical MaCH command line to estimate phased haplotypes might look like this:

 mach1 -d chr1.dat -p chr1.ped --rounds 20 --states 200 --phase --interim 5 --sample 5

This will request that MaCH estimate haplotypes for your sample, using 20 iterations of its Markov sampler and conditioning each update on up to 200 haplotypes. A summary description of these parameters follows (but for a more complete description, you should go to the MaCH website):

Parameter       Description
-d sample.dat   Data file in Merlin format. Markers should be listed according to their order along the chromosome.
-p sample.ped   Pedigree file in Merlin format. Alleles should be labeled on the forward strand.
--states 200    Number of haplotypes to consider during each update. Increasing this value will typically lead to better haplotypes, but can dramatically increase computing time and memory use. A value of 200-400 is typical. If computing time is a constraint, consider decreasing this parameter to 100. If you have substantial computing resources, consider increasing it to 600 or even 800.
--rounds 20     Iterations of the Markov sampler to use for haplotyping. Typically, 20-30 rounds should give good results. To obtain better results, it is usually better to increase the --states parameter.
--interim 5     Request that intermediate results be saved to disk periodically. These will facilitate analyses in case a run doesn't complete.
--sample 5      Request that random (but plausible) sets of haplotypes for each individual be drawn every 5 iterations. This parameter is optional, but for some rare variant analyses these alternative haplotypes can be very useful. If you are not short of disk space, consider enabling this parameter.
--phase         Tell MaCH to estimate phased haplotypes for each individual.
--compact       Reduce memory use at the cost of approximately doubling runtime.


You should be able to run this step in parallel; on our cluster we'd use:

   foreach chr (`seq 1 22`)

     runon -m 4096 mach1 -d chr$chr.dat -p chr$chr.ped --rounds 20 --states 200 --phase --interim 5 --sample 5

   end
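
If your cluster uses bash rather than csh, the same loop can be written as follows. This is only a sketch: the commands are printed rather than executed, so you can hand the list to whatever scheduler your site uses (runon and mach1 are assumed to be on your PATH when you actually run them).

```shell
# Bash version of the csh loop above: build one pre-phasing command line per
# chromosome. Printing (instead of executing) lets you feed the list to a
# job scheduler or inspect it first.
prephase_cmds() {
  for chr in $(seq 1 22); do
    echo "mach1 -d chr${chr}.dat -p chr${chr}.ped --rounds 20 --states 200 --phase --interim 5 --sample 5"
  done
}
prephase_cmds
```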

Imputation into Phased Haplotypes - minimac

Imputing genotypes using minimac is a straightforward process: after selecting a set of reference haplotypes, plugging in the target haplotypes from the previous step, and setting the number of rounds to use for estimating model parameters (which describe the length and conservation of haplotype stretches shared between the reference panel and your study samples), imputation should proceed rapidly.


A typical minimac command line might look like this:

 minimac --refHaps ref.hap.$chr.gz --refSnps ref.snps.$chr.gz --haps target.hap.$chr.gz --snps target.snps.$chr.gz --rounds 5 --states 200 --prefix results

A detailed description of all minimac options is available elsewhere. Here is a brief description of the above parameters:

Parameter               Description
--refSnps ref.snps.gz   List of SNPs in the reference panel.
--refHaps ref.hap.gz    Reference haplotypes (e.g. from the MaCH download page).
--snps target.snps.gz   SNPs in the phased haplotypes. These should largely be a subset of the SNPs in the reference panel.
--haps target.hap.gz    Phased haplotypes where missing genotypes will be imputed.
--rounds 5              Rounds of optimization for model parameters, which describe population recombination rates and per SNP error rates.
--states 200            Maximum number of reference (or target) haplotypes to be examined during parameter optimization.
--prefix results        Optionally, a string that is used to help generate output file names.


Again, you can speed things up by running this step in parallel; on our cluster we'd use:

   foreach chr (`seq 1 22`)

     runon -m 1024 minimac --refHaps ref.hap.$chr.gz --refSnps ref.snps.$chr.gz --haps target.hap.$chr.gz --snps target.snps.$chr.gz --rounds 5 --states 200 --prefix results

   end


X Chromosome Imputation

minimac supports the imputation of genotypes on the X chromosome (non-pseudo-autosomal part).

  1. Split the X chromosome pedigree file by sex.
  2. For females: follow the same protocol as for autosomes (phase with MaCH and impute with minimac).
  3. For males:
    1. Remove the pseudo-autosomal regions (for build hg18: chrX:1-2709520 and chrX:154584238-154913754; for build hg19: chrX:60001-2699520 and chrX:154931044-155260560).
    2. Convert the pedigree file into a MaCH haplotype file (missing genotypes should be encoded as "0", ".", or "N") and impute using minimac as described above.


<Example of a male only pedigree file >
FAM1003 ID1234 0 0 M A/0 A/0 C/0
FAM1004 ID5678 0 0 M 0/0 C/0 G/0
...
<End of pedigree file>


<Example of the corresponding haplotype file>
FAM1003->ID1234 HAPLO1 AAC
FAM1003->ID1234 HAPLO2 AAC
FAM1004->ID5678 HAPLO1 0CG
FAM1004->ID5678 HAPLO2 0CG
...
<End of the corresponding haplotype file>
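
The male conversion step can be scripted. The snippet below is a hypothetical helper (not part of MaCH or minimac) that turns rows like the pedigree example above into the corresponding haplotype lines, taking the first allele of each hemizygous genotype and writing it twice:

```shell
# Hypothetical helper: convert a male-only pedigree file (hemizygous X
# genotypes such as "A/0", missing coded "0") into MaCH haplotype format.
ped2haps() {
  awk '{
    hap = ""
    for (i = 6; i <= NF; i++) {       # genotype columns start at field 6
      split($i, g, "/")               # e.g. "A/0" -> first allele "A"
      hap = hap g[1]
    }
    print $1 "->" $2, "HAPLO1", hap
    print $1 "->" $2, "HAPLO2", hap   # males: both haplotypes identical
  }' "$1"
}
```

Applied to the example pedigree file above, this reproduces the example haplotype file shown.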


Imputation quality evaluation

Minimac hides each of the genotyped SNPs in turn and then calculates 3 statistics:

  • looRSQ - this is the estimated rsq for that SNP (as if the SNP weren't typed).
  • empR - this is the empirical correlation between true and imputed genotypes for the SNP. If this is negative, the SNP alleles are probably flipped.
  • empRSQ - this is the actual R2 value, comparing imputed and true genotypes.

These statistics can be found in the *.info file.
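
For intuition, empRSQ is the squared Pearson correlation between true and imputed genotype dosages. The sketch below computes it from a hypothetical two-column file of true and imputed dosages (this is an illustration of the formula, not minimac's actual *.info machinery):

```shell
# Sketch: squared Pearson correlation between column 1 (true dosage) and
# column 2 (imputed dosage), the quantity empRSQ reports per SNP.
emp_rsq() {
  awk '{ n++; sx += $1; sy += $2; sxx += $1*$1; syy += $2*$2; sxy += $1*$2 }
  END {
    cov = sxy - sx*sy/n            # n times the covariance
    vx  = sxx - sx*sx/n            # n times the variance of x
    vy  = syy - sy*sy/n            # n times the variance of y
    printf "%.4f\n", (cov*cov) / (vx*vy)
  }' "$1"
}
```

A negative (unsquared) correlation is what empR flags, which is why a negative empR suggests flipped alleles.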

Be aware that imputation quality statistics are not directly comparable between different imputation programs (MaCH/minimac vs. Impute vs. Beagle etc.).

Questions and Comments

Please contact Goncalo Abecasis or Christian Fuchsberger (minimac) or Yun Li (MaCH).