Minimac: 1000 Genomes Imputation Cookbook

Revision as of 18:07, 10 February 2011

This page documents how to impute 1000 Genome SNPs using

  • MaCH (concurrent phasing approach), or
  • minimac (pre-phasing approach).


Should I use MaCH or minimac?

The imputation quality of both approaches is comparable.

If you are carrying out only a one-time imputation and are focusing only on the autosomes, then go with MaCH.

If you are planning to re-impute into new reference panels or you are interested in the X chromosome, then minimac is the right choice for you.

Getting Started

Your Own Data

To get started, you will need to store your data in Merlin format pedigree and data files, one per chromosome. For details of the Merlin file format, see the [http://www.sph.umich.edu/csg/abecasis/Merlin/tour/input_files.html Merlin tutorial].

Within each file, markers should be sorted by chromosome position. Alleles should be stored on the forward strand and can be encoded as 'A', 'C', 'G' or 'T' (there is no need to use numeric identifiers for each allele).

The 1000 Genome pilot project genotypes use NCBI Build 36.
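As an illustration, a toy pair of Merlin-format files might look like the following; the marker names and genotypes here are invented for the example:

```shell
# A toy Merlin data file: one 'M' (marker) line per SNP,
# listed in chromosome position order. Names here are made up.
cat > chr1.dat << 'EOF'
M rs12345
M rs67890
EOF

# The matching pedigree file: family ID, individual ID, father ID,
# mother ID, sex, then two forward-strand alleles per marker.
cat > chr1.ped << 'EOF'
FAM1 IND1 0 0 1 A C G G
FAM2 IND2 0 0 2 A A G T
EOF
```

Founders (individuals with unknown parents) have father and mother IDs of 0, as in the sketch above.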

Reference Haplotypes

Reference haplotypes generated by the 1000 Genomes project and formatted so that they are ready for analysis are available from the MaCH download page. The most recent set of haplotypes was generated in June 2010 by combining genotype calls generated at the Broad, Sanger and the University of Michigan. In our hands, this June 2010 release is substantially better than previous 1000 Genomes Project genotype call sets.

MaCH Imputation

Estimating Model Parameters

The first step for genotype imputation analyses using MaCH is to build a model that relates your dataset to a reference set of haplotypes. This model will include information on the length of haplotype stretches shared between your sample and the reference panel (in a .rec file) and information on the similarity of marker genotypes between your sample and the reference panel (in a .err file). The contents of the second file reflect the combined effects of genotyping error and differences in genotyping assays between the two samples.

In our view, MaCH's ability to estimate these parameters makes it especially robust to genotyping problems and differences in genotyping assays, and allows it to adjust to varying degrees of similarity between the sample being imputed and a reference panel.

Selecting a Subset of Samples

Using all available samples to estimate model parameters can be quite time consuming and is typically not necessary. Instead, we recommend that you select a subset of about 200 samples for this step of the analysis. If you are not worried about batch effects, you could just take the first 200 samples in your pedigree file, for example:

  head -n 200 chr1.ped > chr1.first200.ped

And then run mach as:

  mach -d chr1.dat -p chr1.first200.ped -s CEU.1000G/chr1.snps -h CEU.1000G/chr1.hap --greedy -r 100 -o chr1.parameters 

Optionally, you can add the --compact option to the end of the command line to reduce memory use.

If you'd like to play it safe and sample 200 individuals at random, you could use PedScript and a series of commands similar to these:

   pedscript << EOF
   READ DAT chr1.dat
   READ PED chr1.ped
   
   NUKE
   TRIM

   WRITE DAT chr1.rand200.dat
   SAMPLE 200 PERSONS TO chr1.rand200.ped

   QUIT
   EOF

Then, run mach just as before:

  mach -d chr1.rand200.dat -p chr1.rand200.ped -s CEU.1000G/chr1.snps -h CEU.1000G/chr1.hap --greedy -r 100 -o chr1.parameters 

You can decide on the exact number of samples to use for this analysis; anywhere between 100 and 500 individuals should be plenty. On a compute cluster, you should be able to run this step for all chromosomes in parallel; the exact syntax will depend on your cluster. In ours, we'd try something like:

   foreach chr (`seq 1 22`)

     pedscript << EOF
        READ DAT chr$chr.dat
        READ PED chr$chr.ped
        NUKE
        TRIM
        WRITE DAT chr$chr.rand200.dat
        SAMPLE 200 PERSONS TO chr$chr.rand200.ped
        QUIT
EOF

     runon -m 4096 mach -d chr$chr.rand200.dat -p chr$chr.rand200.ped -s CEU.1000G/chr$chr.snps -h CEU.1000G/chr$chr.hap \
                        --greedy -r 100 -o chr$chr.parameters --compact >& chr$chr.model.log &

   end
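The foreach syntax above is for csh. If your cluster shell is bash, the same loop can be written as below; this is a sketch that simply translates the commands above, and runon is our cluster's job-submission wrapper, so substitute your own scheduler's command:

```shell
# Write the bash version of the per-chromosome loop to a script file,
# so it can be inspected or submitted as a batch job.
# The outer quoted 'OUTER' delimiter keeps $chr unexpanded until runtime.
cat > run_model.sh << 'OUTER'
#!/bin/bash
for chr in $(seq 1 22); do
  # Sample 200 random individuals per chromosome with PedScript.
  pedscript << EOF
READ DAT chr$chr.dat
READ PED chr$chr.ped
NUKE
TRIM
WRITE DAT chr$chr.rand200.dat
SAMPLE 200 PERSONS TO chr$chr.rand200.ped
QUIT
EOF
  # Estimate model parameters in the background, one job per chromosome.
  # runon is a cluster-specific wrapper; replace with your own.
  runon -m 4096 mach -d chr$chr.rand200.dat -p chr$chr.rand200.ped \
        -s CEU.1000G/chr$chr.snps -h CEU.1000G/chr$chr.hap \
        --greedy -r 100 -o chr$chr.parameters --compact > chr$chr.model.log 2>&1 &
done
wait
OUTER
```

Note that bash uses `> file 2>&1` where csh uses `>&`, and that the heredoc terminator `EOF` must start at the beginning of its line.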

Imputation

Once model parameters have been estimated, you are ready to impute. This step should be comparatively fast, at least on a per individual basis.

Your command line should look like this:

   mach -d chr1.dat -p chr1.ped -s CEU.1000G/chr1.snps -h CEU.1000G/chr1.hap \
        --errorMap chr1.parameters.erate --crossoverMap chr1.parameters.rec \
        --compact --greedy --autoFlip --mle --mldetails >& chr1.impute.log

Again, you should be able to run this step in parallel; on our cluster we'd use:

   foreach chr (`seq 1 22`)

     runon -m 4096 mach -d chr$chr.dat -p chr$chr.ped -s CEU.1000G/chr$chr.snps -h CEU.1000G/chr$chr.hap \
           --errorMap chr$chr.parameters.erate --crossoverMap chr$chr.parameters.rec \
           --compact --greedy --autoFlip --mle --mldetails >& chr$chr.impute.log 

   end


minimac Imputation

Quality Filtering

For 1000 Genomes SNPs, we currently recommend that any markers with estimated r2 < 0.5 should be treated with caution. This is a bit more conservative than the threshold of 0.3 we recommend for HapMap, but 1000 Genomes SNP genotypes are also (as of June 2010) of lower quality.
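As a sketch, this filter can be applied to MaCH's per-chromosome information files with awk. The example below assumes the usual .mlinfo layout in which Rsq is the seventh column, and uses a toy file with invented values; check the header of your own output before relying on the column index.

```shell
# Toy .mlinfo file with made-up values, following the assumed
# SNP Al1 Al2 Freq1 MAF Quality Rsq column layout.
cat > chr1.mlinfo << 'EOF'
SNP Al1 Al2 Freq1 MAF Quality Rsq
rs111 A C 0.80 0.20 0.99 0.92
rs222 G T 0.60 0.40 0.95 0.31
EOF

# Keep the header plus any marker with estimated r2 of at least 0.5
# (Rsq is assumed to be column 7).
awk 'NR == 1 || $7 >= 0.5' chr1.mlinfo > chr1.filtered.mlinfo
```

In this toy example, rs222 (Rsq 0.31) is dropped while rs111 (Rsq 0.92) is kept. The list of retained SNPs can then be used to subset the corresponding dosage files before association analysis.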