This page documents how to impute 1000 Genomes SNPs using MaCH. It will almost always be more efficient to use [[Minimac]] to carry out imputation with large reference panels, such as the 1000 Genomes Project data, so you are probably better off looking at the [[Minimac: 1000 Genomes Imputation Cookbook]].
    
== Getting Started ==
Within each file, markers should be ordered by chromosome position. Alleles should be stored on the forward strand and can be encoded as 'A', 'C', 'G' or 'T' (there is no need to use numeric identifiers for each allele).
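
For illustration, a minimal Merlin-format data file and matching pedigree file might look like this (the sample IDs and genotypes are invented, and the '#' lines are annotations rather than part of the files):

<source lang="text">
  # chr1.dat -- one 'M' line per marker, listed in chromosome order
  M rs3094315
  M rs4040617

  # chr1.ped -- family, person, father, mother, sex, then one genotype per marker
  F1  P1  0  0  1   A/G   G/G
  F2  P2  0  0  2   A/A   A/G
</source>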
The 1000 Genomes pilot project genotypes use NCBI Build 36; more recent releases use NCBI Build 37.
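
If your genotypes and your reference haplotypes are on different builds, marker positions must be converted before imputation. A minimal sketch using the UCSC liftOver tool (the file names here are placeholders; you would first write your markers as 0-based BED intervals and download the matching chain file):

<source lang="text">
  # markers.hg18.bed: one line per SNP, "chr1  pos-1  pos  rsid" (BED is 0-based)
  # hg18ToHg19.over.chain.gz: UCSC chain file for the Build 36 -> Build 37 conversion
  liftOver markers.hg18.bed hg18ToHg19.over.chain.gz markers.hg19.bed unmapped.bed
</source>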
    
=== Reference Haplotypes ===
Reference haplotypes generated by the 1000 Genomes Project and formatted so that they are ready for analysis are available from the [http://www.sph.umich.edu/csg/abecasis/MaCH/download/ MaCH download page]. Improvements in genotype calling and haplotyping methods, together with the generation of progressively larger amounts of sequence data, mean that it is typically best to use the most recent set of 1000 Genomes Project haplotypes.
    
== Estimating Model Parameters ==
The first step for genotype imputation analyses using MaCH is to build a model that relates your dataset to a reference set of haplotypes. This model will include information on the length of haplotype stretches shared between your sample and the reference panel (in a ''.rec'' file) and information on the similarity of marker genotypes between your sample and the reference panel (in an ''.erate'' file). The contents of the second file reflect the combined effects of genotyping error and of differences in genotyping assays between the two samples.

In our view, MaCH's ability to estimate these parameters makes it especially robust to genotyping problems and to differences in genotyping assays, and allows it to adjust to varying degrees of similarity between the sample being imputed and the reference panel.
=== Selecting a Subset of Samples ===
Using all available samples to estimate model parameters can be quite time consuming and is typically not necessary. Instead, we recommend that you select a set of 200 samples to use in this step of the analysis. If you are not worried about batch effects, you could just take the first 200 samples in your pedigree file, for example:

  head -200 chr1.ped > chr1.first200.ped

And then run mach as:

  mach -d chr1.dat -p chr1.first200.ped -s CEU.1000G/chr1.snps -h CEU.1000G/chr1.hap --greedy -r 100 -o chr1.parameters

Optionally, you can add the <code>--compact</code> option to the end of the command line to reduce memory use.

If you'd like to play it safe and sample 200 individuals at random, you could use [[PedScript]] and a series of commands similar to this one:
<source lang="text">
 +
  pedscript << EOF
 +
  READ DAT chr1.dat
 +
  READ PED chr1.ped
 +
 
 +
  NUKE
 +
  TRIM
 +
 +
  WRITE DAT chr1.rand200.dat
 +
  SAMPLE 200 PERSONS TO chr1.rand200.ped
 +
 +
  QUIT
 +
  EOF
 +
</source>
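
If your samples are all unrelated individuals (so that each line in the pedigree file is one person), a plain shell alternative is to sample 200 lines at random with the GNU coreutils <code>shuf</code> tool; this is a sketch under that assumption, and PedScript remains the safer choice when there is family structure:

<source lang="text">
  # sample 200 individuals at random; unlike the PedScript recipe, this does
  # not respect family structure and leaves the marker list untouched, so
  # keep using the original chr1.dat alongside the sampled pedigree file
  shuf -n 200 chr1.ped > chr1.rand200.ped
</source>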

Then, run mach just as before:

  mach -d chr1.rand200.dat -p chr1.rand200.ped -s CEU.1000G/chr1.snps -h CEU.1000G/chr1.hap --greedy -r 100 -o chr1.parameters

You can decide on the exact number of samples to use for this analysis; anywhere between 100 and 500 individuals should be plenty. On a compute cluster, you should be able to run this step for all chromosomes in parallel; the exact syntax will depend on your cluster. In ours, we'd try something like:
<source lang="text">
 +
  foreach chr (`seq 1 22`)
 +
 +
    pedscript << EOF
 +
        READ DAT chr$chr.dat
 +
        READ PED chr$chr.ped
 +
        NUKE
 +
        TRIM
 +
        WRITE DAT chr$chr.rand200.dat
 +
        SAMPLE 200 PERSONS TO chr$chr.rand200.ped
 +
        QUIT
 +
EOF
 +
 +
    runon -m 4096 mach -d chr$chr.rand200.dat -p chr$chr.rand200.ped -s CEU.1000G/chr$chr.snps -h CEU.1000G/chr$chr.hap \
 +
                        --greedy -r 100 -o chr$chr.parameters --compact >& chr$chr.model.log &
 +
 +
  end
 +
</source>
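
The loop above uses csh syntax, and <code>runon</code> is our local job-submission wrapper. As a sketch, an equivalent loop for bash users might look like this (substitute your own cluster's submission command where we simply background the job):

<source lang="text">
  # bash version of the model-fitting loop; jobs are backgrounded locally,
  # so replace the bare 'mach' call with your cluster's submission command
  # if 22 simultaneous jobs would be too much for one machine
  for chr in $(seq 1 22); do
      pedscript << EOF
          READ DAT chr$chr.dat
          READ PED chr$chr.ped
          NUKE
          TRIM
          WRITE DAT chr$chr.rand200.dat
          SAMPLE 200 PERSONS TO chr$chr.rand200.ped
          QUIT
EOF
      mach -d chr$chr.rand200.dat -p chr$chr.rand200.ped \
           -s CEU.1000G/chr$chr.snps -h CEU.1000G/chr$chr.hap \
           --greedy -r 100 -o chr$chr.parameters --compact > chr$chr.model.log 2>&1 &
  done
  wait   # wait for all chromosomes to finish
</source>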

== Imputation ==
Once model parameters have been estimated, you are ready to impute. This step should be comparatively fast, at least on a per-individual basis.

Your command line should look like this:
    mach -d chr1.dat -p chr1.ped -s CEU.1000G/chr1.snps -h CEU.1000G/chr1.hap \
        --errorMap chr1.parameters.erate --crossoverMap chr1.parameters.rec \
        --compact --greedy --autoFlip --mle --mldetails >& chr1.impute.log

Again, you should be able to run this step in parallel; on our cluster, we'd use:
<source lang="text">
 +
  foreach chr (`seq 1 22`)
 +
 +
    runon -m 4096 mach -d chr$chr.dat -p chr$chr.ped -s CEU.1000G/chr$chr.snps -h CEU.1000G/chr$chr.hap \
 +
          --errorMap chr$chr.parameters.erate --crossoverMap chr$chr.parameters.rec \
 +
          --compact --greedy --autoFlip --mle --mldetails >& chr$chr.impute.log
 +
 +
  end
 +
</source>

== Quality Filtering ==
For the original 1000 Genomes SNP sets, we typically recommended that any markers with estimated r<sup>2</sup> below 0.5 be treated with caution. This is a bit more conservative than the threshold of 0.3 we recommended for HapMap, but early 1000 Genomes Project haplotype sets were also (as of June 2010) of lower quality. With current iterations of the project data, it should be safe to use a less conservative r<sup>2</sup> cut-off.
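
As a sketch of how such a filter might be applied: MaCH reports the estimated r<sup>2</sup> in the Rsq column (the seventh field) of its ''.mlinfo'' output, so a simple awk one-liner will do. The file name below is a placeholder; the actual name depends on the output prefix of your imputation run.

<source lang="text">
  # keep the header line plus markers whose estimated r2 (Rsq, field 7)
  # is at least 0.5; relax the cut-off for newer reference panels
  awk 'NR == 1 || $7 >= 0.5' chr1.mlinfo > chr1.mlinfo.filtered
</source>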
