== How do I get imputation quality estimates? ==
This is the new two-step procedure we recommend, particularly for people performing imputation multiple times (e.g., using HapMap as reference, or using updated releases of the 1000 Genomes data as reference). <br>
The first step is a pre-phasing step using MaCH. This step does not need an external reference. It is time-consuming BUT is a one-time investment. For computational reasons, we recommend breaking the genome into small overlapping segments ( [ Divide-and-Conquer]) for this step. In general, we recommend a >500Kb overlapping region on each side. For example, for the Affymetrix 6.0 panel, using a core region of 10Mb and a flanking/overlapping region of 1Mb on each side corresponds to ~3,500 SNPs in the core region and ~350 SNPs on each side. For 2,000 individuals, one job with ~4,200 SNPs running with --states 200 and -r 50 would take ~40 hours. For other combinations, use the following link to estimate computing time: [ runtime estimate]. <br>
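The segment boundaries described above can be computed mechanically. Below is a sketch; the chromosome length, 10Mb core, and 1Mb flank are illustrative values rather than MaCH parameters:

```shell
#!/bin/sh
# Print overlapping segment boundaries for pre-phasing (a sketch; all
# sizes are illustrative). Usage: make_segments <chromosome-length-bp>
make_segments() {
    chr_len=$1
    core=10000000     # 10Mb core region
    flank=1000000     # 1Mb flanking/overlap on each side
    start=1
    while [ "$start" -le "$chr_len" ]; do
        core_end=$((start + core - 1))
        [ "$core_end" -gt "$chr_len" ] && core_end=$chr_len
        seg_start=$((start - flank)); [ "$seg_start" -lt 1 ] && seg_start=1
        seg_end=$((core_end + flank)); [ "$seg_end" -gt "$chr_len" ] && seg_end=$chr_len
        echo "segment ${seg_start}-${seg_end} (core ${start}-${core_end})"
        start=$((core_end + 1))
    done
}

make_segments 50000000    # e.g., a 50Mb chromosome -> 5 overlapping segments
```

Each segment's core abuts the next segment's core, so every SNP falls in exactly one core; the flanks exist only to give the HMM context at segment edges.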
The second step is the actual imputation step using minimac. This step can run on whole chromosomes. Regarding computing time, one million markers for 1,000 individuals using 100 reference haplotypes takes ~1 hour, and computing time increases linearly in each of these three parameters. See [ minimac] for details.
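Since the cost is linear in each of markers, individuals, and reference haplotypes, a back-of-the-envelope estimate can be scaled from the ~1 hour baseline above. A sketch (the function name is made up; the baseline figures come from the preceding paragraph):

```shell
#!/bin/sh
# Rough minimac runtime estimate, anchored at ~1 hour for
# 1e6 markers x 1,000 individuals x 100 reference haplotypes,
# scaling linearly in each parameter.
estimate_hours() {
    awk -v m="$1" -v n="$2" -v h="$3" \
        'BEGIN { printf "%.1f\n", (m / 1e6) * (n / 1000) * (h / 100) }'
}

estimate_hours 1000000 1000 100    # baseline -> 1.0
estimate_hours 2000000 2000 400    # 2 x 2 x 4 times the baseline -> 16.0
```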
=== MaCH-Admix ===
If you are doing imputation only once or a few (<5) times (think twice about whether this is really true), or are under immediate time pressure, you can use MaCH-Admix, which does not require pre-phased data and takes ~1/7 of the computing time that the original MaCH typically needs for pre-phasing. For large datasets, we recommend breaking the genome into small overlapping segments ( [ Divide-and-Conquer]). See [] for details.
=== Divide and Conquer ===
== Where can I find combined HapMap reference files? ==
You can find them on the HapMap Project website.
== Where can I find HapMap III / 1000 Genomes reference files? ==
You can find these at the MaCH download page.
== Does --mle overwrite input genotypes? ==
Estimated per allele error rate is 0.0293
A better approach is to mask a small proportion of SNPs (vs. genotypes in the simple approach above). One can generate a mask.dat from the original .dat file by simply changing the flag of a subset of markers from M to S2, without duplicating the .ped file. Post-imputation, one can use [ CalcMatch] and [] to estimate the genotypic/allelic error rate and correlation, respectively. Both programs can be downloaded from [].
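Flipping a subset of marker flags from M to S2 is a one-line awk edit. A sketch on a toy .dat file; the filenames and the every-10th-marker masking fraction are illustrative:

```shell
#!/bin/sh
# Toy .dat file: 20 markers flagged M (MaCH .dat format: "M <marker-name>").
for i in $(seq 1 20); do echo "M rs$i"; done > orig.dat

# mask.dat: every 10th M marker becomes S2; the .ped file is left untouched.
awk '$1 == "M" { nm++; if (nm % 10 == 0) $1 = "S2" } { print }' orig.dat > mask.dat
```

Markers flagged S2 are skipped during model fitting but still imputed, so their imputed genotypes can be compared against the withheld truth.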
'''Warning''': Imputation involving masked datasets should be performed separately for imputation quality estimation. For production, one should use all available information.
In the simple approach, you will only get concordance/error estimates. There are two aspects to check. (1) The ratio between the genotypic and allelic error rates. We expect only a small proportion of errors where one homozygote is imputed as the other homozygote; therefore, a ~2:1 ratio is expected. (2) The absolute error rate. Several factors influence imputation quality, including the population to be imputed, the reference population, and the genotyping panel used. Typically, we expect a <2% allelic error rate among Caucasians and East Asians, and 3-5% among Africans and African Americans. The figure below shows imputation quality from the Human Genome Diversity Project (HGDP) for 52 populations across the world, by different HapMap reference panels.
Table 3 in the MaCH 1.0 paper tabulates imputation quality by commercial panel in CEU, YRI, and CHB+JPT.
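To make the genotypic-vs-allelic distinction concrete: a heterozygote miscalled as a homozygote is one genotype error but only one allele error, while a homozygote imputed as the other homozygote is one genotype error but two allele errors. A toy computation (made-up data; genotypes coded as minor-allele counts 0/1/2):

```shell
#!/bin/sh
# Columns: true genotype, imputed genotype (allele counts 0/1/2).
cat > check.txt <<'EOF'
0 0
2 2
1 1
0 1
2 1
1 1
2 0
0 0
1 2
2 2
EOF

# Genotypic error rate: fraction of mismatched genotypes.
# Allelic error rate: summed allele-count differences over 2N alleles.
awk '{ n++
       if ($1 != $2) ge++
       d = $1 - $2; if (d < 0) d = -d; ae += d }
     END { printf "genotypic %.2f allelic %.2f\n", ge / n, ae / (2 * n) }' check.txt
```

Here 4 of 10 genotypes are wrong (0.40) but only 5 of 20 alleles (0.25); the single hom-to-hom flip (2 imputed as 0) is what pulls the ratio below exactly 2:1.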
== How do I get reference files for a region of interest? ==
Note that you do not need to extract regional pedigree files for your own samples, because SNPs in the pedigree but not in the reference will be automatically discarded. <br> 1. For HapMapII format, download haplotypes from <br> 2. For MACH format, you can do the following:
*First, find the first and last SNP in the region you are interested in. Say "rsFIRST" and "rsLAST", defined according to position.
*Then, under csh:
set first = `grep -nw rsFIRST orig.snps | cut -f1 -d ':'`
set last = `grep -nw rsLAST orig.snps | cut -f1 -d ':'`
under bash:
first=`grep -nw rsFIRST orig.snps | cut -f1 -d ':'`
last=`grep -nw rsLAST orig.snps | cut -f1 -d ':'`
*Then find out which field contains the actual haplotypes (fields are separated by whitespace; the haplotype field is a single string with one character per allele):
head -1 orig.hap | wc -w
Note: if the haplotypes are gz compressed, do:
zcat orig.hap.gz | head -1 | wc -w
* Finally (say you got 3 from the above wc -w command; if you got a different number, replace the bold 3 below with the number you got):
awk '{print $'''3'''}' orig.hap | cut -c${first}-${last} > region.hap
Note: if the haplotypes are gz compressed, do:
zcat orig.hap.gz | awk '{print $'''3'''}' | cut -c${first}-${last} > region.hap
The created reference files are in MaCH format. You do NOT need to turn on --hapmapFormat option.
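The steps above can be put together on a toy example. All file contents below are made up; the final region.snps line is an extra convenience that keeps a marker list matching the columns of region.hap:

```shell
#!/bin/sh
# Toy reference: 5 SNPs, 2 haplotypes in MaCH format
# ("ID HAPLO-label haplotype-string", one character per allele).
printf 'rs1\nrs2\nrs3\nrs4\nrs5\n' > orig.snps
printf 'FAM1->ID1 HAPLO1 ACGTA\nFAM1->ID1 HAPLO2 GTTAC\n' > orig.hap

first=$(grep -nw rs2 orig.snps | cut -f1 -d ':')   # line number of first SNP
last=$(grep -nw rs4 orig.snps | cut -f1 -d ':')    # line number of last SNP
field=$(head -1 orig.hap | wc -w)                  # field holding the haplotypes

# Keep only the haplotype field, then slice out the region's allele columns.
awk '{print $'"$field"'}' orig.hap | cut -c${first}-${last} > region.hap
sed -n "${first},${last}p" orig.snps > region.snps
```

Because orig.snps lines correspond one-to-one with haplotype characters, the SNP line numbers double as character positions for cut.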
== Install MaCH ==
We have source code available through the MaCH download page. <br>
== More questions? ==
Email [ Yun Li] or [ Goncalo Abecasis].
