== How to speed up? ==
=== minimac ===
This is the new 2-step procedure we recommend, particularly for people who perform imputation multiple times (using HapMap as reference, or using updated releases of the 1000 Genomes data as reference). <br>
The first step is a pre-phasing step using MaCH. This step does not need an external reference. It is time-consuming BUT is a one-time investment. For computational reasons, we recommend breaking the genome into small overlapping segments ([http://genome.sph.umich.edu/wiki/MaCH_FAQ#Divide_and_Conquer Divide-and-Conquer]) for this step. In general, we recommend a >500kb overlapping region on each side. For example, for the Affymetrix 6.0 panel, using a core region of 10Mb and a flanking/overlapping region of 1Mb on each side corresponds to ~3,500 SNPs in the core region and ~350 SNPs on each side. For 2,000 individuals, one job with ~4,200 SNPs running with --states 200 and -r 50 would take ~40 hours. For other combinations, use the following link to estimate computing time: [http://csg.sph.umich.edu//yli/MaCH-Admix/runtime.php#est runtime estimate]. <br>
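For orientation, a minimal bash sketch of such a segmentation (the chromosome length is an assumed placeholder, and a real pipeline would also slice the .dat/.ped files to match these windows):

 # print 10Mb core windows with 1Mb flanks on each side
 CHRLEN=63000000   # assumed chromosome length in bp (placeholder)
 core=10000000; flank=1000000
 for (( start=1; start<=CHRLEN; start+=core )); do
   s=$(( start > flank ? start - flank : 1 ))
   e=$(( start + core - 1 + flank ))
   echo "segment ${s}-${e} (core ${start}-$(( start + core - 1 )))"
 done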
The second step is the actual imputation step using minimac. This step can run on whole chromosomes. Regarding computing time, one million markers for 1,000 individuals using 100 reference haplotypes takes ~1 hour, and computing time increases linearly with each of these three parameters. See [http://genome.sph.umich.edu/wiki/Minimac minimac] for details.
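For orientation, a minimal sketch of the two commands (file names are placeholders; check the MaCH and minimac pages for the exact options of your versions):

 # step 1: pre-phase with MaCH (no external reference needed)
 mach1 -d sample.dat -p sample.ped --rounds 20 --states 200 --phase --prefix step1
 # step 2: impute with minimac against a reference panel
 minimac --refHaps ref.hap.gz --refSnps ref.snps.gz --haps step1.gz --snps sample.snps --rounds 5 --states 200 --prefix step2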
=== MaCH-Admix ===
If you are doing imputation only a few (<5) times (think twice about whether this is really true) or are under immediate time pressure, you can use MaCH-Admix, which does not require pre-phased data and takes ~1/7 of the computing time typically needed for pre-phasing. For large datasets, we recommend breaking the genome into small overlapping segments ([http://genome.sph.umich.edu/wiki/MaCH_FAQ#Divide_and_Conquer Divide-and-Conquer]). See [http://www.unc.edu/~yunmli/MaCH-Admix/ MaCH-Admix] for details.
=== Divide and Conquer ===
See [http://genome.sph.umich.edu/wiki/Mach_DAC MaCH Divide and Conquer] for details.
=== 2-step imputation ===
See [http://genome.sph.umich.edu/wiki/MaCH_FAQ#Why_and_how_to_perform_a_2-step_imputation.3F 2-step imputation] for details.
 
== Why and how to perform a 2-step imputation?  ==
 
  # step 2:
  mach1 -d sample.dat -p sample.ped -s chr20.snps -h chr20.hap --compact --greedy --autoFlip --errorMap par_infer.erate --crossoverMap par_infer.rec --mle --mldetails > mach.imp.log
In step 1, one can use --greedy in combination with --states XX in MaCH versions 16.b and above. We have found that using 1/3 of the reference haplotypes results in almost no power loss for the current HapMap and 1000G reference panels; because computing time is quadratic in the number of states, this cuts step-1 time to ~1/9.<br>
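As an illustration, a hypothetical step-1 command reconstructed from the step-2 command above (the prefix par_infer matches the --errorMap/--crossoverMap inputs used in step 2; --states 200 stands in for roughly 1/3 of the reference haplotypes):

 # step 1: estimate error and crossover maps using a subset of reference states
 mach1 -d sample.dat -p sample.ped -s chr20.snps -h chr20.hap --compact --greedy --autoFlip --states 200 -r 50 --prefix par_infer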
    
In step 2, each individual is imputed independently, so imputation can be split into as many as n (the sample size) jobs per chromosome for parallelism.
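A minimal sketch of this splitting (an illustration, not the only way to batch; it assumes one individual per line in sample.ped and reuses the step-2 command above):

 # run step 2 separately for each individual, in parallel
 split -l 1 sample.ped indiv.
 for f in indiv.*; do
   mach1 -d sample.dat -p $f -s chr20.snps -h chr20.hap --compact --greedy --autoFlip \
     --errorMap par_infer.erate --crossoverMap par_infer.rec --mle --prefix ${f}.imp &
 done
 wait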
For other approaches to speed up, see [[#How to speed up?|How to speed up]].
== Can MaCH perform imputation for chromosome X? ==
Yes. See [http://genome.sph.umich.edu/wiki/MaCH:_machX MaCH X Chromosome] for details.
    
== Where can I find combined HapMap reference files?  ==
 
You can find them at http://csg.sph.umich.edu//yli/mach/download/HapMap-r21.html or on the HapMap Project website.
    
== Where can I find HapMap III / 1000 Genomes reference files? ==
 
You can find these at the MaCH download page, which is at http://csg.sph.umich.edu//yli/mach/download/
    
== Does --mle overwrite input genotypes?  ==
 
   Estimated per allele error rate is 0.0293  
 
A better approach is to mask a small proportion of SNPs (vs. genotypes in the simple approach above). One can generate a mask.dat from the original .dat file by simply changing the flag of a subset of markers from M to S2, without duplicating the .ped file. Post-imputation, one can use [http://genome.sph.umich.edu/wiki/CalcMatch CalcMatch] and [http://csg.sph.umich.edu//ylwtx/doseR2.tgz doseR2.pl] to estimate genotypic/allelic error rates and correlation, respectively. Both programs can be downloaded from [http://csg.sph.umich.edu//ylwtx/software.html http://csg.sph.umich.edu//ylwtx/software.html].
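For instance, a minimal sketch of generating mask.dat (the "every 50th marker" choice is an arbitrary illustration; it assumes the usual .dat layout of a flag in column 1 and a marker name in column 2):

 # flip every 50th M marker to S2 so its genotypes are skipped and can later be compared with imputed values
 awk '$1 == "M" && (++i % 50 == 0) { $1 = "S2" } { print }' orig.dat > mask.dat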
    
'''Warning''': Imputation with masked datasets should be performed separately, for estimating imputation quality only. For production imputation, one should use all available information.
 
In the simple approach, you will only get concordance/error estimates. There are two aspects to check. (1) The ratio between genotypic and allelic error. We expect only a small proportion of errors to be cases where one homozygote is imputed as the other homozygote (which counts as two allelic errors); therefore, a genotypic:allelic error ratio of ~2:1 is expected. (2) The absolute error rate. Several factors influence imputation quality, including the population to be imputed, the reference population, and the genotyping panel used. Typically, we expect a <2% allelic error rate among Caucasians and East Asians, and 3-5% among Africans and African Americans. The figure below shows imputation quality from the Human Genome Diversity Project (HGDP) for 52 populations across the world, by HapMap reference panel.
http://csg.sph.umich.edu//yli/figure3.gif
    
Table 3 in the MaCH 1.0 paper tabulates imputation quality by commercial panel in CEU, YRI, and CHB+JPT.
 
We strongly recommend QC both before and after imputation. Before imputation, we recommend the standard battery of QC filters, including HWE, MAF (the recommended cutoff is 1% for genotyping-based GWAS), completeness, Mendelian inconsistency, etc. Post-imputation, we recommend an Rsq cutoff of 0.3 (which removes >70% of poorly imputed SNPs at the cost of <0.5% of well-imputed SNPs) and a MAF cutoff of 1%.
 
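As a sketch of the post-imputation filter, assuming the standard MaCH .mlinfo column layout (SNP, Al1, Al2, Freq1, MAF, Quality, Rsq):

 # keep the header plus markers with MAF >= 0.01 (column 5) and Rsq >= 0.3 (column 7)
 awk 'NR == 1 || ($5 + 0 >= 0.01 && $7 + 0 >= 0.3)' sample.mlinfo > sample.mlinfo.filtered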
== How do I get reference files for a region of interest? ==
Note that you do not need to extract regional pedigree files for your own samples, because SNPs in the pedigree but not in the reference will be automatically discarded. <br> 1. For HapMap II format, download haplotypes from http://csg.sph.umich.edu//ylwtx/HapMapForMach.tgz <br> 2. For MaCH format, you can do the following:
    
*First, find the first and last SNP in the region you are interested in. Say "rsFIRST" and "rsLAST", defined according to position.  
 
*Then, under csh:
 @ first = `grep -nw rsFIRST orig.snps | cut -f1 -d ':'`
 @ last = `grep -nw rsLAST orig.snps | cut -f1 -d ':'`
under bash:
 first=`grep -nw rsFIRST orig.snps | cut -f1 -d ':'`
 last=`grep -nw rsLAST orig.snps | cut -f1 -d ':'`
*Then, find out which field contains the actual haplotypes (fields in the .hap file are separated by whitespace):
  head -1 orig.hap | wc -w
Note: if the haplotypes are gz compressed, do:
  zcat orig.hap.gz | head -1 | wc -w
*Finally (say you got 3 from the wc -w command above; if you got another number, replace the bold 3 below with the number you got):
  awk '{print $'''3'''}' orig.hap | cut -c${first}-${last} > region.hap
Note: if the haplotypes are gz compressed, do:
   zcat orig.hap.gz | awk '{print $'''3'''}' | cut -c${first}-${last} > region.hap
The created reference files are in MaCH format. You do NOT need to turn on the --hapmapFormat option.
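Putting the pieces together under bash (a sketch assuming the haplotypes sit in field 3; the sed line producing the matching regional SNP list is an addition not spelled out in the steps above):

 # extract a regional reference panel from genome-wide MaCH-format files
 first=`grep -nw rsFIRST orig.snps | cut -f1 -d ':'`
 last=`grep -nw rsLAST orig.snps | cut -f1 -d ':'`
 sed -n "${first},${last}p" orig.snps > region.snps   # SNP list for the same region
 awk '{print $3}' orig.hap | cut -c${first}-${last} > region.hap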
    
== Do I always have to sort the pedigree file by marker position?  ==
 
If you use a reference set of haplotypes, you do not have to, as long as the external reference is in the correct order.
    
== What if I specify ''--states R'' where ''R'' exceeds the maximum possible (2*number diploid individuals - 2 + number_haplotypes)?  ==
 
== How is AL1 defined? Which allele dosage is .dose/.mldose counting?  ==
 
AL1 is an arbitrary allele. Typically, it is the first allele read in the reference haplotypes. The earliest versions of mach (prior to April 2007) counted the expected number of copies of AL2; more recent versions count the number of AL1 copies. One can find out which allele is counted by following the steps below.
    
# First, find the two alleles for one of the markers in your data
 
Note that, in the example above, .mldose could be replaced with .dose and .mlgeno could be replaced with .geno.
 
Based on the three files above, we've confirmed that dosage is the number of AL1 copies; you only need to check one informative case (i.e., dosage values close to 0 or 2), since the counting is consistent across all individuals and all SNPs.
    
== Can I use an unphased reference?  ==
 
#r, # of rounds (-r or --rounds, --mle corresponds to 1-2 rounds)
 
Computational time increases linearly with m, n, and r, and quadratically with h. On our Xeon 3.0GHz machine, imputation with m=25K, n=250, h=120, and r=100 takes ~20 hours (25000*120^2*250*100/4.5e11 ≈ 20).
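The same arithmetic as a one-liner, where the 4.5e11 denominator is the empirical, machine-specific constant from the example above:

 # estimated hours = m * n * h^2 * r / 4.5e11
 awk -v m=25000 -v n=250 -v h=120 -v r=100 'BEGIN { printf "~%.1f hours\n", m*n*h*h*r/4.5e11 }'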
    
If you have a larger number of individuals to impute (e.g., >1,000), we recommend a 2-step imputation approach; see http://genome.sph.umich.edu/wiki/MaCH_FAQ#Why_and_how_to_perform_a_2-step_imputation.3F.
 
   make clean
   make all
New executables mach1 and thunder will then be generated under the executables/ folder.
== Install MaCH ==
Source code is available through the MaCH download page: http://csg.sph.umich.edu//yli/mach/download/ <br>
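A typical install then looks like the following sketch (the tarball name is a placeholder; use the current file listed on the download page):

 wget http://csg.sph.umich.edu//yli/mach/download/mach.source.tgz   # placeholder name
 tar xzf mach.source.tgz
 cd mach*/ && make all   # executables appear under executables/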
    
== More questions?  ==
 
    
Email [mailto:yunli@med.unc.edu Yun Li] or [mailto:goncalo@umich.edu Goncalo Abecasis].
 
[[Category:Software]]
 