MaCH: 1000 Genomes Imputation Cookbook
Revision as of 07:08, 4 June 2010
This page documents how to impute 1000 Genome SNPs using MaCH.
Getting Started
Your Own Data
To get started, you will need to store your data in Merlin format pedigree and data files, one per chromosome. For details of the Merlin file format, see the [http://www.sph.umich.edu/csg/abecasis/Merlin/tour/input_files.html Merlin tutorial].
Within each file, markers should be ordered by chromosome position. Alleles should be stored on the forward strand and can be encoded as 'A', 'C', 'G' or 'T' (there is no need to use numeric identifiers for each allele).
The 1000 Genome pilot project genotypes use NCBI Build 36.
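As a sketch of what these per-chromosome files look like, here is a toy Merlin-format pair; the marker names and genotypes are made up for illustration only:

```shell
# Toy Merlin-format data file: one "M <marker name>" line per marker,
# listed in chromosome-position order (names here are illustrative).
cat > chr1.dat << 'EOF'
M rs000001
M rs000002
M rs000003
EOF

# Matching pedigree file: family ID, person ID, father, mother, sex,
# then one forward-strand genotype per marker, in the same order as
# the .dat file.
cat > chr1.ped << 'EOF'
FAM1 IND1 0 0 1 A/A G/T C/C
FAM2 IND2 0 0 2 A/C G/G C/T
EOF
```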
Reference Haplotypes
Reference haplotypes generated by the 1000 Genomes Project and formatted so that they are ready for analysis are available from the MaCH download page. The most recent set of haplotypes was generated in March 2010 by combining genotype calls generated at the Broad, the Sanger and the University of Michigan. In our hands, this March 2010 release is substantially better than previous 1000 Genomes Project genotype call sets.
Estimating Model Parameters
The first step for genotype imputation analyses using MaCH is to build a model that relates your dataset to a reference set of haplotypes. This model will include information on the length of haplotype stretches shared between your sample and the reference panel (in a .rec file) and information on the similarity of marker genotypes between your sample and the reference panel (in a .err file). The contents of the second file reflect the combined effects of genotyping error and differences in genotyping assays between the two samples.
In our view, MaCH's ability to estimate these parameters makes it especially robust to genotyping problems and to differences in genotyping assays, and allows it to adjust to varying degrees of similarity between the sample being imputed and the reference panel.
Selecting a Subset of Samples
Using all available samples to estimate model parameters can be quite time consuming and is typically not necessary. Instead, we recommend that you select a set of 200 samples to use in this step of the analysis. If you are not worried about batch effects, you could just take the first 200 samples in your pedigree file, for example:

head -n 200 chr1.ped > chr1.first200.ped
And then run mach as:
mach -d chr1.dat -p chr1.first200.ped -s CEU.1000G/chr1.snps -h CEU.1000G/chr1.hap --greedy -r 100 -o chr1.parameters
Optionally, you can add the --compact option to the end of the command line to reduce memory use.
If you'd like to play it safe and sample 200 individuals at random, you could use PedScript and a series of commands similar to this one:
pedscript << EOF
READ DAT chr1.dat
READ PED chr1.ped
NUKE
TRIM
WRITE DAT chr1.rand200.dat
SAMPLE 200 PERSONS TO chr1.rand200.ped
QUIT
EOF
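If PedScript is not available and your pedigree file contains only unrelated individuals (one per line, with no family structure to trim), a rough stand-in is to shuffle the pedigree file with standard Unix tools; the marker list does not change, so the .dat file can be reused as-is. The toy input below is only there to make the sketch self-contained:

```shell
# Toy pedigree so this example is self-contained: 500 unrelated
# individuals, one per line (replace with your real chr1.ped/chr1.dat).
for i in $(seq 1 500); do echo "FAM$i IND$i 0 0 1 A/A"; done > chr1.ped
echo "M rs000001" > chr1.dat

# Draw 200 individuals at random; reuse the unchanged marker list.
shuf -n 200 chr1.ped > chr1.rand200.ped
cp chr1.dat chr1.rand200.dat
```

Note that this skips PedScript's NUKE and TRIM steps, so it is only appropriate when there is no pedigree structure to clean up.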
Then, run mach just as before:
mach -d chr1.rand200.dat -p chr1.rand200.ped -s CEU.1000G/chr1.snps -h CEU.1000G/chr1.hap --greedy -r 100 -o chr1.parameters
You can decide on the exact number of samples to use for this analysis; anywhere between 100 and 500 individuals should be plenty. On a compute cluster, you should be able to run this step for all chromosomes in parallel; the exact syntax will depend on your cluster. On ours, we'd try something like:
foreach chr (`seq 1 22`)
pedscript << EOF
READ DAT chr$chr.dat
READ PED chr$chr.ped
NUKE
TRIM
WRITE DAT chr$chr.rand200.dat
SAMPLE 200 PERSONS TO chr$chr.rand200.ped
QUIT
EOF
runon mach -d chr$chr.rand200.dat -p chr$chr.rand200.ped -s CEU.1000G/chr$chr.snps -h CEU.1000G/chr$chr.hap \
--greedy -r 100 -o chr$chr.parameters >& chr$chr.model.log &
end
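The loop above is csh syntax, and runon is our local job-submission wrapper. Under bash, the per-chromosome mach command lines could be generated like this (a dry run that just prints each command; drop the echo and prefix your own cluster's submit command to actually launch the jobs):

```shell
# Dry run: print the per-chromosome mach command lines under bash.
# Remove "echo" and add your cluster's submit wrapper to run for real.
for chr in $(seq 1 22); do
    echo mach -d chr$chr.rand200.dat -p chr$chr.rand200.ped \
         -s CEU.1000G/chr$chr.snps -h CEU.1000G/chr$chr.hap \
         --greedy -r 100 -o chr$chr.parameters
done > mach.commands.txt
```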