The first step for genotype imputation analyses using MaCH is to build a model that relates your dataset to a reference set of haplotypes. This model will include information on the length of haplotype stretches shared between your sample and the reference panel (in a ''.rec'' file) and information on the similarity of marker genotypes between your sample and the reference panel (in a ''.err'' file). The contents of the second file reflect the combined effects of genotyping error and differences in genotyping assays between the two samples.

In our view, MaCH's ability to estimate these parameters makes it especially robust to genotyping problems and to differences between genotyping assays, and allows it to adjust to varying degrees of similarity between the sample being imputed and the reference panel.

=== Selecting a Subset of Samples ===

Using all available samples to estimate model parameters can be quite time consuming and is typically not necessary. Instead, we recommend that you select a set of 200 samples to use in this step of the analysis. If you are not worried about batch effects, you could just take the first 200 samples in your pedigree file, for example:

  head -200 chr1.ped > chr1.first200.ped
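
To double-check that the subset contains the expected number of individuals (the MaCH pedigree format has one individual per line), you can simply count the lines in the new file:

  wc -l chr1.first200.ped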

And then run mach as:

  mach -d chr1.dat -p chr1.first200.ped -s CEU.1000G/chr1.snps -h CEU.1000G/chr1.hap --greedy -r 100 -o chr1.parameters

Optionally, you can add the <code>--compact</code> option to the end of the command line to reduce memory use.
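
For example, the same command as above with <code>--compact</code> appended would be:

  mach -d chr1.dat -p chr1.first200.ped -s CEU.1000G/chr1.snps -h CEU.1000G/chr1.hap --greedy -r 100 -o chr1.parameters --compact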

If you'd like to play it safe and sample 200 individuals at random, you could use [[PedScript]] and a series of commands like the following:

<source lang="text">
  pedscript << EOF
  READ DAT chr1.dat
  READ PED chr1.ped

  NUKE
  TRIM

  WRITE DAT chr1.rand200.dat
  SAMPLE 200 PERSONS TO chr1.rand200.ped

  QUIT
EOF
</source>

Then, run mach just as before:

  mach -d chr1.rand200.dat -p chr1.rand200.ped -s CEU.1000G/chr1.snps -h CEU.1000G/chr1.hap --greedy -r 100 -o chr1.parameters
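
If PedScript is not handy, a rough shell-only alternative is to draw 200 random lines from the pedigree file with <code>shuf</code>. This is only a sketch: it assumes one individual per line and no family structure that needs to be kept together, and it reuses the original ''.dat'' file, which lists markers rather than individuals and therefore does not change:

  shuf -n 200 chr1.ped > chr1.rand200.ped
  mach -d chr1.dat -p chr1.rand200.ped -s CEU.1000G/chr1.snps -h CEU.1000G/chr1.hap --greedy -r 100 -o chr1.parameters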

You can decide on the exact number of samples to use for this analysis; anywhere between 100 and 500 individuals should be plenty. On a compute cluster, you should be able to run this step for all chromosomes in parallel; the exact syntax will depend on your cluster. On ours, we'd try something like:

<source lang="text">
  foreach chr (`seq 1 22`)

    pedscript << EOF
        READ DAT chr$chr.dat
        READ PED chr$chr.ped
        NUKE
        TRIM
        WRITE DAT chr$chr.rand200.dat
        SAMPLE 200 PERSONS TO chr$chr.rand200.ped
        QUIT
EOF

    runon mach -d chr$chr.rand200.dat -p chr$chr.rand200.ped -s CEU.1000G/chr$chr.snps -h CEU.1000G/chr$chr.hap \
               --greedy -r 100 -o chr$chr.parameters >& chr$chr.model.log &

  end
</source>
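
If your cluster uses bash rather than tcsh, a rough equivalent of the loop above is sketched below. Here each chromosome simply runs as a background job on the current machine; prefix the mach command with your cluster's own job-submission command (the way <code>runon</code> is used above) if the jobs should be dispatched to other nodes:

<source lang="text">
  for chr in `seq 1 22`
  do
    # draw 200 random individuals for this chromosome with PedScript
    pedscript << EOF
        READ DAT chr$chr.dat
        READ PED chr$chr.ped
        NUKE
        TRIM
        WRITE DAT chr$chr.rand200.dat
        SAMPLE 200 PERSONS TO chr$chr.rand200.ped
        QUIT
EOF

    # estimate model parameters in the background, logging all output
    mach -d chr$chr.rand200.dat -p chr$chr.rand200.ped -s CEU.1000G/chr$chr.snps -h CEU.1000G/chr$chr.hap \
         --greedy -r 100 -o chr$chr.parameters > chr$chr.model.log 2>&1 &
  done
</source>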
