A Comparison of Bayesian Methods for Haplotype Reconstruction from Population Genotype Data

2003 ◽  
Vol 73 (5) ◽  
pp. 1162-1169 ◽  
Author(s):  
Matthew Stephens ◽  
Peter Donnelly
2004 ◽  
Vol 20 (12) ◽  
pp. 1842-1849 ◽  
Author(s):  
E. Halperin ◽  
E. Eskin

2008 ◽  
Vol 06 (01) ◽  
pp. 241-259 ◽  
Author(s):  
JING LI ◽  
TAO JIANG

Two grand challenges in the postgenomic era are to develop a detailed understanding of heritable variation in the human genome, and to develop robust strategies for identifying the genetic contribution to diseases and drug responses. Haplotypes of single nucleotide polymorphisms (SNPs) have been suggested as an effective representation of human variation, and various haplotype-based association mapping methods for complex traits have been proposed in the literature. However, humans are diploid and, in practice, genotype data instead of haplotype data are collected directly. Therefore, efficient and accurate computational methods for haplotype reconstruction are needed and have recently been investigated intensively, especially for tightly linked markers such as SNPs. This paper reviews statistical and combinatorial haplotyping algorithms using pedigree data, unrelated individuals, or pooled samples.
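The ambiguity that makes haplotype reconstruction non-trivial is easy to see in a small example: because humans are diploid, an unphased genotype that is heterozygous at k sites is consistent with 2^(k-1) distinct haplotype pairs (for k ≥ 1). A minimal sketch, purely illustrative and not one of the reviewed algorithms, enumerating the phasings of a 0/1/2-coded SNP genotype:

```python
from itertools import product

def enumerate_haplotype_pairs(genotype):
    """Enumerate all unordered haplotype pairs consistent with an
    unphased SNP genotype (0 = hom. reference, 1 = het, 2 = hom. alt)."""
    het_sites = [i for i, g in enumerate(genotype) if g == 1]
    pairs = set()
    for bits in product([0, 1], repeat=len(het_sites)):
        h1, h2 = [], []
        for i, g in enumerate(genotype):
            if g == 0:
                h1.append(0); h2.append(0)
            elif g == 2:
                h1.append(1); h2.append(1)
            else:
                # Heterozygous site: one bit assignment per phasing choice;
                # the other haplotype takes the complementary allele.
                b = bits[het_sites.index(i)]
                h1.append(b); h2.append(1 - b)
        # frozenset collapses the two orderings of the same pair
        pairs.add(frozenset([tuple(h1), tuple(h2)]))
    return pairs

# Three heterozygous sites -> 2**(3-1) = 4 possible phasings:
print(len(enumerate_haplotype_pairs([1, 0, 1, 2, 1])))  # 4
```

Exhaustive enumeration is exponential in the number of heterozygous sites, which is why the statistical and combinatorial methods reviewed here resolve the ambiguity with population-level information instead.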


Author(s):  
Heather Manching ◽  
Randall J Wisser

Abstract Motivation Ancestral haplotype maps provide useful information about genomic variation and insights into biological processes. Reconstructing the descendent haplotype structure of homologous chromosomes, particularly for large numbers of individuals, can help with characterizing the recombination landscape, elucidating genotype-to-phenotype relationships, improving genomic predictions and more. Inferring haplotype maps from sparse genotype data is an efficient approach to whole-genome haplotyping, but this is a non-trivial problem. A standardized approach is needed to validate whether haplotype reconstruction software, conceived population designs and existing data for a given population provide accurate haplotype information for further inference. Results We introduce SPEARS, a pipeline for the simulation-based appraisal of genome-wide haplotype maps constructed from sparse genotype data. Using a specified pedigree, the pipeline generates virtual genotypes (known data) with genotyping errors and missing-data structure. It then mimics analysis in practice, capturing sources of error due to genotyping, imputation and haplotype inference. Standard metrics allow researchers to assess different population designs and to determine which features of haplotype structure or regions of the genome are sufficiently accurate for analysis. Haplotype maps for 1000 outcross progeny from a multi-parent population of maize are used to demonstrate SPEARS. Availability and implementation SPEARS, the protocol and suite of scripts, is publicly available under an MIT license on GitHub (https://github.com/maizeatlas/spears). Supplementary information Supplementary data are available at Bioinformatics online.
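The "known data" step the abstract describes, perturbing simulated genotypes with errors and missing calls before re-inference, can be sketched as follows. This is a hypothetical illustration of that kind of corruption step, not SPEARS's actual code or perturbation model; the function name and rates are invented for the example:

```python
import random

def corrupt_genotypes(genotypes, error_rate=0.01, missing_rate=0.05, seed=0):
    """Apply random genotyping errors and missing calls to a matrix of
    0/1/2-coded SNP genotypes; None marks a missing call.
    Illustrative sketch only (not the SPEARS implementation)."""
    rng = random.Random(seed)  # seeded for reproducible simulations
    corrupted = []
    for row in genotypes:
        new_row = []
        for g in row:
            if rng.random() < missing_rate:
                new_row.append(None)           # dropped call
            elif rng.random() < error_rate:
                # miscall: replace with a different genotype code
                new_row.append(rng.choice([x for x in (0, 1, 2) if x != g]))
            else:
                new_row.append(g)              # correct call
        corrupted.append(new_row)
    return corrupted
```

Because the true simulated haplotypes are known, downstream imputation and haplotype inference run on the corrupted matrix can then be scored against them, which is the comparison the pipeline's standard metrics report.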


2019 ◽  
Vol 62 (3) ◽  
pp. 577-586 ◽  
Author(s):  
Garnett P. McMillan ◽  
John B. Cannon

Purpose This article presents a basic exploration of Bayesian inference to inform researchers unfamiliar with this type of analysis of the many advantages this readily available approach provides. Method First, we demonstrate the development of Bayes' theorem, the cornerstone of Bayesian statistics, into an iterative process of updating priors. Working with a few assumptions, including normality and conjugacy of the prior distribution, we show how one would calculate the posterior distribution from the prior distribution and the likelihood of the parameter. Next, we move to an example in auditory research by considering the effect of sound therapy for reducing the perceived loudness of tinnitus. In this case, as in most real-world settings, we turn to Markov chain simulations because the assumptions allowing for easy calculation no longer hold. Using Markov chain Monte Carlo methods, we illustrate several analysis solutions given by a straightforward Bayesian approach. Conclusion Bayesian methods are widely applicable and can help scientists overcome analysis problems, including how to incorporate existing information, run interim analyses, achieve consensus through measurement, and, most importantly, interpret results correctly. Supplemental Material https://doi.org/10.23641/asha.7822592
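Under the normality and conjugacy assumptions the abstract mentions, the posterior has a closed form: for a normal likelihood with known observation variance and a normal prior on the mean, updating amounts to combining precisions (inverse variances). A minimal sketch of that conjugate update, illustrative only and not the article's supplemental code:

```python
def normal_posterior(prior_mean, prior_var, data, data_var):
    """Conjugate normal-normal update for a mean with known observation
    variance: the posterior is again normal, so the update reduces to a
    precision-weighted average of prior mean and sample mean."""
    n = len(data)
    prior_prec = 1.0 / prior_var      # precision of the prior
    data_prec = n / data_var          # total precision of the data
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean
                            + data_prec * sum(data) / n)
    return post_mean, post_var

# Prior N(0, 1), four observations of 1.0 with unit variance:
# posterior mean 0.8, posterior variance 0.2.
print(normal_posterior(0.0, 1.0, [1.0, 1.0, 1.0, 1.0], 1.0))
```

The iterative character of Bayesian updating follows directly: today's posterior `(post_mean, post_var)` can be fed back in as tomorrow's prior when new data arrive. When conjugacy fails, as in the tinnitus example, this closed form is unavailable and MCMC takes its place.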


2005 ◽  
Vol 25 (1_suppl) ◽  
pp. S627-S627 ◽  
Author(s):  
Mary E Spilker ◽  
Gjermund Henriksen ◽  
Till Sprenger ◽  
Michael Valet ◽  
Isabelle Stangier ◽  
...  

1982 ◽  
Vol 21 (1) ◽  
pp. 83-84 ◽  
Author(s):  
Karol J. Krotki

The publication reviewed is number 9 in the series "Applied Statistics and Econometrics" edited by Gerhard Tintner, Pierre Desire Truonet, and Heinrich Strecker. The purpose of the series is to publish papers "too long for ordinary journal articles, but not long enough for books. ... Upon acceptance, speedy publication can be promised". The abstracts in English, French, and German, usual in this series, are missing from the copy reviewed. The book consists of ten chapters: sampling theory; multi-stage sampling and other fundamental problems; optimum stratification; variances; sampling with replacement and other theoretical issues; experimental design; information theory; a posteriori raising factors; order statistics; Bayesian methods. Such an ambitious content within 130 pages requires parsimonious presentation. One chapter has been squeezed into hardly more than four pages. The chapter on a posteriori raising factors will be useful in developing countries and particularly when samples do not work out as designed. It will also be refreshing to those limited to the literature in the English language.


2008 ◽  
Vol 2 (2) ◽  
pp. 100-114 ◽  
Author(s):  
John Cashman ◽  
Jun Zhang ◽  
Matthew Nelson ◽  
Andreas Braun
