New Generation Pharmacogenomic Tools: A SNP Linkage Disequilibrium Map, Validated SNP Assay Resource, and High-Throughput Instrumentation System for Large-Scale Genetic Studies

BioTechniques ◽  
2002 ◽  
Vol 32 (6S) ◽  
pp. S48-S54 ◽  
Author(s):  
Francisco M. De La Vega ◽  
David Dailey ◽  
Janet Ziegle ◽  
Julie Williams ◽  
Dawn Madden ◽  
...  

Bone ◽  
2013 ◽  
Vol 55 (1) ◽  
pp. 216-221 ◽  
Author(s):  
D. Ruffoni ◽  
T. Kohler ◽  
R. Voide ◽  
A.J. Wirth ◽  
L.R. Donahue ◽  
...  

2018 ◽  
Author(s):  
Lisa Komoroske ◽  
Michael Miller ◽  
Sean O’Rourke ◽  
Kelly R. Stewart ◽  
Michael P. Jensen ◽  
...  

Abstract: Advances in high-throughput sequencing (HTS) technologies, coupled with increased interdisciplinary collaboration, are rapidly expanding the scope and scale of wildlife genetic studies. While existing HTS methods can be directly applied to address some evolutionary and ecological questions, certain research goals necessitate tailoring methods to specific study organisms, such as high-throughput genotyping of the same loci so that results are comparable over large spatial and temporal scales. These needs are particularly common in studies of highly mobile species of conservation concern such as marine turtles, where life history traits, limited financial resources and other constraints demand affordable, adaptable methods for HTS genotyping that can meet a variety of study goals. Here, we present a versatile marine turtle HTS targeted enrichment platform, adapted from the recently developed Rapture (RAD-Capture) method, specifically designed to meet these research needs. Our results demonstrate consistent enrichment of targeted regions throughout the genome and discovery of candidate variants in all species examined for use in various conservation genetics applications. Accurate species identification confirmed the ability of our platform to genotype over 1,000 multiplexed samples, and identified areas for future methodological improvement, such as optimization for samples with low initial DNA concentrations. Finally, analyses within green turtles supported the ability of this platform to identify informative SNPs for stock structure, population assignment and other applications over a broad geographic range of interest to management. This platform provides an additional tool for marine turtle genetic studies and broadens capacity for future large-scale initiatives such as collaborative global marine turtle genetic databases.
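A core quality metric in targeted-enrichment (capture) experiments like the one described is the fraction of reads that land in the targeted regions. The sketch below is a hypothetical illustration of that calculation; the interval coordinates and read positions are invented, and real pipelines would operate on BAM files with tools such as samtools or bedtools.

```python
# Hypothetical illustration of on-target enrichment for a targeted-capture
# experiment. Chromosome names, intervals, and reads are invented toy data.

def on_target_fraction(read_positions, targets):
    """Fraction of reads whose start position falls inside any target interval.

    read_positions: list of (chrom, pos) tuples
    targets: dict mapping chrom -> list of (start, end) half-open intervals
    """
    hits = 0
    for chrom, pos in read_positions:
        for start, end in targets.get(chrom, []):
            if start <= pos < end:
                hits += 1
                break
    return hits / len(read_positions) if read_positions else 0.0

# Toy data: three of four reads land inside the captured regions.
targets = {"chr1": [(100, 200), (500, 600)], "chr2": [(50, 150)]}
reads = [("chr1", 120), ("chr1", 550), ("chr2", 60), ("chr2", 400)]
print(on_target_fraction(reads, targets))  # 0.75
```

Comparing this fraction to the proportion of the genome covered by the targets gives the fold enrichment achieved by the capture step.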


2009 ◽  
Vol 91 (1) ◽  
pp. 9-21 ◽  
Author(s):  
JIAHAN LI ◽  
QIN LI ◽  
WEI HOU ◽  
KUN HAN ◽  
YAO LI ◽  
...  

Summary: A linkage-linkage disequilibrium map that describes the pattern and extent of linkage disequilibrium (LD) decay with genomic distance has now emerged as a viable tool to unravel the genetic structure of population differentiation and fine-map genes for complex traits. The prerequisite for constructing such a map is the simultaneous estimation of the linkage and LD between different loci. Here, we develop a computational algorithm for simultaneously estimating the recombination fraction and LD in a natural outcrossing population with multilocus marker data; these quantities are estimated separately in most molecular genetic studies. The algorithm is founded on a commonly used progeny test with open-pollinated offspring sampled from a natural population. The information about LD is reflected in the co-segregation of alleles at different loci among parents in the population. Open mating of parents will reveal the genetic linkage of alleles during meiosis. The algorithm was constructed within the polynomial-based mixture framework and implemented with the Expectation–Maximization (EM) algorithm. A by-product of the derivation of this algorithm is the estimation of the outcrossing rate, a parameter useful for exploring the genetic diversity of the population. We performed computer simulations to investigate the influence of different sampling strategies and parameter values on parameter estimation. By providing a number of testable hypotheses about population genetic parameters, this algorithmic model will open a broad gateway to understanding the genetic structure and dynamics of an outcrossing population under natural selection.
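The authors' joint linkage-LD model is considerably richer than can be shown here, but the EM idea it builds on is well illustrated by the standard two-locus case: with unphased genotypes, only the double heterozygote is phase-ambiguous, and EM splits it between the two possible haplotype pairs. The sketch below is a minimal textbook version of that EM, not the paper's algorithm; the counts are invented.

```python
# Minimal sketch of the classic EM for two-locus haplotype frequencies from
# unphased genotype data (a standard method, not the authors' joint model).

def em_haplotype_freqs(n, n_dh, iters=100):
    """Estimate haplotype frequencies for two biallelic loci.

    n: haplotype counts contributed by phase-unambiguous genotypes,
       keyed 'AB', 'Ab', 'aB', 'ab'
    n_dh: number of double-heterozygote individuals (phase unknown; each
          carries either the AB/ab or the Ab/aB haplotype pair)
    """
    total = sum(n.values()) + 2 * n_dh       # every individual adds 2 haplotypes
    p = {h: 0.25 for h in ('AB', 'Ab', 'aB', 'ab')}
    for _ in range(iters):
        # E-step: split double heterozygotes by current frequency estimates
        cis, trans = p['AB'] * p['ab'], p['Ab'] * p['aB']
        w = cis / (cis + trans) if cis + trans > 0 else 0.5
        e = {'AB': n['AB'] + w * n_dh, 'ab': n['ab'] + w * n_dh,
             'Ab': n['Ab'] + (1 - w) * n_dh, 'aB': n['aB'] + (1 - w) * n_dh}
        # M-step: re-estimate haplotype frequencies from expected counts
        p = {h: e[h] / total for h in p}
    return p

counts = {'AB': 40, 'Ab': 10, 'aB': 10, 'ab': 40}   # invented toy counts
p = em_haplotype_freqs(counts, n_dh=20)
pA, pB = p['AB'] + p['Ab'], p['AB'] + p['aB']
D = p['AB'] - pA * pB                                # LD coefficient
```

The paper's contribution is to embed this kind of E-step inside a likelihood that also carries the recombination fraction and outcrossing rate, so all three are estimated jointly.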


1969 ◽  
Vol 08 (01) ◽  
pp. 07-11 ◽  
Author(s):  
H. B. Newcombe

Methods are described for deriving personal and family histories of birth, marriage, procreation, ill health and death, for large populations, from existing civil registrations of vital events and the routine records of ill health. Computers have been used to group together and "link" the separately derived records pertaining to successive events in the lives of the same individuals and families, rapidly and on a large scale. Most of the records employed are already available as machine-readable punchcards and magnetic tapes, for statistical and administrative purposes, and only minor modifications have been made to the manner in which these are produced. As applied to the population of the Canadian province of British Columbia (currently about 2 million people) these methods have already yielded substantial information on the risks of disease: a) in the population, b) in relation to various parental characteristics, and c) as correlated with previous occurrences in the family histories.
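Newcombe's computerized linkage compares candidate record pairs field by field and accumulates log-odds evidence that the records belong to the same individual, an approach later formalized by Fellegi and Sunter. The sketch below illustrates that weighting scheme only; the m/u probabilities and records are invented for the example.

```python
import math

# Illustrative sketch of probabilistic record linkage in the spirit of
# Newcombe's method. The m/u probabilities and records are invented.

# m: P(field agrees | same person); u: P(field agrees | different people)
FIELDS = {'surname': (0.95, 0.01),
          'birth_year': (0.90, 0.05),
          'birthplace': (0.85, 0.10)}

def match_weight(rec_a, rec_b):
    """Sum of log2 agreement/disagreement weights over the compared fields."""
    w = 0.0
    for field, (m, u) in FIELDS.items():
        if rec_a[field] == rec_b[field]:
            w += math.log2(m / u)            # agreement: positive evidence
        else:
            w += math.log2((1 - m) / (1 - u))  # disagreement: negative evidence
    return w

a = {'surname': 'MACLEOD', 'birth_year': 1921, 'birthplace': 'VICTORIA'}
b = {'surname': 'MACLEOD', 'birth_year': 1921, 'birthplace': 'VANCOUVER'}
c = {'surname': 'SMITH', 'birth_year': 1950, 'birthplace': 'VANCOUVER'}

# A high positive total suggests the two records describe the same person.
print(match_weight(a, b) > match_weight(a, c))  # True
```

Pairs scoring above an upper threshold are accepted as links, those below a lower threshold rejected, and the band in between set aside for clerical review.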


2019 ◽  
Author(s):  
Mohammad Atif Faiz Afzal ◽  
Mojtaba Haghighatlari ◽  
Sai Prasad Ganesh ◽  
Chong Cheng ◽  
Johannes Hachmann

We present a high-throughput computational study to identify novel polyimides (PIs) with exceptional refractive index (RI) values for use as optic or optoelectronic materials. Our study utilizes an RI prediction protocol based on a combination of first-principles and data modeling developed in previous work, which we employ on a large-scale PI candidate library generated with the ChemLG code. We deploy the virtual screening software ChemHTPS to automate the assessment of this extensive pool of PI structures in order to determine the performance potential of each candidate. This rapid and efficient approach yields a number of highly promising lead compounds. Using the data mining and machine learning program package ChemML, we analyze the top candidates with respect to prevalent structural features and feature combinations that distinguish them from less promising ones. In particular, we explore the utility of various strategies that introduce highly polarizable moieties into the PI backbone to increase its RI. The derived insights provide a foundation for rational and targeted design that goes beyond traditional trial-and-error searches.
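The screening workflow described (score every library candidate with a predictor, then rank and shortlist leads) can be sketched schematically. The predictor below is a stand-in with invented descriptors and coefficients; in the study, RI values come from the authors' first-principles/data-modeling protocol and the candidates from a ChemLG-generated library.

```python
# Schematic virtual-screening loop. predicted_ri is a placeholder surrogate
# with invented descriptors, not the study's RI prediction protocol.

def predicted_ri(candidate):
    # Made-up linear score: more polarizable groups raise RI, free volume lowers it.
    return 1.5 + 0.02 * candidate['polarizable_groups'] - 0.01 * candidate['free_volume']

library = [   # invented candidate records standing in for the PI library
    {'id': 'PI-001', 'polarizable_groups': 8,  'free_volume': 3},
    {'id': 'PI-002', 'polarizable_groups': 2,  'free_volume': 5},
    {'id': 'PI-003', 'polarizable_groups': 10, 'free_volume': 4},
]

# Score every candidate, rank them, and keep those above a target RI threshold.
scored = sorted(library, key=predicted_ri, reverse=True)
leads = [c['id'] for c in scored if predicted_ri(c) >= 1.6]
print(leads)  # ['PI-003', 'PI-001']
```

The shortlist then feeds the downstream analysis step: mining the top candidates for the structural features that separate them from the rest of the library.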


2020 ◽  
Author(s):  
Lungwani Muungo

The purpose of this review is to evaluate progress in molecular epidemiology over the past 24 years in cancer etiology and prevention, to draw lessons for future research incorporating the new generation of biomarkers. Molecular epidemiology was introduced in the study of cancer in the early 1980s, with the expectation that it would help overcome some major limitations of epidemiology and facilitate cancer prevention. The expectation was that biomarkers would improve exposure assessment, document early changes preceding disease, and identify subgroups in the population with greater susceptibility to cancer, thereby increasing the ability of epidemiologic studies to identify causes and elucidate mechanisms in carcinogenesis. The first generation of biomarkers has indeed contributed to our understanding of risk and susceptibility related largely to genotoxic carcinogens. Consequently, interventions and policy changes have been mounted to reduce risk from several important environmental carcinogens. Several new and promising biomarkers are now becoming available for epidemiologic studies, thanks to the development of high-throughput technologies and theoretical advances in biology. These include toxicogenomics, alterations in gene methylation and gene expression, proteomics, and metabonomics, which allow large-scale studies, including discovery-oriented as well as hypothesis-testing investigations. However, most of these newer biomarkers have not been adequately validated, and their role in the causal paradigm is not clear. There is a need for their systematic validation using principles and criteria established over the past several decades in molecular cancer epidemiology.


2019 ◽  
Vol 25 (31) ◽  
pp. 3350-3357 ◽  
Author(s):  
Pooja Tripathi ◽  
Jyotsna Singh ◽  
Jonathan A. Lal ◽  
Vijay Tripathi

Background: With the advent of high-throughput next-generation sequencing (NGS), biological research in drug discovery has been directed towards the oncology and infectious disease therapeutic areas, with extensive use in biopharmaceutical development and vaccine production. Method: In this review, an effort was made to address the basic background of NGS technologies and their potential applications in drug design. Our purpose is also to provide a brief introduction to the various next-generation sequencing techniques. Discussions: The high-throughput methods execute Large-scale Unbiased Sequencing (LUS), which comprises Massively Parallel Sequencing (MPS) or NGS technologies. These are related terms that describe a DNA sequencing technology which has revolutionized genomic research. Using NGS, an entire human genome can be sequenced within a single day. Conclusion: Analysis of NGS data unravels important clues in the quest for the treatment of various life-threatening diseases and other scientific problems related to human welfare.

