Data Analysis Approaches in High Throughput Screening

10.5772/52508 ◽  
2013 ◽  
Author(s):  
Asli N. ◽  
Sergio C. ◽  
Taosheng Che

2002 ◽  
Vol 45 (14) ◽  
pp. 3082-3093 ◽  
Author(s):  
Susan Y. Tamura ◽  
Patricia A. Bacha ◽  
Heather S. Gruver ◽  
Ruth F. Nutt

2007 ◽  
Vol 12 (2) ◽  
pp. 229-234 ◽  
Author(s):  
Yunxia Sui ◽  
Zhijin Wu

High-throughput screening is an essential process in drug discovery. The ability to identify true active compounds depends on the high quality of assays and proper analysis of data. The Z factor, presented by Zhang et al. in 1999, provides an easy and useful summary of assay quality and has been a widely accepted standard. However, although data analysis methods have improved considerably in recent years, the assessment of assay quality has not evolved in parallel. In this article, the authors study the implications of Z factor values under different conditions and link the Z factor with the power of discovering true active compounds. They discuss the different interpretations of the Z factor depending on error distributions and advocate direct analysis of power as an assay quality assessment. They also propose that adjustments in data analysis should be taken into account when estimating assay quality parameters. Studying the power of identifying true “hits” gives a more direct interpretation of assay quality and may provide guidance in assay optimization on some occasions.
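The Z' factor discussed above summarizes control separation relative to control variability. A minimal sketch of the standard Zhang et al. (1999) formula, Z' = 1 − 3(σ_p + σ_n)/|μ_p − μ_n|, computed from control-well readings (the function name and example values are illustrative):

```python
import statistics

def z_prime(positive_controls, negative_controls):
    """Z' factor of Zhang et al. (1999) from positive- and
    negative-control well readings."""
    mu_p = statistics.mean(positive_controls)
    mu_n = statistics.mean(negative_controls)
    sd_p = statistics.stdev(positive_controls)
    sd_n = statistics.stdev(negative_controls)
    return 1 - 3 * (sd_p + sd_n) / abs(mu_p - mu_n)

# Tight, well-separated controls give a Z' close to 1;
# values above ~0.5 are conventionally considered excellent.
print(z_prime([100, 101, 99, 100], [10, 11, 9, 10]))
```

As the abstract notes, an identical Z' can correspond to very different power to detect true actives under different error distributions, which is why the authors advocate assessing power directly.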


2003 ◽  
Vol 8 (6) ◽  
pp. 634-647 ◽  
Author(s):  
Christine Brideau ◽  
Bert Gunter ◽  
Bill Pikounis ◽  
Andy Liaw

High-throughput screening (HTS) plays a central role in modern drug discovery, allowing the rapid screening of large compound collections against a variety of putative drug targets. HTS is an industrial-scale process, relying on sophisticated automation, control, and state-of-the-art detection technologies to organize, test, and measure hundreds of thousands to millions of compounds in nano- to microliter volumes. Despite this high technology, hit selection for HTS is still typically done using simple data analysis and basic statistical methods. The authors discuss in this article some shortcomings of these methods and present alternatives based on modern methods of statistical data analysis. Most important, they describe and show numerous real examples from the biologist-friendly StatServer® HTS application (SHS), a custom-developed software tool built on the commercially available S-PLUS® and StatServer® statistical analysis and server software. This system remotely processes HTS data using powerful and sophisticated statistical methodology but insulates users from the technical details by outputting results in a variety of readily interpretable graphs and tables.
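One widely used "modern" alternative to the basic mean/SD z-score criticized in this line of work is a robust z-score built on the median and median absolute deviation (MAD), which outlier wells inflate far less. A hedged sketch (this is a generic robust-statistics illustration, not the SHS tool's specific method; function names and the threshold are assumptions):

```python
import statistics

def robust_z_scores(plate_values):
    """Median/MAD-based z-scores; the 1.4826 factor makes the MAD
    consistent with the standard deviation under normality."""
    med = statistics.median(plate_values)
    mad = statistics.median(abs(v - med) for v in plate_values)
    scale = 1.4826 * mad
    return [(v - med) / scale for v in plate_values]

def select_hits(plate_values, threshold=-3.0):
    """Indices of wells whose robust z-score falls at or below the
    threshold (e.g. strong inhibition in a signal-decrease assay)."""
    zs = robust_z_scores(plate_values)
    return [i for i, z in enumerate(zs) if z <= threshold]

# One strongly inhibited well among otherwise typical readings:
print(select_hits([98, 100, 102, 99, 101, 100, 40]))
```

Because the median and MAD are computed per plate, this style of scoring also absorbs plate-to-plate shifts that would distort a fixed percent-inhibition cutoff.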


Author(s):  
Daniel Conole ◽  
James H Hunter ◽  
Michael J Waring

DNA-encoded combinatorial libraries (DECLs) represent an exciting new technology for high-throughput screening, significantly increasing its capacity and cost-effectiveness. Historically, DECLs have been the domain of specialized academic groups and industry; however, there has recently been a shift toward more drug discovery academic centers and institutes adopting this technology. Key to this development has been the simplification, characterization and standardization of various DECL subprotocols, such as library design, affinity screening and data analysis of hits. This review examines the feasibility of implementing DECL screening technology as a first-time user, particularly in academia, explores some important considerations for doing so, and outlines some applications of the technology that academia could contribute to the field.


2021 ◽  
Author(s):  
Carolina Nunes ◽  
Jasper Anckaert ◽  
Fanny De Vloed ◽  
Jolien De Wyn ◽  
Kaat Durinck ◽  
...  

Biomedical researchers are moving towards high-throughput screening, as this allows for automatization, better reproducibility and more and faster results. High-throughput screening experiments encompass drug, drug combination, genetic perturbagen or combined genetic-chemical perturbagen screens. These experiments are conducted either as real-time assays monitored over time or as endpoint assays. The data analysis consists of data cleaning and structuring, as well as further data processing and visualisation, which, due to the amount of data, can easily become laborious, time-consuming and error-prone. Therefore, several tools have been developed to aid researchers in this data analysis, but they focus on specific experimental set-ups and are unable to process data of several time points and genetic-chemical perturbagen screens together. To meet these needs, we developed HTSplotter, available as a web tool and Python module, which performs automatic data analysis and visualisation of either endpoint or real-time assays from different high-throughput screening experiments: drug, drug combination, genetic perturbagen and genetic-chemical perturbagen screens. HTSplotter implements an algorithm based on conditional statements in order to identify experiment type and controls. After appropriate data normalization, HTSplotter executes downstream analyses such as dose-response relationship and drug synergism by the Bliss independence method. All results are exported as a text file and plots are saved in a PDF file. The main advantage of HTSplotter over other available tools is the automatic analysis of genetic-chemical perturbagen screens and real-time assays where results are plotted over time. In conclusion, HTSplotter allows for the automatic end-to-end data processing, analysis and visualisation of various high-throughput in vitro cell culture screens, offering major improvements in terms of versatility, convenience and time over existing tools.
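The Bliss independence model used by HTSplotter for drug synergism has a simple closed form: for fractional effects in [0, 1], the expected combined effect is E_ab = E_a + E_b − E_a·E_b, and the observed minus expected difference (the "Bliss excess") indicates synergy when positive. A minimal sketch of that calculation (the function name is an assumption, not HTSplotter's API):

```python
def bliss_excess(effect_a, effect_b, effect_combo):
    """Bliss excess: observed combined effect minus the Bliss-independence
    expectation E_ab = E_a + E_b - E_a * E_b, with effects expressed as
    fractions in [0, 1]. Positive values suggest synergy, negative
    values antagonism."""
    expected = effect_a + effect_b - effect_a * effect_b
    return effect_combo - expected

# Two drugs inhibiting 30% and 40% alone are expected to inhibit
# 58% together under independence; a 70% observed effect is synergistic.
print(bliss_excess(0.3, 0.4, 0.7))
```

In a real-time screen this excess would be computed per time point and per dose pair, which is exactly the kind of repetitive bookkeeping the authors automate.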


2015 ◽  
Vol 20 (7) ◽  
pp. 887-897 ◽  
Author(s):  
Jui-Hua Hsieh ◽  
Alexander Sedykh ◽  
Ruili Huang ◽  
Menghang Xia ◽  
Raymond R. Tice

A main goal of the U.S. Tox21 program is to profile a 10K-compound library for activity against a panel of stress-related and nuclear receptor signaling pathway assays using a quantitative high-throughput screening (qHTS) approach. However, assay artifacts, including nonreproducible signals and assay interference (e.g., autofluorescence), complicate compound activity interpretation. To address these issues, we have developed a data analysis pipeline that includes an updated signal noise–filtering/curation protocol and an assay interference flagging system. To better characterize various types of signals, we adopted a weighted version of the area under the curve (wAUC) to quantify the amount of activity across the tested concentration range in combination with the assay-dependent point-of-departure (POD) concentration. Based on the 32 Tox21 qHTS assays analyzed, we demonstrate that signal profiling using wAUC affords the best reproducibility (Pearson’s r = 0.91) in comparison with the POD (0.82) only or the AC50 (i.e., half-maximal activity concentration, 0.81). Among the activity artifacts characterized, cytotoxicity is the major confounding factor; on average, about 8% of Tox21 compounds are affected, whereas autofluorescence affects less than 0.5%. To facilitate data evaluation, we implemented two graphical user interface applications, allowing users to rapidly evaluate the in vitro activity of Tox21 compounds.
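The wAUC metric above quantifies activity across the whole tested concentration range rather than at a single summary point such as the AC50. As a schematic stand-in only (the Tox21 pipeline's actual weighting scheme is defined in the paper, not reproduced here), a range-normalized trapezoidal area under the concentration-response curve captures the same idea:

```python
def curve_auc(log_concs, responses):
    """Trapezoidal area under a concentration-response curve over
    log-concentration, normalized by the tested range. A simplified
    illustration of an AUC-style activity summary; it omits the
    weighting used in the Tox21 wAUC metric."""
    area = 0.0
    for i in range(1, len(log_concs)):
        width = log_concs[i] - log_concs[i - 1]
        area += 0.5 * (responses[i] + responses[i - 1]) * width
    return area / (log_concs[-1] - log_concs[0])

# A linearly rising response from 0% to 100% averages to 50% activity
# over the tested range:
print(curve_auc([0.0, 1.0, 2.0], [0.0, 50.0, 100.0]))
```

An integral-style summary like this uses every concentration point, which helps explain the better run-to-run reproducibility the authors report for wAUC compared with single-concentration summaries such as POD or AC50.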


Biotechnology ◽  
2019 ◽  
pp. 185-209
Author(s):  
Paraskevi Papadopoulou ◽  
Miltiadis Lytras ◽  
Christina Marouli

The emerging advances of Bioinformatics have already contributed toward the establishment of better next-generation medicine and medical systems by putting emphasis on improvement of prognosis, diagnosis and therapy of diseases, including better management of medical systems. The purpose of this chapter is to explore ways in which the use of Bioinformatics and Smart Data Analysis can provide an overview of, and solutions to, challenges in the fields of genomics, medicine and Health Informatics. The focus of this chapter is on Smart Data Analysis and the means needed to filter out the noise. The chapter addresses challenges researchers and data analysts are facing in terms of the computational methods developed to extract insights from NGS and high-throughput screening data. In this chapter, the concept of “Wise Data” is proposed, reflecting the distinction between individual health and wellness on the one hand, and social improvement, cohesion and sustainability on the other, leading to more effective medical systems, healthier individuals and more socially cohesive societies.

