Development of a Portable Tissue Micro Array Instrument

2011 ◽  
Vol 5 (4) ◽  
Author(s):  
K. K. Tan ◽  
A. S. Putra ◽  
L. P. Pham ◽  
T. H. Lee ◽  
M. Salto-Tellez ◽  
...  

Tissue microarray (TMA) is based on the idea of applying miniaturization and a high-throughput approach to hybridization-based analyses of tissues. It enables large-scale biomedical research within a single experiment, making it one of the most commonly used technologies in translational research. A critical analysis of existing TMA instruments indicates potential constraints in portability, in addition to cost and complexity. This paper presents the development of an affordable, configurable, and portable TMA instrument that allows efficient collection of tissues, especially in instrument-to-tissue scenarios. The purely mechanical instrument requires no energy source other than the user, is lightweight, portable, and simple to use.

Author(s):  
Mingxuan Gao ◽  
Mingyi Ling ◽  
Xinwei Tang ◽  
Shun Wang ◽  
Xu Xiao ◽  
...  

Abstract With the development of single-cell RNA sequencing (scRNA-seq) technology, it has become possible to perform large-scale transcript profiling for tens of thousands of cells in a single experiment. Many analysis pipelines have been developed for data generated from different high-throughput scRNA-seq platforms, creating a new challenge for users: choosing a workflow that is efficient, robust, and reliable for a specific sequencing platform. Moreover, as the amount of public scRNA-seq data has increased rapidly, integrated analysis of scRNA-seq data from different sources has become increasingly popular. However, it remains unclear whether such integrated analysis would be biased if the data were processed by different upstream pipelines. In this study, we encapsulated seven existing high-throughput scRNA-seq data processing pipelines with Nextflow, a general integrative workflow management framework, and evaluated their performance in terms of running time, computational resource consumption, and data analysis consistency using eight public datasets generated from five different high-throughput scRNA-seq platforms. Our work provides a useful guideline for the selection of scRNA-seq data processing pipelines based on their performance on different real datasets. In addition, these guidelines can serve as a performance evaluation framework for future developments in high-throughput scRNA-seq data processing.
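
As a rough illustration of the kind of benchmarking described above, the sketch below times a set of pipeline runs and compares the cell barcodes they retain. It is not the authors' Nextflow code; the pipeline commands, file names, and output formats are placeholder assumptions.

```python
# Hypothetical sketch: profile wrapped scRNA-seq pipelines and compare the
# cell barcodes they report, as a stand-in for the running-time / resource /
# consistency comparison described in the abstract.
import resource
import subprocess
import time

# Placeholder commands; in practice each entry would launch a Nextflow-wrapped pipeline.
PIPELINES = {
    "cellranger": ["nextflow", "run", "cellranger.nf", "--fastq", "sample_fastq/"],
    "starsolo":   ["nextflow", "run", "starsolo.nf",   "--fastq", "sample_fastq/"],
}

def run_and_profile(cmd):
    """Run one pipeline; return wall-clock seconds and peak child RSS in MB (Linux semantics)."""
    start = time.time()
    subprocess.run(cmd, check=True)
    elapsed = time.time() - start
    # ru_maxrss is the largest resident set of any finished child process so far (KB on Linux).
    peak_mb = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss / 1024
    return elapsed, peak_mb

def barcode_overlap(file_a, file_b):
    """Jaccard overlap of the cell barcodes two pipelines retain (one barcode per line)."""
    with open(file_a) as fa, open(file_b) as fb:
        a, b = {line.strip() for line in fa}, {line.strip() for line in fb}
    return len(a & b) / len(a | b)

if __name__ == "__main__":
    for name, cmd in PIPELINES.items():
        seconds, peak_mb = run_and_profile(cmd)
        print(f"{name}: {seconds:.1f} s, ~{peak_mb:.0f} MB peak RSS")
    print("barcode Jaccard:", barcode_overlap("cellranger_barcodes.tsv", "starsolo_barcodes.tsv"))
```

Wall-clock time and peak child memory cover the running-time and resource-consumption axes; the barcode Jaccard index is one simple proxy for data analysis consistency.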


2009 ◽  
Vol 39 (3) ◽  
pp. 131-140 ◽  
Author(s):  
Philip R. O. Payne ◽  
Peter J. Embi ◽  
Chandan K. Sen

A common thread throughout the clinical and translational research domains is the need to collect, manage, integrate, analyze, and disseminate large-scale, heterogeneous biomedical data sets. However, well-established and broadly adopted theoretical and practical frameworks and models intended to address such needs are conspicuously absent from the published literature and other reputable knowledge sources. Instead, the development and execution of multidisciplinary, clinical, or translational studies are significantly limited by the propagation of “silos” of both data and expertise. Motivated by this fundamental challenge, we report on the current state and evolution of biomedical informatics as it pertains to the conduct of high-throughput clinical and translational research, and we present both a conceptual and a practical framework for the design and execution of informatics-enabled studies. The objective of presenting these findings and constructs is to provide the clinical and translational research community with a common frame of reference for discussing and expanding upon such models and methodologies.


2005 ◽  
Vol 33 (1) ◽  
pp. 89-101 ◽  
Author(s):  
Mark A. Rothstein

Biobanks are repositories of human biological materials collected for biomedical research. There are over 300 million stored specimens in the United States, and the number grows by 20 million per year. In the post-genome world of high throughput gene sequencing and computational biology, biobanks hold the promise of facilitating large-scale research studies. New organizational and operational models of research repositories also raise complex issues of big science, big business, and big ethical concerns.


2013 ◽  
Vol 19 (5) ◽  
pp. 651-660 ◽  
Author(s):  
Ji-Hu Zhang ◽  
Zhao B. Kang ◽  
Ophelia Ardayfio ◽  
Pei-i Ho ◽  
Thomas Smith ◽  
...  

Pilot testing of an assay intended for high-throughput screening (HTS) with small compound sets is a necessary but often time-consuming step in the validation of an assay protocol. When the initial testing concentration is less than optimal, this can involve iterative testing at different concentrations to further evaluate the pilot outcome, which can be even more time-consuming. Quantitative HTS (qHTS) enables flexible and rapid collection of assay performance statistics, hits at different concentrations, and concentration-response curves in a single experiment. Here we describe the qHTS process for pilot testing, in which eight-point concentration-response curves are produced using an interplate asymmetric dilution protocol: the first four concentrations represent the range of typical HTS screening concentrations, and the last four are added for robust curve fitting to determine potency and efficacy values. We also describe how these data can be analyzed to predict the frequency of false positives, false negatives, hit rates, and confirmation rates for the HTS process as a function of screening concentration. By taking compound pharmacology into account, this pilot-testing paradigm enables rapid assessment of assay performance and selection of the optimal concentration for large-scale HTS in a single experiment.
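
For the curve-fitting step, a minimal sketch of fitting a four-parameter Hill (logistic) model to an eight-point concentration-response series is shown below. The dilution series and response values are made-up illustrations, not data from the study, and scipy's generic curve_fit stands in for whatever fitting software the authors used.

```python
# Illustrative sketch (not the authors' pipeline): fit a four-parameter Hill model
# to an eight-point concentration-response series of the kind a qHTS pilot produces,
# then report potency (AC50) and efficacy (top - bottom).
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ac50, slope):
    """Four-parameter logistic response as a function of concentration."""
    return bottom + (top - bottom) / (1.0 + (ac50 / conc) ** slope)

# Eight-point asymmetric dilution series (µM): four points spanning typical HTS
# screening concentrations, four lower points to anchor the curve fit.
conc = np.array([30.0, 10.0, 3.3, 1.1, 0.37, 0.12, 0.041, 0.014])
resp = np.array([98.0, 95.0, 80.0, 55.0, 30.0, 12.0, 5.0, 2.0])  # % activity (made-up)

params, _ = curve_fit(hill, conc, resp, p0=[0.0, 100.0, 1.0, 1.0], maxfev=5000)
bottom, top, ac50, slope = params
print(f"AC50 ≈ {ac50:.2f} µM, efficacy ≈ {top - bottom:.0f}%, Hill slope ≈ {slope:.2f}")
```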


2014 ◽  
Author(s):  
Àlex Bravo ◽  
Janet Piñero ◽  
Núria Queralt ◽  
Michael Rautschka ◽  
Laura I. Furlong

Background: Current biomedical research needs to leverage and exploit the large amount of information reported in publications. Automated text mining approaches, in particular those aimed at finding relationships between entities, are key to identifying actionable knowledge in free-text repositories. We present the BeFree system, aimed at identifying relationships between biomedical entities with a special focus on genes and their associated diseases. Results: By exploiting morpho-syntactic information of the text, BeFree identifies gene-disease, drug-disease, and drug-target associations with state-of-the-art performance. The application of BeFree to real-case scenarios shows its effectiveness in extracting information relevant for translational research. We show the value of the gene-disease associations extracted by BeFree through a number of analyses and through integration with other data sources. BeFree succeeds in identifying genes associated with depression, a major cause of morbidity worldwide, that are not present in other public resources. Moreover, large-scale extraction and analysis of gene-disease associations, and their integration with current biomedical knowledge, provided interesting insights into the kind of information that can be found in the literature and raised challenges regarding data prioritization and curation. We found that only a small proportion of the gene-disease associations discovered using BeFree is collected in expert-curated databases. Thus, there is a pressing need for alternative strategies to manual curation to review, prioritize, and curate text-mining data and incorporate it into domain-specific databases. We present our strategy for data prioritization and discuss its implications for supporting biomedical research and applications. Conclusions: BeFree is a novel text mining system that performs competitively in the identification of gene-disease, drug-disease, and drug-target associations. Our analyses show that mining only a small fraction of MEDLINE yields a large dataset of gene-disease associations, and only a small proportion of this dataset is actually recorded in curated resources, raising several issues of data prioritization and curation. We propose that joint analysis of text-mined data with data curated by experts is a suitable approach both to assess data quality and to highlight novel and interesting information.
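
The coverage comparison between text-mined and curated associations can be sketched as a simple set operation. The file names and column names below are assumptions for illustration; BeFree's actual output format is not described here.

```python
# Hypothetical sketch: estimate what fraction of text-mined gene-disease
# associations already appears in an expert-curated database, the kind of
# coverage analysis described in the abstract.
import csv

def load_pairs(path, gene_col, disease_col):
    """Read (gene, disease) pairs from a tab-separated file into a set."""
    with open(path, newline="") as handle:
        reader = csv.DictReader(handle, delimiter="\t")
        return {(row[gene_col], row[disease_col]) for row in reader}

mined = load_pairs("befree_associations.tsv", "gene", "disease")    # assumed file/columns
curated = load_pairs("curated_db.tsv", "gene", "disease")           # assumed file/columns

overlap = mined & curated
print(f"text-mined pairs: {len(mined)}")
print(f"already curated:  {len(overlap)} ({100 * len(overlap) / len(mined):.1f}%)")
print(f"novel candidates for prioritization: {len(mined - curated)}")
```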


2019 ◽  
Author(s):  
Mohammad Atif Faiz Afzal ◽  
Mojtaba Haghighatlari ◽  
Sai Prasad Ganesh ◽  
Chong Cheng ◽  
Johannes Hachmann

We present a high-throughput computational study to identify novel polyimides (PIs) with exceptional refractive index (RI) values for use as optic or optoelectronic materials. Our study utilizes an RI prediction protocol based on a combination of first-principles and data modeling developed in previous work, which we employ on a large-scale PI candidate library generated with the ChemLG code. We deploy the virtual screening software ChemHTPS to automate the assessment of this extensive pool of PI structures in order to determine the performance potential of each candidate. This rapid and efficient approach yields a number of highly promising lead compounds. Using the data mining and machine learning program package ChemML, we analyze the top candidates with respect to prevalent structural features and feature combinations that distinguish them from less promising ones. In particular, we explore the utility of various strategies that introduce highly polarizable moieties into the PI backbone to increase its RI. The derived insights provide a foundation for rational and targeted design that goes beyond traditional trial-and-error searches.
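
A minimal sketch of the screening-and-ranking step is given below. The Candidate class, the RI threshold, and the example SMILES strings are illustrative assumptions; the actual workflow uses ChemHTPS and the first-principles/data-modeling RI protocol rather than stored values.

```python
# Minimal sketch with made-up names: rank a candidate library by a predicted
# refractive index and keep the most promising leads.
from dataclasses import dataclass

@dataclass
class Candidate:
    smiles: str
    predicted_ri: float  # would come from the RI prediction protocol, not be hard-coded

def screen(library, threshold=1.70, top_n=10):
    """Return the top_n candidates whose predicted RI meets or exceeds the threshold."""
    hits = [c for c in library if c.predicted_ri >= threshold]
    return sorted(hits, key=lambda c: c.predicted_ri, reverse=True)[:top_n]

if __name__ == "__main__":
    library = [
        Candidate("c1ccc2c(c1)C(=O)N(C2=O)c1ccccc1", 1.68),  # illustrative fragments only
        Candidate("c1ccc2c(c1)sc1ccccc12", 1.74),
    ]
    for hit in screen(library, threshold=1.70, top_n=5):
        print(f"{hit.smiles}  RI ≈ {hit.predicted_ri:.2f}")
```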

