From random Brownian motion of particles to high automation laboratory: a brief history of correlation time (Preprint)

2019 ◽  
Author(s):  
Enrico Di Stasio ◽  
Federico Berruti ◽  
Alessandro Arcovito ◽  
Federica Romitelli ◽  
Mirca Antenucci ◽  
...  

BACKGROUND Laboratory automation is the current frontier for increasing productivity and reducing sample turnaround time (TAT), which is in turn used as a key indicator of laboratory performance. However, owing to the statistical distribution of TAT values, classical parameters (mean, standard deviation, percentiles) fail to describe the processing “story” of each individual sample. The driving idea of the present work is to liken the flow of samples through an automated laboratory to the movement of molecules in solution, by extending the correlation-function analysis used in dynamic light scattering. OBJECTIVE The aim of this approach is to increase productivity and reduce laboratory process cycle times, thereby improving data quality. The most widely known application of laboratory automation is robotics, in which many different automated instruments, devices (most commonly autosamplers), software algorithms, and methodologies are assembled into a single production chain, running from the arrival of the biological sample in the laboratory to the output of clinically useful final results. METHODS TAT values from 10,000 samples were used to build a correlation function. Over the time course, each sample correlates perfectly with its initial status (no results available) and takes the value 1 until its specific TAT is reached; once the TAT is reached (results produced), it no longer correlates and its status value becomes 0. The resulting correlation function is simply the normalized sum, at each time point, of the status values of all analyzed samples. RESULTS From the correlation-function analysis, several parameters describing both the overall performance of the system and the status of each individual sample are derived and applied to monitor the efficiency of the automation chain in real time. 
CONCLUSIONS Our approach to laboratory automation yields measurable criteria that describe the capacity of the entire system to buffer and reduce problems, both at the level of overall performance and for individual samples, thereby providing a new tool for evaluating alternative or improved systems. CLINICALTRIAL none
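The status-based correlation function described in the METHODS can be sketched in a few lines. This is a minimal illustration, assuming the TAT values are available as a plain list; the function and variable names are hypothetical, not taken from the preprint:

```python
import numpy as np

def correlation_function(tat_values, time_grid):
    """Fraction of samples still unprocessed (status = 1) at each time point.

    Each sample keeps status 1 while t < its TAT and drops to 0 afterwards;
    the correlation function is the normalized sum of all statuses at time t.
    """
    tat = np.asarray(tat_values, dtype=float)
    # average status over all samples at every requested time point
    return np.array([(tat > t).mean() for t in time_grid])

# toy example: five samples with TATs in minutes
tats = [30, 45, 45, 60, 90]
times = [0, 40, 50, 70, 100]
g = correlation_function(tats, times)  # g -> [1.0, 0.8, 0.4, 0.2, 0.0]
```

The curve starts at 1 (no sample has produced results yet) and decays to 0 as every sample's TAT is passed, so its decay rate summarizes the throughput of the automation chain.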

2019 ◽  
Vol 65 (2) ◽  
pp. 263-271 ◽  
Author(s):  
Joseph T Myrick ◽  
Robert J Pryor ◽  
Robert A Palais ◽  
Sean J Ison ◽  
Lindsay Sanford ◽  
...  

Abstract BACKGROUND Extreme PCR in <30 s and high-speed melting of PCR products in <5 s are recent advances in the turnaround time of DNA analysis. Previously, these steps had been performed on different specialized instruments. Integration of both extreme PCR and high-speed melting with real-time fluorescence monitoring for detection and genotyping is presented here. METHODS A microfluidic platform was enhanced for speed using cycle times as fast as 1.05 s between 66.4 °C and 93.7 °C, with end point melting rates of 8 °C/s. Primer and polymerase concentrations were increased to allow short cycle times. Synthetic sequences were used to amplify fragments of hepatitis B virus (70 bp) and Clostridium difficile (83 bp) by real-time PCR and high-speed melting on the same instrument. A blinded genotyping study of 30 human genomic samples at F2 c.*97, F5 c.1601, MTHFR c.665, and MTHFR c.1286 was also performed. RESULTS Standard rapid-cycle PCR chemistry did not produce any product when total cycling times were reduced to <1 min. However, efficient amplification was possible with increased primer (5 μmol/L) and polymerase (0.45 U/μL) concentrations. Infectious targets were amplified and identified in 52 to 71 s. Real-time PCR and genotyping of single-nucleotide variants from human DNA were achieved in 75 to 87 s and were 100% concordant with known genotypes. CONCLUSIONS Extreme PCR with high-speed melting can be performed in about 1 min. The integration of extreme PCR and high-speed melting shows that future molecular assays at the point of care for identification, quantification, and variant typing are feasible.


Author(s):  
Arpad Kelemen ◽  
Yulan Liang

Pattern differentiation and pattern formulation are the two main research tracks in heterogeneous genomic data pattern analysis. In this chapter, we develop hybrid methods to tackle the major challenges of power and reproducibility in identifying dynamic differential gene temporal patterns. Significantly differentially expressed genes are selected not only through significance analysis of microarrays but also from supergenes obtained by singular value decomposition, which extracts the gene components that maximize the total predictor variability. Furthermore, hybrid clustering methods are developed from the profiles produced by several clustering methods. We demonstrate the hybrid analysis through an application to time-course gene expression data from interferon-β-1a-treated multiple sclerosis patients. The resulting integrated, condensed clusters and overrepresented gene lists demonstrate that the hybrid methods can be applied successfully. The post-analysis includes functional analysis and pathway discovery to validate the findings of the hybrid methods.
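The SVD-based supergene extraction can be illustrated with a short sketch. This is a generic SVD projection under assumed conventions (genes as rows, time points as columns); the function name and the top-k truncation are illustrative, not the chapter's implementation:

```python
import numpy as np

def supergene_components(expr, k=2):
    """Project a genes-by-timepoints expression matrix onto its top-k
    singular directions, yielding components that capture the largest
    share of the total variability (illustrative sketch only)."""
    centered = expr - expr.mean(axis=1, keepdims=True)  # center each gene's profile
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    scores = u[:, :k] * s[:k]        # per-gene scores on the top-k components
    var_frac = s**2 / (s**2).sum()   # fraction of total variability per component
    return scores, var_frac

# toy run on random data standing in for an expression matrix
rng = np.random.default_rng(0)
expr = rng.normal(size=(6, 4))      # 6 genes, 4 time points
scores, var_frac = supergene_components(expr, k=2)
```

Ranking components by `var_frac` shows how few supergenes are needed to retain most of the predictor variability, which is the rationale for the dimensionality reduction step described above.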


2006 ◽  
Vol 367 (3) ◽  
pp. 1039-1049 ◽  
Author(s):  
G. Harker ◽  
S. Cole ◽  
J. Helly ◽  
C. Frenk ◽  
A. Jenkins
