MOrgAna: accessible quantitative analysis of organoids with machine learning

Development
2021
Vol 148 (18)
Author(s):
Nicola Gritti
Jia Le Lim
Kerim Anlaş
Mallica Pandya
Germaine Aalderink
...

ABSTRACT: Recent years have seen a dramatic increase in the application of organoids to developmental biology, biomedical and translational studies. Organoids are large structures with high phenotypic complexity and are imaged on a wide range of platforms, from simple benchtop stereoscopes to high-content confocal-based imaging systems. The large volumes of images, resulting from hundreds of organoids cultured at once, are becoming increasingly difficult to inspect and interpret. Hence, there is a pressing demand for a coding-free, intuitive and scalable solution that analyses such image data in an automated yet rapid manner. Here, we present MOrgAna, a Python-based software that implements machine learning to segment images, quantify and visualize morphological and fluorescence information of organoids across hundreds of images, each with one object, within minutes. Although the MOrgAna interface is developed for users with little to no programming experience, its modular structure makes it a customizable package for advanced users. We showcase the versatility of MOrgAna on several in vitro systems, each imaged with a different microscope, thus demonstrating the wide applicability of the software to diverse organoid types and biomedical studies.
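MOrgAna's actual segmentation pipeline is not described in the abstract beyond "machine learning to segment images", so the following is only a generic sketch of the idea: a logistic-regression pixel classifier trained in NumPy on a toy one-object image, with object area as the simplest downstream morphological readout. All names, features, and parameters here are illustrative assumptions, not MOrgAna's API.

```python
import numpy as np

def pixel_features(img):
    """Per-pixel features: raw intensity, a 3x3 local mean, and a bias term."""
    pad = np.pad(img, 1, mode="edge")
    local_mean = sum(pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                     for dy in range(3) for dx in range(3)) / 9.0
    return np.stack([img.ravel(), local_mean.ravel(), np.ones(img.size)], axis=1)

def train_pixel_classifier(img, mask, lr=0.5, steps=2000):
    """Logistic regression; mask is a hand-annotated binary ground truth."""
    X, y = pixel_features(img), mask.ravel().astype(float)
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def segment(img, w):
    """Apply the trained classifier and threshold at probability 0.5."""
    p = 1.0 / (1.0 + np.exp(-pixel_features(img) @ w))
    return (p > 0.5).reshape(img.shape)

# Toy image: one bright "organoid" blob per image, on a dark background.
yy, xx = np.mgrid[:32, :32]
truth = (yy - 16) ** 2 + (xx - 16) ** 2 < 64
img = truth * 0.8 + 0.05 + 0.02 * np.random.default_rng(0).standard_normal((32, 32))

w = train_pixel_classifier(img, truth)
seg = segment(img, w)
area = seg.sum()  # simplest morphological readout: object area in pixels
```

In a real tool the same trained weights would be reused across hundreds of images, which is what makes this kind of per-pixel classifier scale.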

2021
Author(s):
Yerdos A. Ordabayev
Larry J. Friedman
Jeff Gelles
Douglas L. Theobald

Abstract: Multi-wavelength single-molecule fluorescence colocalization (CoSMoS) methods allow elucidation of complex biochemical reaction mechanisms. However, analysis of CoSMoS data is intrinsically challenging because of low image signal-to-noise ratios, non-specific surface binding of the fluorescent molecules, and analysis methods that require subjective inputs to achieve accurate results. Here, we use Bayesian probabilistic programming to implement Tapqir, an unsupervised machine learning method based on a holistic, physics-based causal model of CoSMoS data. This method accounts for uncertainties in image analysis due to photon and camera noise, optical non-uniformities, non-specific binding, and spot detection. Rather than merely producing a binary “spot/no spot” classification of unspecified reliability, Tapqir objectively assigns spot classification probabilities that allow accurate downstream analysis of molecular dynamics, thermodynamics, and kinetics. We both quantitatively validate Tapqir performance against simulated CoSMoS image data with known properties and also demonstrate that it implements fully objective, automated analysis of experiment-derived data sets with a wide range of signal, noise, and non-specific binding characteristics.
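Tapqir's full physics-based model (covering camera noise, optical non-uniformity, and non-specific binding) is far richer than anything shown here, but the core idea of assigning a spot-classification probability rather than a binary label can be sketched with a simple two-hypothesis Bayes computation in plain NumPy. The spot shape, noise model, and every parameter value below are illustrative assumptions, not Tapqir's actual model.

```python
import numpy as np

def gaussian_spot(n, amplitude, width):
    """Centered 2D Gaussian spot on an n-by-n pixel patch."""
    yy, xx = np.mgrid[:n, :n] - (n - 1) / 2.0
    return amplitude * np.exp(-(yy**2 + xx**2) / (2.0 * width**2))

def log_likelihood(patch, model, noise_sd):
    """Gaussian pixel-noise log-likelihood (up to an additive constant)."""
    return -0.5 * np.sum((patch - model) ** 2) / noise_sd**2

def p_spot(patch, prior=0.5, amplitude=5.0, width=1.5, noise_sd=1.0):
    """Posterior probability that the patch holds a spot rather than background."""
    n = patch.shape[0]
    ll_spot = log_likelihood(patch, gaussian_spot(n, amplitude, width), noise_sd)
    ll_bg = log_likelihood(patch, np.zeros((n, n)), noise_sd)
    # Bayes' rule for the two hypotheses, kept in log space for stability.
    log_odds = ll_spot - ll_bg + np.log(prior / (1.0 - prior))
    return 1.0 / (1.0 + np.exp(-log_odds))

rng = np.random.default_rng(1)
spot_img = gaussian_spot(9, 5.0, 1.5) + rng.standard_normal((9, 9))
empty_img = rng.standard_normal((9, 9))
```

The payoff of the probabilistic output is that downstream kinetic analyses can weight each candidate spot by `p_spot` instead of committing to a hard spot/no-spot call.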


2020
Author(s):
Moritz Lürig
Seth Donoughe
Erik Svensson
Arthur Porto
Masahito Tsuboi

For centuries, ecologists and evolutionary biologists have used images such as drawings, paintings, and photographs to record and quantify the shapes and patterns of life. With the advent of digital imaging, biologists continue to collect image data at an ever-increasing rate. This immense body of data provides insight into a wide range of biological phenomena, including phenotypic trait diversity, population dynamics, mechanisms of divergence and adaptation and evolutionary change. However, the rate of image acquisition frequently outpaces our capacity to manually extract meaningful information from the images. Moreover, manual image analysis is low-throughput, difficult to reproduce, and typically measures only a few traits at a time. This has proven to be an impediment to the growing field of phenomics - the study of many phenotypic dimensions together. Computer vision (CV), the automated extraction and processing of information from digital images, is a way to alleviate this longstanding analytical bottleneck. In this review, we illustrate the capabilities of CV for fast, comprehensive, and reproducible image analysis in ecology and evolution. First, we briefly review phenomics, arguing that ecologists and evolutionary biologists can most effectively capture phenomic-level data by using CV. Next, we describe the primary types of image-based data, and review CV approaches for extracting them (including techniques that entail machine learning and others that do not). We identify common hurdles and pitfalls, and then highlight recent successful implementations of CV in the study of ecology and evolution. Finally, we outline promising future applications for CV in biology. We anticipate that CV will become a basic component of the biologist’s toolkit, further enhancing data quality and quantity, and sparking changes in how empirical ecological and evolutionary research will be conducted.


2020
Author(s):
Adam Pond
Seongwon Hwang
Berta Verd
Benjamin Steventon

Abstract: Machine learning approaches are becoming increasingly widespread and are now present in most areas of research. Their recent surge can be explained in part due to our ability to generate and store enormous amounts of data with which to train these models. The requirement for large training sets is also responsible for limiting further potential applications of machine learning, particularly in fields where data tend to be scarce such as developmental biology. However, recent research seems to indicate that machine learning and Big Data can sometimes be decoupled to train models with modest amounts of data. In this work we set out to train a CNN-based classifier to stage zebrafish tail buds at four different stages of development using small information-rich data sets. Our results show that two and three dimensional convolutional neural networks can be trained to stage developing zebrafish tail buds based on both morphological and gene expression confocal microscopy images, achieving in each case up to 100% test accuracy scores. Importantly, we show that high accuracy can be achieved with data set sizes of under 100 images, much smaller than the typical training set size for a convolutional neural net. Furthermore, our classifier shows that it is possible to stage isolated embryonic structures without the need to refer to classic developmental landmarks in the whole embryo, which will be particularly useful to stage 3D culture in vitro systems such as organoids. We hope that this work will provide a proof of principle that will help dispel the myth that large data set sizes are always required to train CNNs, and encourage researchers in fields where data are scarce to also apply ML approaches.

Author summary: The application of machine learning approaches currently hinges on the availability of large data sets to train the models with. However, recent research has shown that large data sets might not always be required.
In this work we set out to see whether we could use small confocal microscopy image data sets to train a convolutional neural network (CNN) to stage zebrafish tail buds at four different stages in their development. We found that high test accuracies can be achieved with data set sizes of under 100 images, much smaller than the typical training set size for a CNN. This work also shows that we can robustly stage the embryonic development of isolated structures, without the need to refer back to landmarks in the tail bud. This constitutes an important methodological advance for staging organoids and other 3D culture in vitro systems. This work proves that prohibitively large data sets are not always required to train CNNs, and we hope it will encourage others to apply the power of machine learning to their areas of study even if data are scarce.
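The paper's actual networks are 2D/3D CNNs trained on confocal images; as a much-reduced stand-in for the small-data point, the sketch below classifies 80 synthetic images (20 per "stage") using fixed convolutional edge filters plus a trained softmax head, all in NumPy. The image generator, filter choice, and hyperparameters are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_image(stage):
    """Toy stand-in for a tail-bud image: blob radius encodes the stage."""
    yy, xx = np.mgrid[:24, :24] - 12.0
    r = 3.0 + 2.0 * stage + rng.normal(0, 0.3)
    return (yy**2 + xx**2 < r**2).astype(float) + 0.1 * rng.standard_normal((24, 24))

KERNELS = [np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]]),   # vertical edges
           np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]])]   # horizontal edges

def conv_features(img):
    """Fixed (untrained) 3x3 convolutions + global pooling of edge energy."""
    feats = []
    for k in KERNELS:
        out = np.array([[np.sum(img[i:i + 3, j:j + 3] * k) for j in range(22)]
                        for i in range(22)])
        feats.append(np.abs(out).mean())
    feats.append(img.mean())  # overall brightness tracks blob area
    return np.array(feats)

# 80 training images across 4 "stages": far below typical CNN data set sizes.
train = [(conv_features(make_image(s)), s) for s in range(4) for _ in range(20)]
F = np.array([f for f, _ in train])
labels = np.array([s for _, s in train])
mu, sd = F.mean(0), F.std(0)

def prep(f):
    """Standardize features and append a bias term."""
    return np.append((f - mu) / sd, 1.0)

X = np.array([prep(f) for f in F])
onehot = np.eye(4)[labels]

# Softmax classification head trained by full-batch gradient descent.
W = np.zeros((X.shape[1], 4))
for _ in range(2000):
    z = X @ W
    p = np.exp(z - z.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.1 * X.T @ (p - onehot) / len(X)

# 40 freshly generated held-out images.
test_acc = np.mean([np.argmax(prep(conv_features(make_image(s))) @ W) == s
                    for s in range(4) for _ in range(10)])
```

The point of the toy is the same as the paper's: when each image is information-rich relative to the task, a classifier can generalize from far fewer than the thousands of examples usually quoted for CNNs.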


2018
Vol 16 (05)
pp. 1840022
Author(s):
Richard Olney
Aaron Tuor
Filip Jagodzinski
Brian Hutchinson

Discerning how a mutation affects the stability of a protein is central to the study of a wide range of diseases. Mutagenesis experiments on physical proteins provide precise insights about the effects of amino acid substitutions, but such studies are time and cost prohibitive. Computational approaches for informing experimentalists where to allocate wet-lab resources are available, including a variety of machine learning models. Assessing the accuracy of machine learning models for predicting the effects of mutations is dependent on experiments for amino acid substitutions performed in vitro. When similar experiments on physical proteins have been performed by multiple laboratories, the use of the data near the juncture of stabilizing and destabilizing mutations is questionable. In this work, we explore a systematic and principled alternative to discarding experimental data close to the juncture of stabilizing and destabilizing mutations. We model the inconclusive range of experimental [Formula: see text] values via 3- and 5-way classifiers, and systematically explore potential boundaries for the range of inconclusive experimental values. We demonstrate the effectiveness of potential boundaries through confusion matrices and heat map visualizations. We explore two novel metrics for assessing viable cutoff ranges, and find that under these metrics, a lower cutoff near [Formula: see text] and an upper cutoff near [Formula: see text] are optimal across multiple machine learning models.
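As a minimal illustration of the 3-way scheme described above, the sketch below bins experimental stability-change values into stabilizing / inconclusive / destabilizing classes and tallies a confusion matrix. The cutoff values (elided as "[Formula: see text]" in the abstract) and the data points are invented placeholders, not the paper's.

```python
import numpy as np

def classify_stability(value, lower=-0.5, upper=0.5):
    """3-way label for a stability-change measurement.
    The cutoffs here are illustrative placeholders, not the paper's values."""
    if value < lower:
        return "stabilizing"
    if value > upper:
        return "destabilizing"
    return "inconclusive"

def confusion_matrix(true_labels, pred_labels, classes):
    """Rows: true class; columns: predicted class."""
    idx = {c: i for i, c in enumerate(classes)}
    m = np.zeros((len(classes), len(classes)), dtype=int)
    for t, p in zip(true_labels, pred_labels):
        m[idx[t], idx[p]] += 1
    return m

classes = ["stabilizing", "inconclusive", "destabilizing"]
true = ["stabilizing", "inconclusive", "destabilizing", "destabilizing"]
predicted_values = [-1.2, 0.1, 0.9, 0.3]  # hypothetical model outputs
pred = [classify_stability(v) for v in predicted_values]
cm = confusion_matrix(true, pred, classes)
```

Sweeping `lower` and `upper` over a grid and re-tallying the matrix is the systematic alternative to discarding borderline experimental data that the paper explores.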


Nanomaterials
2021
Vol 11 (7)
pp. 1774
Author(s):
Mahsa Mirzaei
Irini Furxhi
Finbarr Murphy
Martin Mullins

The emergence and rapid spread of multidrug-resistant bacterial strains are a public health concern. This emergence is caused by the overuse and misuse of antibiotics, leading to the evolution of antibiotic-resistant strains. Nanoparticles (NPs) are objects whose three external dimensions all lie in the nanoscale, which ranges from 1 to 100 nm. Research on NPs with enhanced antimicrobial activity as alternatives to antibiotics has grown due to the increased incidence of nosocomial and community-acquired infections caused by pathogens. Machine learning (ML) tools have been used in the field of nanoinformatics with promising results. As a consequence of evident achievements on a wide range of predictive tasks, ML techniques are attracting significant interest across a variety of stakeholders. In this article, we present an ML tool that successfully predicts the antibacterial capacity of NPs, with the model’s validation demonstrating encouraging results (R2 = 0.78). The data were compiled after a literature review of 60 articles and consist of key physico-chemical (p-chem) properties and experimental conditions (exposure variables and bacterial clustering) from in vitro studies. Following data homogenization and pre-processing, we trained various regression algorithms and validated them using diverse performance metrics. Finally, an attribute importance evaluation, which ranks the attributes most important in predicting the outcome, was performed. The attribute importance revealed that NP core size, the exposure dose, and the species of bacterium are key variables in predicting the antibacterial effect of NPs. This tool assists various stakeholders and scientists in predicting the antibacterial effects of NPs based on their p-chem properties and diverse exposure settings. This concept also aids the safe-by-design paradigm by incorporating functionality tools.
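The abstract does not name the specific regression algorithms used, so as a generic stand-in the sketch below fits an ordinary least-squares model to synthetic tabular data built from three of the attributes the paper highlights (core size, exposure dose, bacterial species) and ranks them by permutation importance, i.e. the drop in R2 when one attribute is shuffled. The data, coefficients, and units are all invented.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical table: [core size (nm), exposure dose, species id (encoded)].
n = 300
X = np.column_stack([rng.uniform(1, 100, n),    # NP core size
                     rng.uniform(0, 50, n),     # exposure dose
                     rng.integers(0, 3, n)])    # bacterial species
# Synthetic outcome: smaller cores and higher doses give a stronger effect.
y = 2.0 - 0.01 * X[:, 0] + 0.03 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 0.1, n)

def fit_ols(X, y):
    """Ordinary least squares with an intercept column."""
    A = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def r2(X, y, coef):
    """Coefficient of determination for the fitted model."""
    pred = np.column_stack([X, np.ones(len(X))]) @ coef
    return 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)

coef = fit_ols(X, y)
base = r2(X, y, coef)

# Permutation importance: how much R^2 drops when one attribute is shuffled.
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(base - r2(Xp, y, coef))
```

On this synthetic data the dose dominates by construction; on the paper's real data the ranking (core size, dose, species) came out of an analogous attribute evaluation.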


2021
Vol 9
Author(s):
Moritz D. Lürig
Seth Donoughe
Erik I. Svensson
Arthur Porto
Masahito Tsuboi

For centuries, ecologists and evolutionary biologists have used images such as drawings, paintings and photographs to record and quantify the shapes and patterns of life. With the advent of digital imaging, biologists continue to collect image data at an ever-increasing rate. This immense body of data provides insight into a wide range of biological phenomena, including phenotypic diversity, population dynamics, mechanisms of divergence and adaptation, and evolutionary change. However, the rate of image acquisition frequently outpaces our capacity to manually extract meaningful information from images. Moreover, manual image analysis is low-throughput, difficult to reproduce, and typically measures only a few traits at a time. This has proven to be an impediment to the growing field of phenomics – the study of many phenotypic dimensions together. Computer vision (CV), the automated extraction and processing of information from digital images, provides the opportunity to alleviate this longstanding analytical bottleneck. In this review, we illustrate the capabilities of CV as an efficient and comprehensive method to collect phenomic data in ecological and evolutionary research. First, we briefly review phenomics, arguing that ecologists and evolutionary biologists can effectively capture phenomic-level data by taking pictures and analyzing them using CV. Next we describe the primary types of image-based data, review CV approaches for extracting them (including techniques that entail machine learning and others that do not), and identify the most common hurdles and pitfalls. Finally, we highlight recent successful implementations and promising future applications of CV in the study of phenotypes. In anticipation that CV will become a basic component of the biologist’s toolkit, our review is intended as an entry point for ecologists and evolutionary biologists that are interested in extracting phenotypic information from digital images.


2004
Vol 286 (4)
pp. C876-C892
Author(s):
Ali Hafezi-Moghadam
Kennard L. Thomas
Christian Cornelssen

Various in vitro and in vivo techniques exist for studying the microcirculation. Whereas in vivo systems impress with their physiological fidelity, in vitro systems excel in the amount of reduction that can be achieved. Here we introduce the autoperfused ex vivo flow chamber, designed to study murine leukocytes and platelets under well-defined hemodynamic conditions. In our model, the murine heart continuously drives the blood flow through the chamber, providing a wide range of physiological shear rates. We used a balance-of-force approach to quantify the prevailing forces at the chamber walls. Numerical simulations show the flow characteristics in the chamber based on a shear-thinning fluid model. We demonstrate specific rolling of wild-type leukocytes on immobilized P-selectin, abolished by a blocking MAb. When uncoated, surfaces with a constant shear rate supported individual platelet rolling, whereas on areas with a rapid drop in shear, platelets interacted in previously unreported grape-like conglomerates, suggesting an influence of shear rate on the type of platelet interaction. In summary, the ex vivo chamber amounts to an external vessel connecting the arterial and venous systems of a live mouse. This method combines the strengths of existing in vivo and in vitro systems in the study of leukocyte and platelet function.
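The paper's own force quantification uses a balance-of-force approach and shear-thinning simulations; as a rough first-order reference point only, the standard Newtonian parallel-plate formulas for wall shear rate (gamma = 6Q / w h^2) and wall shear stress (tau = 6 mu Q / w h^2) are sketched below. The chamber geometry and the viscosity value are assumptions, and whole blood is shear-thinning, so a Newtonian estimate is only approximate.

```python
def shear_rate(q_ul_min, width_mm, height_um):
    """Wall shear rate (1/s) in a parallel-plate chamber: gamma = 6*Q/(w*h^2).
    Geometry values are illustrative, not this paper's chamber dimensions."""
    q = q_ul_min * 1e-3 / 60.0            # uL/min -> cm^3/s
    w = width_mm * 0.1                    # mm -> cm
    h = height_um * 1e-4                  # um -> cm
    return 6.0 * q / (w * h * h)

def wall_shear_stress(q_ul_min, width_mm, height_um, viscosity_cp=3.0):
    """Wall shear stress (dyn/cm^2): tau = mu * gamma, assuming Newtonian flow."""
    mu = viscosity_cp * 0.01              # cP -> poise (dyn*s/cm^2)
    return mu * shear_rate(q_ul_min, width_mm, height_um)
```

For example, 60 uL/min through a 2 mm x 100 um channel gives a wall shear rate of 300 1/s, squarely in the venular physiological range.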


1999
Vol 29 (2)
pp. 85-89
Author(s):
L L Otis
B W Colston
M J Everett
H Nathel

1991
Vol 30 (01)
pp. 35-39
Author(s):
H. S. Durak
M. Kitapgi
B. E. Caner
R. Senekowitsch
M. T. Ercan

Vitamin K4 was labelled with 99mTc with an efficiency higher than 97%. The compound was stable up to 24 h at room temperature, and its biodistribution in NMRI mice indicated its in vivo stability. Blood radioactivity levels were high over a wide range; 10% of the injected activity remained in blood after 24 h. Excretion was mostly via the kidneys. Only the liver and kidneys concentrated appreciable amounts of radioactivity. Testis/soft tissue ratios were 1.4 and 1.57 at 6 and 24 h, respectively. Testis/blood ratios were lower than 1. In vitro studies with mouse blood indicated that 33.9 ± 9.6% of the radioactivity was associated with RBCs; it was washed out almost completely with saline. Protein binding was 28.7 ± 6.3% as determined by TCA precipitation. Blood clearance of 99mTc-K4 in normal subjects showed a slow decrease of radioactivity, reaching a plateau after 16 h at 20% of the injected activity. In scintigraphic images in men the testes could be well visualized. The right/left testis ratio was 1.08 ± 0.13. Testis/soft tissue and testis/blood activity ratios were highest at 3 h. These ratios were higher than those obtained with pertechnetate at 20 min post injection. 99mTc-K4 appears to be a promising radiopharmaceutical for the scintigraphic visualization of testes.

