Learning unsupervised feature representations for single cell microscopy images with paired cell inpainting

2019 ◽  
Vol 15 (9) ◽  
pp. e1007348 ◽  
Author(s):  
Alex X. Lu ◽  
Oren Z. Kraus ◽  
Sam Cooper ◽  
Alan M. Moses

Abstract
Cellular microscopy images contain rich insights about biology. To extract this information, researchers use features, or measurements of the patterns of interest in the images. Here, we introduce a convolutional neural network (CNN) to automatically design features for fluorescence microscopy. We use a self-supervised method to learn feature representations of single cells in microscopy images without labelled training data. We train CNNs on a simple task that leverages the inherent structure of microscopy images and controls for variation in cell morphology and imaging: given one cell from an image, the CNN is asked to predict the fluorescence pattern in a second, different cell from the same image. We show that our method learns high-quality features that describe protein expression patterns in single cells in both yeast and human microscopy datasets. Moreover, we demonstrate that our features are useful for exploratory biological analysis, by capturing high-resolution cellular components in a proteome-wide cluster analysis of human proteins, and by quantifying multi-localized proteins and single-cell variability. We believe paired cell inpainting is a generalizable method to obtain feature representations of single cells in multichannel microscopy images.
Author Summary
To understand the cell biology captured by microscopy images, researchers use features, or measurements of relevant properties of cells, such as the shape or size of cells, or the intensity of fluorescent markers. Features are the starting point of most image analysis pipelines, so their quality in representing cells is fundamental to the success of an analysis. Classically, researchers have relied on features manually defined by imaging experts. In contrast, deep learning techniques based on convolutional neural networks (CNNs) automatically learn features, which can outperform manually defined features at image analysis tasks. However, most CNN methods require large manually annotated training datasets to learn useful features, limiting their practical application. Here, we developed a new CNN method that learns high-quality features for single cells in microscopy images, without the need for any labeled training data. We show that our features surpass other comparable features in identifying protein localization from images, and that our method generalizes to diverse datasets. By exploiting our method, researchers will be able to automatically obtain high-quality features customized to their own image datasets, facilitating many downstream analyses, as we highlight by demonstrating many possible use cases of our features in this study.
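The paired-cell pretext task described in the abstract can be sketched as a data-pairing step. The following is a minimal, hypothetical sketch (array names, shapes, and the two-channel layout are assumptions for illustration, not the authors' code): for each single-cell crop, the network input combines both channels of a source cell with only the structural channel of a different target cell from the same image, and the training label is the target's fluorescence channel.

```python
import numpy as np

def build_inpainting_pairs(cells, rng=None):
    """Form (input, target) pairs for a paired-cell-inpainting-style task.

    `cells` is a hypothetical array of shape (n_cells, 2, H, W) holding
    single-cell crops from ONE microscopy image: channel 0 is a structural
    marker, channel 1 the fluorescent protein of interest.  Each input
    stacks the source cell's two channels with a different target cell's
    structural channel; the label is that target's fluorescence channel.
    """
    rng = rng or np.random.default_rng(0)
    n = len(cells)
    inputs, targets = [], []
    for i in range(n):
        # Pick a *different* cell from the same image as the target.
        j = (i + rng.integers(1, n)) % n
        source = cells[i]                  # (2, H, W): both source channels
        target_struct = cells[j][0:1]      # (1, H, W): target structure only
        inputs.append(np.concatenate([source, target_struct], axis=0))
        targets.append(cells[j][1:2])      # (1, H, W): target fluorescence
    return np.stack(inputs), np.stack(targets)
```

A CNN trained to map each 3-channel input to its 1-channel target must learn what the fluorescence pattern looks like in context, which is what makes the learned intermediate features useful descriptors of protein localization.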


2019 ◽  
Author(s):  
Ruixin Wang ◽  
Dongni Wang ◽  
Dekai Kang ◽  
Xusen Guo ◽  
Chong Guo ◽  
...  

BACKGROUND In vitro human cell line models have been widely used in biomedical research to predict clinical response, identify novel mechanisms, and characterize drug response. However, one-fifth to one-third of cell lines have been cross-contaminated, which can invalidate experimental results, render therapeutic products unusable, and waste research funding. Cell line misidentification and cross-contamination may occur at any time, but cell line authentication is performed infrequently because the recommended genetic approaches usually require extensive expertise and may take several days. Conversely, observation of live-cell morphology is a direct and real-time technique. OBJECTIVE The purpose of this study was to construct a novel computer vision system based on deep convolutional neural networks (CNNs) for "cell face" recognition, aimed at improving the efficiency of cell identification and reducing the occurrence of cell line cross-contamination. METHODS Unstained optical microscopy images of cell lines were obtained for model training (about 334 thousand patch images) and testing (about 153 thousand patch images). The system was first trained to recognize pure cell morphology. To find the most appropriate CNN model, we explored the key image features in cell morphology classification tasks using the classical CNN model AlexNet. After that, a fine-grained recognition model, BCNN, was used for cell type identification (seven classes). Next, we simulated cell cross-contamination by mixing the cell lines in pairs at different ratios. Detection of cross-contamination was divided into two levels: whether the cells are mixed, and what the contaminating cell line is. The specificity, sensitivity, and accuracy of the model were tested separately by external validation. Finally, the segmentation model DilatedNet was used to present the classification results at the single-cell level.
RESULTS Cell texture and density were the influential features, and were better recognized by the bilinear convolutional neural network (BCNN) than by AlexNet. The BCNN achieved 99.5% accuracy in identifying seven pure cell lines and 86.3% accuracy in detecting cross-contamination (mixtures of two of the seven cell lines). DilatedNet was applied for semantic segmentation to analyze results at the single-cell level, and achieved an accuracy of 98.2%. CONCLUSIONS This study demonstrates that cell lines can be morphologically identified using deep learning models. Only light-microscopy images and no reagents are required, enabling most labs to perform cell identification tests routinely.
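The fine-grained recognition ability attributed to the BCNN comes from bilinear pooling. As a rough, numpy-only sketch (the function name and feature-map shapes are illustrative assumptions, not the study's implementation): the feature maps of two CNN streams are combined by an outer product at every spatial location, summed over locations, then passed through signed square root and L2 normalization, yielding a texture-sensitive descriptor.

```python
import numpy as np

def bilinear_pool(feat_a, feat_b):
    """Bilinear pooling as used in bilinear CNNs (BCNN).

    feat_a: (C_a, H, W) and feat_b: (C_b, H, W) are feature maps from two
    CNN streams over the same image.  The descriptor is the sum over all
    spatial locations of the outer product of the two feature vectors,
    followed by signed-sqrt and L2 normalization; this second-order
    statistic is what captures fine-grained texture cues such as the
    cell-texture differences noted in the results.
    """
    ca, h, w = feat_a.shape
    cb = feat_b.shape[0]
    a = feat_a.reshape(ca, h * w)
    b = feat_b.reshape(cb, h * w)
    pooled = a @ b.T                            # (C_a, C_b): outer-product sum
    vec = pooled.flatten()
    vec = np.sign(vec) * np.sqrt(np.abs(vec))   # signed square root
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec      # L2 normalize
```

The resulting vector (length C_a x C_b) is then fed to a linear classifier; because every pair of channels interacts, subtle texture statistics are preserved that ordinary average pooling would wash out.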


2018 ◽  
Author(s):  
Kevin L. Hockett ◽  
Steven E. Lindow

SUMMARY Motility is generally conserved among many animal and plant pathogens. Environmental conditions, however, significantly impact expression of the motile phenotype. In this study, we describe a novel heterogeneous motility phenotype in Pseudomonas syringae, where under normally suppressive incubation conditions (30°C) punctate colonies arise that are spatially isolated from the point of inoculation, giving rise to a motility pattern we term constellation swimming (CS). We demonstrate that this phenotype is reproducible, reversible, and dependent on a functioning flagellum. Mirroring the heterogeneous motility phenotype, we demonstrate the existence of a sub-population of cells under non-permissive conditions that express flagellin (fliC) at levels similar to cells incubated under permissive conditions, using both quantitative single cell microscopy and flow cytometry. To understand the genetics underlying the CS phenotype, we selected for naturally arising mutants that exhibited a normal swimming phenotype at the warmer incubation temperature. Sequencing these mutants recovered several independent non-synonymous mutations within FleN (also known as FlhG) as well as mutations within the promoter region of FleQ, the master flagellum regulator in Pseudomonas. We further show that nutrient depletion is the likely underlying cause of CS, as reduced nutrients will stimulate both fliC expression and a normal swimming phenotype at 30°C.


2021 ◽  
pp. 108009
Author(s):  
Subbarayalu Ramalakshmi ◽  
Ramakrishnan Nagasundara Ramanan ◽  
Shanmugavel Madhavan ◽  
Chien Wei Ooi ◽  
Catherine Ching Han Chang ◽  
...  

2017 ◽  
Vol 13 (1) ◽  
pp. 170-194 ◽  
Author(s):  
Burak Okumus ◽  
Charles J Baker ◽  
Juan Carlos Arias-Castro ◽  
Ghee Chuan Lai ◽  
Emanuele Leoncini ◽  
...  
