Image and video analysis techniques for cellular microscopy

2014 ◽  
Author(s):  
◽  
Ilker Ersoy

Advances in automated digital microscopy imaging have made it possible to produce multi-dimensional image data that capture the dynamic characteristics of sub-cellular and cellular structures. Biologists routinely produce large volumes of time-lapse bioimage data, which necessitates automated algorithms for unbiased and repeatable quantitative analysis. These algorithms are the stepping stones in bioimage informatics for turning image data into biological knowledge. The unique challenges posed by different imaging modalities and cell dynamics require a combination of accurate detection, segmentation, classification and tracking approaches tailored to address and exploit particular image characteristics. In this dissertation, we present algorithms for the analysis of microscopy image sequences that address these challenges. We propose a level set active contour approach that utilizes edge profiles to achieve accurate segmentation in phase-contrast as well as brightfield microscopy imaging. Our approach significantly outperforms traditional level set approaches. We show applications of our approach to cell spreading analysis and red blood cell analysis, with robust solutions for cell detection to delineate clustered cells. We also present two studies on automated classification of cells in fluorescence microscopy, emphasizing the importance of choosing image features for the specific problem. Lastly, we present a fully automated cell detection and tracking approach tailored to muscle satellite cells that enables efficient and unbiased analysis of factors that promote cell motility.
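The dissertation's level-set formulation is not reproduced in this abstract; as a rough illustration of what an "edge profile" is, the sketch below samples intensities along the outward normals of a circular contour in a synthetic brightfield-like image (the function name and parameters are hypothetical, not from the dissertation):

```python
import numpy as np

def radial_edge_profiles(img, cx, cy, radius, band=5, n_angles=64):
    """Sample intensity profiles along the outward normals of a circular
    contour: each profile runs from `band` pixels inside the contour to
    `band` pixels outside, one profile per sampled angle."""
    offsets = np.arange(-band, band + 1)
    profiles = np.zeros((n_angles, offsets.size))
    for i, theta in enumerate(np.linspace(0, 2 * np.pi, n_angles, endpoint=False)):
        for j, d in enumerate(offsets):
            r = radius + d
            x = int(round(cx + r * np.cos(theta)))  # nearest-pixel sampling
            y = int(round(cy + r * np.sin(theta)))
            profiles[i, j] = img[y, x]
    return offsets, profiles

# Synthetic image: a bright cell-like disk on a dark background.
img = np.zeros((64, 64))
yy, xx = np.mgrid[0:64, 0:64]
img[(xx - 32) ** 2 + (yy - 32) ** 2 <= 12 ** 2] = 1.0
offsets, profiles = radial_edge_profiles(img, 32, 32, 12)
mean_profile = profiles.mean(axis=0)  # averaged edge profile across angles
```

For this synthetic disk the averaged profile steps from the interior intensity down to the background intensity; a profile-aware level set energy can exploit that characteristic shape instead of a plain gradient.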

2020 ◽  
Vol 10 (18) ◽  
pp. 6187
Author(s):  
Leonardo Rundo ◽  
Andrea Tangherloni ◽  
Darren R. Tyson ◽  
Riccardo Betta ◽  
Carmelo Militello ◽  
...  

Advances in microscopy imaging technologies have enabled the visualization of live-cell dynamic processes using time-lapse microscopy imaging. However, modern methods exhibit several limitations related to training phases and time constraints, hindering their application in laboratory practice. In this work, we present a novel method, named Automated Cell Detection and Counting (ACDC), designed for activity detection of fluorescently labeled cell nuclei in time-lapse microscopy. ACDC overcomes the limitations of existing methods by first applying bilateral filtering to smooth the input cell images while preserving edge sharpness, and then exploiting the watershed transform and morphological filtering. Moreover, ACDC represents a feasible solution for laboratory practice, as it can leverage multi-core architectures in computer clusters to efficiently handle large-scale imaging datasets. Indeed, our Parent-Workers implementation of ACDC achieves up to a 3.7× speed-up compared to the sequential counterpart. ACDC was tested on two distinct cell imaging datasets to assess its accuracy and effectiveness on images with different characteristics. We achieved accurate cell counts and nuclei segmentation without relying on large-scale annotated datasets, a result confirmed by average Dice Similarity Coefficients of 76.84 and 88.64 and Pearson coefficients of 0.99 and 0.96, calculated against manual cell counting on the two tested datasets.
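The abstract does not give ACDC's parameters; a simplified sketch of its two core steps, edge-preserving bilateral smoothing followed by seed extraction for the watershed stage, might look like this (thresholds, window sizes, and the seed rule are assumptions, not the published pipeline):

```python
import numpy as np
from scipy import ndimage

def bilateral_filter(img, radius=2, sigma_s=1.5, sigma_r=0.2):
    """Naive bilateral filter: spatial Gaussian weight times range
    (intensity-difference) Gaussian weight, so edges are preserved."""
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            w = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2)
                       - (shifted - img) ** 2 / (2 * sigma_r ** 2))
            out += w * shifted
            norm += w
    return out / norm

def count_nuclei(img, threshold=0.5):
    """Smooth, binarize, then extract watershed-style seeds from the
    thresholded distance transform and count them."""
    smooth = bilateral_filter(img)
    mask = smooth > threshold
    dist = ndimage.distance_transform_edt(mask)
    seeds = dist > 0.6 * dist.max()   # central blob of each nucleus
    return ndimage.label(seeds)[1]

# Synthetic fluorescence frame: three nuclei as bright disks.
img = np.zeros((48, 48))
yy, xx = np.mgrid[0:48, 0:48]
for cy, cx in [(12, 12), (12, 34), (34, 24)]:
    img[(xx - cx) ** 2 + (yy - cy) ** 2 <= 25] = 1.0
```

In the full method the seeds would feed a watershed transform to split touching nuclei; here they are simply counted.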



2020 ◽  
Author(s):  
Deb Sankar Banerjee ◽  
Godwin Stephenson ◽  
Suman G. Das

Time-lapse imaging of bacteria growing in micro-channels in a controlled environment has been instrumental in studying the single-cell dynamics of bacterial growth. This kind of microfluidic setup with growth chambers is popularly known as the mother machine [1]. In a typical experiment with such a setup, bacterial growth can be studied for numerous generations with high resolution and temporal precision using image processing. However, as in any other imaging experiment, the image data from a typical mother machine experiment suffer from considerable intensity fluctuations, cell intrusion, cell overlapping, filamentation, etc. The large amount of data produced in such experiments makes manual analysis and correction of such unwanted aberrations impractical. We have developed a modular code for segmentation and analysis of mother machine data (SAM) for rod-shaped bacteria that detects such aberrations and treats them correctly without manual supervision. We track cumulative cell size and use an adaptive segmentation method to avoid faulty detection of cell division. SAM is currently written and compiled in MATLAB. It is fast (~15 min/GB of images) and can be efficiently coupled with shell scripting to process large amounts of data with systematic creation of output file structures and graphical results. It has been tested on many different experimental datasets and is publicly available on GitHub.
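SAM itself is MATLAB code; as a language-neutral illustration of the cumulative-size idea, the sketch below flags a cell division as an abrupt drop in the tracked size signal (SAM's actual criterion is adaptive and not specified in the abstract, so the fixed 40% drop here is an assumption):

```python
import numpy as np

def detect_divisions(sizes, drop=0.4):
    """Return frame indices where the tracked cell size falls by more
    than `drop` relative to the previous frame, a simple proxy for a
    division event (size roughly halves at division)."""
    sizes = np.asarray(sizes, dtype=float)
    ratio = sizes[1:] / sizes[:-1]
    return np.where(ratio < (1 - drop))[0] + 1

# Synthetic size trace: exponential growth, resetting at each division.
t = np.arange(30)
sizes = 2.0 ** (t % 10 / 10.0)   # doubles over 10 frames, resets at t=10, 20
events = detect_divisions(sizes)
```

Tracking the size ratio rather than the raw segmentation makes the check robust to faulty frame-to-frame segmentations, which is the motivation the abstract gives for the adaptive method.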


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 863
Author(s):  
Vidas Raudonis ◽  
Agne Paulauskaite-Taraseviciene ◽  
Kristina Sutiene

Background: Cell detection and counting is of essential importance in evaluating the quality of early-stage embryos. Full automation of this process remains a challenging task due to differences in cell size and shape, the presence of incomplete cell boundaries, and partially or fully overlapping cells. Moreover, the algorithm to be developed should process a large amount of image data of varying quality in a reasonable time. Methods: A multi-focus image fusion approach based on the deep learning U-Net architecture is proposed in the paper, which allows reducing the amount of data by up to a factor of seven without losing the spectral information required for embryo enhancement in the microscopic image. Results: The experiment includes visual and quantitative analysis, estimating image similarity metrics and processing times, compared against the results achieved by two well-known techniques: Inverse Laplacian Pyramid Transform and Enhanced Correlation Coefficient Maximization. Conclusion: Comparatively, the image fusion time is substantially improved for different image resolutions, whilst ensuring the high quality of the fused image.
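The paper's fusion is learned with a U-Net; for orientation, a classical per-pixel focus-measure baseline (explicitly not the proposed method) can be sketched as follows: each output pixel is taken from whichever source image is locally sharper, measured by Laplacian energy.

```python
import numpy as np
from scipy import ndimage

def fuse_multifocus(img_a, img_b, win=7):
    """Per pixel, keep the source with the larger local Laplacian
    energy, i.e. the source that is in better focus there."""
    lap = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)
    ea = ndimage.uniform_filter(ndimage.convolve(img_a, lap) ** 2, win)
    eb = ndimage.uniform_filter(ndimage.convolve(img_b, lap) ** 2, win)
    return np.where(ea >= eb, img_a, img_b)

# Synthetic test: each input is sharp in one half, defocused in the other.
rng = np.random.default_rng(0)
truth = ndimage.gaussian_filter(rng.random((64, 64)), 1)   # textured scene
img_a, img_b = truth.copy(), truth.copy()
img_a[:, 32:] = ndimage.gaussian_filter(truth, 3)[:, 32:]  # right half blurred
img_b[:, :32] = ndimage.gaussian_filter(truth, 3)[:, :32]  # left half blurred
fused = fuse_multifocus(img_a, img_b)
```

The fused image should be closer to the all-in-focus scene than either input, which is the property the learned fusion improves on in both quality and speed.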


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2203
Author(s):  
Antal Hiba ◽  
Attila Gáti ◽  
Augustin Manecy

Precise navigation is often performed by fusing data from different sensors. Among these, optical sensors use image features to obtain the position and attitude of the camera. Runway-relative navigation during final approach is a special case in which robust and continuous detection of the runway is required. This paper presents a robust threshold marker detection method for monocular cameras and introduces an on-board real-time implementation with flight test results. Results with narrow and wide field-of-view optics are compared. The image processing approach is also evaluated on image data captured by a different on-board system. The purely optical approach of this paper increases sensor redundancy because, unlike most robust runway detectors, it does not require input from an inertial sensor.
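The paper's detector is not detailed in this abstract; a toy stand-in that finds bright, elongated image components, such as the painted bars of a runway threshold marking, might look like this (the intensity and aspect-ratio thresholds are assumptions, not values from the paper):

```python
import numpy as np
from scipy import ndimage

def detect_threshold_markers(img, intensity=0.7, min_aspect=3.0):
    """Return bounding slices of bright connected components whose
    bounding boxes are elongated, as runway threshold bars are."""
    labels, _ = ndimage.label(img > intensity)
    markers = []
    for sl in ndimage.find_objects(labels):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        if max(h, w) / max(1, min(h, w)) >= min_aspect:
            markers.append(sl)
    return markers

# Synthetic scene: two vertical threshold bars and one square distractor.
img = np.zeros((60, 60))
img[10:40, 10:14] = 1.0   # bar
img[10:40, 20:24] = 1.0   # bar
img[45:53, 40:48] = 1.0   # square blob, should be rejected
found = detect_threshold_markers(img)
```

A real detector must additionally handle perspective distortion, varying illumination, and per-frame consistency checks, which is where the robustness claims of the paper lie.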


Author(s):  
Daniel Overhoff ◽  
Peter Kohlmann ◽  
Alex Frydrychowicz ◽  
Sergios Gatidis ◽  
Christian Loewe ◽  
...  

Purpose The DRG-ÖRG IRP (Deutsche Röntgengesellschaft-Österreichische Röntgengesellschaft international radiomics platform) is a web-/cloud-based radiomics platform based on a public-private partnership. It offers the possibility of data sharing, annotation, validation and certification in the field of artificial intelligence, radiomics analysis, and integrated diagnostics. In a first proof-of-concept study, automated myocardial segmentation and automated myocardial late gadolinium enhancement (LGE) detection using radiomic image features are evaluated on myocarditis data sets. Materials and Methods The DRG-ÖRG IRP can be used to create quality-assured, structured image data in combination with clinical data and subsequent integrated data analysis. It is characterized by the following performance criteria: the possibility of using multicentric networked data, automatically calculated quality parameters, processing of annotation tasks, contour recognition using conventional and artificial intelligence methods, and the possibility of targeted integration of algorithms. In a first study, a neural network pre-trained on cardiac CINE data sets was evaluated for segmentation of PSIR data sets. In a second step, radiomic features were applied for segmental detection of LGE on the same data sets, which were provided multicenter via the IRP. Results First results show the advantages of this platform-based approach: data transparency, reliability, broad involvement of all members, continuous evolution, as well as validation and certification. In the proof-of-concept study, the neural network achieved a Dice coefficient of 0.813 compared with the expert's segmentation of the myocardium.
In the segment-based myocardial LGE detection, the AUC was 0.73, and 0.79 after exclusion of segments with uncertain annotation. The evaluation and provision of the data take place at the IRP, taking into account the FAT (fairness, accountability, transparency) and FAIR (findable, accessible, interoperable, reusable) criteria. Conclusion It could be shown that the DRG-ÖRG IRP can serve as a crystallization point for the generation of further individual and joint projects. The execution of quantitative analyses with artificial intelligence methods is greatly facilitated by the platform approach of the DRG-ÖRG IRP, since pre-trained neural networks can be integrated and scientific groups can be networked. In a first proof-of-concept study on automated segmentation of the myocardium and automated myocardial LGE detection, these advantages were successfully demonstrated. Our study shows that with the DRG-ÖRG IRP, strategic goals can be implemented in an interdisciplinary way, concrete proof-of-concept examples can be demonstrated, and a large number of individual and joint projects can be realized in a participatory way involving all groups.
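The reported Dice coefficient of 0.813 is the standard overlap measure between two binary masks, 2|A ∩ B| / (|A| + |B|); for reference:

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    total = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / total if total else 1.0

# Two 4-pixel masks sharing 2 pixels: Dice = 2*2 / (4+4) = 0.5
m1 = np.array([[1, 1, 1, 1, 0, 0]])
m2 = np.array([[0, 0, 1, 1, 1, 1]])
```

A value of 1.0 means the automated and expert segmentations coincide exactly; 0.813 indicates substantial but imperfect agreement on the myocardium contours.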


2019 ◽  
Vol 20 (1) ◽  
Author(s):  
Fuyong Xing ◽  
Yuanpu Xie ◽  
Xiaoshuang Shi ◽  
Pingjun Chen ◽  
Zizhao Zhang ◽  
...  

Abstract Background Nucleus or cell detection is a fundamental task in microscopy image analysis and supports many other quantitative studies such as object counting, segmentation, tracking, etc. Deep neural networks are emerging as a powerful tool for biomedical image computing; in particular, convolutional neural networks have been widely applied to nucleus/cell detection in microscopy images. However, almost all models are tailored for specific datasets and their applicability to other microscopy image data remains unknown. Some existing studies casually learn and evaluate deep neural networks on multiple microscopy datasets, but there are still several critical, open questions to be addressed. Results We analyze the applicability of deep models specifically for nucleus detection across a wide variety of microscopy image data. More specifically, we present a fully convolutional network-based regression model and extensively evaluate it on large-scale digital pathology and microscopy image datasets, which cover 23 organs (or cancer diseases) and come from multiple institutions. We demonstrate that for a specific target dataset, training with images from the same types of organs might usually be necessary for nucleus detection. Although the images can be visually similar due to the same staining technique and imaging protocol, deep models learned with images from different organs might not deliver desirable results and would require model fine-tuning to be on a par with those trained with target data. We also observe that training with a mixture of target and other/non-target data does not always mean a higher accuracy of nucleus detection, and it might require proper data manipulation during model training to achieve good performance. Conclusions We conduct a systematic case study on deep models for nucleus detection in a wide variety of microscopy images, aiming to address several important but previously understudied questions.
We present and extensively evaluate an end-to-end, pixel-to-pixel fully convolutional regression network and report a few significant findings, some of which might have not been reported in previous studies. The model performance analysis and observations would be helpful to nucleus detection in microscopy images.
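Regression-based FCN detectors of this kind are typically trained to predict a proximity map, a Gaussian bump around each annotated nucleus centre, and detections are then read off the map's local maxima. A minimal sketch of that encode/decode pair (sigma and thresholds are assumptions; a real model's output would replace the synthetic "prediction"):

```python
import numpy as np
from scipy import ndimage

def proximity_map(shape, centers, sigma=2.0):
    """Regression target: a Gaussian bump around each nucleus centre."""
    m = np.zeros(shape)
    for (y, x) in centers:
        m[y, x] = 1.0
    return ndimage.gaussian_filter(m, sigma)

def detect_peaks(pred, min_val=0.5):
    """Decode a predicted proximity map into detections: local maxima
    above a threshold relative to the map's own maximum."""
    thr = min_val * pred.max()
    peaks = (pred == ndimage.maximum_filter(pred, size=5)) & (pred > thr)
    return list(zip(*np.nonzero(peaks)))

centers = [(10, 10), (10, 30), (30, 20)]
pred = proximity_map((40, 40), centers)   # stands in for the network output
found = sorted(detect_peaks(pred))
```

Because only the target-map construction and the peak decoding are dataset-independent, the cross-organ generalization questions the paper studies live entirely in the learned mapping between image and proximity map.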


2020 ◽  
Vol 11 (1) ◽  
Author(s):  
Federico S. Gnesotto ◽  
Grzegorz Gradziuk ◽  
Pierre Ronceray ◽  
Chase P. Broedersz

Abstract Time-lapse microscopy imaging provides direct access to the dynamics of soft and living systems. At mesoscopic scales, such microscopy experiments reveal intrinsic thermal and non-equilibrium fluctuations. These fluctuations, together with measurement noise, pose a challenge for the dynamical analysis of these Brownian movies. Traditionally, methods to analyze such experimental data rely on tracking embedded or endogenous probes. However, it is in general unclear, especially in complex many-body systems, which degrees of freedom are the most informative about their non-equilibrium nature. Here, we introduce an alternative, tracking-free approach that overcomes these difficulties via an unsupervised analysis of the Brownian movie. We develop a dimensional reduction scheme selecting a basis of modes based on dissipation. Subsequently, we learn the non-equilibrium dynamics, thereby estimating the entropy production rate and time-resolved force maps. After benchmarking our method against a minimal model, we illustrate its broader applicability with an example inspired by active biopolymer gels.
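The paper ranks modes by dissipation; the generic dimensional-reduction step it builds on can be illustrated with a plain SVD of the flattened movie frames (variance-ranked, so a deliberate simplification of the paper's scheme):

```python
import numpy as np

def movie_modes(frames, k):
    """Tracking-free reduction: flatten each frame, centre the data, and
    keep the top-k SVD modes of the movie with their time series."""
    X = frames.reshape(frames.shape[0], -1)
    X = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    explained = (s[:k] ** 2).sum() / (s ** 2).sum()
    coeffs = U[:, :k] * s[:k]          # time series of each retained mode
    return coeffs, Vt[:k], explained

# Synthetic Brownian movie: two spatial patterns with fluctuating weights.
rng = np.random.default_rng(1)
p1, p2 = rng.random((2, 16, 16))
a = np.cumsum(rng.normal(size=200))    # Brownian amplitude of pattern 1
b = np.cumsum(rng.normal(size=200))    # Brownian amplitude of pattern 2
frames = a[:, None, None] * p1 + b[:, None, None] * p2
frames += 0.01 * rng.normal(size=frames.shape)   # measurement noise
coeffs, modes, explained = movie_modes(frames, 2)
```

The retained mode time series (`coeffs`) are what a downstream inference step would analyze, e.g. to estimate entropy production from the dynamics in the reduced space.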


2012 ◽  
Vol 52 (No. 4) ◽  
pp. 181-187 ◽  
Author(s):  
F. Hájek

This paper describes the automated classification of tree species composition from Ikonos 4-meter imagery using an object-oriented approach. The image was acquired over a planted forest area containing various forest types (coniferous, broadleaved, mixed) in the Krušné hory Mts., Czech Republic. In order to enlarge the class signature space, additional channels were calculated by low-pass filtering, IHS transformation and Haralick texture measures. Employing these layers, image segmentation and classification were conducted on several levels to create a hierarchical image object network. The higher level separated the image into smaller parts according to stand maturity and structure; the lower (detailed) level assigned individual tree clusters to classes for the main forest species. The classification accuracy was assessed by comparing the automated technique with the field inventory using the Kappa coefficient. The study aimed to create a rule base transferable to other datasets. Moreover, the appropriate scale of such image data and its utilisation in forestry management are evaluated.
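Haralick texture measures derive from grey-level co-occurrence matrices (GLCMs); a tiny sketch of one such channel, contrast for the horizontal (0, 1) offset, is below (the study would have used a fuller set of offsets and measures):

```python
import numpy as np

def glcm_contrast(img, levels=8):
    """Haralick-style contrast: quantize intensities into `levels` bins,
    build the co-occurrence matrix of horizontal neighbour pairs,
    normalize it, and weight each entry by the squared level difference."""
    q = np.minimum((img * levels).astype(int), levels - 1)
    glcm = np.zeros((levels, levels))
    left, right = q[:, :-1].ravel(), q[:, 1:].ravel()
    np.add.at(glcm, (left, right), 1)   # count neighbour pairs
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    return ((i - j) ** 2 * glcm).sum()

flat = np.full((16, 16), 0.5)                                 # smooth canopy
checker = (np.indices((16, 16)).sum(axis=0) % 2).astype(float)  # fine texture
```

A homogeneous region yields zero contrast while a fine-grained texture yields a high value, which is why such channels help separate conifer from broadleaved stands that overlap spectrally.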

