Fluorescence Lifetime Imaging with Compressive Sensing through Deep Convolutional Neural Network

Author(s):  
Ruoyang Yao ◽  
Marien I. Ochoa ◽  
Xavier Intes ◽  
Pingkun Yan

Author(s):
Hong Lu ◽  
Xiaofei Zou ◽  
Longlong Liao ◽  
Kenli Li ◽  
Jie Liu

Compressive Sensing for Magnetic Resonance Imaging (CS-MRI) aims to reconstruct Magnetic Resonance (MR) images from under-sampled raw data. Improving CS-MRI methods poses two challenges: designing an under-sampling algorithm that achieves optimal sampling, and designing fast, compact deep neural networks that reconstruct MR images with superior quality. To improve the reconstruction quality of MR images, we propose MRCSNet, a novel deep convolutional neural network architecture for CS-MRI. MRCSNet consists of three sub-networks: a compressive-sensing sampling sub-network, an initial reconstruction sub-network, and a refined reconstruction sub-network. Experimental results demonstrate that MRCSNet generates high-quality reconstructed MR images at various under-sampling ratios and meets the requirements of real-time CS-MRI applications. Compared to state-of-the-art CS-MRI approaches, MRCSNet significantly improves reconstruction accuracy as measured by Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM), and reduces reconstruction error as measured by Normalized Root-Mean-Square Error (NRMSE). The source code is available at https://github.com/TaihuLight/MRCSNet .
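The three-stage data flow described in the abstract (sampling, initial reconstruction, refinement) can be sketched as follows. This is an illustrative toy, not the authors' code: the paper's sub-networks are learned convolutional layers, whereas here a random measurement matrix, a pseudo-inverse, and an identity placeholder stand in for them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for MRCSNet's three stages (the actual sub-networks are
# learned CNNs; these linear maps only illustrate the data flow).
n = 64          # flattened image size (e.g. an 8x8 patch)
m = 16          # number of compressive measurements (25% sampling ratio)

# Sampling sub-network stand-in: a random measurement matrix.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

def sample(x):
    """Compressive-sensing sampling: image x (n,) -> measurements y (m,)."""
    return Phi @ x

def initial_reconstruction(y):
    """Initial reconstruction: pseudo-inverse lift back to image space."""
    return np.linalg.pinv(Phi) @ y

def refine(x0):
    """Refined reconstruction: a trained CNN would predict a residual
    correction here; this placeholder returns its input unchanged."""
    return x0

x = rng.standard_normal(n)
x_hat = refine(initial_reconstruction(sample(x)))

nrmse = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"measurements: {m}/{n}, NRMSE: {nrmse:.3f}")
```

With only 16 of 64 measurements and no learned prior, the pseudo-inverse can recover only the component of x in the row space of Phi; the refinement network is precisely what closes this gap in the real architecture.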


2016 ◽  
Vol 41 (11) ◽  
pp. 2561 ◽  
Author(s):  
Gang Wu ◽  
Thomas Nowotny ◽  
Yongliang Zhang ◽  
Hong-Qi Yu ◽  
David Day-Uei Li

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Laurent Héliot ◽  
Aymeric Leray

Abstract: Fluorescence lifetime imaging microscopy (FLIM) is a powerful technique to probe the molecular environment of fluorophores. The analysis of FLIM images is usually performed with time-consuming fitting methods. To accelerate this analysis, sophisticated deep learning architectures based on convolutional neural networks have been developed for restricted lifetime ranges, but they require long training times. In this work, we present a simple neural network, formed only of fully connected layers, able to analyze fluorescence lifetime images. It is based on the reduction of high-dimensional fluorescence-intensity temporal decays to four parameters: the two phasor coordinates, the mean lifetime, and the amplitude-weighted lifetime. This network, called Phasor-Net, was applied to a time-domain FLIM system excited at an 80 MHz laser repetition frequency, with negligible jitter and afterpulsing. Because of the restricted time interval of 12.5 ns, the training range of the lifetimes was limited to between 0.2 and 3.0 ns, and the total photon number was lower than 10^6, as encountered in live-cell imaging. From simulated biexponential decays, we demonstrate that Phasor-Net is more precise and less biased than standard fitting methods. We also demonstrate that this simple architecture gives performance almost comparable to that obtained from more sophisticated networks, but with a faster training process (15 min instead of 30 min). We finally apply our method successfully to determine biexponential decay parameters in FLIM experiments on living cells expressing EGFP linked to mCherry and fused to a plasma membrane protein.
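The phasor coordinates that Phasor-Net predicts are the standard intensity-weighted cosine and sine projections of the decay at the laser repetition frequency. A minimal sketch (not the authors' code), using the 80 MHz repetition rate and 12.5 ns window quoted in the abstract and a noise-free monoexponential decay:

```python
import numpy as np

# Phasor coordinates (g, s) of a simulated monoexponential decay.
f_rep = 80e6                      # laser repetition frequency (Hz)
omega = 2 * np.pi * f_rep         # angular modulation frequency (rad/s)
tau_true = 2.0e-9                 # simulated lifetime: 2.0 ns

t = np.linspace(0, 12.5e-9, 256, endpoint=False)   # time bins over one period
decay = np.exp(-t / tau_true)                      # ideal decay histogram

# Phasor coordinates: intensity-weighted cosine/sine projections.
g = np.sum(decay * np.cos(omega * t)) / np.sum(decay)
s = np.sum(decay * np.sin(omega * t)) / np.sum(decay)

# For a monoexponential decay, the lifetime follows from the phasor:
# g = 1/(1 + (w*tau)^2), s = w*tau/(1 + (w*tau)^2)  =>  tau = s/(w*g).
tau_est = s / (omega * g)
print(f"g = {g:.3f}, s = {s:.3f}, tau = {tau_est * 1e9:.2f} ns")
```

The small residual bias of the recovered lifetime comes from truncating the decay at 12.5 ns and from discretizing it into 256 bins; for a biexponential decay, the phasor lies inside the universal semicircle and the two extra outputs (mean and amplitude-weighted lifetimes) are needed to pin down both components.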


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Vytautas Zickus ◽  
Ming-Lo Wu ◽  
Kazuhiro Morimoto ◽  
Valentin Kapitany ◽  
Areeba Fatima ◽  
...  

Abstract: Fluorescence lifetime imaging microscopy (FLIM) is a key technology that provides direct insight into cell metabolism, cell dynamics and protein activity. However, determining the lifetimes of different fluorescent proteins requires the detection of a relatively large number of photons, which slows down total acquisition times. Moreover, there are many cases, for example in studies of cell collectives, where wide-field imaging is desired. We report scan-less wide-field FLIM based on a 0.5 MP resolution, time-gated Single Photon Avalanche Diode (SPAD) camera, with acquisition rates of up to 1 Hz. Fluorescence lifetime estimation is performed via a pre-trained artificial neural network, with a 1000-fold improvement in processing time compared to standard least-squares fitting techniques. We utilised our system to image the HT1080 human fibrosarcoma cell line as well as Convallaria. The results show promise for real-time FLIM and a viable route towards multi-megapixel fluorescence lifetime images; a proof-of-principle 3.6 MP mosaic image is shown.
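For context on what the neural network replaces: a classical per-pixel estimator for time-gated SPAD data is rapid lifetime determination (RLD) from two gates. The sketch below is an illustrative baseline, not the paper's method; the gate width, photon budget, and lifetime are made-up values for the simulation.

```python
import numpy as np

# Rapid lifetime determination (RLD) from two equal-width time gates.
# For a monoexponential decay, tau = dt / ln(I1 / I2), where I1 and I2
# are photon counts in two consecutive gates separated by dt.
rng = np.random.default_rng(1)

tau_true = 2.5e-9          # simulated lifetime (s)
dt = 2.0e-9                # gate width and separation (s)
n_photons = 1e4            # photon budget for this pixel

# Expected counts in two consecutive gates of width dt.
I1_mean = n_photons * (1 - np.exp(-dt / tau_true))
I2_mean = I1_mean * np.exp(-dt / tau_true)

# Add Poisson photon noise, as in a real SPAD measurement.
I1 = rng.poisson(I1_mean)
I2 = rng.poisson(I2_mean)

tau_est = dt / np.log(I1 / I2)
print(f"estimated lifetime: {tau_est * 1e9:.2f} ns")
```

RLD is fast but noisy and assumes a single exponential; a trained network applied per pixel can reach better precision from the same gated data, which is what enables the reported 1000-fold speed-up over least-squares fitting at full frame size.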


2020 ◽  
Vol 2020 (4) ◽  
pp. 4-14
Author(s):  
Vladimir Budak ◽  
Ekaterina Ilyina

The article proposes a classification of lenses with different symmetrical beam angles and offers a scale to serve as a spotlight palette. A collection of spotlight images was created and classified according to the proposed scale. An analysis of 788 existing lenses and reflectors with different LEDs and COBs was carried out, and the dependence of the axial light intensity on the beam angle was obtained. Using this collection, a new deep convolutional neural network (CNN) was trained by transfer learning from the pre-trained GoogLeNet. Grad-CAM analysis showed that the trained network correctly identifies the features of the objects. This work allows arbitrary spotlights to be classified with an accuracy of about 80 %. Thus, a lighting designer can determine the class of a spotlight and the corresponding type of lens, with its technical parameters, using this new CNN-based model.
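The transfer-learning step described above can be illustrated in miniature. This toy sketch is not the article's pipeline: a frozen random feature map stands in for the pre-trained GoogLeNet backbone, synthetic vectors stand in for spotlight images, and only a new classification head is trained, which is the essence of the technique.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_feat, n_classes = 20, 32, 3

# Frozen "backbone": a fixed random nonlinear feature map standing in
# for the pre-trained network. Its weights are never updated.
W_backbone = rng.standard_normal((n_feat, n_in)) / np.sqrt(n_in)

def features(x):
    """Frozen backbone: fixed nonlinear feature extraction."""
    return np.tanh(x @ W_backbone.T)

# Synthetic 3-class data: each class mean is shifted along one input axis.
X = rng.standard_normal((300, n_in))
y = rng.integers(0, n_classes, 300)
X += 2.0 * np.eye(n_classes, n_in)[y]

# Train only the new head (softmax regression) by gradient descent.
F = features(X)
onehot = np.eye(n_classes)[y]
W_head = np.zeros((n_classes, n_feat))
for _ in range(300):
    logits = F @ W_head.T
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W_head -= 0.1 * (p - onehot).T @ F / len(X)   # backbone stays frozen

acc = np.mean((F @ W_head.T).argmax(axis=1) == y)
print(f"training accuracy with frozen backbone: {acc:.2f}")
```

Training only the head is what makes transfer learning practical on a small collection such as the 788-sample lens dataset: the backbone's features are reused, and only a few thousand head parameters must be fitted.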

