Deep Learning-Based Point-Scanning Super-Resolution Imaging

2019 ◽  
Author(s):  
Linjing Fang ◽  
Fred Monroe ◽  
Sammy Weiser Novak ◽  
Lyndsey Kirk ◽  
Cara Rae Schiavon ◽  
...  

Point scanning imaging systems (e.g., scanning electron or laser scanning confocal microscopes) are perhaps the most widely used tools for high-resolution cellular and tissue imaging. Like all other imaging modalities, the resolution, speed, sample preservation, and signal-to-noise ratio (SNR) of point scanning systems are difficult to optimize simultaneously. In particular, point scanning systems are uniquely constrained by an inverse relationship between imaging speed and pixel resolution. Here we show these limitations can be mitigated via deep learning-based super-sampling of undersampled images acquired on a point-scanning system, which we term point-scanning super-resolution (PSSR) imaging. Oversampled, high-SNR ground truth images acquired on scanning electron or Airyscan laser scanning confocal microscopes were "crappified" to generate semi-synthetic training data for PSSR models that were then used to restore real-world undersampled images. Remarkably, our EM PSSR model could restore undersampled images acquired with different optics, detectors, samples, or sample preparation methods in other labs. PSSR enabled previously unattainable 2 nm resolution images with our serial block face scanning electron microscope system. For fluorescence, we show that undersampled confocal images combined with a multiframe PSSR model trained on Airyscan timelapses facilitate Airyscan-equivalent spatial resolution and SNR with ~100x lower laser dose and 16x higher frame rates than corresponding high-resolution acquisitions. In conclusion, PSSR facilitates point-scanning image acquisition with otherwise unattainable resolution, speed, and sensitivity.
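The "crappification" step described above can be sketched in numpy: block-average the oversampled, high-SNR ground truth to mimic undersampling, then inject noise to mimic a low-dose, low-SNR acquisition. The downsampling factor and Gaussian noise model here are illustrative assumptions, not the paper's exact degradation pipeline.

```python
import numpy as np

def crappify(hr, factor=4, noise_sigma=0.1, seed=0):
    """Degrade a high-SNR, oversampled image into a semi-synthetic
    low-resolution training input (a minimal sketch of the idea)."""
    rng = np.random.default_rng(seed)
    h, w = hr.shape
    # Undersample by block-averaging (stand-in for acquiring fewer pixels)
    lr = hr[:h - h % factor, :w - w % factor]
    lr = lr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    # Add noise to mimic low-dose, low-SNR acquisition
    lr = lr + rng.normal(0.0, noise_sigma, lr.shape)
    return np.clip(lr, 0.0, 1.0)

hr = np.full((16, 16), 0.5)   # toy "ground truth" image in [0, 1]
lr = crappify(hr)
print(lr.shape)  # (4, 4)
```

Training pairs are then (crappified input, original ground truth), so no separately acquired low-resolution images are needed.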

Author(s):  
Fuqi Mao ◽  
Xiaohan Guan ◽  
Ruoyu Wang ◽  
Wen Yue

As an important tool for studying the microstructure and properties of materials, High Resolution Transmission Electron Microscope (HRTEM) imaging can produce lattice fringe images (reflecting crystal plane spacing), structure images, and individual-atom images (reflecting the configuration of atoms or atomic groups in the crystal structure). Despite the rapid development of HRTEM devices, HRTEM images still have limited achievable resolution for the human visual system. With the rapid development of deep learning in recent years, researchers are actively exploring super-resolution (SR) models based on deep learning, and such models have reached the current best level on various SR benchmarks. Using SR to reconstruct high-resolution HRTEM images is helpful for materials science research. However, one core issue remains unresolved: most super-resolution methods require the training data to exist in pairs, and in actual scenarios, especially for HRTEM images, there are no corresponding HR images. To reconstruct high-quality HRTEM images, a novel super-resolution architecture for HRTEM images is proposed in this paper. Borrowing the idea of Dual Regression Networks (DRN), we introduce an additional dual regression structure into ESRGAN, training the model with unpaired HRTEM images and paired natural images. Results of extensive benchmark experiments demonstrate that the proposed method achieves better performance than the most recent SISR methods in both quantitative and visual results.
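The dual-regression idea can be illustrated with a toy numpy sketch: alongside the primal LR→HR network, a dual operator maps any predicted HR image back to LR space, so a reconstruction loss can be computed on unpaired HRTEM data. The block-mean downsampler below is an illustrative stand-in for the learned dual network, not the paper's architecture.

```python
import numpy as np

def downsample(x, f=2):
    """Dual mapping HR -> LR (stand-in: f-by-f block averaging)."""
    h, w = x.shape
    return x.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def dual_loss(lr_input, hr_pred, f=2):
    """Closed-loop constraint: the prediction, mapped back to LR space,
    must match the observed LR input; needs no paired HR ground truth."""
    return float(np.mean((downsample(hr_pred, f) - lr_input) ** 2))

lr = np.full((4, 4), 0.5)
perfect = np.full((8, 8), 0.5)   # downsamples back exactly to lr
print(dual_loss(lr, perfect))    # 0.0
```

This closed loop is what lets unpaired HRTEM images contribute a training signal even though no corresponding HR images exist.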


2021 ◽  
Author(s):  
Andres Munoz-Jaramillo ◽  
Anna Jungbluth ◽  
Xavier Gitiaux ◽  
Paul Wright ◽  
Carl Shneider ◽  
...  

Abstract. Super-resolution techniques aim to increase the resolution of images by adding detail. Compared to upsampling techniques reliant on interpolation, deep learning-based approaches learn features and their relationships across the training data set to leverage prior knowledge of what low-resolution patterns look like in higher-resolution images. As an added benefit, deep neural networks can learn the systematic properties of the target images (i.e., texture), combining super-resolution with instrument cross-calibration. While the successful use of super-resolution algorithms for natural images is rooted in creating perceptually convincing results, super-resolution applied to scientific data requires careful quantitative evaluation of performance. In this work, we demonstrate that deep learning can increase the resolution of, and calibrate, space- and ground-based imagers belonging to different instrumental generations. In addition, we establish a set of measurements to benchmark the performance of scientific applications of deep learning-based super-resolution and calibration. We super-resolve and calibrate solar magnetic field images taken by the Michelson Doppler Imager (MDI; resolution ~2"/pixel; science-grade, space-based) and the Global Oscillation Network Group (GONG; resolution ~2.5"/pixel; space weather operations, ground-based) to the pixel resolution of images taken by the Helioseismic and Magnetic Imager (HMI; resolution ~0.5"/pixel; last generation, science-grade, space-based).
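Quantitative evaluation of scientific super-resolution starts from pixel-level metrics. As a minimal example, the PSNR between a super-resolved magnetogram and its HMI target can be computed as below; the [0, 1] data range is an assumption for illustration, not the papers' normalization.

```python
import numpy as np

def psnr(target, pred, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher is better."""
    mse = np.mean((target - pred) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(data_range ** 2 / mse)

target = np.linspace(0, 1, 100).reshape(10, 10)   # toy "HMI" target
noisy = np.clip(target + 0.1, 0, 1)               # toy super-resolved output
print(round(psnr(target, noisy), 1))
```

Pixel metrics alone can reward blurry outputs, which is why the work argues for a broader set of benchmark measurements.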


2020 ◽  
Vol 10 (12) ◽  
pp. 4282
Author(s):  
Ghada Zamzmi ◽  
Sivaramakrishnan Rajaraman ◽  
Sameer Antani

Medical images are acquired at different resolutions based on clinical goals or available technology. In general, however, high-resolution images with fine structural details are preferred for visual task analysis. Recognizing this significance, several deep learning networks have been proposed to enhance medical images for reliable automated interpretation. These deep networks are often computationally complex and require a massive number of parameters, which restrict them to highly capable computing platforms with large memory banks. In this paper, we propose an efficient deep learning approach, called Hydra, which simultaneously reduces computational complexity and improves performance. The Hydra consists of a trunk and several computing heads. The trunk is a super-resolution model that learns the mapping from low-resolution to high-resolution images. It has a simple architecture that is trained using multiple scales at once to minimize a proposed learning-loss function. We also propose to append multiple task-specific heads to the trained Hydra trunk for simultaneous learning of multiple visual tasks in medical images. The Hydra is evaluated on publicly available chest X-ray image collections to perform image enhancement, lung segmentation, and abnormality classification. Our experimental results support our claims and demonstrate that the proposed approach can improve the performance of super-resolution and visual task analysis in medical images at a remarkably reduced computational cost.
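The trunk-and-heads idea can be sketched framework-free: one shared feature extractor feeds several lightweight task-specific heads, so the expensive computation is done once per image. The shapes and linear maps below are illustrative stand-ins for Hydra's actual networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def trunk(x, W):
    """Shared trunk (stand-in: a single ReLU layer)."""
    return np.maximum(x @ W, 0.0)

def head(feat, V):
    """Lightweight task-specific head (stand-in: linear readout)."""
    return feat @ V

W = rng.normal(size=(64, 32)) * 0.1      # trunk weights, shared by all tasks
V_seg = rng.normal(size=(32, 2)) * 0.1   # e.g. a segmentation head
V_cls = rng.normal(size=(32, 3)) * 0.1   # e.g. a classification head

x = rng.normal(size=(5, 64))             # a batch of 5 toy inputs
feat = trunk(x, W)                       # computed once, reused by every head
print(head(feat, V_seg).shape, head(feat, V_cls).shape)  # (5, 2) (5, 3)
```

Sharing the trunk is what reduces the parameter count relative to training one full network per task.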


1974 ◽  
Vol 22 (7) ◽  
pp. 751-754 ◽  
Author(s):  
MORTON L. SCHULTZ ◽  
LEWIS E. LIPKIN ◽  
MARTA J. WADE ◽  
PETER F. LEMKIN ◽  
GEORGE M. CARMAN

Quantitative cytology requires accurate representation of a specimen's optical densities. As the requirements for measurement precision increase, instrument-induced errors become increasingly difficult to reduce to the point at which their effect on experimental data is insignificant compared to the measured parameters. Shading introduces a significant amount of amplitude ambiguity into data obtained from a scanning system. A method of shading correction on single pixels is introduced as a new way to reduce some of the errors that currently plague scanning systems.
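The abstract does not give the correction formula, but the standard per-pixel flat-field (shading) correction it resembles can be sketched as: subtract a dark frame, then divide by the normalized response of a uniform reference scan. The synthetic frames below are illustrative.

```python
import numpy as np

def shading_correct(raw, flat, dark):
    """Per-pixel shading correction: remove offset (dark frame) and
    gain (flat field) variation across the scan field."""
    gain = flat - dark
    return (raw - dark) * gain.mean() / gain

# Synthetic example: true uniform density 0.5, corrupted by shading
dark = np.full((4, 4), 0.1)
shade = np.linspace(0.8, 1.2, 16).reshape(4, 4)  # spatial gain variation
flat = dark + shade                              # uniform target, as scanned
raw = dark + 0.5 * shade                         # specimen, same shading
corrected = shading_correct(raw, flat, dark)
print(np.allclose(corrected, 0.5))  # True
```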


2020 ◽  
Vol 8 (4) ◽  
pp. 304-310
Author(s):  
Windra Swastika ◽  
Ekky Rino Fajar Sakti ◽  
Mochamad Subianto

Low-resolution images can be reconstructed into high-resolution images using the Super-Resolution Convolutional Neural Network (SRCNN) algorithm. This study aims to improve vehicle license plate number recognition accuracy by generating high-resolution vehicle images with SRCNN. Recognition is carried out by two character recognition methods: Tesseract OCR and SPNet. The training data for SRCNN uses the DIV2K dataset, consisting of 900 images, while the training data for character recognition uses the Chars74K dataset. The high-resolution images constructed using SRCNN increase the average accuracy of vehicle license plate number recognition by 16.9% with Tesseract and 13.8% with SPNet.
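SRCNN's three-stage structure (patch extraction, nonlinear mapping, reconstruction) can be sketched in numpy. The single-channel random kernels and nearest-neighbour pre-upsampling below are simplifications of the actual network, which pre-upsamples with bicubic interpolation and uses 64 and 32 feature maps.

```python
import numpy as np

def conv2d(x, k):
    """'Valid' 2D cross-correlation via sliding windows."""
    win = np.lib.stride_tricks.sliding_window_view(x, k.shape)
    return np.einsum('ijkl,kl->ij', win, k)

def srcnn_forward(lr, k1, k2, k3, scale=2):
    """SRCNN-style pipeline: upsample first, then three conv stages
    (patch extraction -> nonlinear mapping -> reconstruction)."""
    up = np.kron(lr, np.ones((scale, scale)))  # crude pre-upsampling
    f1 = np.maximum(conv2d(up, k1), 0.0)       # 9x9 patch extraction
    f2 = np.maximum(conv2d(f1, k2), 0.0)       # 1x1 nonlinear mapping
    return conv2d(f2, k3)                      # 5x5 reconstruction

rng = np.random.default_rng(0)
lr = rng.random((16, 16))
out = srcnn_forward(lr, rng.normal(size=(9, 9)), rng.normal(size=(1, 1)),
                    rng.normal(size=(5, 5)))
print(out.shape)  # (20, 20): 32x32 upsampled, minus 8 and 4 from valid convs
```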


Author(s):  
M. Buyukdemircioglu ◽  
R. Can ◽  
S. Kocaman

Abstract. Automatic detection, segmentation, and reconstruction of buildings in urban areas from Earth Observation (EO) data remain challenging for many researchers. The roof is one of the most important elements of a building model. Three-dimensional geographical information system (3D GIS) applications generally require the roof type and roof geometry for performing various analyses on the models, such as energy efficiency. Conventional segmentation and classification methods are often based on features like corners, edges, and line segments. In parallel to developments in computer hardware and artificial intelligence (AI) methods, including deep learning (DL), image features can be extracted automatically. As a DL technique, convolutional neural networks (CNNs) can also be used for image classification tasks, but they require large amounts of high-quality training data to obtain accurate results. The main aim of this study was to generate a roof type dataset from very high-resolution (10 cm) orthophotos of Cesme, Turkey, and to classify the roof types using a shallow CNN architecture. The training dataset consists of 10,000 roof images and their labels. Six roof type classes (flat, hip, half-hip, gable, pyramid, and complex) were used for classification in the study area. The prediction performance of the shallow CNN model was compared with results obtained by fine-tuning three well-known pre-trained networks, i.e., VGG-16, EfficientNetB4, and ResNet-50. The results show that although our CNN has slightly lower performance in terms of overall accuracy, it is still acceptable for many applications using sparse data.
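The overall accuracy used to compare the models is simply the trace of the confusion matrix divided by its sum. A minimal sketch for the six roof classes, with made-up labels for illustration:

```python
import numpy as np

CLASSES = ["flat", "hip", "half-hip", "gable", "pyramid", "complex"]

def confusion_matrix(y_true, y_pred, n_classes):
    """cm[t, p] counts samples of true class t predicted as class p."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def overall_accuracy(cm):
    """Fraction of correctly classified samples: trace / total."""
    return cm.trace() / cm.sum()

y_true = [0, 1, 2, 3, 4, 5, 0, 1]
y_pred = [0, 1, 2, 3, 4, 0, 0, 1]  # one 'complex' roof predicted as 'flat'
cm = confusion_matrix(y_true, y_pred, len(CLASSES))
print(overall_accuracy(cm))  # 0.875
```

The off-diagonal cells also show *which* roof types get confused, which overall accuracy alone hides.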


2020 ◽  
Vol 48 (4) ◽  
pp. 899-907
Author(s):  
Vimal Pathak ◽  
Ashish Srivastava ◽  
Sumit Gupta

This paper presents an innovative method to investigate the accuracy and capability of contactless laser scanning systems in terms of geometrical dimensioning and tolerancing (GD&T) control. The work proposes a standard benchmark part with typical features conforming to different families of GD&T. The benchmark part consists of various canonical features widely used in engineering and industrial applications. Further, the adopted approach includes a methodology for comparing geometry using a common alignment for the contactless scanning system and a CMM. In addition, different scanning orientation methods for the contactless system are proposed. Surface reconstruction of the benchmark model is achieved using different reverse engineering software packages, and the results are analyzed to study the correlation between geometries measured by the contact and contactless systems. Taking the contact-based measurement as reference, the models developed were analyzed and compared in terms of geometrical and dimensional tolerances. The proposed standard benchmark part and methodology for GD&T verification provide a simple and effective way to evaluate the performance of various contactless laser scanning systems in terms of deviations.
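With the CMM taken as reference, the deviation analysis reduces to point-wise differences after the common alignment. A minimal sketch of the summary statistics (mean signed deviation and RMS) on synthetic corresponding points; the noise magnitude and the choice of z as the deviation normal are illustrative assumptions.

```python
import numpy as np

def deviation_stats(scan_pts, cmm_pts):
    """Deviations of scanned points from the CMM reference, assuming
    the two clouds are already aligned and in point correspondence."""
    d = np.linalg.norm(scan_pts - cmm_pts, axis=1)   # 3D distances
    signed = scan_pts[:, 2] - cmm_pts[:, 2]          # along a chosen normal (z)
    return signed.mean(), np.sqrt(np.mean(d ** 2))

cmm = np.zeros((100, 3))                             # reference points (toy)
rng = np.random.default_rng(1)
scan = cmm + rng.normal(0, 0.02, cmm.shape)          # noisy scanner points
mean_dev, rms = deviation_stats(scan, cmm)
print(rms < 0.1)  # True
```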


Author(s):  
Chinmay Belthangady ◽  
Loic A. Royer

Deep Learning is a recent and important addition to the computational toolbox available for image reconstruction in fluorescence microscopy. We review state-of-the-art applications such as image restoration, super-resolution, and light-field imaging, and discuss how the latest Deep Learning research can be applied to other image reconstruction tasks such as structured illumination, spectral deconvolution, and sample stabilisation. Despite its successes, Deep Learning also poses significant challenges: its capabilities are often misunderstood and its limits overlooked. We address key questions such as: What are the challenges in obtaining training data? Can we discover structures not present in the training data? And what is the danger of inferring unsubstantiated image details?


2021 ◽  
Author(s):  
Esley Torres ◽  
Raúl Pinto ◽  
Alejandro Linares ◽  
Damián Martínez ◽  
Víctor Abonza ◽  
...  

Mean-Shift Super Resolution (MSSR) is a principle based on the Mean Shift theory that improves the spatial resolution in fluorescence images beyond the diffraction limit. MSSR works on low- and high-density fluorophore images, is not limited by the architecture of the detector (EM-CCD, sCMOS, or photomultiplier-based laser scanning systems) and is applicable to single images as well as temporal series. The theoretical limit of spatial resolution, based on optimized real-world imaging conditions and analysis of temporal image series, has been measured to be 40 nm. Furthermore, MSSR has denoising capabilities that outperform other analytical super resolution image approaches. Altogether, MSSR is a powerful, flexible, and generic tool for multidimensional and live cell imaging applications.
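The mean-shift principle underlying MSSR can be illustrated in one dimension: a point is iteratively moved to the kernel-weighted mean of the surrounding samples, converging on a local density mode. This toy sketch shows the generic algorithm only, not the MSSR implementation.

```python
import numpy as np

def mean_shift_1d(samples, start, bandwidth=0.5, iters=50):
    """Iteratively shift a point to the Gaussian-weighted mean of
    nearby samples; it converges toward a local density mode."""
    x = start
    for _ in range(iters):
        w = np.exp(-0.5 * ((samples - x) / bandwidth) ** 2)
        x = np.sum(w * samples) / np.sum(w)
    return x

rng = np.random.default_rng(0)
samples = rng.normal(3.0, 0.3, 500)   # density peaked near 3.0
mode = mean_shift_1d(samples, start=1.0)
print(abs(mode - 3.0) < 0.2)  # True
```

MSSR applies this mode-seeking idea to pixel intensity neighbourhoods, which is why it needs no special detector hardware.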

