Deep learning based high-resolution reconstruction of trabecular bone microstructures from low-resolution CT scans using GAN-CIRCLE

Author(s):  
Indranil Guha ◽  
Syed Ahmed Nadeem ◽  
Chenyu You ◽  
Xiaoliu Zhang ◽  
Steven M. Levy ◽  
...  
2021 ◽  
Author(s):  
Huan Zhang ◽  
Zhao Zhang ◽  
Haijun Zhang ◽  
Yi Yang ◽  
Shuicheng Yan ◽  
...  

<div>Deep-learning-based image inpainting methods have greatly improved performance thanks to the powerful representation ability of deep networks. However, current deep inpainting methods still tend to produce unreasonable structures and blurry textures, implying that image inpainting remains a challenging topic due to the ill-posed nature of the task. To address these issues, we propose a novel deep multi-resolution learning-based progressive image inpainting method, termed MR-InpaintNet, which takes damaged images of different resolutions as input and then fuses the multi-resolution features to repair the damaged images. The idea is motivated by the fact that images of different resolutions provide different levels of feature information: the low-resolution image provides strong semantic information, while the high-resolution image offers detailed texture information. The middle-resolution image can be used to reduce the gap between the low-resolution and high-resolution images, further refining the inpainting result. To fuse and improve the multi-resolution features, a novel multi-resolution feature learning (MRFL) process is designed, which consists of a multi-resolution feature fusion (MRFF) module, an adaptive feature enhancement (AFE) module and a memory enhanced mechanism (MEM) module for information preservation. The refined multi-resolution features then contain both rich semantic information and detailed texture information from multiple resolutions, and are passed to the decoder to obtain the recovered image. Extensive experiments on the Paris Street View, Places2 and CelebA-HQ datasets demonstrate that the proposed MR-InpaintNet can effectively recover textures and structures, and performs favorably against state-of-the-art methods.</div>
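As a rough illustration of the multi-resolution idea in this abstract (not the authors' learned MRFF module, whose weights and architecture are not given), a minimal NumPy sketch that pools an image to middle and low resolutions and fuses all three levels back at full resolution:

```python
import numpy as np

def avg_pool2(img):
    """Downsample by 2x with 2x2 average pooling."""
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(img, times=1):
    """Upsample by pixel repetition (nearest neighbour)."""
    for _ in range(times):
        img = img.repeat(2, axis=0).repeat(2, axis=1)
    return img

def fuse_multi_resolution(img):
    """Build a three-level pyramid (high / middle / low resolution),
    bring every level back to full resolution, and fuse by averaging.
    A learned network would fuse with convolutions instead of a mean."""
    mid = avg_pool2(img)
    low = avg_pool2(mid)
    return (img + upsample2(mid, 1) + upsample2(low, 2)) / 3.0
```

The low-resolution level carries coarse (semantic) content, the high-resolution level fine texture; the simple average stands in for the learned fusion.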


2020 ◽  
Author(s):  
Marie Déchelle-Marquet ◽  
Marina Levy ◽  
Patrick Gallinari ◽  
Michel Crepon ◽  
Sylvie Thiria

<p>Ocean currents are a major source of impact on climate variability, for instance through the heat transport they induce. Ocean climate models have a rather low resolution of about 50 km, whereas several dynamical processes, such as instabilities and filaments, have a scale of about 1 km and a strong influence on the ocean state. We propose to observe and model these fine-scale effects with deep neural networks, by combining high-resolution satellite SST observations (1 km resolution, daily) with mesoscale-resolution altimetry observations (10 km resolution, weekly). Whereas the downscaling of climate models has commonly been addressed with assimilation approaches, in the last few years neural networks have emerged as a powerful multi-scale analysis method. Besides, the large amount of available oceanic data makes deep learning an attractive way to bridge the gap between scales of variability.</p><p>This study aims at reconstructing the multi-scale variability of oceanic fields, based on the high-resolution NATL60 model, from ocean observations at different spatial resolutions: low-resolution sea surface height (SSH) and high-resolution SST. As the link between residual neural networks and dynamical systems has recently been established, such a network is trained in a supervised way to reconstruct the high variability of SSH and ocean currents at submesoscale (a few kilometers). To ensure that physical properties are conserved in the model outputs, physical knowledge is incorporated into the training of the deep learning models. Different validation methods are investigated and the model outputs are tested with regard to their physical plausibility. The performance of the method is discussed and compared to other baselines (namely a convolutional neural network). The generalization of the proposed method to other ocean variables, such as sea surface chlorophyll or sea surface salinity, is also examined.</p>
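One common way to incorporate physical knowledge into such a training loss is to add a physics penalty to the data misfit. The abstract does not specify the constraint used, so the choice below (penalizing the divergence of the reconstructed surface currents, since near-geostrophic flow is approximately non-divergent) is an illustrative assumption, sketched in NumPy:

```python
import numpy as np

def mse(pred, target):
    """Plain data-misfit term."""
    return float(np.mean((pred - target) ** 2))

def divergence(u, v):
    """Finite-difference divergence du/dx + dv/dy on a regular grid."""
    return np.gradient(u, axis=1) + np.gradient(v, axis=0)

def physics_informed_loss(pred_ssh, target_ssh, pred_u, pred_v, lam=0.1):
    """Supervised SSH misfit plus a soft constraint that the
    reconstructed currents (pred_u, pred_v) stay nearly non-divergent.
    lam weights the physics term against the data term."""
    data_term = mse(pred_ssh, target_ssh)
    phys_term = float(np.mean(divergence(pred_u, pred_v) ** 2))
    return data_term + lam * phys_term
```

In an actual training loop this scalar would be minimized by gradient descent over the network parameters; here it only shows how a physical term enters the objective.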


2021 ◽  
Vol 13 (18) ◽  
pp. 3568
Author(s):  
Bo Ping ◽  
Yunshan Meng ◽  
Cunjin Xue ◽  
Fenzhen Su

Meso- and fine-scale sea surface temperature (SST) is an essential parameter in oceanographic research. Remote sensing is an efficient way to acquire global SST. However, neither infrared-based nor microwave-based satellite-derived SST alone can provide complete coverage and high resolution simultaneously. Deep learning super-resolution (SR) techniques have exhibited the ability to enhance spatial resolution, offering the potential to reconstruct the details of SST fields. Current SR research focuses mainly on improving the structure of the SR model rather than on training dataset selection. Unlike the common practice of generating low-resolution images by downscaling the corresponding high-resolution images, the high- and low-resolution SST here are derived from different sensors. Hence, the structural similarity of training patches may affect SR model training and, consequently, SST reconstruction. In this study, we first discuss the influence of training dataset selection on SST SR performance, showing that a training dataset selected with a structural similarity index (SSIM) threshold of 0.6 yields higher reconstruction accuracy and better image quality. In addition, in the practical stage, the spatial similarity between the low-resolution input and the target high-resolution output is a key factor for SST SR. Moreover, the training dataset obtained from actual AMSR2 and MODIS SST images is more suitable for SST SR because of the skin and sub-skin temperature difference. Finally, the SST reconstruction accuracies obtained from different SR models are relatively consistent, yet the differences in reconstructed image quality are rather significant.
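The SSIM-based selection of training pairs can be sketched as follows. This uses a simplified single-window SSIM over the whole patch (the standard implementation slides a Gaussian window) and assumes each low-resolution patch has already been resampled to the high-resolution grid; function names are illustrative:

```python
import numpy as np

def global_ssim(x, y, c1=1e-4, c2=9e-4):
    """Single-window SSIM between two same-size patches.
    c1, c2 are the usual stabilizing constants for data in [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def select_training_pairs(pairs, threshold=0.6):
    """Keep only (low_res, high_res) patch pairs whose structural
    similarity reaches the threshold (0.6 in the study above)."""
    return [(lr, hr) for lr, hr in pairs if global_ssim(lr, hr) >= threshold]
```

Because the LR and HR SST come from different sensors (AMSR2 vs. MODIS), such a filter discards pairs whose structures genuinely disagree, which would otherwise mislead the SR model.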



2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Jiansheng Peng ◽  
Kui Fu ◽  
Qingjin Wei ◽  
Yong Qin ◽  
Qiwen He

As a representative technology of artificial intelligence, deep-learning-based 3D reconstruction can be integrated into the edge computing framework to form an intelligent edge and thus realize intelligent processing at the edge. Recently, high-resolution representation of 3D objects using a multiview decomposition (MVD) architecture has proved a fast reconstruction method for generating objects with realistic details from a single RGB image. The quality of high-resolution 3D object reconstruction depends on two aspects. On the one hand, the low-resolution reconstruction network must recover a good 3D object from a single RGB image. On the other hand, the high-resolution reconstruction network must refine the coarse low-resolution 3D object into a finely detailed one. To improve these two aspects and further enhance the high-resolution reconstruction capability of the 3D object generation network, we study and improve both the low-resolution 3D generation network and the depth map super-resolution network, obtaining an improved multiview decomposition (IMVD) network. First, we use a 2D image encoder with multifeature fusion (MFF) to enhance the feature extraction capability of the model. Second, a 3D decoder using an efficient subpixel convolutional neural network (3D ESPCN) improves the decoding speed in the decoding stage. Moreover, we design a multiresidual dense block (MRDB) to optimize the depth map super-resolution network, which allows the model to capture more object details and reduces the model parameters by approximately 25% when the number of network layers is doubled. The experimental results show that the proposed IMVD outperforms the original MVD in both the 3D object super-resolution experiment and the high-resolution 3D reconstruction experiment from a single image.
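The subpixel (pixel-shuffle) rearrangement at the heart of an ESPCN decoder can be written in a few lines of NumPy. This sketch shows only the channel-to-space step that follows the last convolution, for a 2D tensor; the paper's 3D ESPCN applies the same idea with an extra depth axis:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) tensor into (C, H*r, W*r):
    each group of r*r channels becomes an r x r block of output pixels,
    so upsampling happens by reshaping rather than by interpolation."""
    c_r2, h, w = x.shape
    assert c_r2 % (r * r) == 0, "channel count must be divisible by r*r"
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)      # split channels into (c, r, r)
    x = x.transpose(0, 3, 1, 4, 2)    # interleave: (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)
```

Because the convolutions run at low resolution and the upscaling is a cheap reshape, decoding is much faster than convolving at full resolution, which is the speedup the abstract refers to.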


2019 ◽  
Author(s):  
Zhou Hang ◽  
Li Shiwei ◽  
Huang Qing ◽  
Liu Shijie ◽  
Quan Tingwei ◽  
...  

Abstract: Deep learning technology enables us to acquire high-resolution images from low-resolution images in biological imaging, free from sophisticated optical hardware. However, current methods require a huge number of precisely registered low-resolution (LR) and high-resolution (HR) volume image pairs, a requirement that is challenging to meet in biological volume imaging. Here, we propose a 3D deep learning network based on a dual generative adversarial network (dual-GAN) framework for recovering HR volume images from LR volume images. Our network avoids learning direct mappings from LR to HR volume image pairs, which would require a precise image registration process, and the cycle-consistent network keeps the predicted HR volume image faithful to its corresponding LR volume image. The proposed method achieves the recovery of 20x/1.0 NA volume images from 5x/0.16 NA volume images collected by light-sheet microscopy, and is in essence applicable to other imaging modalities.
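The cycle-consistency term that removes the need for registered pairs can be sketched as a loss function: mapping a volume through both generators should return it unchanged. The generator arguments below are placeholders standing in for the paper's networks, and the weighting `lam` is an illustrative assumption:

```python
import numpy as np

def l1(a, b):
    """Mean absolute error between two volumes."""
    return float(np.mean(np.abs(a - b)))

def cycle_consistency_loss(g_lr_to_hr, f_hr_to_lr, lr_batch, hr_batch, lam=10.0):
    """Dual-GAN / cycle-consistency term: an LR volume pushed through
    G (LR->HR) and back through F (HR->LR) should reproduce itself,
    and symmetrically for HR volumes. No registered LR/HR pairs needed."""
    loss_lr = l1(f_hr_to_lr(g_lr_to_hr(lr_batch)), lr_batch)
    loss_hr = l1(g_lr_to_hr(f_hr_to_lr(hr_batch)), hr_batch)
    return lam * (loss_lr + loss_hr)
```

In full training this term is added to the adversarial losses of the two discriminators; it is what ties each predicted HR volume back to its own LR input.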

