PIC-GAN: A Parallel Imaging Coupled Generative Adversarial Network for Accelerated Multi-Channel MRI Reconstruction

Diagnostics ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. 61
Author(s):  
Jun Lv ◽  
Chengyan Wang ◽  
Guang Yang

In this study, we proposed a model combining parallel imaging (PI) with a generative adversarial network (GAN) architecture (PIC-GAN) for accelerated multi-channel magnetic resonance imaging (MRI) reconstruction. This model integrated data fidelity and regularization terms into the generator to benefit from multi-coil information and provide an “end-to-end” reconstruction. In addition, to better preserve image details during reconstruction, we combined the adversarial loss with pixel-wise losses in both the image and frequency domains. The proposed PIC-GAN framework was evaluated on abdominal and knee MRI images using 2-, 4- and 6-fold accelerations with different undersampling patterns. The performance of PIC-GAN was compared to sparsity-based parallel imaging (L1-ESPIRiT), the variational network (VN), and a conventional GAN with single-channel images as input (zero-filled (ZF)-GAN). Experimental results show that our PIC-GAN can effectively reconstruct multi-channel MR images at a low noise level and with improved structural similarity. At an under-sampling factor of 6, PIC-GAN yielded the lowest Normalized Mean Square Error (in ×10⁻⁵) (PIC-GAN: 0.58 ± 0.37, ZF-GAN: 1.93 ± 1.41, VN: 1.87 ± 1.28, L1-ESPIRiT: 2.49 ± 1.04 for abdominal MRI data; PIC-GAN: 0.80 ± 0.26, ZF-GAN: 0.93 ± 0.29, VN: 1.18 ± 0.31, L1-ESPIRiT: 1.28 ± 0.24 for knee MRI data) and the highest Peak Signal to Noise Ratio (PIC-GAN: 34.43 ± 1.92, ZF-GAN: 31.45 ± 4.0, VN: 29.26 ± 2.98, L1-ESPIRiT: 25.40 ± 1.88 for abdominal MRI data; PIC-GAN: 34.10 ± 1.09, ZF-GAN: 31.47 ± 1.05, VN: 30.01 ± 1.01, L1-ESPIRiT: 28.01 ± 0.98 for knee MRI data) compared to ZF-GAN, VN and L1-ESPIRiT. The proposed PIC-GAN framework showed superior reconstruction performance in reducing aliasing artifacts and restoring tissue structures compared to the other conventional and state-of-the-art reconstruction methods.
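The combined objective the abstract describes, an adversarial loss plus pixel-wise losses in both the image and frequency (k-space) domains, can be sketched as follows. This is a minimal NumPy illustration: the function name, the weights, and the use of an L2 penalty in both domains are assumptions, not the paper's exact formulation.

```python
import numpy as np

def pic_gan_style_loss(recon, target, adv_term, alpha=1.0, beta=1.0, gamma=0.01):
    """Illustrative combination of image-domain and frequency-domain
    pixel-wise losses with an adversarial term, loosely following the
    PIC-GAN description. alpha/beta/gamma are placeholder weights."""
    # Pixel-wise L2 loss in the image domain.
    image_loss = np.mean(np.abs(recon - target) ** 2)
    # Pixel-wise L2 loss in the frequency (k-space) domain.
    k_recon, k_target = np.fft.fft2(recon), np.fft.fft2(target)
    freq_loss = np.mean(np.abs(k_recon - k_target) ** 2)
    return alpha * image_loss + beta * freq_loss + gamma * adv_term
```

In a real training loop the adversarial term would come from the discriminator's output for the reconstructed image; here it is passed in precomputed to keep the sketch self-contained.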

2021 ◽  
Vol 263 (5) ◽  
pp. 1527-1538
Author(s):  
Xenofon Karakonstantis ◽  
Efren Fernandez Grande

The characterization of Room Impulse Responses (RIRs) over an extended region of a room by means of measurements requires dense spatial sampling with many microphones, which can become intractable and time consuming in practice. Well-established reconstruction methods such as plane wave regression show that the sound field in a room can be reconstructed from sparsely distributed measurements. However, these reconstructions usually rely on assuming physical sparsity (i.e., that few waves compose the sound field) or some other trait of the measured sound field, making the models less generalizable and problem specific. In this paper we introduce a method to reconstruct a sound field in an enclosure using a Generative Adversarial Network (GAN), which synthesizes new variants of the data distributions it is trained upon. The goal of the proposed GAN model is to estimate the underlying distribution of plane waves in any source-free region and to map these distributions from a stochastic latent representation. The GAN is trained on a large number of synthesized sound fields represented by random wave fields and then tested on both simulated and real data sets of lightly damped and reverberant rooms.
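The synthesized training data described above, random wave fields formed by superposing plane waves, can be sketched like this for a 2-D source-free region. The function name, the normalization, and the uniform sampling of angles and phases are illustrative assumptions, not the authors' code.

```python
import numpy as np

def random_wave_field(grid, k, n_waves=100, rng=None):
    """Superpose n_waves plane waves with random propagation angles
    and phases at wavenumber k — a common model of a diffuse sound
    field, illustrative of the training data the abstract describes."""
    rng = np.random.default_rng(rng)
    theta = rng.uniform(0, 2 * np.pi, n_waves)   # propagation angles (2-D)
    phase = rng.uniform(0, 2 * np.pi, n_waves)   # random phases
    kx, ky = k * np.cos(theta), k * np.sin(theta)
    x, y = grid                                   # arrays of sample positions
    # Sum of complex plane waves evaluated at each position.
    field = np.exp(1j * (kx[:, None] * x[None, :]
                         + ky[:, None] * y[None, :]
                         + phase[:, None]))
    return field.sum(axis=0) / np.sqrt(n_waves)   # normalized superposition
```

A generator trained on many such realizations sees a wide variety of interference patterns without assuming sparsity in the number of waves.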


2021 ◽  
Author(s):  
Ziyu Li ◽  
Qiyuan Tian ◽  
Chanon Ngamsombat ◽  
Samuel Cartmell ◽  
John Conklin ◽  
...  

Purpose: To improve the signal-to-noise ratio (SNR) of highly accelerated volumetric MRI while preserving realistic textures using a generative adversarial network (GAN). Methods: A hybrid denoising GAN named "HDnGAN", with a 3D generator and a 2D discriminator, was proposed to denoise 3D T2-weighted fluid-attenuated inversion recovery (FLAIR) images acquired in 2.75 minutes (R = 3×2) using wave-controlled aliasing in parallel imaging (Wave-CAIPI). HDnGAN was trained on data from 25 multiple sclerosis patients by minimizing a combined mean squared error and adversarial loss with adjustable weight λ. Results were evaluated on eight separate patients by comparison with standard T2-SPACE FLAIR images acquired in 7.25 minutes (R = 2×2), using mean absolute error (MAE), peak SNR (PSNR), structural similarity index (SSIM), and VGG perceptual loss, and by two neuroradiologists using a five-point score for gray-white matter contrast, sharpness, SNR, lesion conspicuity, and overall quality. Results: HDnGAN (λ = 0) produced the lowest MAE and the highest PSNR and SSIM. HDnGAN (λ = 10⁻³) produced the lowest VGG loss. In the reader study, HDnGAN (λ = 10⁻³) significantly improved the gray-white contrast and SNR of Wave-CAIPI images and outperformed BM4D and HDnGAN (λ = 0) in image sharpness. The overall quality score of HDnGAN (λ = 10⁻³) was significantly higher than those of Wave-CAIPI, BM4D, and HDnGAN (λ = 0), with no significant difference from the standard images. Conclusion: HDnGAN concurrently benefits from the improved image-synthesis performance of 3D convolution and from the increased number of samples available for training the 2D discriminator on limited data. HDnGAN generates images with high SNR and realistic textures, similar to those acquired with longer scan times and preferred by neuroradiologists.
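The combined objective in Methods, mean squared error plus an adversarial term weighted by λ, can be written down minimally as below. The function name is hypothetical and the adversarial term is passed in precomputed; note how λ = 0 reduces training to pure MSE (the best-PSNR/SSIM setting in the abstract), while λ = 10⁻³ adds the texture-preserving adversarial term.

```python
import numpy as np

def hdngan_generator_loss(denoised, reference, adv_term, lam=1e-3):
    """Sketch of the combined generator objective described for
    HDnGAN: L = MSE + λ · L_adv. With lam=0 this is plain MSE
    training; lam=1e-3 trades pixel fidelity for realistic texture."""
    mse = np.mean((denoised - reference) ** 2)
    return mse + lam * adv_term
```

The single scalar λ is the only knob the abstract describes for moving between the smooth, high-PSNR regime and the perceptually sharper regime preferred by the readers.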


2021 ◽  
Vol 11 (19) ◽  
pp. 9065
Author(s):  
Myungjin Choi ◽  
Jee-Hyeok Park ◽  
Qimeng Zhang ◽  
Byeung-Sun Hong ◽  
Chang-Hun Kim

We propose a novel method for efficiently generating a highly refined normal map for screen-space fluid rendering. Because filtering the normal map is crucial to the quality of the final screen-space fluid rendering, we employ a conditional generative adversarial network (cGAN) as a filter that learns a deep normal map representation, thereby refining the low-quality normal map. In particular, we designed a novel loss function dedicated to refining the normal map information, and we use a specific set of auxiliary features to train the cGAN generator to learn features that are more robust with respect to edge details. Additionally, we constructed a dataset of six different typical scenes to enable effective demonstrations of multitype fluid simulation. Experiments indicated that our generator was able to infer clearer and more detailed features for this dataset than a basic screen-space fluid rendering method. Moreover, in some cases, the results generated by our method were even smoother than those generated by the conventional surface reconstruction method. Our method improves the fluid rendering results via the high-quality normal map while preserving the advantages of both screen-space fluid rendering and traditional surface reconstruction, namely computation time that is independent of the number of simulation particles and spatial resolution that depends only on the image resolution.
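A common way to obtain the kind of low-quality screen-space normal map that such a filter would refine is to differentiate the depth buffer. The sketch below is a generic NumPy illustration of that standard step, not the paper's pipeline; the function name and finite-difference scheme are assumptions.

```python
import numpy as np

def normals_from_depth(depth):
    """Estimate a screen-space normal map from a depth buffer via
    finite differences — a generic sketch of the noisy input a
    normal-map refinement filter would receive."""
    dz_dx = np.gradient(depth, axis=1)   # horizontal depth gradient
    dz_dy = np.gradient(depth, axis=0)   # vertical depth gradient
    # View-space normal from the depth gradients, then normalize.
    n = np.stack([-dz_dx, -dz_dy, np.ones_like(depth)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)
```

Because this estimate is computed per pixel from a noisy depth buffer, it inherits splatting artifacts, which is what motivates a learned filtering stage.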


2021 ◽  
Author(s):  
NAND YADAV ◽  
Satish Kumar Singh ◽  
Shiv Ram Dubey

In the recent advancement of machine learning methods for realistic image generation and image translation, Generative Adversarial Networks (GANs) play a vital role. A GAN generates novel samples that look indistinguishable from real images, and image translation with a GAN can be learned in an unsupervised manner. In this paper, we translate thermal images into visible images. Thermal-to-visible image translation is challenging due to the lack of accurate semantic information and smooth textures. Thermal images contain only a single channel, holding only the image's luminance with few distinguishing features. We develop a new Cyclic Attention-based Generative Adversarial Network for Thermal-to-Visible Face transformation (TVA-GAN) by incorporating a new attention-based network. We use attention guidance with a recurrent block through an Inception module to narrow the learning space towards the optimal solution.
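The attention guidance described above, reweighting feature maps by a learned mask so the network focuses its capacity, can be illustrated minimally as below. This is a toy sketch: `attention_gate` is a hypothetical name, and TVA-GAN's actual recurrent block and Inception module are not reproduced.

```python
import numpy as np

def sigmoid(x):
    """Squash values into (0, 1) for use as an attention mask."""
    return 1.0 / (1.0 + np.exp(-x))

def attention_gate(features, guidance):
    """Toy attention gating: a guidance signal is mapped to (0, 1)
    and used to reweight feature maps element-wise — a generic
    illustration of attention guidance, not TVA-GAN itself."""
    mask = sigmoid(guidance)
    return features * mask
```

In a full model the guidance signal would itself be produced by learned layers (here it is just an input array), and the mask would suppress regions irrelevant to the translation.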

