A Multi-Stage GAN for Multi-Organ Chest X-ray Image Generation and Segmentation

Mathematics ◽  
2021 ◽  
Vol 9 (22) ◽  
pp. 2896
Author(s):  
Giorgio Ciano ◽  
Paolo Andreini ◽  
Tommaso Mazzierli ◽  
Monica Bianchini ◽  
Franco Scarselli

Multi-organ segmentation of X-ray images is of fundamental importance for computer-aided diagnosis systems. However, the most advanced semantic segmentation methods rely on deep learning and require a huge number of labeled images, which are rarely available due to both the high cost of human resources and the time required for labeling. In this paper, we present a novel multi-stage generation algorithm based on Generative Adversarial Networks (GANs) that can produce synthetic images along with their semantic labels and can be used for data augmentation. The main feature of the method is that, unlike other approaches, generation occurs in several stages, which simplifies the procedure and allows it to be used on very small datasets. The method was evaluated on the segmentation of chest radiographic images, showing promising results. The multi-stage approach achieves state-of-the-art performance and, when very few images are used to train the GANs, outperforms the corresponding single-stage approach.
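The staged idea (first generate a semantic label map, then an image conditioned on it) can be sketched as follows. This is a minimal toy illustration, not the paper's actual architecture: both stages stand in for trained GAN generators, and all function names, class labels, and the 32×32 layout are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def stage1_generate_label(noise: np.ndarray) -> np.ndarray:
    """Hypothetical stage 1: map a noise vector to a multi-organ label map
    (0 = background, 1 = lungs, 2 = heart). A trained GAN would do this."""
    h, w = 32, 32
    label = np.zeros((h, w), dtype=np.int64)
    # toy placement driven by the noise vector
    cy = int(8 + 8 * abs(noise[0]) % 16)
    cx = int(8 + 8 * abs(noise[1]) % 16)
    label[cy - 4:cy + 4, cx - 6:cx - 2] = 1   # "left lung"
    label[cy - 4:cy + 4, cx + 2:cx + 6] = 1   # "right lung"
    label[cy - 2:cy + 2, cx - 2:cx + 2] = 2   # "heart"
    return label

def stage2_generate_image(label: np.ndarray) -> np.ndarray:
    """Hypothetical stage 2: translate the label map into a grayscale image,
    here just per-class intensities plus noise."""
    intensity = np.array([0.1, 0.5, 0.8])    # per-class base intensity
    img = intensity[label] + 0.05 * rng.standard_normal(label.shape)
    return np.clip(img, 0.0, 1.0)

# The resulting (image, label) pair can augment a segmentation training set.
z = rng.standard_normal(2)
label = stage1_generate_label(z)
image = stage2_generate_image(label)
```

Separating the stages means each generator solves a simpler task, which is plausibly why the approach degrades more gracefully on very small datasets.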

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 86536-86544 ◽  
Author(s):  
Yue Zhu ◽  
Yutao Zhang ◽  
Haigang Zhang ◽  
Jinfeng Yang ◽  
Zihao Zhao

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 28894-28902 ◽  
Author(s):  
Jinfeng Yang ◽  
Zihao Zhao ◽  
Haigang Zhang ◽  
Yihua Shi

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 153535-153545
Author(s):  
Faizan Munawar ◽  
Shoaib Azmat ◽  
Talha Iqbal ◽  
Christer Gronlund ◽  
Hazrat Ali

Author(s):  
Songmin Dai ◽  
Xiaoqiang Li ◽  
Lu Wang ◽  
Pin Wu ◽  
Weiqin Tong ◽  
...  

An instance with a bad mask might make a composite image that uses it look fake. This observation encourages learning segmentation by generating realistic composite images. To achieve this, we propose a novel framework, based on Generative Adversarial Networks (GANs), that exploits a newly proposed prior called the independence prior. The generator produces an image with multiple category-specific instance providers, a layout module, and a composition module. First, each provider independently outputs a category-specific instance image with a soft mask. Then, the layout module corrects the poses of the provided instances. Finally, the composition module combines these instances into a final image. Trained with an adversarial loss and a penalty on mask area, each provider learns a mask that is as small as possible yet large enough to cover a complete category-specific instance. Weakly supervised semantic segmentation methods widely use grouping cues that model associations between image parts; these cues are either hand-designed, learned with costly segmentation labels, or modeled only on local pairs. In contrast, our method automatically models the dependence between arbitrary parts and learns instance segmentation. We apply our framework in two cases: (1) foreground segmentation on category-specific images with box-level annotation, and (2) unsupervised learning of instance appearances and masks from only one image of a homogeneous object cluster (HOC). We obtain appealing results in both tasks, which shows that the independence prior is useful for instance segmentation and that instance masks can be learned without supervision from only one image.
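The generator structure described here (per-instance soft masks, then composition, with a mask-area penalty) can be sketched as below. Both functions are hypothetical simplifications: the layout module and the adversarial discriminator are omitted, and `lam` is an assumed penalty weight.

```python
import numpy as np

def compose(background, instances, masks):
    """Layer category-specific instances onto a background using soft masks,
    a simplified stand-in for the paper's composition module."""
    out = background.copy()
    for inst, m in zip(instances, masks):
        out = m * inst + (1.0 - m) * out   # soft alpha compositing
    return out

def mask_area_penalty(masks, lam=0.01):
    """Penalty on total mask area; combined with the adversarial loss it
    pushes each provider toward the smallest mask that still covers a
    complete instance."""
    return lam * sum(float(m.sum()) for m in masks)

bg = np.zeros((8, 8))
inst = np.ones((8, 8))
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1.0                       # soft mask (binary here for clarity)
img = compose(bg, [inst], [mask])
```

The tension between the two losses is the key design choice: shrinking the mask lowers the penalty, but cutting into the instance makes the composite look fake to the discriminator.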


2020 ◽  
Vol 13 (1) ◽  
pp. 8
Author(s):  
Sagar Kora Venu ◽  
Sridhar Ravula

Medical image datasets are usually imbalanced due to the high cost of obtaining the data and the time-consuming annotation. Training a deep neural network on such datasets to accurately classify a medical condition does not yield the desired results, as the models often overfit the majority-class samples. Data augmentation is commonly applied to the training data to address this issue, using position techniques such as scaling, cropping, flipping, padding, rotation, translation, and affine transformation, and color techniques such as brightness, contrast, saturation, and hue adjustment to increase the dataset size. Radiologists generally use chest X-rays for the diagnosis of pneumonia; due to patient privacy concerns, access to such data is often restricted. In this study, we performed data augmentation on a chest X-ray dataset, generating artificial chest X-ray images of the under-represented class through a generative modeling technique, the Deep Convolutional Generative Adversarial Network (DCGAN). Starting from just 1341 chest X-ray images labeled as Normal, this technique created artificial samples that retain characteristics similar to the original data. Evaluating the model resulted in a Fréchet Inception Distance (FID) score of 1.289. We further show the superior performance of a CNN classifier trained on the DCGAN-augmented dataset.
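The reported FID score compares the statistics of real and generated images in a feature space. As a hedged illustration, the sketch below evaluates the FID formula for the special case of diagonal covariances; real FID uses the full covariance matrices of Inception-v3 features, which requires a matrix square root.

```python
import numpy as np

def fid_diagonal(mu1, var1, mu2, var2):
    """Fréchet Inception Distance between two Gaussians,
        FID = ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 (C1 C2)^(1/2)),
    specialized to diagonal covariances C = diag(var), where the trace
    term reduces to sum(var1 + var2 - 2 * sqrt(var1 * var2))."""
    mu1, var1 = np.asarray(mu1, float), np.asarray(var1, float)
    mu2, var2 = np.asarray(mu2, float), np.asarray(var2, float)
    mean_term = np.sum((mu1 - mu2) ** 2)
    cov_term = np.sum(var1 + var2 - 2.0 * np.sqrt(var1 * var2))
    return float(mean_term + cov_term)

# Identical distributions score 0; the score grows as statistics diverge.
assert fid_diagonal([0, 0], [1, 1], [0, 0], [1, 1]) == 0.0
```

Lower is better: a score of 0 means the generated feature distribution matches the real one exactly under this Gaussian model.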


2021 ◽  
pp. 115681
Author(s):  
Daniel Iglesias Morís ◽  
José Joaquim de Moura Ramos ◽  
Jorge Novo Buján ◽  
Marcos Ortega Hortas

2021 ◽  
Author(s):  
Saman Motamed ◽  
Patrik Rogalla ◽  
Farzad Khalvati

Abstract: Successful training of convolutional neural networks (CNNs) requires a substantial amount of data; with small datasets, networks generalize poorly. Data augmentation techniques improve the generalizability of neural networks by using the existing training data more effectively, but standard augmentation methods produce only a limited range of plausible alternative data. Generative Adversarial Networks (GANs) have been utilized to generate new data and improve the performance of CNNs. Nevertheless, data augmentation techniques for training GANs are under-explored compared to those for CNNs. In this work, we propose a new GAN architecture for augmenting chest X-rays for semi-supervised detection of pneumonia and COVID-19 using generative models. We show that the proposed GAN can effectively augment data and improve the classification accuracy of disease in chest X-rays for pneumonia and COVID-19. We compare our augmentation GAN with a Deep Convolutional GAN and with traditional augmentation methods (e.g., rotation and zoom) on two different X-ray datasets, and show that our GAN-based augmentation surpasses the other methods for training a GAN to detect anomalies in X-ray images.
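The "traditional augmentation" baseline mentioned above can be sketched as follows. This is only illustrative; the study's actual transforms and parameters are not specified here, so the sketch limits itself to 90° rotations and horizontal flips.

```python
import numpy as np

def traditional_augment(img, rng):
    """Toy traditional augmentation: a random 90-degree rotation followed by
    an optional horizontal flip. Real pipelines typically add zoom and
    small-angle rotations as well."""
    out = np.rot90(img, k=rng.integers(0, 4))
    if rng.random() < 0.5:
        out = np.fliplr(out)
    return out

rng = np.random.default_rng(42)
x = np.arange(16.0).reshape(4, 4)           # stand-in for a chest X-ray
batch = [traditional_augment(x, rng) for _ in range(8)]
```

Each augmented image is only a rearrangement of the original pixels, which is precisely the limitation the abstract points at: unlike a GAN, such transforms cannot produce genuinely new plausible samples.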

