Nuclei counting in microscopy images with three dimensional generative adversarial networks

Author(s):  
Shuo Han ◽  
Paul Salama ◽  
Kenneth W. Dunn ◽  
Edward J. Delp ◽  
Soonam Lee ◽  
...  
2022 ◽  
Vol 8 ◽  
Author(s):  
Runnan He ◽  
Shiqi Xu ◽  
Yashu Liu ◽  
Qince Li ◽  
Yang Liu ◽  
...  

Medical imaging provides a powerful tool for medical diagnosis. In computer-aided diagnosis and treatment of liver cancer based on medical imaging, accurate segmentation of the liver region from abdominal CT images is an important step. However, owing to defects of liver tissue and limitations of the CT imaging process, the gray level of the liver region in a CT image is heterogeneous, and the boundaries between the liver and adjacent tissues and organs are blurred, which makes liver segmentation an extremely difficult task. In this study, aiming to solve the problem of low segmentation accuracy of the original 3D U-Net network, an improved network based on the three-dimensional (3D) U-Net is proposed. Moreover, to address the problem of insufficient training data caused by the difficulty of acquiring labeled 3D data, the improved 3D U-Net is embedded into the framework of generative adversarial networks (GANs), establishing a semi-supervised 3D liver segmentation optimization algorithm. Finally, considering the poor quality of 3D abdominal fake images generated from random-noise input, a deep convolutional neural network (DCNN) based feature-restoration method is designed to generate more realistic fake images. Testing the proposed algorithm on the LiTS-2017 and KiTS19 datasets shows that the proposed semi-supervised 3D liver segmentation method greatly improves liver segmentation performance, achieving a Dice score of 0.9424 and outperforming other methods.
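The Dice score quoted above is the standard overlap metric for volumetric segmentation. A minimal numpy sketch of how it is computed on 3D binary masks (the toy volumes and epsilon smoothing are illustrative, not from the paper):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary volumes: 2|A∩B| / (|A|+|B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 3D volumes: two 4x4x4 cubes overlapping in 3 of 4 slices.
a = np.zeros((8, 8, 8)); a[2:6, 2:6, 2:6] = 1
b = np.zeros((8, 8, 8)); b[3:7, 2:6, 2:6] = 1
print(round(dice_score(a, b), 4))  # → 0.75
```

A perfect segmentation gives 1.0; the paper's 0.9424 corresponds to a very high voxel-wise overlap with the ground-truth liver mask.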


Symmetry ◽  
2020 ◽  
Vol 12 (10) ◽  
pp. 1705
Author(s):  
Aziz Alotaibi

Many image processing, computer graphics, and computer vision problems can be treated as image-to-image translation tasks. Such translation entails learning to map one visual representation of a given input to another representation. Image-to-image translation with generative adversarial networks (GANs) has been intensively studied and applied to various tasks, such as multimodal image-to-image translation, super-resolution translation, object transfiguration-related translation, etc. However, image-to-image translation techniques suffer from some problems, such as mode collapse, instability, and a lack of diversity. This article provides a comprehensive overview of image-to-image translation based on GAN algorithms and their variants. It also discusses and analyzes current state-of-the-art image-to-image translation techniques that are based on multimodal and multidomain representations. Finally, open issues and future research directions utilizing reinforcement learning and three-dimensional (3D) modal translation are summarized and discussed.
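A key ingredient of unpaired image-to-image translation (e.g., CycleGAN-style methods surveyed here) is the cycle-consistency loss: translating to the other domain and back should recover the input. A minimal numpy sketch, with toy "generators" standing in for the learned networks:

```python
import numpy as np

def cycle_consistency_loss(G, F, x, y) -> float:
    """L1 cycle loss: F(G(x)) should reconstruct x, and G(F(y)) should reconstruct y."""
    return np.abs(F(G(x)) - x).mean() + np.abs(G(F(y)) - y).mean()

# Toy 'generators' that are exact inverses, so the cycle loss is zero.
G = lambda img: img * 2.0   # maps domain X -> Y
F = lambda img: img / 2.0   # maps domain Y -> X
x = np.random.rand(4, 4)
y = np.random.rand(4, 4)
loss = cycle_consistency_loss(G, F, x, y)
print(loss)  # → 0.0
```

In practice this term is added to the adversarial losses of both generators; it is what lets the mapping be learned without paired training images.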


2018 ◽  
Vol 1085 ◽  
pp. 032016 ◽  
Author(s):  
F Carminati ◽  
A Gheata ◽  
G Khattak ◽  
P Mendez Lorenzo ◽  
S Sharan ◽  
...  

2020 ◽  
Vol 10 (2) ◽  
pp. 490 ◽  
Author(s):  
Taeksoo Kim ◽  
Youngmok Cho ◽  
Doojun Kim ◽  
Minho Chang ◽  
Yoon-Ji Kim

The use of intraoral scanners in the field of dentistry is increasing. In orthodontics, the process of tooth segmentation and rearrangement provides the orthodontist with insights into the possibilities and limitations of treatment. Although full-arch scan data acquired using intraoral scanners have high dimensional accuracy, they have some limitations. Intraoral scanners use a stereo-vision system, which has difficulty scanning narrow interdental spaces. These areas, which lack accurate scan data, are called areas of occlusion. Owing to such occlusions, intraoral scanners often fail to acquire data, making the tooth segmentation process challenging. To solve this problem, this study proposes a method of reconstructing occluded areas using a generative adversarial network (GAN). First, areas of occlusion are eliminated, and the scanned data are sectioned along the horizontal plane. Next, a GAN is trained on the resulting images. Finally, the reconstructed two-dimensional (2D) images are stacked into a three-dimensional (3D) volume and merged with the data from which the occlusion areas were removed. Using this method, we obtained an average improvement of 0.004 mm in tooth segmentation, as verified by the experimental results.
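The final merging step described above — stacking GAN-reconstructed 2D slices into a volume and filling only the occluded voxels — can be sketched in numpy as follows (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def merge_reconstruction(original: np.ndarray, slices: list,
                         occlusion_mask: np.ndarray) -> np.ndarray:
    """Stack reconstructed 2D slices into a 3D volume (depth, H, W) and
    replace only the voxels flagged as occluded, keeping the scan data elsewhere."""
    reconstructed = np.stack(slices, axis=0)
    return np.where(occlusion_mask, reconstructed, original)

# Toy example: a 2-slice volume with a single occluded voxel.
original = np.zeros((2, 3, 3))
slices = [np.ones((3, 3)), np.ones((3, 3))]   # GAN output, one image per section
mask = np.zeros((2, 3, 3), dtype=bool)
mask[0, 1, 1] = True                           # the one voxel that was occluded
merged = merge_reconstruction(original, slices, mask)
```

Only masked voxels take the reconstructed values, so the high-accuracy scanned data are left untouched.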


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Bastian Rühle ◽  
Julian Frederic Krumrey ◽  
Vasile-Dan Hodoroaba

We present a workflow for obtaining fully trained artificial neural networks that can perform automatic particle segmentation of agglomerated, non-spherical nanoparticles in scanning electron microscopy (SEM) images "from scratch", without the need for large training data sets of manually annotated images. The whole process requires only about 15 min of hands-on time by a user and can typically be finished within less than 12 h when training on a single graphics card (GPU). After training, SEM image analysis can be carried out by the artificial neural network within seconds. This is achieved by using unsupervised learning for most of the training dataset generation, making heavy use of generative adversarial networks and especially unpaired image-to-image translation via cycle-consistent adversarial networks. We compare the segmentation masks obtained with our suggested workflow qualitatively and quantitatively to state-of-the-art methods using various metrics. Finally, we used the segmentation masks to automatically extract particle size distributions from SEM images of TiO2 particles; these were in excellent agreement with particle size distributions obtained manually but could be obtained in a fraction of the time.
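The last step — extracting a particle size distribution from a binary segmentation mask — typically labels connected components and reports an area-equivalent diameter per particle. A self-contained sketch using a simple 4-connected flood fill (the paper's actual pipeline is not specified here; this only illustrates the measurement):

```python
import numpy as np
from collections import deque

def particle_equivalent_diameters(mask: np.ndarray, pixel_size_nm: float = 1.0):
    """Label 4-connected particles in a binary mask and return, per particle,
    the diameter of a circle with the same pixel area."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    diameters = []
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue                      # pixel already assigned to a particle
        current += 1
        labels[start] = current
        area = 0
        queue = deque([start])
        while queue:                      # breadth-first flood fill
            r, c = queue.popleft()
            area += 1
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    queue.append((nr, nc))
        diameters.append(2.0 * np.sqrt(area / np.pi) * pixel_size_nm)
    return diameters

# Toy mask: one 2x2 particle and one single-pixel particle.
mask = np.zeros((5, 5), dtype=bool)
mask[0:2, 0:2] = True
mask[4, 4] = True
d = particle_equivalent_diameters(mask)
```

Binning the returned diameters into a histogram gives the particle size distribution compared against manual measurements in the paper.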


Author(s):  
Yitong Li ◽  
Yue Chen ◽  
Y. Shi

Brain tumors have high morbidity and may lead to highly lethal cancer. In clinics, accurate segmentation of tumors is the basis for diagnosis and determination of subsequent treatment options. Owing to the irregularity and blurring of tumor boundaries, accurately segmenting tumor lesions has received extensive attention in medical image analysis. In view of this situation, this paper proposes a brain tumor segmentation method based on generative adversarial networks (GANs). The GAN architecture consists of a densely connected three-dimensional (3D) U-Net used for segmentation and a classification network used for discrimination, both of which use 3D convolutions to fuse multi-dimensional context information. The densely connected 3D U-Net introduces dense connections to accelerate network convergence and extract more detailed information. Adversarial training pushes the distribution of segmentation results closer to that of the labeled data, enabling the network to segment some unexpected small tumor subregions. The two networks are trained alternately, finally achieving highly accurate classification of each voxel. Experiments conducted on the BraTS2017 brain tumor MRI dataset show that the proposed method achieves higher accuracy in brain tumor segmentation.
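The "dense connection" pattern mentioned above feeds each layer the concatenation of all earlier feature maps, which shortens gradient paths and reuses features. A minimal numpy sketch of the channel bookkeeping (the averaging "layers" are toy stand-ins for 3D convolutions):

```python
import numpy as np

def dense_block(x: np.ndarray, layers) -> np.ndarray:
    """Densely connected block: each layer receives the channel-wise
    concatenation of the input and all previous layer outputs (axis 0)."""
    features = [x]
    for layer in layers:
        out = layer(np.concatenate(features, axis=0))
        features.append(out)
    return np.concatenate(features, axis=0)

# Toy 'layer': maps any input to 2 new channels by averaging over channels.
make_layer = lambda: (lambda f: np.stack([f.mean(axis=0)] * 2))
x = np.random.rand(4, 8, 8)                     # (channels, H, W) input
y = dense_block(x, [make_layer() for _ in range(3)])
print(y.shape)  # → (10, 8, 8): channels grow as 4 + 2 + 2 + 2
```

In the paper's network the same growth pattern applies to 3D feature volumes inside the U-Net encoder/decoder; the discriminator then scores whole segmentation maps during the alternating adversarial training.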

