A Novel Brain Image Segmentation Method Using an Improved 3D U-Net Model

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Zhuqing Yang

Medical image segmentation (IS) is a research field in image processing. Deep learning methods are used to automatically segment organs, tissues, or tumor regions in medical images, which can assist doctors in diagnosing diseases. Since most IS models based on convolutional neural networks (CNNs) are two-dimensional, they are not suitable for three-dimensional medical images; conversely, three-dimensional segmentation models suffer from complex network structures and a large amount of computation. Therefore, this study introduces the self-excited compressed dilated convolution (SECDC) module on the basis of the 3D U-Net network and proposes an improved 3D U-Net model. In the SECDC module, 1 × 1 × 1 convolutions reduce the computational cost of the model, and combining normal convolution with dilated convolution at a dilation rate of 2 extracts multi-view features of the image. At the same time, the 3D squeeze-and-excitation (3D-SE) module automatically learns the importance of each channel. Experimental results on the BraTS2019 dataset show that the model achieves Dice coefficients of 0.87 for the whole tumor, 0.84 for the tumor core, and 0.80 for the hardest-to-segment enhancing tumor. These evaluation indicators show that the improved 3D U-Net model greatly reduces the amount of computation while achieving better segmentation results, and that the model is more robust. The model can meet the clinical needs of brain tumor segmentation.
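The Dice coefficient reported above measures volume overlap between a predicted mask and the ground truth. As a minimal illustration (the function name and the toy masks are ours, not the paper's), it can be computed on flattened binary masks as:

```python
def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity between two flat binary masks (lists of 0/1)."""
    intersection = sum(p * t for p, t in zip(pred, target))
    return (2.0 * intersection + eps) / (sum(pred) + sum(target) + eps)

pred   = [1, 1, 0, 0, 1]
target = [1, 0, 0, 0, 1]
print(round(dice_coefficient(pred, target), 3))  # 2*2 / (3+2) = 0.8
```

A Dice value of 1.0 means perfect overlap, 0.0 means disjoint masks; the small epsilon guards against empty masks.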

Author(s):  
Shaohua Li ◽  
Xiuchao Sui ◽  
Xiangde Luo ◽  
Xinxing Xu ◽  
Yong Liu ◽  
...  

Medical image segmentation is important for computer-aided diagnosis. Good segmentation demands that the model see the big picture and fine details simultaneously, i.e., learn image features that incorporate large context while keeping high spatial resolution. To approach this goal, the most widely used methods -- U-Net and its variants -- extract and fuse multi-scale features. However, the fused features still have small "effective receptive fields" with a focus on local image cues, limiting their performance. In this work, we propose Segtran, an alternative segmentation framework based on transformers, which have unlimited "effective receptive fields" even at high feature resolutions. The core of Segtran is a novel Squeeze-and-Expansion transformer: a squeezed attention block regularizes the self-attention of transformers, and an expansion block learns diversified representations. Additionally, we propose a new positional encoding scheme for transformers that imposes a continuity inductive bias for images. Experiments were performed on 2D and 3D medical image segmentation tasks: optic disc/cup segmentation in fundus images (REFUGE'20 challenge), polyp segmentation in colonoscopy images, and brain tumor segmentation in MRI scans (BraTS'19 challenge). Compared with representative existing methods, Segtran consistently achieved the highest segmentation accuracy and exhibited good cross-domain generalization capabilities.
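As a rough sketch of the inducing-point idea behind squeezed attention (an illustrative simplification in plain Python, not Segtran's actual implementation): a small set of M inducing vectors first attends to all N tokens, and the tokens then attend back to the M summaries, so the full N × N attention map is never formed:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Plain scaled dot-product attention over small Python lists."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

def squeezed_attention(tokens, inducing):
    # "Squeeze": the small inducing set attends to all N tokens.
    summary = attention(inducing, tokens, tokens)
    # "Expand": each token attends only to the M summaries.
    return attention(tokens, summary, summary)

tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # N = 3 toy tokens
inducing = [[0.5, 0.5], [1.0, 0.0]]             # M = 2 inducing vectors
out = squeezed_attention(tokens, inducing)
```

Routing through M ≪ N summaries reduces the attention cost from O(N²) to O(NM), which is the sense in which the squeezed block regularizes self-attention.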


Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 523
Author(s):  
Kh Tohidul Islam ◽  
Sudanthi Wijewickrema ◽  
Stephen O’Leary

Multi-modal three-dimensional (3-D) image segmentation is used in many medical applications, such as disease diagnosis, treatment planning, and image-guided surgery. Although multi-modal images provide information that no single image modality alone can provide, integrating such information to be used in segmentation is a challenging task. Numerous methods have been introduced to solve the problem of multi-modal medical image segmentation in recent years. In this paper, we propose a solution for the task of brain tumor segmentation. To this end, we first introduce a method of enhancing an existing magnetic resonance imaging (MRI) dataset by generating synthetic computed tomography (CT) images. Then, we discuss a process of systematic optimization of a convolutional neural network (CNN) architecture that uses this enhanced dataset, in order to customize it for our task. Using publicly available datasets, we show that the proposed method outperforms similar existing methods.
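Before a CNN can consume multi-modal input, the modalities are typically normalized independently and stacked channel-wise per voxel. A minimal sketch, assuming simple min-max normalization and flattened volumes (the function names and toy intensities are illustrative, not the paper's pipeline):

```python
def min_max_normalize(volume):
    """Scale one modality's intensities to [0, 1] (flat list of voxels)."""
    lo, hi = min(volume), max(volume)
    if hi == lo:
        return [0.0] * len(volume)
    return [(v - lo) / (hi - lo) for v in volume]

def stack_modalities(*volumes):
    """Normalize each modality independently, then stack voxel-wise
    into multi-channel inputs, e.g. (MRI, synthetic CT) per voxel."""
    normalized = [min_max_normalize(v) for v in volumes]
    return list(zip(*normalized))

mri = [100, 200, 300]        # toy MRI intensities
ct  = [-1000, 0, 1000]       # toy CT Hounsfield-style values
print(stack_modalities(mri, ct))
# [(0.0, 0.0), (0.5, 0.5), (1.0, 1.0)]
```

Per-modality normalization matters because MRI and CT intensities live on very different scales; without it, one channel would dominate the network's input statistics.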


2018 ◽  
Vol 2018 ◽  
pp. 1-15
Author(s):  
Chuin-Mu Wang ◽  
Chieh-Ling Huang ◽  
Sheng-Chih Yang

Three-dimensional (3D) medical image segmentation is used to segment a target (a lesion or an organ) in 3D medical images. Through this process, 3D target information is obtained; hence, this technology is an important auxiliary tool for medical diagnosis. Although some methods have proved successful for two-dimensional (2D) image segmentation, their direct application to the 3D case has been unsatisfactory. To obtain more precise tumor segmentation results from 3D MR images, in this paper we propose the 3D shape-weighted level set method (3D-SLSM). The proposed method first converts the level set method (LSM), which performs well for 2D image segmentation, into a 3D algorithm suitable for computation over whole 3D image volumes, improving both the efficiency and the accuracy of the calculations. A 3D shape-weighted value is then added at each 3D-SLSM iteration according to changes in volume. Besides increasing the convergence rate and eliminating background noise, this shape-weighted value also brings the segmented contour closer to the actual tumor margins. To quantitatively analyze 3D-SLSM and examine its feasibility for clinical applications, we divided our experiments into computer-simulated image sequences and actual breast MRI cases, and compared 3D-SLSM with various existing 3D segmentation methods. The experimental results demonstrate that 3D-SLSM produces precise segmentation results for both types of experimental images and yields better quantitative results than existing 3D segmentation methods.
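A single explicit level-set update illustrates the basic evolution that 3D-SLSM builds on (shown here in 2D for brevity, without the shape-weighted term; the grid size, speed, and time step are arbitrary choices of ours):

```python
import math

def level_set_step(phi, speed, dt=0.5):
    """One explicit update of phi_t + F * |grad(phi)| = 0 on a 2D grid,
    using central differences; the zero level set moves along its normal."""
    rows, cols = len(phi), len(phi[0])
    new = [row[:] for row in phi]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            gx = (phi[i][j + 1] - phi[i][j - 1]) / 2.0
            gy = (phi[i + 1][j] - phi[i - 1][j]) / 2.0
            new[i][j] = phi[i][j] - dt * speed * math.hypot(gx, gy)
    return new

# phi = signed distance to a circle of radius 3 (negative inside);
# with F > 0 the contour expands, so the inside region grows.
n = 11
phi = [[math.hypot(i - 5, j - 5) - 3.0 for j in range(n)] for i in range(n)]
before = sum(v < 0 for row in phi for v in row)
after = sum(v < 0 for row in level_set_step(phi, speed=1.0) for v in row)
print(before, after)
```

In 3D-SLSM the scalar speed would be modulated per iteration by the shape-weighted value, pulling the front toward the tumor margin instead of expanding uniformly.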


Author(s):  
Danbing Zou ◽  
Qikui Zhu ◽  
Pingkun Yan

Domain adaptation aims to alleviate the problem of retraining a pre-trained model when applying it to a different domain, which otherwise requires a large amount of additional training data from the target domain. This objective is usually achieved by establishing connections between the source domain labels and target domain data. However, this imbalanced source-to-target one-way pass may not eliminate the domain gap, which limits the performance of the pre-trained model. In this paper, we propose an innovative Dual-Scheme Fusion Network (DSFN) for unsupervised domain adaptation. By building both source-to-target and target-to-source connections, this balanced joint information flow helps reduce the domain gap and further improve network performance. The mechanism is also applied at the inference stage, where both the original target image and the generated source-style images are segmented by the proposed joint network, and the results are fused to obtain a more robust segmentation. Extensive experiments on unsupervised cross-modality medical image segmentation are conducted on two tasks -- brain tumor segmentation and cardiac structure segmentation. The experimental results show that our method achieves significant performance improvements over other state-of-the-art domain adaptation methods.
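At inference, the dual-scheme idea amounts to segmenting both the target image and its source-style translation and fusing the two predictions. A minimal sketch with per-voxel probability averaging (plain averaging is our assumption here, not necessarily DSFN's exact fusion operator):

```python
def fuse_predictions(prob_a, prob_b):
    """Average two per-voxel class-probability maps (e.g. the target-domain
    prediction and the prediction on the translated source-style image),
    then take the argmax class per voxel."""
    fused = [[(a + b) / 2.0 for a, b in zip(va, vb)]
             for va, vb in zip(prob_a, prob_b)]
    return [p.index(max(p)) for p in fused]

# Two voxels, three classes: fusion resolves the disagreement on voxel 2.
target_pred = [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]]
source_pred = [[0.5, 0.4, 0.1], [0.1, 0.2, 0.7]]
print(fuse_predictions(target_pred, source_pred))  # [0, 2]
```

Averaging the two views tends to suppress errors that appear in only one of the domain renderings, which is the robustness argument made in the abstract.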


2019 ◽  
Author(s):  
Ali Hatamizadeh ◽  
Demetri Terzopoulos ◽  
Andriy Myronenko

Fully convolutional neural networks (CNNs) have proven to be effective at representing and classifying textural information, thus transforming image intensity into output class masks that achieve semantic image segmentation. In medical image analysis, however, expert manual segmentation often relies on the boundaries of anatomical structures of interest. We propose boundary aware CNNs for medical image segmentation. Our networks are designed to account for organ boundary information, both by providing a special network edge branch and edge-aware loss terms, and they are trainable end-to-end. We validate their effectiveness on the task of brain tumor segmentation using the BraTS 2018 dataset. Our experiments reveal that our approach yields more accurate segmentation results, which makes it promising for more extensive application to medical image segmentation.
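An edge-aware loss of the general kind described above can be sketched by extracting the mask boundary and up-weighting it in a cross-entropy term (the weighting scheme and the w_edge value are hypothetical, not the paper's exact loss):

```python
import math

def boundary_mask(mask):
    """Mark pixels of a binary mask that touch the other class
    (4-neighbourhood), i.e. the organ boundary."""
    rows, cols = len(mask), len(mask[0])
    edge = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols and mask[ni][nj] != mask[i][j]:
                    edge[i][j] = 1
                    break
    return edge

def edge_weighted_loss(pred, target, edge, w_edge=5.0, eps=1e-7):
    """Binary cross-entropy that up-weights boundary pixels."""
    total, norm = 0.0, 0.0
    for i in range(len(target)):
        for j in range(len(target[0])):
            w = w_edge if edge[i][j] else 1.0
            p = min(max(pred[i][j], eps), 1 - eps)
            total += -w * (target[i][j] * math.log(p)
                           + (1 - target[i][j]) * math.log(1 - p))
            norm += w
    return total / norm
```

The same boundary map could equally supervise a dedicated edge branch; here it only reweights the pixel-wise loss so errors at organ margins cost more.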


2021 ◽  
Author(s):  
Sheng Lu ◽  
Jungang Han ◽  
Jiantao Li ◽  
Liyang Zhu ◽  
Jiewei Jiang ◽  
...  

2014 ◽  
Vol 989-994 ◽  
pp. 1088-1092
Author(s):  
Chen Guang Zhang ◽  
Yan Zhang ◽  
Xia Huan Zhang

In this paper, a novel interactive medical image segmentation method called SMOPL is proposed. The method only requires marking some pixels in the foreground region. To do this, SMOPL characterizes the inherent correlations among foreground and background pixels as Hilbert-Schmidt independence. By simultaneously maximizing this independence and enforcing smoothness of the labels on an instance-neighbor graph, SMOPL obtains sufficiently smooth confidences for both the positive and negative classes in the absence of negative training examples. An image segmentation is then obtained by assigning each pixel the label for which the greatest confidence is calculated. Experiments on real-world medical images show that SMOPL robustly produces high-quality segmentations from only positive label examples.
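The positive-only setting can be illustrated with a simple confidence propagation on a pixel-similarity graph (a simplified stand-in for SMOPL's Hilbert-Schmidt formulation; the graph, seed, and damping factor are toy choices of ours):

```python
def propagate_labels(adj, seeds, alpha=0.8, iters=50):
    """Confidence propagation on a pixel-similarity graph from positive
    seeds only: each node mixes its neighbours' confidences with its own
    seed indicator, so smoothness emerges without negative examples."""
    n = len(adj)
    deg = [sum(row) or 1.0 for row in adj]
    f = [1.0 if i in seeds else 0.0 for i in range(n)]
    for _ in range(iters):
        nxt = []
        for i in range(n):
            spread = sum(adj[i][j] * f[j] for j in range(n)) / deg[i]
            nxt.append(alpha * spread + (1 - alpha) * (1.0 if i in seeds else 0.0))
        f = nxt
    return f

# Chain graph 0-1-2-3-4 with a positive seed at node 0: confidence
# decays with distance, so a threshold separates foreground pixels.
adj = [[0, 1, 0, 0, 0],
       [1, 0, 1, 0, 0],
       [0, 1, 0, 1, 0],
       [0, 0, 1, 0, 1],
       [0, 0, 0, 1, 0]]
conf = propagate_labels(adj, seeds={0})
print([round(c, 2) for c in conf])
```

Thresholding the converged confidences plays the role of the final label assignment; SMOPL's actual criterion additionally balances the HSIC independence term.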

