Breast tumor segmentation in DCE-MRI using fully convolutional networks with an application in radiogenomics

Author(s):  
Jun Zhang ◽  
Ashirbani Saha ◽  
Zhe Zhu ◽  
Maciej A. Mazurowski
Algorithms ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 144
Author(s):  
Yuexing Han ◽  
Xiaolong Li ◽  
Bing Wang ◽  
Lu Wang

Image segmentation plays an important role in the field of image processing, helping to understand images and recognize objects. However, most existing methods are often unable to effectively exploit spatial information in 3D image segmentation, and they neglect the information carried by the contours and boundaries of the observed objects. Although shape boundaries can help to locate the observed objects, most existing loss functions ignore boundary information. To overcome these shortcomings, this paper presents a new cascaded 2.5D fully convolutional network (FCN) learning framework to segment 3D medical images. A new boundary loss that incorporates distance, area, and boundary information is also proposed, enabling the cascaded FCNs to learn more boundary and contour features from the 3D medical images. Moreover, an effective post-processing method is developed to further improve the segmentation accuracy. We verified the proposed method on the LiTS and 3DIRCADb datasets, which include the liver and tumors. The experimental results show that the proposed method outperforms existing methods, with a Dice Per Case score of 74.5% for tumor segmentation, indicating its effectiveness.
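The Dice Per Case score reported above averages the Dice coefficient computed separately for each test volume. As a point of reference, a minimal sketch of that per-case computation is shown below; the function name `dice_per_case` and the smoothing term `eps` are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def dice_per_case(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks for a single case.

    Dice = 2 * |P ∩ T| / (|P| + |T|); eps avoids division by zero
    when both masks are empty.
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection) / (pred.sum() + target.sum() + eps)

# A dataset-level "Dice Per Case" is then the mean over all cases:
def mean_dice_per_case(preds, targets):
    return float(np.mean([dice_per_case(p, t) for p, t in zip(preds, targets)]))
```

A boundary-aware loss such as the one the paper proposes would typically be added to (or combined with) a Dice-style overlap term during training; the sketch above covers only the evaluation metric.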


Computers ◽  
2019 ◽  
Vol 8 (3) ◽  
pp. 52 ◽  
Author(s):  
Adoui ◽  
Mahmoudi ◽  
Larhmam ◽  
Benjelloun

Breast tumor segmentation in medical images is a decisive step for diagnosis and treatment follow-up. Automating this challenging task helps radiologists to reduce the high manual workload of breast cancer analysis. In this paper, we propose two deep learning approaches to automate breast tumor segmentation in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) by building two fully convolutional neural networks (CNNs) based on SegNet and U-Net. The resulting models can handle both detection and segmentation on each single DCE-MRI slice. In this study, we used a dataset of 86 DCE-MRIs from 43 patients with locally advanced breast cancer, acquired before and after two cycles of chemotherapy; in total, 5452 slices were used to train and validate the proposed models. The data were annotated manually by an experienced radiologist. To reduce the training time, a high-performance architecture composed of graphics processing units was used. The models were trained and validated on 85% and 15% of the data, respectively. A mean intersection over union (IoU) of 68.88% was achieved using the SegNet architecture and 76.14% using U-Net.
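The mean IoU figures quoted above are the standard overlap metric for segmentation masks, averaged over slices. A minimal sketch of the per-slice computation follows; the function names and the `eps` smoothing term are our own assumptions for illustration.

```python
import numpy as np

def iou(pred, target, eps=1e-7):
    """Intersection over union between two binary masks.

    IoU = |P ∩ T| / |P ∪ T|; eps avoids division by zero when
    both masks are empty.
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return intersection / (union + eps)

# Mean IoU over a set of slices, as reported for SegNet and U-Net:
def mean_iou(preds, targets):
    return float(np.mean([iou(p, t) for p, t in zip(preds, targets)]))
```

Here each `pred`/`target` pair would be one slice's predicted and ground-truth tumor masks; multiplying the result by 100 gives the percentage form used in the abstract.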

