Convolution Neural Network Application in Kidney Tumor Segmentation on CT Images

2019 ◽  
Author(s):  
Jianping Huang ◽  
Zefang Lin
2019 ◽  
Vol 28 (8) ◽  
pp. 4060-4074 ◽  
Author(s):  
Qian Yu ◽  
Yinghuan Shi ◽  
Jinquan Sun ◽  
Yang Gao ◽  
Jianbing Zhu ◽  
...  

2021 ◽  
pp. 20210038
Author(s):  
Wutian Gan ◽  
Hao Wang ◽  
Hengle Gu ◽  
Yanhua Duan ◽  
Yan Shao ◽  
...  

Objective: A stable and accurate automatic tumor delineation method has been developed to facilitate the intelligent design of the lung cancer radiotherapy process. The purpose of this paper is to introduce an automatic tumor segmentation network for lung cancer on CT images based on deep learning. Methods: In this paper, a hybrid convolution neural network (CNN) combining a 2D CNN and a 3D CNN was implemented for automatic lung tumor delineation on CT images. The 3D CNN used the V-Net model to extract tumor context information from CT sequence images. The 2D CNN used an encoder–decoder structure based on a dense connection scheme, which could expand information flow and promote feature propagation. Next, 2D features and 3D features were fused through a hybrid module. The hybrid CNN was compared with the individual 3D CNN and 2D CNN, and three evaluation metrics, Dice, Jaccard and Hausdorff distance (HD), were used for quantitative evaluation. The relationship between the segmentation performance of the hybrid network and the GTV volume size was also explored. Results: The newly introduced hybrid CNN was trained and tested on a dataset of 260 cases, and achieved a median Dice of 0.73 (mean and standard deviation of 0.72 ± 0.10), with 0.58 ± 0.13 and 21.73 ± 13.30 mm for the Jaccard and HD metrics, respectively. The hybrid network significantly outperformed the individual 3D CNN and 2D CNN on all three evaluation metrics (p < 0.001). A larger GTV presented a higher Dice value, but its delineation at the tumor boundary was unstable. Conclusions: The implemented hybrid CNN achieved good lung tumor segmentation performance on CT images. Advances in knowledge: The hybrid CNN shows valuable prospects for lung tumor segmentation.
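For reference, the three evaluation metrics named in this abstract (Dice, Jaccard, Hausdorff distance) have standard definitions on binary masks. A minimal pure-Python sketch, treating segmentations as sets of voxel coordinates and using a brute-force Hausdorff distance (illustrative only, not the authors' code, and far slower than library implementations):

```python
import math

def dice(pred, truth):
    """Dice similarity coefficient between two sets of voxel coordinates."""
    inter = len(pred & truth)
    return 2.0 * inter / (len(pred) + len(truth))

def jaccard(pred, truth):
    """Jaccard index (intersection over union)."""
    inter = len(pred & truth)
    return inter / (len(pred) + len(truth) - inter)

def hausdorff(pred, truth):
    """Symmetric Hausdorff distance between two point sets (brute force)."""
    def directed(a, b):
        return max(min(math.dist(p, q) for q in b) for p in a)
    return max(directed(pred, truth), directed(truth, pred))
```

Note that Dice and Jaccard reward overall overlap, while HD penalizes the single worst boundary error, which is why the abstract reports all three.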


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 166823-166832 ◽  
Author(s):  
Tao Song ◽  
Fan Meng ◽  
Alfonso Rodriguez-Paton ◽  
Pibao Li ◽  
Pan Zheng ◽  
...  

2020 ◽  
Vol 10 (11) ◽  
pp. 2784-2794
Author(s):  
Mingyuan Pan ◽  
Yonghong Shi ◽  
Zhijian Song

The automatic segmentation of brain tumors in magnetic resonance (MR) images is very important in diagnosis, radiotherapy planning, surgical navigation and several other clinical processes. As the location, size, shape and boundary of gliomas are heterogeneous, segmenting gliomas and intratumoral structures is very difficult. Besides, the multi-center issue makes it more challenging, as multimodal brain glioma images (such as T1, T2, fluid-attenuated inversion recovery (FLAIR), and T1c images) come from different radiation centers. This paper presents a multimodal, multi-scale, double-pathway, 3D residual convolution neural network (CNN) for automatic glioma segmentation. In the pre-processing step, a robust gray-level normalization method is proposed to solve the multi-center problem, since the intensity ranges from different centers vary considerably. Then, a double-pathway 3D architecture based on the DeepMedic toolkit is trained using multi-modality information to fuse local and context features. In the post-processing step, a fully connected conditional random field (CRF) is built to improve performance, filling holes and connecting isolated segmentations. Experiments on the Multimodal Brain Tumor Segmentation (BRATS) 2017 and 2019 datasets showed that this method can delineate the whole tumor with a Dice coefficient, sensitivity and positive predictive value (PPV) of 0.88, 0.89 and 0.88, respectively. For the segmentation of the tumor core and the enhancing area, the sensitivity reached 0.80. The results indicated that this method can segment gliomas and intratumoral structures from multimodal MR images accurately, and it has clinical practice value.
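The abstract does not spell out the gray-level normalization itself. One common robust scheme for this multi-center intensity problem is percentile clipping followed by rescaling to a fixed range, which damps outlier intensities before training. A minimal pure-Python sketch of that approach (an assumption for illustration, not necessarily the authors' method):

```python
def robust_normalize(voxels, low_pct=1.0, high_pct=99.0):
    """Clip intensities to the [low_pct, high_pct] percentile range, then
    rescale to [0, 1] so scans from different centers become comparable."""
    values = sorted(voxels)

    def percentile(p):
        # Linear interpolation between closest ranks.
        idx = (len(values) - 1) * p / 100.0
        lo = int(idx)
        hi = min(lo + 1, len(values) - 1)
        frac = idx - lo
        return values[lo] * (1 - frac) + values[hi] * frac

    low, high = percentile(low_pct), percentile(high_pct)
    span = (high - low) or 1.0  # guard against constant images
    return [min(max((v - low) / span, 0.0), 1.0) for v in voxels]
```

In practice this would be applied per scan (or per modality), so that each center's intensity distribution is mapped onto the same scale before the network sees it.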


2021 ◽  
Vol 11 ◽  
Author(s):  
Shunyao Luan ◽  
Xudong Xue ◽  
Yi Ding ◽  
Wei Wei ◽  
Benpeng Zhu

Purpose: Accurate segmentation of the liver and liver tumors is critical for radiotherapy. Liver tumor segmentation, however, remains a difficult and relevant problem in medical image processing because of factors such as the complex and variable location, size, and shape of liver tumors, the low contrast between tumors and normal tissues, and blurred or hard-to-define lesion boundaries. In this paper, we proposed a neural network (S-Net) that incorporates attention mechanisms for end-to-end segmentation of liver tumors from CT images. Methods: First, this study adopted a classical encoder–decoder structure to realize end-to-end segmentation. Next, we introduced an attention mechanism between the contraction path and the expansion path so that the network could encode a longer range of semantic information in the local features and find the correspondence between different channels. Then, we introduced long skip connections between the layers of the contraction path and the expansion path, so that the semantic information extracted in both paths could be fused. Finally, a morphological closing operation was applied to remove narrow interruptions and long, thin gaps; this eliminated small cavities and produced a noise-reduction effect. Results: We used the MICCAI 2017 Liver Tumor Segmentation (LiTS) challenge dataset, the 3DIRCADb dataset, and a dataset of doctors' manual contours from Hubei Cancer Hospital to test the network architecture. We calculated the Dice Global (DG) score, Dice per Case (DC) score, volumetric overlap error (VOE), average symmetric surface distance (ASSD), and root mean square error (RMSE) to evaluate the accuracy of the architecture for liver tumor segmentation. The segmentation DG for tumors was 0.7555, DC was 0.613, VOE was 0.413, ASSD was 1.186, and RMSE was 1.804. For small tumors, DG was 0.3246 and DC was 0.3082. For large tumors, DG was 0.7819 and DC was 0.7632. Conclusion: S-Net obtained more semantic information through the introduction of an attention mechanism and long skip connections. Experimental results showed that this method effectively improved tumor recognition in CT images and could be applied to assist doctors in clinical treatment.
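The closing operation in the Methods above is standard mathematical morphology: a dilation followed by an erosion, which fills small cavities and bridges narrow gaps without enlarging the mask overall. In practice one would use a library routine such as scipy.ndimage.binary_closing; a dependency-free sketch on a sparse set of foreground pixels with a 3x3 structuring element (illustrative, not the authors' implementation):

```python
def neighborhood(cell):
    """3x3 structuring element centered on a (row, col) cell."""
    r, c = cell
    return [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)]

def dilate(mask):
    """Grow the mask by the structuring element."""
    return {n for cell in mask for n in neighborhood(cell)}

def erode(mask):
    """Keep only cells whose whole neighborhood is foreground."""
    return {cell for cell in mask if all(n in mask for n in neighborhood(cell))}

def binary_close(mask):
    """Closing = dilation then erosion: fills small cavities and narrow gaps."""
    return erode(dilate(mask))
```

A one-pixel hole inside a solid region survives thresholding but disappears after closing, which is exactly the noise-reduction effect the abstract describes.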


2020 ◽  
Vol 38 (6_suppl) ◽  
pp. 626-626
Author(s):  
Nicholas Heller ◽  
Sean McSweeney ◽  
Matthew Thomas Peterson ◽  
Sarah Peterson ◽  
Jack Rickman ◽  
...  

Background: The 2019 Kidney and Kidney Tumor Segmentation challenge (KiTS19) was an international competition held in conjunction with the 2019 International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) and sought to stimulate progress on this automatic segmentation frontier. Growing rates of kidney tumor incidence have led to research into the use of artificial intelligence (AI) to radiographically differentiate and objectively characterize these tumors. Automated segmentation using AI objectively quantifies the complexity and aggressiveness of renal tumors to better differentiate and describe the tumors for improved treatment decision making. Methods: A training set of over 31,000 CT images from 210 patients with kidney tumors was publicly released with corresponding semantic segmentation masks. 106 teams from five continents used this data to develop automated deep learning systems to predict the true segmentation masks on a test set of an additional 13,500 CT images from 90 patients, for which the corresponding ground-truth segmentations were kept private. These predictions were scored and ranked according to their average Sørensen-Dice coefficient between kidney and tumor across the 90 test cases. Results: The winning team achieved a Dice of 0.974 for kidney and 0.851 for tumor, approaching human inter-annotator performance on kidney (0.983) but falling short on tumor (0.923). This challenge has now entered an "open leaderboard" phase where it serves as a challenging benchmark in 3D semantic segmentation. Conclusions: Results of the KiTS19 challenge show that deep learning methods are fully capable of reliable segmentation of kidneys and kidney tumors. The KiTS19 challenge attracted a high number of submissions and serves as an important and challenging benchmark in 3D segmentation. The publicly available data will further propel the use of automated 3D segmentation analysis. Fully segmented kidneys and tumors allow for automated calculation of all types of nephrometry, tumor textural variation, and the discovery of new predictive features important for personalized medicine and accurate prediction of patient-relevant outcomes.
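The scoring rule described above, the average Sørensen-Dice coefficient between kidney and tumor across the test cases, can be sketched in a few lines of pure Python. Here masks are flattened lists of class ids, with the label values (1 = kidney, 2 = tumor) assumed for illustration:

```python
def dice_for_label(pred, truth, label):
    """Sørensen-Dice for one semantic label; pred/truth are flat lists of class ids."""
    p = [v == label for v in pred]
    t = [v == label for v in truth]
    inter = sum(a and b for a, b in zip(p, t))
    total = sum(p) + sum(t)
    return 2.0 * inter / total if total else 1.0

def challenge_score(cases, kidney=1, tumor=2):
    """Mean over cases of the per-case average of kidney Dice and tumor Dice."""
    per_case = [(dice_for_label(p, t, kidney) + dice_for_label(p, t, tumor)) / 2
                for p, t in cases]
    return sum(per_case) / len(per_case)
```

Averaging the two class scores per case means a method cannot climb the leaderboard on easy kidney segmentation alone; the harder tumor class carries equal weight.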

