Point-Sampling Method Based on 3D U-Net Architecture to Reduce the Influence of False Positive and Solve Boundary Blur Problem in 3D CT Image Segmentation

2020 ◽  
Vol 10 (19) ◽  
pp. 6838
Author(s):  
Chen Li ◽  
Wei Chen ◽  
Yusong Tan

Malignant lesions are a major threat to human health and carry a high mortality rate. Locating the contours of organs is a preparatory step that helps doctors diagnose correctly, so there is an urgent clinical need for segmentation models designed specifically for medical imaging. However, most current medical image segmentation models are migrated directly from natural image segmentation models and therefore ignore characteristics specific to medical images, such as false positive phenomena and the blurred boundary problem in 3D volume data. Research on organ segmentation models for medical images thus remains challenging and demanding. As a consequence, we redesigned a 3D convolutional neural network (CNN) based on 3D U-Net and adopted the rendering method from computer graphics for 3D medical image segmentation, naming the result Render 3D U-Net. This network adopts a subdivision-based point-sampling method in place of the original upsampling method to render high-quality boundaries. Besides, Render 3D U-Net integrates the point-sampling method into the 3D ANU-Net architecture under deep supervision. Meanwhile, to reduce false positive phenomena in clinical diagnosis and achieve more accurate segmentation, Render 3D U-Net includes a specially designed module for screening out false positives. Finally, three public challenge datasets (MICCAI 2017 LiTS, MICCAI 2019 KiTS, and ISBI 2019 segTHOR) were selected to evaluate performance on the target organs. Compared with other models, Render 3D U-Net improved performance on both the overall organ and its boundary in CT image segmentation tasks, including the liver, kidney, and heart.
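Rendering-inspired point sampling of this kind typically ranks locations by prediction uncertainty and refines only the most ambiguous (usually boundary) points. The following NumPy sketch illustrates that selection step only; it is not the authors' implementation, and the toy sigmoid "prediction" is invented for the example:

```python
import numpy as np

def select_uncertain_points(prob_map, num_points):
    """Pick the voxels whose foreground probability is closest to 0.5,
    i.e. the most ambiguous (typically boundary) locations."""
    uncertainty = -np.abs(prob_map - 0.5)           # higher = more uncertain
    flat_idx = np.argsort(uncertainty.ravel())[-num_points:]
    return np.unravel_index(flat_idx, prob_map.shape)

# Toy coarse prediction: a soft disk whose rim (radius ~10) is ambiguous.
yy, xx = np.mgrid[0:32, 0:32]
dist = np.hypot(yy - 16, xx - 16)
prob = 1.0 / (1.0 + np.exp(dist - 10))              # ~1 inside, ~0 outside
pts = select_uncertain_points(prob, 64)

# The selected points cluster around the circle of radius 10.
radii = np.hypot(pts[0] - 16, pts[1] - 16)
print(radii.min(), radii.max())
```

In a full point-rendering pipeline these selected coordinates would then be re-classified by a small point-wise head on higher-resolution features rather than simply upsampled.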

2020 ◽  
Vol 10 (18) ◽  
pp. 6439
Author(s):  
Chen Li ◽  
Wei Chen ◽  
Yusong Tan

Organ lesions have a high mortality rate and pose a serious threat to people's lives. Segmenting organs accurately helps doctors diagnose, so there is a demand for advanced segmentation models for medical images. However, most segmentation models are migrated directly from natural image segmentation models and usually ignore the importance of the boundary. To address this difficulty, in this paper we provide a unique rendering perspective for exploring accurate medical image segmentation. We adapt a subdivision-based point-sampling method to obtain high-quality boundaries. In addition, we integrate the attention mechanism and a nested U-Net architecture into the proposed network, Render U-Net. Render U-Net was evaluated on three public datasets: LiTS, CHAOS, and DSB. The model obtained the best performance on five medical image segmentation tasks.


2021 ◽  
Author(s):  
Aditi Iyer ◽  
Eve Locastro ◽  
Aditya Apte ◽  
Harini Veeraraghavan ◽  
Joseph O Deasy

Purpose: This work presents a framework for deploying deep learning image segmentation models for medical images across different operating systems and programming languages. Methods: The Computational Environment for Radiological Research (CERR) platform was extended for deploying deep learning-based segmentation models, leveraging CERR's existing functionality for radiological data import, transformation, management, and visualization. The framework is compatible with MATLAB as well as GNU Octave and Python for license-free use. Pre- and post-processing configurations, including parameters for pre-processing images, population of channels, and post-processing segmentations, were standardized using the JSON format. CPU and GPU implementations of pre-trained deep learning segmentation models were packaged using Singularity containers for use on Linux, and as Conda environment archives for the Windows, macOS, and Linux operating systems. The framework accepts images in various formats, including DICOM and CERR's planC, and outputs segmentations in various formats, including DICOM RTSTRUCT and planC objects. The ability to access the results readily in planC format enables visualization as well as radiomics and dosimetric analysis. The framework can be readily deployed in clinical software such as MIM via their extensions. Results: The open-source, GPL-licensed framework developed in this work has been successfully used to deploy deep learning-based segmentation models for five in-house developed and published models. These models span various treatment sites (H&N, lung, and prostate) and modalities (CT, MR). Documentation for their usage and a demo workflow is provided at https://github.com/cerr/CERR/wiki/Auto-Segmentation-models. The framework has also been used in clinical workflows for segmenting images for treatment planning and for segmenting publicly available large datasets for outcomes studies.
Conclusions: This work presented a comprehensive, open-source framework for deploying deep learning-based medical image segmentation models. The framework was used to translate the developed models to the clinic and to enable reproducible, consistent image segmentation across institutions, facilitating multi-institutional outcomes modeling studies.
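To illustrate the JSON-based configuration idea described above, here is a hypothetical pre/post-processing file parsed in Python. The field names are invented for this example and are not CERR's actual schema:

```python
import json

# Hypothetical configuration in the spirit of the JSON files described
# above; every key below is illustrative, not CERR's real schema.
config_text = """
{
  "preprocess":  {"resample_mm": [1.0, 1.0, 3.0], "window_hu": [-150, 250]},
  "channels":    {"count": 3, "slices": "adjacent"},
  "postprocess": {"largest_component_only": true}
}
"""
config = json.loads(config_text)
print(config["preprocess"]["window_hu"])
```

Keeping such settings in a declarative file rather than in code is what lets the same pre-trained container run identically under MATLAB, Octave, or Python front ends.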


Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 268
Author(s):  
Yeganeh Jalali ◽  
Mansoor Fateh ◽  
Mohsen Rezvani ◽  
Vahid Abolghasemi ◽  
Mohammad Hossein Anisi

Lung CT image segmentation is a key process in many applications, such as lung cancer detection. It is considered a challenging problem due to similar image densities in the pulmonary structures and differences among scanner types and scanning protocols. Most current semi-automatic segmentation methods rely on human input and may therefore suffer from a lack of accuracy. Another shortcoming of these methods is their high false-positive rate. In recent years, several approaches based on deep learning frameworks have been applied effectively in medical image segmentation. Among existing deep neural networks, the U-Net has seen great success in this field. In this paper, we propose a deep neural network architecture that performs automatic lung CT image segmentation. In the proposed method, several extensive preprocessing techniques are applied to the raw CT images. Then, ground truths corresponding to these images are extracted via morphological operations and manual refinements. Finally, all prepared images with their corresponding ground truth are fed into a modified U-Net in which the encoder is replaced with a pre-trained ResNet-34 network (referred to as Res BCDU-Net). In this architecture, we employ BConvLSTM (Bidirectional Convolutional Long Short-Term Memory) as an advanced integrator module instead of simple traditional concatenation, merging the feature maps extracted from the corresponding contracting path with the output of the previous up-convolutional layer in the expansion path. Finally, a densely connected convolutional layer is utilized for the contracting path. The results of our extensive experiments on lung CT images (the LIDC-IDRI database) confirm the effectiveness of the proposed method, which achieves a Dice coefficient of 97.31%.
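The Dice coefficient reported above is the standard overlap measure between a predicted and a reference binary mask; a small NumPy example of its definition:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A∩B| / (|A| + |B|) on binary masks; eps guards empty masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((8, 8), dtype=np.uint8); a[2:6, 2:6] = 1   # 16-pixel square
b = np.zeros((8, 8), dtype=np.uint8); b[3:7, 3:7] = 1   # shifted copy
print(dice_coefficient(a, b))   # 2*9 / (16+16) = 0.5625
```

A score of 97.31% therefore means the predicted lung masks overlap the ground truth almost completely.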


2020 ◽  
Vol 34 (04) ◽  
pp. 6925-6932 ◽  
Author(s):  
Hao Zheng ◽  
Yizhe Zhang ◽  
Lin Yang ◽  
Chaoli Wang ◽  
Danny Z. Chen

Image segmentation is critical to many medical applications. While deep learning (DL) methods continue to improve performance on many medical image segmentation tasks, data annotation is a major bottleneck for DL-based segmentation because (1) DL models tend to need a large amount of labeled data to train, and (2) voxel-wise labeling of 3D medical images is highly time-consuming and labor-intensive. Significantly reducing annotation effort while attaining good performance from DL segmentation models remains a major challenge. In our preliminary experiments, we observe that there is indeed a large performance gap between using partially labeled datasets and using fully annotated training datasets. In this paper, we propose a new DL framework for reducing annotation effort and bridging the gap between full annotation and sparse annotation in 3D medical image segmentation. We achieve this by (i) selecting representative slices in 3D images that minimize data redundancy and save annotation effort, and (ii) self-training with pseudo-labels automatically generated from base models trained on the selected annotated slices. Extensive experiments on two public datasets (the HVSMR 2016 Challenge dataset and the mouse piriform cortex dataset) show that our framework yields segmentation results competitive with state-of-the-art DL methods while using less than ∼20% of the annotated data.
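The redundancy-minimizing slice selection in step (i) can be illustrated with a greedy farthest-point heuristic on flattened slice intensities. This is only a sketch of the general idea of spreading annotations over dissimilar slices, not the paper's actual selection criterion:

```python
import numpy as np

def select_representative_slices(volume, k):
    """Greedy farthest-point selection: pick k slices whose flattened
    intensity vectors are maximally spread, reducing annotation redundancy."""
    feats = volume.reshape(volume.shape[0], -1).astype(float)
    chosen = [0]                                    # seed with the first slice
    while len(chosen) < k:
        # Distance of every slice to its nearest already-chosen slice.
        d = np.min(
            [np.linalg.norm(feats - feats[c], axis=1) for c in chosen], axis=0)
        d[chosen] = -1                              # never re-pick a slice
        chosen.append(int(np.argmax(d)))
    return sorted(chosen)

# Toy volume: slices 0-4 nearly identical, slices 5-9 a different pattern.
vol = np.zeros((10, 4, 4))
vol[5:] = 1.0
print(select_representative_slices(vol, 2))   # one slice from each half
```

An annotator would then label only the returned slices, and the base model's predictions on the remaining slices would serve as pseudo-labels for self-training.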


2016 ◽  
Vol 78 (4-3) ◽  
Author(s):  
Hussain Rahman ◽  
Fakhrud Din ◽  
Sami ur Rahmana ◽  
Sehatullah Sehatullah

Region-growing-based image segmentation techniques available for medical images are reviewed in this paper. In digital image processing, segmenting human organs from medical images is a very challenging task. A number of medical image segmentation techniques have been proposed, but there is no standard automatic algorithm that can generally segment a real 3D image obtained by clinicians in daily routine. Our criteria for evaluating the different region-growing-based segmentation algorithms are: ease of use, noise vulnerability, effectiveness, need for manual initialization, efficiency, computational complexity, need for training, and information used. We test the common region-growing algorithms on a set of abdominal MRI scans for aorta segmentation. The evaluation results show that region-growing techniques can be a good choice for segmenting an object of interest from medical images.
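A minimal version of intensity-based region growing of the kind surveyed above, using 4-connectivity and a fixed tolerance around the seed intensity (the tolerance rule and connectivity are illustrative choices; real variants differ in exactly these details):

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`, accepting 4-connected neighbours whose
    intensity is within `tol` of the seed intensity."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_val = image[seed]
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(image[ny, nx] - seed_val) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

img = np.full((6, 6), 100.0)
img[1:4, 1:4] = 10.0                     # dark 3x3 "vessel" block
m = region_grow(img, (2, 2), tol=5.0)
print(m.sum())                           # grows over the dark block only
```

The review's criteria map directly onto this sketch: the seed is the manual initialization, `tol` governs noise vulnerability, and the queue-based flood determines computational cost.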


2019 ◽  
Vol 2019 ◽  
pp. 1-10 ◽  
Author(s):  
Lin Teng ◽  
Hang Li ◽  
Shahid Karim

Medical image segmentation is one of the hot issues in image processing. Precise segmentation of medical images is a vital guarantee for follow-up treatment. At present, however, low gray-level contrast and blurred tissue boundaries are common in medical images, and segmentation accuracy cannot be improved effectively. In particular, deep learning methods need many training samples, which makes training time-consuming. Therefore, we propose a novel model for medical image segmentation based on a deep multiscale convolutional neural network (CNN) in this article. First, we extract the region of interest from the raw medical images. Then, data augmentation is applied to acquire more training data. Our proposed method contains three components: an encoder, a U-Net, and a decoder. The encoder is mainly responsible for feature extraction from 2D image slices. The U-Net cascades the features of each encoder block with those obtained by deconvolution in the decoder at different scales. The decoder is mainly responsible for upsampling the feature maps after feature extraction for each group. Simulation results show that the new method boosts segmentation accuracy and has strong robustness compared with other segmentation methods.


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Jian-Hua Shu ◽  
Fu-Dong Nian ◽  
Ming-Hui Yu ◽  
Xu Li

Medical image segmentation is a key topic in image processing and computer vision. The existing literature mainly focuses on single-organ segmentation. However, since maximizing the concentration of radiotherapy drugs in the target area while protecting the surrounding organs is essential for making an effective radiotherapy plan, multiorgan segmentation has won more and more attention. An improved Mask R-CNN (region-based convolutional neural network) model is proposed for multiorgan segmentation to aid esophageal radiation treatment. Because organ boundaries may be fuzzy and organ shapes vary widely, the original Mask R-CNN works well on natural image segmentation but leaves something to be desired on the multiorgan segmentation task. To address this, the contributions of this method are threefold: (1) an ROI (region of interest) generation method is presented in the RPN (region proposal network) that is able to utilize multiscale semantic features; (2) a prebackground classification subnetwork is integrated into the original mask generation branch to improve the precision of multiorgan segmentation; (3) 4341 CT images of 44 patients are collected and annotated to evaluate the proposed method. Extensive experiments on the collected dataset demonstrate that the proposed method can segment the heart, right lung, left lung, planning target volume (PTV), and clinical target volume (CTV) accurately and efficiently. Specifically, less than 5% of cases in the test set were missed or falsely detected, which shows great potential for real clinical usage.


Author(s):  
S. DivyaMeena ◽  
M. Mangaleswaran

Medical images have had a great effect on medicine, diagnosis, and treatment. The most important part of image processing is image segmentation. Medical image segmentation is the automatic or semi-automatic detection of boundaries within a 2D or 3D image. In the medical field, image segmentation is one of the vital steps in image identification and object recognition. Image segmentation is a method by which a large amount of data is partitioned into smaller parts. If the input MRI image is segmented, identifying the tumor-affected region becomes easier for physicians. In recent years, many algorithms have been proposed for image segmentation. In this paper, an analysis is made of various segmentation algorithms for medical images. Furthermore, a comparison of existing segmentation algorithms is discussed, along with the performance measures of each.


2018 ◽  
Vol 8 (9) ◽  
pp. 1826-1834
Author(s):  
Tian Chi Zhang ◽  
Jian Pei Zhang ◽  
Jing Zhang ◽  
Melvyn L. Smith

One of the most established region-based segmentation methods is the region-based C-V model, which formulates image segmentation as a level set (or improved level set) clustering problem. However, the existing level set C-V model fails to perform well in the presence of noisy and incomplete data or when the objects resemble the background, especially for clustering or segmentation tasks in medical images where objects appear vague and poorly contrasted in greyscale. In this paper, we modify the level set C-V model using a two-step modified Nash equilibrium approach. First, a standard deviation with an entropy payoff approach is employed; second, a two-step similarity-clustering approach is applied to the modified Nash equilibrium: one step maximizes similarity within the clustered regions and the other minimizes similarity between the clusters. Finally, an improved C-V model based on the two-step modified Nash equilibrium is proposed to smooth the object contour during image segmentation. Experiments demonstrate that the proposed method performs well on segmenting noisy and poorly contrasted regions within medical images.
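For reference, the baseline Chan–Vese (C-V) energy that such level set formulations minimize can be written, in the standard notation of the original model rather than anything taken from this abstract, as:

```latex
E(c_1, c_2, C) = \mu \,\mathrm{Length}(C)
  + \lambda_1 \int_{\mathrm{inside}(C)} \lvert u_0(x) - c_1 \rvert^2 \, dx
  + \lambda_2 \int_{\mathrm{outside}(C)} \lvert u_0(x) - c_2 \rvert^2 \, dx
```

Here $u_0$ is the image, $C$ the evolving contour, and $c_1, c_2$ the mean intensities inside and outside $C$ (an optional area penalty term is often added as well). The modification described above replaces the plain within/between-region fitting terms with Nash-equilibrium-based similarity payoffs.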


2010 ◽  
Vol 19 (01) ◽  
pp. 1-14 ◽  
Author(s):  
M. A. BALAFAR ◽  
A. B. D. RAHMAN RAMLI ◽  
M. IQBAL SARIPAN ◽  
SYAMSIAH MASHOHOR ◽  
ROZI MAHMUD

Image segmentation is one of the most important parts of clinical diagnostic tools. Medical images mostly contain noise and inhomogeneity; therefore, accurate segmentation of medical images is a very difficult task, yet it is crucial for a correct diagnosis by clinical tools. We propose a new clustering method based on Fuzzy C-Means (FCM) and user-specified data. In the proposed method, the color image is converted to a grey-level image and an anisotropic filter is applied to decrease noise. The user selects training data for each target class; afterwards, the image is clustered using ordinary FCM. Due to inhomogeneity and unknown noise, some clusters contain training data for more than one target class; these clusters are partitioned again, and this process continues until no such clusters remain. Each cluster containing training data for a single target class is then assigned to that class. The mean intensity of each class is taken as its feature, the feature distance of each unassigned cluster from the different classes is computed, and each unassigned cluster is assigned to the target class at the least distance. Experimental results demonstrate the effectiveness of the new method.
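The "ordinary FCM" step can be sketched as the textbook alternation between membership and centroid updates; this two-cluster, 1-D intensity version is illustrative only and does not include the paper's repartitioning or class-assignment stages:

```python
import numpy as np

def fcm_1d(x, iters=50, m=2.0):
    """Minimal two-cluster Fuzzy C-Means on 1-D intensities: alternate the
    standard membership and centroid updates for a fixed number of steps."""
    centers = np.array([x.min(), x.max()], dtype=float)   # deterministic init
    u = None
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9  # (N, 2) distances
        u = 1.0 / d ** (2.0 / (m - 1.0))                  # u ∝ d^(-2/(m-1))
        u /= u.sum(axis=1, keepdims=True)                 # fuzzy memberships
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    return np.sort(centers), u

# Two well-separated intensity groups -> centers near each group mean.
x = np.concatenate([10.0 + 0.1 * np.arange(50), 100.0 + 0.1 * np.arange(50)])
centers, _ = fcm_1d(x)
print(centers)   # approximately [12.45, 102.45]
```

The soft memberships `u` are what make FCM tolerant of the intensity inhomogeneity mentioned above, compared with hard k-means assignments.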

