Render U-Net: A Unique Perspective on Render to Explore Accurate Medical Image Segmentation

2020 ◽  
Vol 10 (18) ◽  
pp. 6439
Author(s):  
Chen Li ◽  
Wei Chen ◽  
Yusong Tan

Organ lesions have a high mortality rate and pose a serious threat to people's lives, and accurate organ segmentation helps doctors reach a diagnosis, so there is a demand for advanced segmentation models for medical images. However, most segmentation models are migrated directly from natural image segmentation models and usually ignore the importance of the boundary. To address this, in this paper we provide a unique perspective on rendering to explore accurate medical image segmentation. We adapt a subdivision-based point-sampling method to obtain high-quality boundaries. In addition, we integrate the attention mechanism and the nested U-Net architecture into the proposed network, Render U-Net. Render U-Net was evaluated on three public datasets: LiTS, CHAOS, and DSB. The model obtained the best performance on five medical image segmentation tasks.
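The abstract gives no implementation details; a minimal NumPy sketch of the point-sampling idea, selecting the least-certain pixels of a coarse probability map (the function name and the toy map are hypothetical, not from the paper):

```python
import numpy as np

def sample_uncertain_points(prob_map, n_points):
    """Pick the n_points pixels whose foreground probability is closest
    to 0.5, i.e. where the coarse mask is least certain (typically the
    object boundary)."""
    uncertainty = -np.abs(prob_map - 0.5)          # higher = less certain
    flat_idx = np.argsort(uncertainty.ravel())[-n_points:]
    return np.stack(np.unravel_index(flat_idx, prob_map.shape), axis=1)

# toy 4x4 coarse probability map with a fuzzy left/right boundary
prob = np.array([[0.9, 0.6, 0.4, 0.1],
                 [0.9, 0.7, 0.3, 0.1],
                 [0.9, 0.6, 0.4, 0.1],
                 [0.9, 0.7, 0.3, 0.1]])
pts = sample_uncertain_points(prob, 4)
```

A render-style head would then re-predict only these sampled points at fine resolution instead of upsampling the whole map.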

2020 ◽  
Vol 34 (04) ◽  
pp. 6925-6932 ◽  
Author(s):  
Hao Zheng ◽  
Yizhe Zhang ◽  
Lin Yang ◽  
Chaoli Wang ◽  
Danny Z. Chen

Image segmentation is critical to many medical applications. While deep learning (DL) methods continue to improve performance on many medical image segmentation tasks, data annotation is a major bottleneck for DL-based segmentation because (1) DL models tend to need a large amount of labeled data to train, and (2) voxel-wise labeling of 3D medical images is highly time-consuming and labor-intensive. Significantly reducing annotation effort while attaining good performance from DL segmentation models remains a major challenge. In our preliminary experiments, we observe that with partially labeled datasets there is indeed a large performance gap with respect to fully annotated training datasets. In this paper, we propose a new DL framework for reducing annotation effort and bridging the gap between full annotation and sparse annotation in 3D medical image segmentation. We achieve this by (i) selecting representative slices in 3D images that minimize data redundancy and save annotation effort, and (ii) self-training with pseudo-labels automatically generated from base models trained on the selected annotated slices. Extensive experiments on two public datasets (the HVSMR 2016 Challenge dataset and the mouse piriform cortex dataset) show that our framework yields segmentation results competitive with state-of-the-art DL methods while using less than ∼20% of the annotated data.
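The self-training step (ii) can be illustrated with a toy stand-in model; `train_threshold_model` and `pseudo_label` are invented for this sketch and are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_threshold_model(images, masks):
    """Stand-in 'base model': learn one intensity threshold that
    separates labeled foreground from background."""
    fg = np.concatenate([im[m == 1] for im, m in zip(images, masks)])
    bg = np.concatenate([im[m == 0] for im, m in zip(images, masks)])
    return (fg.mean() + bg.mean()) / 2

def pseudo_label(model_thr, image, margin=0.1):
    """Auto-generate labels on an unannotated slice, keeping only
    confident pixels (far from the decision threshold); uncertain
    pixels are marked -1 and ignored during retraining."""
    lab = (image > model_thr).astype(int)
    lab[np.abs(image - model_thr) < margin] = -1
    return lab

# two annotated "slices" (bright = organ) plus one unlabeled slice
imgs = [rng.uniform(0, 1, (8, 8)) for _ in range(3)]
masks = [(im > 0.5).astype(int) for im in imgs[:2]]

thr = train_threshold_model(imgs[:2], masks)
pl = pseudo_label(thr, imgs[2])
```

In the paper's framework a 3D segmentation network plays the role of the threshold model, and the confident pseudo-labels are fed back for further training rounds.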


2020 ◽  
Vol 10 (19) ◽  
pp. 6838
Author(s):  
Chen Li ◽  
Wei Chen ◽  
Yusong Tan

Malignant lesions are a huge threat to human health and have a high mortality rate. Locating the contours of organs is a preparatory step that helps doctors diagnose correctly, so there is an urgent clinical need for segmentation models specifically designed for medical imaging. However, most current medical image segmentation models are migrated directly from natural image segmentation models and thus ignore characteristic features of medical images, such as false-positive phenomena and the blurred-boundary problem in 3D volume data. Research on organ segmentation models for medical images therefore remains challenging and in demand. As a consequence, we redesign a 3D convolutional neural network (CNN) based on 3D U-Net and adopt the render method from computer graphics for 3D medical image segmentation, named Render 3D U-Net. This network adapts a subdivision-based point-sampling method to replace the original upsampling method and render high-quality boundaries. In addition, Render 3D U-Net integrates the point-sampling method into the 3D ANU-Net architecture under deep supervision. Meanwhile, to reduce false positives in clinical diagnosis and achieve more accurate segmentation, Render 3D U-Net includes a specially designed module for screening false positives. Finally, three public challenge datasets (MICCAI 2017 LiTS, MICCAI 2019 KiTS, and ISBI 2019 SegTHOR) were selected to evaluate performance on the target organs. Compared with other models, Render 3D U-Net improved performance on both whole-organ and boundary segmentation in CT image tasks covering the liver, kidney, and heart.
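The abstract does not spell out its false-positive screening module; one common post-processing realisation of the idea, keeping only the largest connected foreground component and discarding small spurious blobs, can be sketched as follows (the function name is hypothetical):

```python
import numpy as np
from collections import deque

def largest_component(mask):
    """Keep only the largest 4-connected foreground component of a
    binary mask; everything else is treated as a false positive."""
    seen = np.zeros_like(mask, dtype=bool)
    best = []
    for start in zip(*np.nonzero(mask)):
        if seen[start]:
            continue
        comp, q = [], deque([start])
        seen[start] = True
        while q:                                   # breadth-first flood fill
            r, c = q.popleft()
            comp.append((r, c))
            for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not seen[nr, nc]):
                    seen[nr, nc] = True
                    q.append((nr, nc))
        if len(comp) > len(best):
            best = comp
    out = np.zeros_like(mask)
    for r, c in best:
        out[r, c] = 1
    return out

pred = np.zeros((6, 6), dtype=int)
pred[1:4, 1:4] = 1        # true organ region
pred[5, 5] = 1            # spurious false-positive pixel
cleaned = largest_component(pred)
```

In 3D the same idea applies with 6-connected voxels; the paper's learned screening module would replace this hand-crafted rule.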


2014 ◽  
Vol 989-994 ◽  
pp. 1088-1092
Author(s):  
Chen Guang Zhang ◽  
Yan Zhang ◽  
Xia Huan Zhang

In this paper, a novel interactive medical image segmentation method called SMOPL is proposed. The method only requires marking some pixels in the foreground region. To do this, SMOPL characterizes the inherent correlations among foreground and background pixels as Hilbert-Schmidt independence. By simultaneously maximizing this independence and minimizing the smoothness of labels on the instance-neighbor graph, SMOPL obtains sufficiently smooth confidences for both the positive and negative classes in the absence of negative training examples. A segmentation is then obtained by assigning each pixel the label with the greatest confidence. Experiments on real-world medical images show that SMOPL robustly produces high-quality segmentations with only positive label examples.
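The paper's HSIC formulation is not reproduced here; a much simpler positive-only label-propagation sketch conveys the graph-smoothness idea (the function name, the Gaussian affinity, and the toy image are all assumptions, not SMOPL itself):

```python
import numpy as np

def propagate_confidence(intensity, seeds, alpha=0.9, iters=50, sigma=0.1):
    """Spread positive seed confidence over a pixel graph whose edge
    weights favour similar intensities, so the smooth confidence field
    stays high inside the seeded region and low elsewhere."""
    x = intensity.ravel()
    W = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0)
    W /= W.sum(axis=1, keepdims=True)          # row-normalised affinities
    f = seeds.ravel().astype(float)
    for _ in range(iters):
        f = alpha * W @ f + (1 - alpha) * seeds.ravel()
    return f.reshape(intensity.shape)

img = np.array([[0.9, 0.9, 0.1],
                [0.9, 0.9, 0.1],
                [0.1, 0.1, 0.1]])
seeds = np.zeros_like(img)
seeds[0, 0] = 1                    # one positive click on the bright organ
conf = propagate_confidence(img, seeds)
```

Thresholding `conf` then yields a segmentation from the single positive click, the same user interaction SMOPL assumes.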


2021 ◽  
Author(s):  
Nicola K Dinsdale ◽  
Mark Jenkinson ◽  
Ana IL Namburete

Acquisition of high-quality manual annotations is vital for the development of segmentation algorithms. However, creating them requires a substantial amount of expert time and knowledge. Large numbers of labels are needed to train convolutional neural networks because of the vast number of parameters that must be learned during optimisation. Here, we develop the STAMP algorithm to allow simultaneous training and pruning of a UNet architecture for medical image segmentation, with targeted channelwise dropout to make the network robust to the pruning. We demonstrate the technique across segmentation tasks and imaging modalities. We then show that, through online pruning, we can train networks with much higher performance than equivalent standard UNet models while reducing their size by more than 85% in terms of parameters. This has the potential to allow networks to be trained directly on datasets where very few labels are available.
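As a rough illustration of targeted channelwise dropout (not the authors' code; the selection rule used here, lowest mean activation magnitude, is an assumption):

```python
import numpy as np

rng = np.random.default_rng(1)

def targeted_channel_dropout(feats, drop_frac=0.25, p=0.5):
    """Only the lowest-magnitude channels (the pruning candidates) are
    eligible to be dropped, so the network learns not to rely on them
    before they are permanently removed."""
    mags = np.abs(feats).mean(axis=(1, 2))           # per-channel magnitude
    n_target = int(len(mags) * drop_frac)
    targets = np.argsort(mags)[:n_target]            # weakest channels
    keep = np.ones(len(mags))
    keep[targets] = (rng.random(n_target) > p).astype(float)
    return feats * keep[:, None, None], targets

# 4 channels of a 4x4 feature map with magnitudes 0.1, 2.0, 3.0, 0.2
feats = np.stack([np.full((4, 4), c) for c in (0.1, 2.0, 3.0, 0.4)])
out, targets = targeted_channel_dropout(feats, drop_frac=0.5)
```

Online pruning would then periodically delete the weakest channels outright, shrinking the model as training proceeds.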


2021 ◽  
Vol 1 (1) ◽  
pp. 50-52
Author(s):  
Bo Dong ◽  
Wenhai Wang ◽  
Jinpeng Li

We present our solutions to the MedAI challenge for all three tasks: the polyp segmentation task, the instrument segmentation task, and the transparency task. We use the same framework for the two segmentation tasks (polyps and instruments). The key improvement over last year is new state-of-the-art vision architectures, especially transformers, which significantly outperform ConvNets on medical image segmentation tasks. Our solution consists of multiple segmentation models, each using a transformer as the backbone network. After submission, we obtained the best IoU score of 0.915 on the instrument segmentation task and 0.836 on the polyp segmentation task. We also provide our complete solutions at https://github.com/dongbo811/MedAI-2021.
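The reported scores are Intersection-over-Union, which for binary masks is computed as:

```python
import numpy as np

def iou(pred, gt):
    """Intersection-over-Union of two binary masks: overlapping area
    divided by total covered area (defined as 1.0 for two empty masks)."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

a = np.zeros((4, 4), dtype=bool); a[:2, :] = True   # predicted mask
b = np.zeros((4, 4), dtype=bool); b[1:3, :] = True  # ground truth
score = iou(a, b)                                   # 4 / 12
```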


TEM Journal ◽  
2020 ◽  
pp. 1348-1356
Author(s):  
Vo Thi Hong Tuyet ◽  
Nguyen Thanh Binh

The energy between curves of an image is useful for finding object contours, and the edge map is an important input for recognition: the shape formed by linking edges clearly presents the useful information of objects. The aim of medical image segmentation is to partition a medical image into small pieces, and in this process feature extraction must be fully adapted to the edge map. This paper proposes a solution for medical image segmentation based on a fully convolutional network with a gradient vector flow snake in the bandelet domain. Our approach decomposes the image in the bandelet domain and reconstructs object contours with a fully convolutional network combined with a gradient vector flow snake. To improve the accuracy of feature extraction, the proposed method detects the edge map in the bandelet domain using the fully convolutional network, and then reconstructs object contours using the gradient vector flow snake combined with the boundary condition. The proposed method segments small details of medical images clearly in both high-quality and low-quality cases.
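The gradient vector flow (GVF) field itself follows the standard iterative diffusion scheme of Xu and Prince; a compact NumPy sketch with periodic boundaries, not the authors' implementation:

```python
import numpy as np

def laplacian(a):
    """Discrete 4-neighbour Laplacian with wrap-around boundaries."""
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

def gradient_vector_flow(edge_map, mu=0.2, iters=200):
    """Diffuse the edge-map gradient (fx, fy) into a smooth force field
    (u, v) that still agrees with the gradient near strong edges, so a
    snake is attracted to the edge even from far away."""
    fy, fx = np.gradient(edge_map)
    u, v = fx.copy(), fy.copy()
    mag2 = fx ** 2 + fy ** 2          # anchors (u, v) to (fx, fy) at edges
    for _ in range(iters):
        u += mu * laplacian(u) - mag2 * (u - fx)
        v += mu * laplacian(v) - mag2 * (v - fy)
    return u, v

edge = np.zeros((9, 9))
edge[:, 4] = 1.0                      # a single vertical edge
u, v = gradient_vector_flow(edge)
```

After diffusion, the horizontal force `u` points towards the edge from both sides, which is what lets the snake converge from a distant initial contour.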


2019 ◽  
Vol 31 (6) ◽  
pp. 1007 ◽  
Author(s):  
Haiou Wang ◽  
Hui Liu ◽  
Qiang Guo ◽  
Kai Deng ◽  
Caiming Zhang

Electronics ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 348
Author(s):  
Choongsang Cho ◽  
Young Han Lee ◽  
Jongyoul Park ◽  
Sangkeun Lee

Semantic image segmentation has a wide range of applications. In medical image segmentation, accuracy is even more important than in other areas, because the results give information directly applicable to disease diagnosis, surgical planning, and history monitoring. The state-of-the-art models in medical image segmentation are variants of the encoder-decoder architecture known as U-Net. To effectively reflect spatial features in the feature maps of an encoder-decoder architecture, we propose a spatially adaptive weighting scheme for medical image segmentation. Specifically, a spatial feature is estimated from the feature maps, and the learned weighting parameters are obtained from the computed map, since segmentation results are predicted from the feature map through a convolutional layer. In the proposed networks, the convolutional block for extracting the feature map is replaced with widely used convolutional frameworks: VGG, ResNet, and Bottleneck ResNet structures. In addition, a bilinear up-sampling method replaces the up-convolutional layer to increase the resolution of the feature map. For performance evaluation, we used three data sets covering different medical imaging modalities. Experimental results show that the network with the proposed self-spatial adaptive weighting block based on the ResNet framework gave the highest IoU and DICE scores on the three tasks compared with the other methods. In particular, the segmentation network combining the proposed self-spatially adaptive block and the ResNet framework recorded the largest improvements, 3.01% in IoU and 2.89% in DICE, on the Nerve data set. We therefore believe that the proposed scheme can be a useful tool for image segmentation tasks based on the encoder-decoder architecture.
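The spatially adaptive weighting idea can be sketched in a few lines of NumPy; the channel-mean-plus-sigmoid gating used here is an assumption for illustration, not the paper's exact learned block:

```python
import numpy as np

def spatial_weighting(feats):
    """Estimate a per-pixel weight map from the feature maps themselves
    (channel-mean activation through a sigmoid) and use it to re-weight
    every channel, emphasising strongly active spatial locations."""
    spatial = feats.mean(axis=0)                 # H x W spatial summary
    weights = 1.0 / (1.0 + np.exp(-spatial))     # sigmoid gating map
    return feats * weights[None, :, :]           # broadcast over channels

# 3 channels of a 2x2 feature map, identical for clarity
feats = np.stack([np.array([[4.0, -4.0], [0.0, 2.0]])] * 3)
out = spatial_weighting(feats)
```

In the paper the weight map is produced by learned parameters rather than a fixed sigmoid, but the re-weighting structure is the same.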

