Weakly Supervised and Semi-Supervised Semantic Segmentation for Optic Disc of Fundus Image

Symmetry ◽  
2020 ◽  
Vol 12 (1) ◽  
pp. 145 ◽  
Author(s):  
Zheng Lu ◽  
Dali Chen

Weakly supervised and semi-supervised semantic segmentation are widely used in computer vision, since they require no ground truth, or only a small number of ground truths, for training. Recently, some works have trained models on pseudo ground truths generated by a classification network; however, this approach is not well suited to medical image segmentation. To tackle this challenging problem, we use the GrabCut method to generate pseudo ground truths, then train a network based on a modified U-Net model with the generated pseudo ground truths, and finally fine-tune the model with a small number of real ground truths. Extensive experiments on the challenging RIM-ONE and DRISHTI-GS benchmarks strongly demonstrate the effectiveness of our algorithm, and we obtain state-of-the-art results on both databases.

Author(s):  
Hong Shen

In this chapter, we will give an intuitive introduction to the general problem of 3D medical image segmentation. We will give an overview of the popular and relevant methods that may be applicable, with a discussion of their advantages and limitations. Specifically, we will discuss the issue of incorporating prior knowledge into the segmentation of anatomic structures and describe in detail the concept and issues of knowledge-based segmentation. Typical sample applications will accompany the discussions throughout this chapter. We hope this will help application developers gain insight into the understanding and application of various computer vision approaches to solving real-world problems of medical image segmentation.


2019 ◽  
Vol 9 (8) ◽  
pp. 1705-1716
Author(s):  
Shidu Dong ◽  
Zhi Liu ◽  
Huaqiu Wang ◽  
Yihao Zhang ◽  
Shaoguo Cui

To exploit three-dimensional (3D) context information and improve 3D medical image semantic segmentation, we propose a separate 3D (S3D) convolution neural network (CNN) architecture. First, a two-dimensional (2D) CNN is used to extract the 2D features of each slice in the xy-plane of 3D medical images. Second, one-dimensional (1D) features reassembled from the 2D features in the z-axis are input into a 1D-CNN and are then classified feature-wise. Analysis shows that S3D-CNN has lower time complexity, fewer parameters and less memory space requirements than other 3D-CNNs with a similar structure. As an example, we extend the deep convolutional encoder–decoder architecture (SegNet) to S3D-SegNet for brain tumor image segmentation. We also propose a method based on priority queues and the dice loss function to address the class imbalance for medical image segmentation. The experimental results show the following: (1) S3D-SegNet extended from SegNet can improve brain tumor image segmentation. (2) The proposed imbalance accommodation method can increase the speed of training convergence and reduce the negative impact of the imbalance. (3) S3D-SegNet with the proposed imbalance accommodation method offers performance comparable to that of some state-of-the-art 3D-CNNs and experts in brain tumor image segmentation.
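The Dice loss used above for class-imbalance accommodation follows directly from the Dice coefficient; a minimal NumPy sketch of the soft Dice loss (the smoothing term `eps` is an assumed detail):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - 2|P * T| / (|P| + |T|).

    pred: predicted foreground probabilities in [0, 1];
    target: binary ground-truth mask of the same shape.
    """
    pred = pred.ravel().astype(np.float64)
    target = target.ravel().astype(np.float64)
    intersection = np.sum(pred * target)
    return 1.0 - (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)
```

Because the loss is normalized by the total foreground mass rather than the pixel count, a small tumor class contributes as much to the gradient as the large background class, which is why it helps with imbalance.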


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Dominik Müller ◽  
Frank Kramer

Abstract Background The increased availability and usage of modern medical imaging has induced a strong need for automatic medical image segmentation. Still, current image segmentation platforms do not provide the functionality required for the straightforward setup of medical image segmentation pipelines. Already-implemented pipelines are commonly standalone software optimized for a specific public data set. Therefore, this paper introduces the open-source Python library MIScnn. Implementation The aim of MIScnn is to provide an intuitive API that allows fast building of medical image segmentation pipelines, including data I/O, preprocessing, data augmentation, patch-wise analysis, metrics, a library of state-of-the-art deep learning models, and model utilization such as training, prediction, and fully automatic evaluation (e.g. cross-validation). In addition, high configurability and multiple open interfaces allow full pipeline customization. Results Running a cross-validation with MIScnn on the Kidney Tumor Segmentation Challenge 2019 data set (multi-class semantic segmentation with 300 CT scans) resulted in a powerful predictor based on the standard 3D U-Net model. Conclusions With this experiment, we showed that the MIScnn framework enables researchers to rapidly set up a complete medical image segmentation pipeline using just a few lines of code. The source code for MIScnn is available in the Git repository: https://github.com/frankkramer-lab/MIScnn.


2011 ◽  
pp. 1144-1161
Author(s):  
Hong Shen




2021 ◽  
Vol 7 ◽  
pp. e607
Author(s):  
Ayat Abedalla ◽  
Malak Abdullah ◽  
Mahmoud Al-Ayyoub ◽  
Elhadj Benkhelifa

Medical imaging refers to visualization techniques that provide valuable information about the internal structures of the human body for clinical applications, diagnosis, treatment, and scientific research. Segmentation is one of the primary methods for analyzing and processing medical images; it helps doctors diagnose accurately by providing detailed information on the relevant part of the body. However, segmenting medical images manually faces several challenges: it requires trained medical experts and is time-consuming and error-prone. An automatic medical image segmentation system therefore appears necessary. Deep learning algorithms have recently shown outstanding performance on segmentation tasks, especially semantic segmentation networks that provide pixel-level image understanding. Since the introduction of the first fully convolutional network (FCN) for semantic image segmentation, several segmentation networks have been proposed on its basis. One of the state-of-the-art convolutional networks in the medical image field is U-Net. This paper presents a novel end-to-end semantic segmentation model for medical images, named Ens4B-UNet, which ensembles four U-Net architectures with pre-trained backbone networks. Ens4B-UNet builds on U-Net's success with several significant improvements: adapting powerful and robust convolutional neural networks (CNNs) as backbones for the U-Net encoders and using nearest-neighbor up-sampling in the decoders. Ens4B-UNet is designed as a weighted-average ensemble of four encoder-decoder segmentation models. The backbone networks of all ensembled models are pre-trained on the ImageNet dataset to exploit the benefit of transfer learning. To further improve our models, we apply several techniques during training and prediction, including stochastic weight averaging (SWA), data augmentation, test-time augmentation (TTA), and different types of optimal thresholds.
We evaluate and test our models on the 2019 Pneumothorax Challenge dataset, which contains 12,047 training images with 12,954 masks and 3,205 test images. Our proposed segmentation network achieves a 0.8608 mean Dice similarity coefficient (DSC) on the test set, which is among the top one-percent systems in the Kaggle competition.
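The weighted-average ensembling and thresholding described above can be sketched in NumPy; the weights and threshold here are placeholders, not the values tuned for the challenge:

```python
import numpy as np

def ensemble_predict(prob_maps, weights, threshold=0.5):
    """Weighted average of per-model probability maps, then binarize.

    prob_maps: list of HxW arrays of foreground probabilities,
    one per ensembled model; weights: one scalar weight per model.
    """
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()                               # normalize weights to sum to 1
    stacked = np.stack(prob_maps)                 # (n_models, H, W)
    avg = np.tensordot(w, stacked, axes=1)        # weighted mean over models, (H, W)
    return (avg >= threshold).astype(np.uint8)    # binary segmentation mask
```

Test-time augmentation fits the same pattern: each augmented prediction is inverse-transformed back to the original frame and enters `prob_maps` as one more map to average.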


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Qiang Zuo ◽  
Songyu Chen ◽  
Zhifang Wang

In recent years, semantic segmentation methods based on deep learning have delivered advanced performance in medical image segmentation. As one of the typical segmentation networks, U-Net has been successfully applied to multimodal medical image segmentation. This paper proposes a recurrent residual convolutional neural network with attention gate connections (R2AU-Net) based on U-Net. It enhances the capability of integrating contextual information by replacing the basic convolutional units in U-Net with recurrent residual convolutional units. Furthermore, R2AU-Net adopts attention gates in place of the original skip connections. Experiments are performed on three multimodal datasets: ISIC 2018, DRIVE, and the public dataset used in LUNA and the Kaggle Data Science Bowl 2017. Experimental results show that R2AU-Net achieves much better performance than other improved U-Net algorithms for multimodal medical image segmentation.
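The additive attention gate that replaces the plain skip connection can be sketched in NumPy; `Wx`, `Wg`, and `psi` stand in for learned 1×1-convolution weights, with shapes chosen here purely for illustration:

```python
import numpy as np

def attention_gate(x, g, Wx, Wg, psi):
    """Additive attention gate on a skip connection.

    x:   (Cx, H, W) encoder (skip) features
    g:   (Cg, H, W) decoder gating features at the same resolution
    Wx:  (F, Cx), Wg: (F, Cg), psi: (1, F) -- stand-ins for 1x1-conv weights
    Returns x scaled by an attention map alpha in (0, 1).
    """
    # Project both inputs to a shared F-channel space and add (1x1 convs).
    q = np.einsum('fc,chw->fhw', Wx, x) + np.einsum('fc,chw->fhw', Wg, g)
    q = np.maximum(q, 0.0)                                            # ReLU
    alpha = 1.0 / (1.0 + np.exp(-np.einsum('of,fhw->ohw', psi, q)))   # sigmoid
    return x * alpha  # (1, H, W) map broadcast over the skip channels
```

Intuitively, the gating signal `g` from the deeper decoder layer decides which spatial positions of the encoder features are passed through the skip connection, suppressing irrelevant background responses.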

