Exploiting Global Structure Information to Improve Medical Image Segmentation

Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3249
Author(s):  
Jaemoon Hwang ◽  
Sangheum Hwang

In this paper, we propose a method to enhance the performance of segmentation models for medical images. The method is based on convolutional neural networks that learn global structure information corresponding to anatomical structures in medical images. Specifically, the proposed method is designed to learn global boundary structures via an autoencoder and to constrain a segmentation network through a loss function. In this manner, the segmentation model performs prediction in the learned anatomical feature space. Unlike previous studies that incorporated anatomical priors by using a pre-trained autoencoder to train segmentation networks, we propose a single-stage approach in which the segmentation network and autoencoder are jointly learned. To verify the effectiveness of the proposed method, segmentation performance is evaluated in terms of both overlap and distance metrics on lung area and spinal cord segmentation tasks. The experimental results demonstrate that the proposed method can enhance not only segmentation performance but also robustness against domain shifts.
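A minimal sketch of the kind of joint objective described above, assuming a hypothetical `encode` callable standing in for the autoencoder's encoder; the soft Dice term and the weight `lam` are illustrative stand-ins, not the paper's exact loss:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # soft Dice loss between a probability map and a binary mask
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def joint_loss(pred, target, encode, lam=0.1):
    # segmentation loss plus a shape-consistency term: prediction and
    # ground truth are pushed together in the autoencoder's latent space
    z_pred = encode(pred)
    z_gt = encode(target)
    shape_term = np.mean((z_pred - z_gt) ** 2)
    return dice_loss(pred, target) + lam * shape_term
```

The latent-space penalty is what lets the segmentation network be constrained by learned anatomical structure rather than by pixel-wise agreement alone.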

2020 ◽  
Vol 34 (04) ◽  
pp. 6925-6932 ◽  
Author(s):  
Hao Zheng ◽  
Yizhe Zhang ◽  
Lin Yang ◽  
Chaoli Wang ◽  
Danny Z. Chen

Image segmentation is critical to many medical applications. While deep learning (DL) methods continue to improve performance for many medical image segmentation tasks, data annotation is a big bottleneck to DL-based segmentation because (1) DL models tend to need a large amount of labeled data to train, and (2) it is highly time-consuming and labor-intensive to label 3D medical images voxel-wise. Significantly reducing annotation effort while attaining good performance of DL segmentation models remains a major challenge. In our preliminary experiments, we observe that, using partially labeled datasets, there is indeed a large performance gap with respect to using fully annotated training datasets. In this paper, we propose a new DL framework for reducing annotation effort and bridging the gap between full annotation and sparse annotation in 3D medical image segmentation. We achieve this by (i) selecting representative slices in 3D images that minimize data redundancy and save annotation effort, and (ii) self-training with pseudo-labels automatically generated from base models trained using the selected annotated slices. Extensive experiments on two public datasets (the HVSMR 2016 Challenge dataset and the mouse piriform cortex dataset) show that our framework yields segmentation results competitive with state-of-the-art DL methods while using less than ∼20% of the annotated data.
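A minimal sketch of redundancy-minimizing slice selection in the spirit of step (i), using greedy farthest-point sampling over raw slice intensities as a simple stand-in for the paper's actual selection criterion:

```python
import numpy as np

def select_representative_slices(volume, k):
    # greedy farthest-point selection over flattened slice vectors:
    # each chosen slice is the one most different from those already
    # chosen, so near-duplicate neighboring slices are skipped
    feats = volume.reshape(volume.shape[0], -1).astype(float)
    chosen = [0]
    for _ in range(k - 1):
        d = np.min(
            [np.linalg.norm(feats - feats[c], axis=1) for c in chosen], axis=0
        )
        chosen.append(int(np.argmax(d)))
    return sorted(chosen)
```

The selected slices would then be annotated and used to train the base models that generate pseudo-labels for the remaining slices.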


Sensors ◽  
2020 ◽  
Vol 20 (10) ◽  
pp. 2962 ◽  
Author(s):  
Santiago González Izard ◽  
Ramiro Sánchez Torres ◽  
Óscar Alonso Plaza ◽  
Juan Antonio Juanes Méndez ◽  
Francisco José García-Peñalvo

The visualization of medical images with advanced techniques, such as augmented reality and virtual reality, represents a breakthrough for medical professionals. In contrast to more traditional visualization tools lacking 3D capabilities, these systems use all three available dimensions. To visualize medical images in 3D, the anatomical areas of interest must be segmented. Currently, manual segmentation, which is the most commonly used technique, and semi-automatic approaches can be time consuming because a doctor is required, making segmentation for each individual case unfeasible. Using new technologies, such as computer vision and artificial intelligence for segmentation algorithms and augmented and virtual reality for visualization, we designed a complete platform to solve this problem and allow medical professionals to work more frequently with anatomical 3D models obtained from medical imaging. As a result, the Nextmed project, through its different software applications, permits the importation of Digital Imaging and Communications in Medicine (DICOM) images on a secure cloud platform and the automatic segmentation of certain anatomical structures with new algorithms that improve upon current research results. A 3D mesh of the segmented structure is then automatically generated that can be printed in 3D or visualized using both augmented and virtual reality with the designed software systems. The Nextmed project is unique, as it covers the whole process from uploading DICOM images to automatic segmentation, 3D reconstruction, 3D visualization, and manipulation using augmented and virtual reality. There is much research on applying augmented and virtual reality to 3D medical image visualization; however, these are not automated platforms. Although other anatomical structures can be studied, we focused on one case: a lung study.
Analyzing the application of the platform to more than 1000 DICOM images and studying the results with medical specialists, we concluded that the installation of this system in hospitals would provide a considerable improvement as a tool for medical image visualization.


2019 ◽  
Vol 2019 ◽  
pp. 1-10 ◽  
Author(s):  
Lin Teng ◽  
Hang Li ◽  
Shahid Karim

Medical image segmentation is one of the hot issues in image processing. Precise segmentation of medical images is a vital guarantee for follow-up treatment. At present, however, low gray contrast and blurred tissue boundaries are common in medical images, and the segmentation accuracy of medical images cannot be effectively improved. In particular, deep learning methods need many training samples, which leads to a time-consuming training process. Therefore, we propose a novel model for medical image segmentation based on a deep multiscale convolutional neural network (CNN) in this article. First, we extract the region of interest from the raw medical images. Then, data augmentation is applied to acquire more training data. Our proposed method contains three components: encoder, U-Net, and decoder. The encoder is mainly responsible for feature extraction from 2D image slices. The U-Net cascades the features of each block of the encoder with those obtained by deconvolution in the decoder at different scales. The decoder is mainly responsible for upsampling the feature maps after feature extraction of each group. Simulation results show that the new method can boost segmentation accuracy, and it has strong robustness compared with other segmentation methods.
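The cascading of encoder features with upsampled decoder features is the standard U-Net skip connection; a minimal numpy sketch of that single step, with nearest-neighbour upsampling standing in for the paper's deconvolution:

```python
import numpy as np

def upsample2x(x):
    # nearest-neighbour 2x upsampling of a (C, H, W) feature map
    return x.repeat(2, axis=1).repeat(2, axis=2)

def skip_concat(decoder_feat, encoder_feat):
    # U-Net-style skip connection: upsample the decoder feature map and
    # concatenate it channel-wise with the encoder feature map at the
    # matching spatial scale
    up = upsample2x(decoder_feat)
    return np.concatenate([up, encoder_feat], axis=0)
```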


2013 ◽  
Vol 117 (9) ◽  
pp. 1017-1026 ◽  
Author(s):  
Pingkun Yan ◽  
Wuxia Zhang ◽  
Baris Turkbey ◽  
Peter L. Choyke ◽  
Xuelong Li

2021 ◽  
Author(s):  
Mohammadali Julazadeh

In this thesis, a novel classification approach based on the sparse representation framework is proposed. The method finds the minimum Euclidean distance between an input patch (pattern) and the atoms (templates) of a learned dictionary for different classes to perform the classification task. A mathematical approach is developed to map the sparse representation vector to Euclidean distances. We show that the highest coefficient of the sparse vector is not necessarily a suitable indicator for classification. The proposed algorithm is compared with the conventional Sparse Representation Classification (SRC) framework as well as non-sparse methods to evaluate its performance. Taking advantage of the introduced classification framework, we then propose a novel fully automated method for segmenting different organs in medical images of the human body. Our results demonstrate an acceptable accuracy rate for both the classification and segmentation frameworks. To our knowledge, no other method utilizes sparse representation and dictionary learning techniques to segment medical images.
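A minimal sketch of the general SRC idea the thesis builds on: classify a patch by which class dictionary reconstructs it with the smallest residual. Least squares is used here in place of a proper sparse solver, and the per-class dictionaries are hypothetical inputs, not the thesis's learned ones:

```python
import numpy as np

def src_classify(x, dictionaries):
    # minimum-residual classification: reconstruct the input patch with
    # each class dictionary and pick the class whose atoms fit it best
    best_label, best_res = None, np.inf
    for label, D in dictionaries.items():
        alpha, *_ = np.linalg.lstsq(D, x, rcond=None)
        res = np.linalg.norm(x - D @ alpha)
        if res < best_res:
            best_label, best_res = label, res
    return best_label
```

The thesis's point is that this residual/distance view, rather than simply reading off the largest sparse coefficient, is the reliable basis for the decision.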


2020 ◽  
Vol 10 (18) ◽  
pp. 6439
Author(s):  
Chen Li ◽  
Wei Chen ◽  
Yusong Tan

Organ lesions have a high mortality rate and pose a serious threat to people’s lives. Segmenting organs accurately helps doctors make diagnoses, so there is demand for advanced segmentation models for medical images. However, most segmentation models are directly migrated from natural image segmentation models and usually ignore the importance of the boundary. To address this difficulty, in this paper we provide a unique perspective on rendering to explore accurate medical image segmentation. We adapt a subdivision-based point-sampling method to obtain high-quality boundaries. In addition, we integrate the attention mechanism and a nested U-Net architecture into the proposed network, Render U-Net. Render U-Net was evaluated on three public datasets: LiTS, CHAOS, and DSB. This model obtained the best performance on five medical image segmentation tasks.
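A minimal sketch of boundary-focused point sampling in the spirit of rendering-based refinement: pick the pixels whose predicted probabilities are closest to 0.5, i.e. where the boundary is least certain. This is an illustrative uncertainty criterion, not Render U-Net's exact subdivision scheme:

```python
import numpy as np

def sample_uncertain_points(prob_map, n):
    # score each pixel by how close its probability is to 0.5 and
    # return the (row, col) coordinates of the n most uncertain pixels
    uncertainty = -np.abs(prob_map.reshape(-1) - 0.5)
    idx = np.argsort(uncertainty)[-n:]
    return np.stack(np.unravel_index(idx, prob_map.shape), axis=1)
```

These sampled points are where a rendering-style refinement head would spend its extra prediction capacity instead of re-predicting confident interior pixels.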


Author(s):  
S. DivyaMeena ◽  
M. Mangaleswaran

Medical images have had a great impact on medicine, diagnosis, and treatment. The most important part of image processing is image segmentation. Medical image segmentation is the automatic or semi-automatic detection of boundaries within a 2D or 3D image. In the medical field, image segmentation is one of the vital steps in image identification and object recognition. Image segmentation is a method in which an image is partitioned into smaller, meaningful regions. If the input MRI image is segmented, identifying the tumor-affected region becomes easier for physicians. In recent years, many algorithms have been proposed for image segmentation. In this paper, an analysis is made of various segmentation algorithms for medical images. Furthermore, a comparison of existing segmentation algorithms is also discussed, along with the performance measure of each.


2021 ◽  
Vol 1 (2) ◽  
pp. 71-80
Author(s):  
Revella E. A. Armya ◽  
Adnan Mohsin Abdulazeez

Medical image segmentation plays an essential role in computer-aided diagnostic systems in various applications. Researchers are therefore attracted to applying new algorithms to medical image processing, given the massive investment in developing medical imaging methods such as dermatoscopy, X-rays, microscopy, ultrasound, computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging (MRI). Segmentation of medical images is considered one of the most important medical imaging processes because it extracts the region of interest (ROI) through an automatic or semi-automatic process. The medical image is divided into regions based on specific descriptions, such as tissue/organ division in medical applications for border detection, tumor detection/segmentation, and comprehensive and accurate detection. Several segmentation methods have been proposed in the literature, but their efficacy is difficult to compare. To better address this issue, a variety of measurement standards have been suggested to assess the quality of the segmentation outcome. Unsupervised evaluation criteria rely on statistics computed from the original image. The key aim of this paper is to review some literature on unsupervised algorithms (K-means, K-medoids) and to compare the working efficiency of unsupervised algorithms on different types of medical images.
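A minimal sketch of the kind of unsupervised intensity clustering the paper surveys: plain k-means on pixel values, with centres initialized at evenly spaced quantiles for determinism. This is a generic illustration, not the paper's specific implementation:

```python
import numpy as np

def kmeans_segment(image, k=2, iters=20):
    # k-means on pixel intensities: each pixel takes the label of its
    # nearest cluster centre; centres are re-estimated as cluster means
    pix = image.reshape(-1).astype(float)
    centres = np.quantile(pix, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(pix[:, None] - centres[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = pix[labels == j].mean()
    return labels.reshape(image.shape)
```

K-medoids differs only in that each centre must be an actual pixel value (a medoid) rather than a mean, which makes it more robust to outlier intensities.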


Medical image segmentation partitions an input image into multiple fractions for deeper analysis and insight. Localization of objects and detection of boundaries are the core theme of using segmentation for medical images; it elucidates the process of finding the anatomic structures in medical images. In this paper, we put forth a technique that combines Fuzzy C-Means (FCM) clustering and Artificial Bee Colony (ABC) optimization to segment MRA brain images. ABC has been used by many researchers, as it is a population-based stochastic approach with good search-space exploration abilities for various optimization problems. The unsupervised FCM clustering has produced promising outcomes in medical image processing. FCM is mostly preferable for segmenting the soft tissues in the brain, and it provides better output than some competitive clustering techniques such as KM, EM, and KNN. The output of the suggested technique is verified using real MRA brain images. Statistical parameters show that our method is notably better than other algorithms.
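The soft assignment that distinguishes FCM from hard k-means is its membership update; a minimal sketch of the standard formula for 1-D intensities, with fuzzifier m = 2 (a generic illustration, not this paper's ABC-tuned variant):

```python
import numpy as np

def fcm_memberships(pixels, centres, m=2.0, eps=1e-9):
    # fuzzy C-means membership update: u[i, k] = 1 / sum_j
    # (d_ik / d_ij)^(2/(m-1)), so every pixel gets a soft membership
    # to every centre, inversely related to its relative distance
    d = np.abs(pixels[:, None] - centres[None, :]) + eps
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)
```

These graded memberships are what make FCM well suited to soft tissue, where a voxel can plausibly belong to more than one class; the ABC step in the paper optimizes the cluster centres this update depends on.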


2021 ◽  
Vol 1 (1) ◽  
pp. 50-52
Author(s):  
Bo Dong ◽  
Wenhai Wang ◽  
Jinpeng Li

We present our solutions to MedAI for all three tasks: the polyp segmentation task, the instrument segmentation task, and the transparency task. We use the same framework to process the two segmentation tasks (polyps and instruments). The key improvement over last year is new state-of-the-art vision architectures, especially transformers, which significantly outperform ConvNets on medical image segmentation tasks. Our solution consists of multiple segmentation models, each using a transformer as the backbone network. After submission, we obtain a best IoU score of 0.915 on the instrument segmentation task and 0.836 on the polyp segmentation task. Meanwhile, we provide our complete solutions at https://github.com/dongbo811/MedAI-2021.

