Low-Grade Glioma Segmentation Based on CNN with Fully Connected CRF

2017 ◽  
Vol 2017 ◽  
pp. 1-12 ◽  
Author(s):  
Zeju Li ◽  
Yuanyuan Wang ◽  
Jinhua Yu ◽  
Zhifeng Shi ◽  
Yi Guo ◽  
...  

This work proposed a novel automatic three-dimensional (3D) magnetic resonance imaging (MRI) segmentation method intended for wide use in the clinical diagnosis of the most common and aggressive brain tumor, glioma. The method combined a multipathway convolutional neural network (CNN) with a fully connected conditional random field (CRF). First, 3D information was introduced into the CNN, enabling more accurate recognition of gliomas with low contrast. Then, the fully connected CRF was added as a postprocessing step to achieve more delicate delineation of the glioma boundary. The method was applied to T2-FLAIR MRI images of 160 low-grade glioma patients. With 59 cases used for training and manual segmentation as the ground truth, the Dice similarity coefficient (DSC) of our method was 0.85 on the test set of 101 MRI images. These results surpassed those of another state-of-the-art CNN method, which achieved a DSC of 0.76 on the same dataset, demonstrating that our method produces better segmentations of low-grade gliomas.
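The Dice similarity coefficient (DSC) used to score these segmentations can be illustrated with a minimal NumPy sketch (not the authors' code; the masks below are toy examples):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total

# Two toy 4x4 masks that agree on 2 of 3 labeled voxels each
pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[1, 1, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(dice_coefficient(pred, truth))  # 2*2/(3+3) = 0.666...
```

A DSC of 1.0 means perfect overlap with the ground truth; 0.85 versus 0.76 is therefore a substantial gap on the same test set.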

2020 ◽  
Vol 22 (Supplement_3) ◽  
pp. iii356-iii356
Author(s):  
Fatema Malbari ◽  
Murali Chintagumpala ◽  
Jack Su ◽  
Mehmet Okcu ◽  
Frank Lin ◽  
...  

Abstract BACKGROUND Patients with chiasmatic-hypothalamic low-grade glioma (CHLGG) have frequent MRIs with gadolinium-based contrast agents (GBCA) for disease monitoring. Cumulative gadolinium deposition in children is a potential concern. The purpose of this research is to establish whether MRI with GBCA is necessary for determining tumor progression in children with CHLGG. METHODS Children with progressive CHLGG were identified at Texas Children’s Cancer Center between 2005 and 2019. Pre- and post-contrast MRI sequences were reviewed separately by one neuroradiologist who was blinded to the clinical course. Three-dimensional measurements and tumor characteristics were collected. Radiographic progression was defined as a 25% increase in size (product of the two largest dimensions) compared to baseline or best response after initiation of therapy. RESULTS A total of 28 patients with progressive CHLGG, comprising 683 MRIs with GBCA (mean 24 MRIs/patient; range 10–43), were reviewed. No patients had a diagnosis of NF1. Progression was observed 92 times: 91 (98.9%) on noncontrast and 90 (97.8%) on contrast imaging. Sixty-seven radiographic and/or clinical progressions necessitating management changes were identified in all (100%) noncontrast sequences and 66 (98.5%) contrast sequences. Tumor growth >2 mm in any dimension was identified in 184/187 (98.4%) on noncontrast and 181/187 (96.8%) on contrast imaging. Non-primary metastatic disease was seen in seven patients (25%) and was better visualized on contrast imaging in 4 (57%). CONCLUSION MRI without GBCA effectively identifies patients with progressive disease. Eliminating contrast from routine imaging of children with CHLGG should be considered, with GBCA reserved for monitoring those with metastatic disease.
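The progression criterion described above, a 25% increase in the product of the two largest dimensions, can be sketched as a small, hypothetical helper (the function name and measurements are illustrative, not from the study):

```python
def progressed(dims_current, dims_reference, threshold=0.25):
    """Radiographic progression: the product of the two largest tumor
    dimensions grows by at least `threshold` relative to the reference
    (baseline or best response)."""
    def area(dims):
        a, b = sorted(dims, reverse=True)[:2]  # two largest dimensions
        return a * b
    return area(dims_current) >= (1 + threshold) * area(dims_reference)

# Tumor measured 30 x 25 x 20 mm at baseline, 35 x 30 x 22 mm at follow-up:
# 35*30 = 1050 vs 1.25 * (30*25) = 937.5, so this counts as progression.
print(progressed((35, 30, 22), (30, 25, 20)))
```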


2018 ◽  
Author(s):  
Omer Faruk Gulban ◽  
Marian Schneider ◽  
Ingo Marquardt ◽  
Roy A.M. Haast ◽  
Federico De Martino

Abstract High-resolution (functional) magnetic resonance imaging (MRI) at ultra-high magnetic fields (7 Tesla and above) enables researchers to study how anatomical and functional properties change within the cortical ribbon, along surfaces and across cortical depths. These studies require an accurate delineation of the gray matter ribbon, which often suffers from inclusion of blood vessels, dura mater and other non-brain tissue. Residual segmentation errors are commonly corrected by browsing the data slice-by-slice and manually changing labels. This task becomes increasingly laborious and prone to error at higher resolutions since both work and error scale with the number of voxels. Here we show that many mislabeled, non-brain voxels can be corrected more efficiently and semi-automatically by representing three-dimensional anatomical images using two-dimensional histograms. We propose both a uni-modal (based on first spatial derivative) and multi-modal (based on compositional data analysis) approach to this representation and quantify the benefits in 7 Tesla MRI data of nine volunteers. We present an openly accessible Python implementation of these approaches and demonstrate that editing cortical segmentations using two-dimensional histogram representations as an additional post-processing step aids existing algorithms and yields improved gray matter borders. By making our data and corresponding expert (ground truth) segmentations openly available, we facilitate future efforts to develop and test segmentation algorithms on this challenging type of data.
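The two-dimensional histogram idea, pairing each voxel's intensity with its first spatial derivative (gradient magnitude), can be sketched with NumPy on synthetic data (a minimal illustration, not the authors' open-source implementation; the bin count and thresholds are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
volume = rng.normal(100, 20, size=(32, 32, 32))  # synthetic anatomical image

# First spatial derivative: gradient magnitude per voxel
gx, gy, gz = np.gradient(volume)
gradient_magnitude = np.sqrt(gx**2 + gy**2 + gz**2)

# 2D histogram: intensity on one axis, gradient magnitude on the other
counts, int_edges, grad_edges = np.histogram2d(
    volume.ravel(), gradient_magnitude.ravel(), bins=64)

# A region selected in histogram space maps back to a 3D voxel mask,
# which is what makes semi-automatic label editing possible.
int_lo, int_hi, grad_hi = 80.0, 120.0, 30.0  # illustrative bounds
mask = ((volume >= int_lo) & (volume <= int_hi)
        & (gradient_magnitude <= grad_hi))
print(mask.sum(), "voxels selected")
```

Editing labels in this 2D space touches many voxels at once, which is why it scales better than slice-by-slice correction.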


2019 ◽  
Vol 26 (10) ◽  
pp. 1217-1226 ◽  
Author(s):  
Refaat E Gabr ◽  
Ivan Coronado ◽  
Melvin Robinson ◽  
Sheeba J Sujit ◽  
Sushmita Datta ◽  
...  

Objective: To investigate the performance of deep learning (DL) based on a fully convolutional neural network (FCNN) in segmenting brain tissues in a large cohort of multiple sclerosis (MS) patients. Methods: We developed an FCNN model to segment brain tissues, including T2-hyperintense MS lesions. The training, validation, and testing of the FCNN were based on ~1000 magnetic resonance imaging (MRI) datasets acquired from relapsing–remitting MS patients as part of a phase 3 randomized clinical trial. Multimodal MRI data (dual-echo, FLAIR, and T1-weighted images) served as input to the network. Expert-validated segmentation was used as the target for training the FCNN. We cross-validated our results using the leave-one-center-out approach. Results: We observed a high average (95% confidence limits) Dice similarity coefficient for all the segmented tissues: 0.95 (0.92–0.98) for white matter, 0.96 (0.93–0.98) for gray matter, 0.99 (0.98–0.99) for cerebrospinal fluid, and 0.82 (0.63–1.0) for T2 lesions. High correlations between the DL-segmented tissue volumes and ground truth were observed (R2 > 0.92 for all tissues). The cross-validation showed consistent results across the centers for all tissues. Conclusion: The results from this large-scale study suggest that deep FCNN can automatically segment MS brain tissues, including lesions, with high accuracy.
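Leave-one-center-out cross-validation, as used here, simply holds out all scans from one center per fold, so each fold tests generalization to an unseen site. A minimal, hypothetical sketch (center IDs and scan entries are toy placeholders):

```python
from collections import defaultdict

def leave_one_center_out(datasets):
    """Yield (held_out_center, train, test) splits where every center
    is held out in turn. `datasets` is a list of (center_id, scan) pairs."""
    by_center = defaultdict(list)
    for center, scan in datasets:
        by_center[center].append(scan)
    for held_out in by_center:
        test = by_center[held_out]
        train = [s for c, scans in by_center.items() if c != held_out
                 for s in scans]
        yield held_out, train, test

# Toy multi-center dataset: 5 scans from 3 centers
data = [("A", 1), ("A", 2), ("B", 3), ("C", 4), ("C", 5)]
for center, train, test in leave_one_center_out(data):
    print(center, "train:", len(train), "test:", len(test))
```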


Author(s):  
Inés Barbero Garcia ◽  
José Luis Lerma ◽  
Ángel Marqués Mateu ◽  
Pablo Miranda

Cranial deformation affects a large number of infants. The methodologies commonly employed to measure the deformation include, among others, calliper measurements and visual assessment for mild cases, and radiological imaging for severe cases where surgical intervention is considered. Visual assessment and calliper measurements usually lack the level of accuracy required to evaluate the deformation. Radiological imaging, including Computed Tomography (CT) and Magnetic Resonance Imaging (MRI), is costly and highly invasive. The use of smartphones to record videos for three-dimensional (3D) modelling of the head has emerged as a low-cost, non-invasive methodology to extract 3D information about the patient. To analyse the deformation, a novel technique is employed: the obtained model is compared with an ideal head. In this study we have tested the repeatability of the process. For this purpose, several models of two patients were obtained and the differences between them evaluated. The results show that the differences in the ellipsoid semiaxes for the same patient are usually below 4 mm, although they increase up to 6.4 mm in some cases. The variability in the distances to the ideal head, which are the values used to evaluate deformity, reaches a maximum of 2.7 mm. The errors obtained are comparable to those of classical measurement techniques and show the potential of the methodology in development.
http://dx.doi.org/10.4995/CIGeo2017.2017.6604
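The distance-to-ideal-head comparison can be illustrated under simplifying assumptions: a centered, axis-aligned ideal ellipsoid, with each head-surface point projected radially onto it. This is a hedged sketch, not the authors' method, and the semiaxes are invented:

```python
import numpy as np

def distances_to_ideal_ellipsoid(points, semiaxes):
    """Approximate signed distance from surface points to an ideal,
    centered, axis-aligned ellipsoid: scale each point onto the
    ellipsoid along its own radial direction and measure the offset
    (positive = outside the ideal head)."""
    a, b, c = semiaxes
    scaled = points / np.array([a, b, c])
    radial = np.linalg.norm(scaled, axis=1)  # equals 1.0 on the ellipsoid
    r = np.linalg.norm(points, axis=1)
    # a point at radius r intersects the ellipsoid at radius r / radial
    return r - r / radial

# Points on/near an invented 90 x 70 x 80 mm ideal head
pts = np.array([[92.0, 0.0, 0.0],   # 2 mm outside along the long axis
                [0.0, 70.0, 0.0]])  # exactly on the ellipsoid
print(distances_to_ideal_ellipsoid(pts, (90.0, 70.0, 80.0)))
```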


2021 ◽  
Author(s):  
Akila Gurunathan ◽  
Batri Krishnan

Abstract Early identification and diagnosis of brain tumors using a supervised approach play an essential role in the field of medicine. In this paper, an automated computer-aided method using a deep learning architecture named CNN Deep Net is proposed for the detection, classification, and diagnosis of meningioma brain tumors. The proposed method covers preprocessing, classification, and segmentation of this most commonly occurring primary brain tumor in adults. The proposed CNN Deep Net architecture extracts features internally from the enhanced image and classifies images as normal or abnormal (tumor). Segmentation of the tumor region is performed by global thresholding combined with an area-based morphological function. This fully automated classification and segmentation method preserves spatial invariance and inheritance. Furthermore, based on the extracted feature attributes, the proposed CNN Deep Net classifier grades the detected tumor image as either benign (low grade) or malignant (high grade). The proposed CNN Deep Net classification methodology with its grading system is evaluated both quantitatively and qualitatively. Quantitative measures such as sensitivity, specificity, accuracy, Dice similarity coefficient, precision, and F-score indicate a segmentation accuracy of 99.4% and a classification rate of 99.5% with respect to ground-truth images.
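The segmentation step, global thresholding followed by an area-based morphological filter, can be sketched with SciPy (a minimal illustration of the general technique; the threshold and minimum area are arbitrary, not the paper's parameters):

```python
import numpy as np
from scipy import ndimage

def segment_tumor(image, threshold, min_area):
    """Global thresholding followed by removal of connected components
    smaller than `min_area` voxels (a simple area-opening)."""
    binary = image > threshold
    labels, n = ndimage.label(binary)
    if n == 0:
        return binary
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    keep = np.zeros(n + 1, dtype=bool)
    keep[1:] = sizes >= min_area  # drop small components (noise)
    return keep[labels]

img = np.zeros((10, 10))
img[2:6, 2:6] = 200.0   # tumor-like blob, area 16
img[8, 8] = 200.0       # speckle noise, area 1
mask = segment_tumor(img, threshold=100.0, min_area=5)
print(mask.sum())  # 16: the speckle is removed, the blob survives
```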


2020 ◽  
Vol 23 (1) ◽  
Author(s):  
Alejandra Márquez Herrera ◽  
Alex J. Cuadros-Vargas ◽  
Helio Pedrini

A neural network is a mathematical model that is able to perform a task automatically or semi-automatically after learning from the human knowledge provided to it. A Convolutional Neural Network (CNN) is a type of neural network that has been shown to efficiently learn tasks related to image analysis, such as image segmentation, whose main purpose is to find regions or separable objects within an image. A more specific type of segmentation, called semantic segmentation, guarantees that each region has a semantic meaning by giving it a label or class. Since CNNs can automate the task of semantic image segmentation, they have been very useful in the medical field, where they are applied to the segmentation of organs or abnormalities (tumors). This work aims to improve binary semantic segmentation of volumetric medical images acquired by Magnetic Resonance Imaging (MRI) using a pre-existing Three-Dimensional Convolutional Neural Network (3D CNN) architecture. We propose a loss function for training this 3D CNN that improves pixel-wise segmentation results. The loss function is formulated by adapting a similarity coefficient, used for measuring the spatial overlap between the prediction and the ground truth, and using it to train the network. As a contribution, the developed approach achieved good performance in a context where the pixel classes are imbalanced. We show how the choice of loss function for training can affect the final quality of the segmentation. We validate our proposal on two medical image semantic segmentation datasets and compare the performance of the proposed loss function with other pre-existing loss functions used for binary semantic segmentation.
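A loss built by adapting an overlap coefficient, as described here, commonly takes the shape of a soft Dice loss, which stays informative even when foreground pixels are rare. A minimal NumPy sketch of that general idea (not the paper's exact formulation):

```python
import numpy as np

def soft_dice_loss(pred, truth, eps=1e-6):
    """Soft Dice loss: 1 minus a differentiable Dice overlap between a
    probability map `pred` and a binary ground truth `truth`. The `eps`
    term keeps the ratio defined when both masks are empty."""
    intersection = np.sum(pred * truth)
    denom = np.sum(pred) + np.sum(truth)
    dice = (2.0 * intersection + eps) / (denom + eps)
    return 1.0 - dice

truth = np.array([[1.0, 1.0], [0.0, 0.0]])
perfect = truth.copy()                            # exact prediction
uncertain = np.array([[0.8, 0.6], [0.1, 0.1]])    # soft prediction
print(soft_dice_loss(perfect, truth))    # ~0.0
print(soft_dice_loss(uncertain, truth))  # larger loss
```

Because the loss is a ratio of overlap to total mass, a vast, correctly predicted background contributes nothing, which is exactly what helps with imbalanced pixel classes.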


Electronics ◽  
2022 ◽  
Vol 11 (1) ◽  
pp. 130
Author(s):  
Shuangcai Yin ◽  
Hongmin Deng ◽  
Zelin Xu ◽  
Qilin Zhu ◽  
Junfeng Cheng

Due to the outbreak of lung infections caused by the coronavirus disease (COVID-19), humans have to face an unprecedented and devastating global health crisis. Since chest computed tomography (CT) images of COVID-19 patients contain abundant pathological features closely related to this disease, rapid detection and diagnosis based on CT images is of great significance for the treatment of patients and blocking the spread of the disease. In particular, the segmentation of the COVID-19 CT lung-infected area can quantify and evaluate the severity of the disease. However, due to the blurred boundaries and low contrast between the infected and the non-infected areas in COVID-19 CT images, the manual segmentation of the COVID-19 lesion is laborious and places high demands on the operator. Quick and accurate segmentation of COVID-19 lesions from CT images based on deep learning has drawn increasing attention. To effectively improve the segmentation effect of COVID-19 lung infection, a modified UNet network that combines the squeeze-and-attention (SA) and dense atrous spatial pyramid pooling (Dense ASPP) modules (SD-UNet) is proposed, fusing global context and multi-scale information. Specifically, the SA module is introduced to strengthen the attention of pixel grouping and fully exploit the global context information, allowing the network to better mine the differences and connections between pixels. The Dense ASPP module is utilized to capture multi-scale information of COVID-19 lesions. Moreover, to eliminate the interference of background noise outside the lungs and highlight the texture features of the lung lesion area, we extract in advance the lung area from the CT images in the pre-processing stage. Finally, we evaluate our method using the binary-class and multi-class COVID-19 lung infection segmentation datasets.
The experimental results show that the metrics of Sensitivity, Dice Similarity Coefficient, Accuracy, Specificity, and Jaccard Similarity are 0.8988 (0.6169), 0.8696 (0.5936), 0.9906 (0.9821), 0.9932 (0.9907), and 0.7702 (0.4788), respectively, for the binary-class (multi-class) segmentation task with the proposed SD-UNet. The COVID-19 lung infection area segmented by SD-UNet is closer to the ground truth than that of several existing models, such as CE-Net, DeepLab v3+, and UNet++, which further proves that a more accurate segmentation can be achieved by our method. It has the potential to assist doctors in making more accurate and rapid diagnoses and quantitative assessments of COVID-19.
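All of the reported metrics derive from the voxel-wise confusion matrix of the predicted mask against the ground truth. A minimal NumPy sketch of those definitions (toy masks, not the paper's data):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Voxel-wise metrics computed from the confusion matrix
    of two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)    # true positives
    tn = np.sum(~pred & ~truth)  # true negatives
    fp = np.sum(pred & ~truth)   # false positives
    fn = np.sum(~pred & truth)   # false negatives
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "dice": 2 * tp / (2 * tp + fp + fn),
        "jaccard": tp / (tp + fp + fn),
    }

# Toy 6-voxel example: 1 TP, 1 FP, 1 FN, 3 TN
pred = np.array([1, 1, 0, 0, 0, 0])
truth = np.array([1, 0, 1, 0, 0, 0])
print(segmentation_metrics(pred, truth))
```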


2012 ◽  
Vol 154 (7) ◽  
pp. 1255-1262 ◽  
Author(s):  
Andrej Šteňo ◽  
Martin Karlík ◽  
Peter Mendel ◽  
Miroslav Čík ◽  
Juraj Šteňo

Author(s):  
Robert W. Mackin

This paper presents two advances towards the automated three-dimensional (3-D) analysis of thick and heavily-overlapped regions in cytological preparations such as cervical/vaginal smears. First, a high-speed 3-D brightfield microscope has been developed, allowing the acquisition of image data at speeds approaching 30 optical slices per second. Second, algorithms have been developed to detect and segment nuclei in spite of the extremely high image variability and low contrast typical of such regions. The analysis of such regions is inherently a 3-D problem that cannot be solved reliably with conventional 2-D imaging and image analysis methods. High-speed 3-D imaging of the specimen is accomplished by moving the specimen axially relative to the objective lens of a standard microscope (Zeiss) at a speed of 30 steps per second, with the step size adjustable from 0.2–5 μm. The specimen is mounted on a computer-controlled, piezoelectric microstage (Burleigh PZS-100, 68 μm displacement). At each step, an optical slice is acquired using a CCD camera (SONY XC-11/71 IP, Dalsa CA-D1-0256, and CA-D2-0512 have been used) connected to a 4-node array processor system based on the Intel i860 chip.

