Development of a Convolutional Neural Network Based Skull Segmentation in MRI Using Standard Tesselation Language Models

2021 ◽  
Vol 11 (4) ◽  
pp. 310
Author(s):  
Rodrigo Dalvit Carvalho da Silva ◽  
Thomas Richard Jenkyn ◽  
Victor Alexander Carranza

Segmentation is crucial in medical imaging analysis, helping to extract regions of interest (ROI) from different imaging modalities. The aim of this study is to develop and train a 3D convolutional neural network (CNN) for skull segmentation in magnetic resonance imaging (MRI). Fifty-eight gold-standard volumetric labels were created from computed tomography (CT) scans as standard tessellation language (STL) models. These STL models were converted into matrices and overlaid on the 58 corresponding MR images to create the MRI gold-standard labels. The CNN was trained on these 58 MR images, achieving a mean ± standard deviation (SD) Dice similarity coefficient (DSC) of 0.7300 ± 0.04. A further investigation was carried out in which the brain region was removed from the image with the help of a 3D CNN and manual corrections, using only MR images. This new dataset, without the brain, was presented to the previous CNN, which reached a new mean ± SD DSC of 0.7826 ± 0.03. This paper provides a framework for segmenting the skull using a CNN and STL models, as the 3D CNN was able to segment the skull with reasonable precision.
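The DSC reported above measures volumetric overlap between the predicted and gold-standard skull masks. A minimal sketch of how it is computed on binary voxel labels (illustrative only, not the authors' code):

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary voxel masks.

    pred, truth: flat sequences of 0/1 labels of equal length.
    DSC = 2 * |A ∩ B| / (|A| + |B|); 1.0 means perfect overlap.
    """
    intersection = sum(p * t for p, t in zip(pred, truth))
    size_sum = sum(pred) + sum(truth)
    # Convention: two empty masks are treated as a perfect match.
    return 2.0 * intersection / size_sum if size_sum else 1.0

# Toy example: 3 of the foreground voxels overlap.
pred  = [1, 1, 1, 0, 1, 0]
truth = [1, 1, 1, 1, 0, 0]
print(dice_coefficient(pred, truth))  # 0.75
```

In practice the masks would be flattened 3D volumes; the formula is identical.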

2019 ◽  
Author(s):  
Carolina L. S. Cipriano ◽  
Giovanni L. F. Da Silva ◽  
Jonnison L. Ferreira ◽  
Aristófanes C. Silva ◽  
Anselmo Cardoso De Paiva

Gliomas are among the most severe and common brain tumors. Manual classification of lesions of this type is a laborious task in the clinical routine. Therefore, this work proposes an automatic method to classify brain lesions in 3D MR images based on superpixels, the PSO algorithm, and a convolutional neural network. The proposed method obtained, for the complete, central, and active regions, accuracies of 87.88%, 70.51%, and 80.08% and precisions of 76%, 84%, and 75%, respectively. The results demonstrate the difficulty the network has in classifying the regions found in the lesions.
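The per-region accuracy and precision figures above are standard confusion-matrix metrics. A small sketch with hypothetical counts (the counts are not from the paper):

```python
def accuracy(tp, tn, fp, fn):
    """Fraction of all samples classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

def precision(tp, fp):
    """Fraction of positive predictions that are truly positive."""
    return tp / (tp + fp)

# Hypothetical counts for one tumor region out of 100 samples:
print(accuracy(tp=70, tn=18, fp=8, fn=4))  # 0.88
print(precision(tp=70, fp=8))              # 70/78
```

Accuracy and precision can diverge sharply on imbalanced regions, which is why the paper reports both.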


2019 ◽  
Vol 18 (2) ◽  
Author(s):  
Ida Bagus Leo Mahadya Suta ◽  
Rukmi Sari Hartati ◽  
Yoga Divayana

Brain tumors are among the deadliest diseases; one of the most commonly found types is glioma, with roughly 6 in 100,000 patients suffering from it. Digital imaging via Magnetic Resonance Imaging (MRI) is one method to help physicians analyze and classify brain tumor types. However, manual classification takes a long time and carries a high risk of error, so an automatic and accurate way of classifying MRI images is needed. The Convolutional Neural Network (CNN) is one solution for automatic classification of MRI images. CNN is a deep learning algorithm with the ability to learn on its own from previous cases. The research conducted shows that CNNs are capable of classifying brain tumors with high accuracy. Accuracy improvements were obtained by developing the CNN algorithm, for example by tuning the kernel values and/or the activation function.


2019 ◽  
Vol 9 (22) ◽  
pp. 4874 ◽  
Author(s):  
Xiaofeng Du ◽  
Yifan He

Super-resolution (SR) technology is essential for improving image quality in magnetic resonance imaging (MRI). The main challenge of MRI SR is to reconstruct the high-frequency details of a high-resolution (HR) image from a low-resolution (LR) input. To address this challenge, we develop a gradient-guided convolutional neural network that improves the reconstruction accuracy of high-frequency image details from the LR image. A gradient prior is fully exploited to supply information about high-frequency details during the super-resolution process, leading to a more accurately reconstructed image. Experimental results for image super-resolution on public MRI databases demonstrate that the gradient-guided convolutional neural network outperforms published state-of-the-art approaches.
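A gradient prior of the kind described above is built from local intensity differences, which concentrate at edges and fine structures. A minimal finite-difference sketch of a gradient-magnitude map (illustrative; the paper's actual gradient operator may differ):

```python
def gradient_magnitude(img):
    """Central-difference gradient magnitude of a 2D image.

    img: list of rows of numeric intensities. Border pixels are
    left at 0 for simplicity in this sketch.
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0  # horizontal slope
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0  # vertical slope
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical edge produces nonzero gradient only along the edge.
img = [[0, 0, 1, 1] for _ in range(4)]
print(gradient_magnitude(img)[1][1])  # 0.5
```

Feeding such a map alongside the LR image gives the network an explicit cue about where high-frequency detail must be restored.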


2021 ◽  
Vol 20 ◽  
pp. 153303382110464
Author(s):  
Jiankui Yuan ◽  
Elisha Fredman ◽  
Jian-Yue Jin ◽  
Serah Choi ◽  
David Mansur ◽  
...  

The aim of this work is to study the dosimetric effect of synthetic computed tomography (sCT) generated from magnetic resonance (MR) images using a deep learning algorithm for Gamma Knife (GK) stereotactic radiosurgery (SRS). The Monte Carlo (MC) method is used for dose calculations. Thirty patients were retrospectively selected with approval from our institution's IRB. All patients were treated with GK SRS based on T1-weighted MR images and also underwent conventional external beam treatment with a CT scan. Image datasets were preprocessed with registration and normalized to obtain similar intensity for the pairs of MR and CT images. A deep convolutional neural network arranged in an encoder-decoder fashion was used to learn the direct mapping from MR to the corresponding CT. A number of metrics, including the voxel-wise mean error (ME) and mean absolute error (MAE), were used to evaluate the difference between the generated sCT and the true CT. To study the dosimetric accuracy, MC simulations were performed based on the true CT and the sCT using the same treatment parameters. The method produced an MAE of 86.6 ± 34.1 Hounsfield units (HU) and a mean squared error (MSE) of 160.9 ± 32.8. The mean Dice similarity coefficient was 0.82 ± 0.05 for HU > 200. The difference in the dose-volume parameter D95 between the dose computed on the true CT and the dose calculated with the sCT was 1.1% when a synthetic CT-to-density table was used, and 4.9% when compared with calculations based on the water-brain phantom.
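The voxel-wise ME and MAE quoted above compare HU values between the sCT and the true CT. A minimal sketch of both metrics on flattened voxel arrays (illustrative values, not the paper's data):

```python
def voxel_errors(sct, ct):
    """Voxel-wise mean error (ME) and mean absolute error (MAE) in HU.

    sct, ct: flat sequences of HU values of equal length.
    ME keeps the sign (systematic over/underestimation);
    MAE measures overall magnitude of the disagreement.
    """
    diffs = [s - c for s, c in zip(sct, ct)]
    me = sum(diffs) / len(diffs)
    mae = sum(abs(d) for d in diffs) / len(diffs)
    return me, mae

# Toy HU values: errors of +10 and -10 cancel in ME but not in MAE.
me, mae = voxel_errors([10.0, 20.0], [0.0, 30.0])
print(me, mae)  # 0.0 10.0
```

The contrast between ME and MAE is the point: a near-zero ME with a large MAE indicates noisy but unbiased HU prediction.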


2018 ◽  
Vol 2018 ◽  
pp. 1-7 ◽  
Author(s):  
Qiaoliang Li ◽  
Yuzhen Xu ◽  
Zhewei Chen ◽  
Dexiang Liu ◽  
Shi-Ting Feng ◽  
...  

Objectives. To evaluate the application of a deep learning architecture, based on the convolutional neural network (CNN) technique, for automatic tumor segmentation in magnetic resonance imaging (MRI) of nasopharyngeal carcinoma (NPC). Materials and Methods. In this prospective study, 87 MRI scans containing tumor regions were acquired from newly diagnosed NPC patients. These 87 scans were augmented to >60,000 images. The proposed CNN network is composed of two phases: feature representation and score map reconstruction. We designed a stepwise scheme to train our CNN network. To evaluate the performance of our method, we used case-by-case leave-one-out cross-validation (LOOCV). The ground truth of tumor contouring was acquired by the consensus of two experienced radiologists. Results. The mean values of the Dice similarity coefficient, percent match, and their corresponding ratio with our method were 0.89±0.05, 0.90±0.04, and 0.84±0.06, respectively, all of which were better than the values reported in similar studies. Conclusions. We successfully established a segmentation method for NPC based on deep learning in contrast-enhanced magnetic resonance imaging. Further clinical trials with dedicated algorithms are warranted.
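Case-by-case LOOCV, as used above, holds out one patient at a time and trains on the rest, so no patient's images appear in both the training and test sets. A minimal sketch of the split generator (illustrative, independent of the paper's pipeline):

```python
def leave_one_out(cases):
    """Yield (train, test) splits: each case is held out exactly once.

    cases: list of case identifiers (e.g. one per patient).
    """
    for i in range(len(cases)):
        yield cases[:i] + cases[i + 1:], cases[i]

# With 3 hypothetical patients, we get 3 splits.
splits = list(leave_one_out(["p1", "p2", "p3"]))
print(splits[0])  # (['p2', 'p3'], 'p1')
```

With 87 cases this yields 87 train/test rounds; the reported metrics are then averaged over the held-out cases.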


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Fanar E. K. Al-Khuzaie ◽  
Oguz Bayat ◽  
Adil D. Duru

There are many kinds of brain abnormalities that cause changes in different parts of the brain. Alzheimer's disease is a chronic condition that degenerates brain cells, leading to memory weakness. Cognitive troubles such as forgetfulness and confusion are among the most important features of Alzheimer's patients. In the literature, several image processing techniques, as well as machine learning strategies, have been introduced for the diagnosis of the disease. This study aims to recognize the presence of Alzheimer's disease from magnetic resonance imaging of the brain. We adopted a deep learning methodology to discriminate between Alzheimer's patients and healthy subjects using 2D anatomical slices collected with magnetic resonance imaging. Most previous research was based on the implementation of a 3D convolutional neural network, whereas we used 2D slices as input to the convolutional neural network. The dataset for this research was obtained from the OASIS website. We trained the convolutional neural network on the 2D slices to obtain the deep network weights, which we named the Alzheimer Network (AlzNet). The accuracy of our enhanced network was 99.30%. This work investigated the effects of many parameters on AlzNet, such as the number of layers, number of filters, and dropout rate, and evaluated the proposed network with several performance metrics.
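Using 2D slices as CNN inputs, as above, typically involves selecting slices from each 3D volume that actually contain brain tissue. A small sketch of such a selection step (an assumed preprocessing detail; the threshold and filtering rule are illustrative, not taken from the paper):

```python
def usable_slices(volume, min_foreground=0.05):
    """Keep axial slices whose nonzero-pixel fraction exceeds a threshold.

    volume: list of 2D slices (each a list of rows of intensities).
    Near-empty slices at the top/bottom of the head carry little
    anatomy, so discarding them avoids diluting the training set.
    """
    kept = []
    for sl in volume:
        total = sum(len(row) for row in sl)
        nonzero = sum(1 for row in sl for v in row if v != 0)
        if nonzero / total >= min_foreground:
            kept.append(sl)
    return kept

# One empty slice and one with tissue: only the second survives.
empty = [[0, 0], [0, 0]]
tissue = [[1, 0], [0, 1]]
print(len(usable_slices([empty, tissue])))  # 1
```

Each kept slice then becomes one independent 2D training sample for the network.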


We consider the problem of fully automatic brain tumor segmentation in MR images containing glioblastomas. We propose a three-dimensional convolutional neural network (3D MedImg-CNN) approach that achieves high performance while being extremely efficient, a balance that existing methods have struggled to achieve. Our 3D MedImg-CNN operates directly on the raw image modalities and thus learns a characteristic representation directly from the data. We propose a new cascaded architecture with two pathways that each model details in tumors. Fully exploiting the convolutional nature of our model also allows us to segment a complete cerebral image in one minute. The performance of the proposed 3D MedImg-CNN segmentation method is measured using the Dice similarity coefficient (DSC). In experiments on the BraTS 2013, 2015, and 2017 challenge datasets, we show that our approach is among the most powerful methods in the literature, while also being very efficient.


2020 ◽  
Vol 23 (1) ◽  
Author(s):  
Alejandra Márquez Herrera ◽  
Alex J. Cuadros-Vargas ◽  
Helio Pedrini

A neural network is a mathematical model that can perform a task automatically or semi-automatically after learning from the human knowledge provided to it. A Convolutional Neural Network (CNN) is a type of neural network that has been shown to efficiently learn tasks related to image analysis, such as image segmentation, whose main purpose is to find regions or separable objects within an image. A more specific type of segmentation, called semantic segmentation, guarantees that each region has a semantic meaning by giving it a label or class. Since CNNs can automate the task of image semantic segmentation, they have been very useful in the medical area, where they are applied to the segmentation of organs or abnormalities (tumors). This work aims to improve binary semantic segmentation of volumetric medical images acquired by Magnetic Resonance Imaging (MRI) using a pre-existing Three-Dimensional Convolutional Neural Network (3D CNN) architecture. We propose a formulation of a loss function for training this 3D CNN to improve pixel-wise segmentation results. The loss function is formulated by adapting a similarity coefficient, used for measuring the spatial overlap between the prediction and the ground truth, and then using it to train the network. As a contribution, the developed approach achieved good performance in a context where the pixel classes are imbalanced. We show how the choice of loss function for training can affect the final quality of the segmentation. We validate our proposal on two medical image semantic segmentation datasets and compare the performance of the proposed loss function against other pre-existing loss functions used for binary semantic segmentation.
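The general idea of adapting an overlap coefficient into a loss is well illustrated by the soft Dice loss, a common formulation of this family; the sketch below shows that pattern and is not necessarily the paper's exact loss:

```python
def soft_dice_loss(probs, truth, eps=1e-6):
    """Soft Dice loss: 1 - (2*sum(p*t) + eps) / (sum(p) + sum(t) + eps).

    probs: predicted foreground probabilities in [0, 1];
    truth: 0/1 ground-truth labels of the same length.
    Because the loss normalizes by total foreground mass rather
    than counting every pixel equally, it stays informative even
    when the foreground class is a tiny fraction of the volume.
    The eps term keeps the ratio defined for empty masks.
    """
    inter = sum(p * t for p, t in zip(probs, truth))
    denom = sum(probs) + sum(truth)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

# Perfect prediction drives the loss to (almost) zero.
print(soft_dice_loss([1.0, 1.0, 0.0], [1, 1, 0]))
```

Minimizing this quantity directly maximizes the overlap measure used for evaluation, which is the motivation the abstract describes.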


Author(s):  
Boo-Kyeong Choi ◽  
Nuwan Madusanka ◽  
Heung-Kook Choi ◽  
Jae-Hong So ◽  
Cho-Hee Kim ◽  
...  

Background: In this study, we used a convolutional neural network (CNN) to classify Alzheimer's disease (AD), mild cognitive impairment (MCI), and normal control (NC) subjects based on images of the hippocampus region extracted from magnetic resonance (MR) images of the brain. Materials and Methods: The datasets used in this study were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI). To segment the hippocampal region automatically, the patient brain MR images were matched to the International Consortium for Brain Mapping (ICBM) template using 3D-Slicer software. Using prior knowledge and anatomical annotation label information, the hippocampal region was automatically extracted from the brain MR images. The area of the hippocampus in each image was preprocessed with an inhomogeneity intensity correction method using local entropy minimization with a bi-cubic spline model (LEMS). To train the CNN model, we separated the dataset into three groups, namely AD/NC, AD/MCI, and MCI/NC. Results: The prediction model achieved an accuracy of 92.3% for AD/NC, 85.6% for AD/MCI, and 78.1% for MCI/NC. Conclusion: The results of this study were compared to those of previous studies, and summarized and analyzed to facilitate more flexible analyses based on additional experiments. The classification accuracy obtained by the proposed method is high. These findings suggest that this approach is efficient and may be a promising strategy for good AD, MCI, and NC classification performance using small patch images of the hippocampus instead of whole-slide images.
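Working with small hippocampus patches rather than whole images comes down to cropping a fixed-size region around an anatomical landmark. A minimal sketch of such a crop on a 2D slice (hypothetical helper; the paper's extraction is atlas-driven via 3D-Slicer):

```python
def crop_patch(image, center, size):
    """Crop a size x size patch around center = (row, col).

    image: list of rows of pixel values. Assumes the patch lies
    fully inside the image; a real pipeline would pad the borders.
    """
    r, c = center
    half = size // 2
    return [row[c - half:c - half + size]
            for row in image[r - half:r - half + size]]

# A 2x2 patch around (3, 3) of a 6x6 test image.
image = [[r * 10 + c for c in range(6)] for r in range(6)]
print(crop_patch(image, (3, 3), 2))  # [[22, 23], [32, 33]]
```

Restricting the CNN input to the region that carries the diagnostic signal is what lets small patch images compete with whole-image models.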


Author(s):  
Dominic Gascho ◽  
Michael J. Thali ◽  
Rosa M. Martinez ◽  
Stephan A. Bolliger

The computed tomography (CT) scan of a 19-year-old man who died from an occipito-frontal gunshot wound presented an impressive radiating fracture line where the entire sagittal suture burst due to the high intracranial pressure that arose from a near-contact shot from a 9 mm bullet fired from a Glock 17 pistol. Photorealistic depictions of the radiating fracture lines along the cranial bones were created using three-dimensional reconstruction methods, such as the novel cinematic rendering technique that simulates the propagation and interaction of light when it passes through volumetric data. Since the brain had collapsed, depiction of soft tissue was insufficient on CT images. An additional magnetic resonance imaging (MRI) examination was performed, which enabled the diagnostic assessment of cerebral injuries.

