Research and Analysis of Brain Glioma Imaging Based on Deep Learning

2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Tao Luo ◽  
YaLing Li

The incidence of glioma is increasing year by year, seriously endangering people’s health. Magnetic resonance imaging (MRI) can effectively provide intracranial images of brain tumors and offers strong support for the diagnosis and treatment of the disease. Accurate segmentation of brain glioma is of real medical value. However, because gliomas vary strongly in size, shape, and location, and differ widely between cases, recognizing and segmenting glioma images is very difficult. Traditional methods are time-consuming, labor-intensive, and inefficient, and single-modality MRI images cannot provide comprehensive information about a glioma; it is therefore necessary to combine multimodal MRI images to identify and segment glioma images. This work uses deep learning on multimodal MRI images to achieve automatic, efficient segmentation of gliomas. The main contributions are as follows. A 3D U-Net deep learning model based on dilated (atrous) convolutions and densely connected blocks is proposed to automatically segment multimodal MRI glioma images. The U-Net architecture is widely used in image segmentation and performs well, but because gliomas are highly heterogeneous, a plain U-Net model cannot effectively capture fine detail; the proposed 3D U-Net therefore integrates dilated convolutions and densely connected blocks. In addition, this paper combines a classification loss with cross-entropy loss as the network’s loss function to mitigate the class imbalance inherent in glioma segmentation tasks. Extensive experiments with the proposed algorithm on the BraTS2018 dataset show that the model achieves good segmentation performance.
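As a sketch of the kind of combined loss the abstract describes: the paper pairs a "classification loss" with cross-entropy, and since the exact classification loss is not specified here, a soft Dice term (a common choice for class imbalance in segmentation) is used as a stand-in. The weighting `alpha` and the toy arrays are illustrative, not from the paper.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for one class: 1 - 2|P∩T| / (|P|+|T|)."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def cross_entropy(pred, target, eps=1e-6):
    """Pixel-wise binary cross-entropy on predicted probabilities."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(np.mean(-(target * np.log(pred) + (1 - target) * np.log(1 - pred))))

def combined_loss(pred, target, alpha=0.5):
    """Weighted sum of a Dice term (counters class imbalance) and cross-entropy."""
    return alpha * dice_loss(pred, target) + (1 - alpha) * cross_entropy(pred, target)

# toy example: predicted foreground probabilities vs. a binary ground-truth mask
pred = np.array([[0.9, 0.1], [0.8, 0.2]])
mask = np.array([[1.0, 0.0], [1.0, 0.0]])
print(round(combined_loss(pred, mask), 4))
```

In practice both terms would be computed per class over 3D volumes; the scalar combination above is the essential idea.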

2019 ◽  
pp. 129-141 ◽  
Author(s):  
Hui Xian Chia

This article examines the use of artificial intelligence (AI), and deep learning specifically, to create financial robo-advisers. These machines have the potential to be perfectly honest fiduciaries, acting in their clients’ best interests without the conflicting self-interest or greed of their human counterparts. However, applying AI technology to create financial robo-advisers is not without risk. This article focuses on the unique risks posed by deep learning technology. One of the main fears regarding deep learning is that it is a “black box”: its decision-making process is opaque and not open to scrutiny even by the people who developed it. This poses a significant challenge to financial regulators, who would not be able to examine the underlying rationale and rules of the robo-adviser to determine its safety for public use. The rise of deep learning has been met with calls for ‘explainability’ of how deep learning agents make their decisions. This paper argues that greater explainability can be achieved by describing the ‘personality’ of deep learning robo-advisers, and proposes a framework for describing the parameters of the deep learning model in terms that people without technical expertise can readily understand: for example, whether the robo-adviser is ‘greedy’, ‘selfish’, or ‘prudent’. Greater understanding will enable regulators and consumers to better judge the safety and suitability of deep learning financial robo-advisers.


Author(s):  
Jingyan Qiu ◽  
Linjian Li ◽  
Yida Liu ◽  
Yingjun Ou ◽  
Yubei Lin

Alzheimer’s disease (AD) is one of the most common forms of dementia. The early stage of the disease is defined as Mild Cognitive Impairment (MCI). Recent research has shown the promise of combining Magnetic Resonance Imaging (MRI) scans of the brain with deep learning to diagnose AD. However, CNN-based deep learning models require large numbers of samples for training, and transfer learning is the key to achieving high accuracy with limited training data. In this paper, DenseNet and Inception V4, pre-trained on the ImageNet dataset to obtain initial weight values, are each used for the image classification task. An ensemble method is employed to enhance the effectiveness and efficiency of the classification models, and the outputs of the different models are combined through probability-based fusion. The experiments were conducted entirely on the public Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset. Only ternary classification, AD/MCI/Normal Control (NC), is performed, reflecting the higher demands of medical detection and diagnosis, and the accuracies of the different models are estimated. The results show that the method achieved a maximum accuracy of 92.65%, a remarkable outcome compared with state-of-the-art methods.
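The probability-based fusion step can be sketched as a weighted average of each model’s per-class probability vectors, followed by an argmax for the final label. The class probabilities and equal weighting below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fuse_probabilities(model_outputs, weights=None):
    """Probability-based fusion: weighted average of per-model
    class-probability vectors, then argmax for the final label."""
    probs = np.asarray(model_outputs, dtype=float)   # shape: (n_models, n_classes)
    if weights is None:
        weights = np.full(len(probs), 1.0 / len(probs))
    fused = np.average(probs, axis=0, weights=weights)
    return fused, int(np.argmax(fused))

# toy example: two models scoring the classes (AD, MCI, NC)
densenet_p  = [0.70, 0.20, 0.10]   # hypothetical DenseNet softmax output
inception_p = [0.40, 0.45, 0.15]   # hypothetical Inception V4 softmax output
fused, label = fuse_probabilities([densenet_p, inception_p])
print(fused, label)   # fused ≈ [0.55, 0.325, 0.125], label 0 (AD)
```

Averaging probabilities rather than hard votes lets a confident model outweigh an uncertain one, which is the usual motivation for this fusion style.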


2019 ◽  
Vol 18 (2) ◽  
Author(s):  
Ida Bagus Leo Mahadya Suta ◽  
Rukmi Sari Hartati ◽  
Yoga Divayana

Brain tumors are among the deadliest diseases, and one of the most commonly found types is glioma: roughly 6 in 100,000 patients are glioma sufferers. Digital imaging via Magnetic Resonance Imaging (MRI) is one method to help physicians analyze and classify brain tumor types. However, manual classification takes a long time and carries a high risk of error, so an automatic and accurate way to classify MRI images is needed. The Convolutional Neural Network (CNN) is one solution for automatic classification of MRI images. A CNN is a deep learning algorithm with the ability to learn on its own from previous cases. The research conducted shows that CNNs can solve brain tumor classification with high accuracy. Accuracy improvements were obtained by refining the CNN algorithm, through tuning the kernel values and/or the activation functions.


Electronics ◽  
2020 ◽  
Vol 9 (8) ◽  
pp. 1199
Author(s):  
Michelle Bardis ◽  
Roozbeh Houshyar ◽  
Chanon Chantaduly ◽  
Alexander Ushinsky ◽  
Justin Glavis-Bloom ◽  
...  

(1) Background: The effectiveness of deep learning artificial intelligence depends on data availability, often requiring large volumes of data to effectively train an algorithm. However, few studies have explored the minimum number of images needed for optimal algorithmic performance. (2) Methods: This institutional review board (IRB)-approved retrospective review included patients who received prostate magnetic resonance imaging (MRI) between September 2014 and August 2018 along with an MRI-fusion transrectal biopsy. T2-weighted images were manually segmented by a board-certified abdominal radiologist. The segmented images were used to train a deep learning network with the following case numbers: 8, 16, 24, 32, 40, 80, 120, 160, 200, 240, 280, and 320. (3) Results: The network’s performance was assessed with the Dice score, which measures the overlap between the radiologist’s segmentations and the deep learning-generated segmentations and ranges from 0 (no overlap) to 1 (perfect overlap). The algorithm’s Dice score started at 0.424 with 8 cases and improved to 0.858 with 160 cases; beyond 160 cases, it increased only to 0.867 with 320 cases. (4) Conclusions: The deep learning network for prostate segmentation produced its highest overall Dice score with 320 training cases. Performance improved notably from training sizes of 8 to 120, then plateaued with minimal improvement at training sizes above 160. Other studies using comparable network architectures may see similar plateaus, suggesting that suitable results may be obtainable with small datasets.
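The Dice score used to assess the network is straightforward to compute for binary masks; a minimal sketch follows, where returning 1.0 for two empty masks is a convention chosen here, not stated in the paper.

```python
import numpy as np

def dice_score(seg_a, seg_b):
    """Dice similarity between two binary masks:
    2|A∩B| / (|A| + |B|); 0 = no overlap, 1 = perfect overlap."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0   # both masks empty: treated here as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# toy masks standing in for a radiologist's and the network's segmentations
radiologist = np.array([[1, 1, 0], [1, 0, 0]])
network     = np.array([[1, 1, 0], [0, 0, 1]])
print(dice_score(radiologist, network))   # 2*2 / (3+3) ≈ 0.667
```

The same formula applies voxel-wise to 3D prostate volumes; only the array shapes change.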


2021 ◽  
Vol 1 ◽  
Author(s):  
Shanshan Wang ◽  
Guohua Cao ◽  
Yan Wang ◽  
Shu Liao ◽  
Qian Wang ◽  
...  

Artificial intelligence (AI) is an emerging technology gaining momentum in medical imaging. Recently, deep learning-based AI techniques have been actively investigated in medical imaging, with potential applications ranging from data acquisition and image reconstruction to image analysis and understanding. This review focuses on the use of deep learning in image reconstruction for advanced medical imaging modalities, including magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET). Recent deep learning-based reconstruction methods are emphasized in particular, organized by their methodological designs and their performance in handling volumetric imaging data. It is hoped that this review will help researchers understand how to adapt AI for medical imaging and what advantages can be achieved with its assistance.


2018 ◽  
Vol 29 (3) ◽  
pp. 67-88 ◽  
Author(s):  
Wen Zeng ◽  
Hongjiao Xu ◽  
Hui Li ◽  
Xiang Li

In the big data era, it is a great challenge to identify high-level abstract features in a flood of sci-tech literature and achieve in-depth analysis of the data. Deep learning technology has developed rapidly and found applications in many fields, but it has rarely been utilized in research on sci-tech literature data. This article introduces a method for representing the terminology of sci-tech literature in a vector space based on a deep learning model. It explores and adopts a deep autoencoder (AE) model to reduce the dimensionality of the input word-vector features, and it also puts forward a methodology for correlation analysis of sci-tech literature based on deep learning technology. The experimental results show that processing sci-tech literature data can be reduced to computations on vectors in a multi-dimensional vector space, where similarity in vector space represents similarity in text semantics. This method enables correlation analysis of subject content between sci-tech publications of the same or different types.
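The claim that similarity in vector space can stand in for semantic similarity is typically operationalized with cosine similarity over the reduced-dimension vectors. A minimal sketch, where the document vectors are hypothetical values, not outputs of the paper's autoencoder:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two document vectors;
    values near 1 indicate semantically similar texts."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# hypothetical reduced-dimension vectors for three documents
doc_a = [0.9, 0.1, 0.3]
doc_b = [0.8, 0.2, 0.4]   # same topic as doc_a
doc_c = [0.1, 0.9, 0.0]   # different topic
print(cosine_similarity(doc_a, doc_b) > cosine_similarity(doc_a, doc_c))  # True
```

Cosine similarity is scale-invariant, so it compares the direction of the feature vectors rather than their magnitude, which suits length-varying documents.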


2008 ◽  
Vol 65 (7) ◽  
pp. 1245-1249 ◽  
Author(s):  
Bonnie L. Rogers ◽  
Christopher G. Lowe ◽  
Esteban Fernández-Juricic ◽  
Lawrence R. Frank

The physical consequences of barotrauma on the economically important rockfish (Sebastes) were evaluated with a novel method using T2-weighted magnetic resonance imaging (MRI) in combination with image segmentation and analysis. For this pilot study, two fishes were captured on hook-and-line from 100 m, euthanized, and scanned in a 3 Tesla human MRI scanner. Analyses were made on each fish, one exhibiting swim bladder overinflation and exophthalmia and the other showing low to moderate swim bladder overinflation. Air space volumes in the body were quantified using image segmentation techniques that allow definition of individual anatomical regions in the three-dimensional MRIs. The individual exhibiting the most severe signs of barotrauma revealed the first observation of a gas-filled orbital space behind the eyes, which was not observable by gross dissection. Severe exophthalmia resulted in extreme stretching of the optic nerves, which was clearly validated with dissections and not seen in the other individual. Expanding gas from swim bladder overinflation must leak from the swim bladder, rupture the peritoneum, and enter the cranium. This MRI method of evaluating rockfish following rapid decompression is useful for quantifying the magnitude of internal barotrauma associated with decompression and complementing studies on the effects of capture and discard mortality of rockfishes.


Proceedings ◽  
2019 ◽  
Vol 21 (1) ◽  
pp. 28
Author(s):  
Alejandro Puente-Castro ◽  
Cristian Robert Munteanu ◽  
Enrique Fernandez-Blanco

Automatic detection of Alzheimer’s disease is a very active area of research, owing to its usefulness in starting the protocol to slow the otherwise inevitable progression of this neurodegenerative disease. This paper proposes a system for detecting the disease by means of deep learning techniques applied to magnetic resonance imaging (MRI). As a solution, an artificial neural network (ANN) model and two reference datasets for training are proposed. Finally, the effectiveness of this system is verified within its application domain.

