DeepAD: Alzheimer’s Disease Classification via Deep Convolutional Neural Networks using MRI and fMRI

2016
Author(s):  
Saman Sarraf,
Danielle D. DeSouza,
John Anderson,
Ghassem Tofighi

Abstract: To extract patterns from neuroimaging data, various statistical methods and machine learning algorithms have been explored for the diagnosis of Alzheimer’s disease among older adults in both clinical and research applications. However, distinguishing Alzheimer’s from healthy brain data has been challenging in older adults (age > 75) due to highly similar patterns of brain atrophy and image intensities. Recently, cutting-edge deep learning technologies have rapidly expanded into numerous fields, including medical image analysis. This paper outlines state-of-the-art deep learning-based pipelines employed to distinguish Alzheimer’s magnetic resonance imaging (MRI) and functional MRI (fMRI) data from normal healthy control data for a given age group. Using these pipelines, which were executed on a GPU-based high-performance computing platform, the data were strictly and carefully preprocessed. Next, scale- and shift-invariant low- to high-level features were extracted from a high volume of training images using a convolutional neural network (CNN) architecture. In this study, fMRI data were used for the first time in deep learning applications for medical image analysis and Alzheimer’s disease prediction. The proposed and implemented pipelines, which demonstrate a significant improvement in classification output over other studies, achieved high and reproducible accuracy rates of 99.9% and 98.84% for the fMRI and MRI pipelines, respectively. Additionally, for clinical purposes, subject-level classification was performed, yielding average accuracy rates of 94.32% and 97.88% for the fMRI and MRI pipelines, respectively. Finally, a decision-making algorithm designed for subject-level classification improved these rates to 97.77% for the fMRI and 100% for the MRI pipeline.
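The final decision-making step aggregates slice-level CNN predictions into a single subject-level label. The abstract does not reproduce that algorithm, so the following is a minimal sketch assuming a simple majority-vote rule (the function name, label strings, and tie-breaking choice are illustrative, not the paper's exact method):

```python
from collections import Counter

def subject_level_decision(slice_predictions):
    """Aggregate per-slice CNN labels ('AD' or 'HC') into one
    subject-level label by majority vote. Ties are resolved as
    'AD' to favor sensitivity -- an illustrative choice."""
    counts = Counter(slice_predictions)
    if counts["AD"] >= counts["HC"]:
        return "AD"
    return "HC"

# Example: a subject whose scan yielded 5 slice-level predictions
labels = ["AD", "HC", "AD", "AD", "HC"]
print(subject_level_decision(labels))  # -> AD
```

A vote over many slices smooths out individual misclassified slices, which is one plausible reason subject-level accuracy can exceed slice-level accuracy.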

Author(s):  
Deekshitha Prakash,
Nuwan Madusanka,
Subrata Bhattacharjee,
Cho-Hee Kim,
Hyeon-Gyun Park,
...

Background: In this study, we employed a transfer learning technique to classify Magnetic Resonance (MR) images using a pre-trained convolutional neural network (CNN). Aims: To prevent Alzheimer’s disease (AD) from progressing to dementia, early prediction and classification of AD play a crucial role in medical image analysis. Objective: To address the early diagnosis of AD, we employed a computer-assisted technique, specifically a deep learning (DL) model, to detect AD. Methods: In particular, we classified Alzheimer’s disease (AD), mild cognitive impairment (MCI), and normal control (NC) subjects using whole-slide two-dimensional (2D) images. To illustrate this approach, we made use of state-of-the-art CNN base models, i.e., the residual networks ResNet-101, ResNet-50, and ResNet-18, and compared their effectiveness in identifying AD. To evaluate this approach, an Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset was utilized. A distinguishing aspect of our evaluation is that the MR images were selected only from the central slice containing the left and right hippocampus regions. Results: All three models used data randomly split in a 70:30 ratio for training and testing. Among the three, ResNet-101 achieved 98.37% accuracy, better than the other two ResNet models, and performed well in multiclass classification. These promising results emphasize the benefit of transfer learning, particularly when the dataset is small. Conclusion: This study indicates that transfer learning helps overcome DL limitations, mainly when the available data are insufficient to train a model from scratch. This approach is highly advantageous in medical image analysis for diagnosing diseases like AD.
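The 70:30 random train/test split described in the Results can be sketched as follows (the fixed `seed` is an illustrative assumption added for reproducibility; the abstract does not specify one):

```python
import random

def split_70_30(samples, seed=42):
    """Randomly split a dataset into 70% training and 30% testing,
    as described for the three ResNet experiments. The seed is an
    illustrative assumption, not a value from the paper."""
    shuffled = samples[:]               # avoid mutating the caller's list
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * 0.7)
    return shuffled[:cut], shuffled[cut:]

train, test = split_70_30(list(range(100)))
print(len(train), len(test))  # -> 70 30
```

Splitting once and reusing the same partition for all three ResNet variants keeps their accuracy figures directly comparable.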


Author(s):  
Joy Nkechinyere Olawuyi,
Bernard Ijesunor Akhigbe,
Babajide Samuel Afolabi,
Attoh Okine

Recent advancements in imaging technology, together with the hierarchical feature representation capability of deep learning models, have popularized deep learning models. Thus, research tends toward the use of deep neural networks rather than hand-crafted machine learning algorithms for solving computational problems involving medical image analysis. However, the limited availability of large annotated medical image datasets has led to the use of features extracted from non-medical data to train models for medical image analysis; this is not considered optimal for practical implementation in a clinical setting, because medical images contain semantic content that differs from that of natural images. Therefore, there is a need for an alternative to cross-domain feature learning. Hence, this chapter discusses possible ways of harnessing domain-specific features, which carry the appropriate semantic content, for the development of deep learning models.


2021, Vol 7 (2), pp. 19
Author(s):  
Tirivangani Magadza,
Serestina Viriri

Quantitative analysis of brain tumors provides valuable information for better understanding tumor characteristics and for treatment planning. Accurate segmentation of lesions requires more than one imaging modality with varying contrasts. Consequently, manual segmentation, which is arguably the most accurate segmentation method, would be impractical for more extensive studies. Deep learning has recently emerged as a solution for quantitative analysis due to its record-shattering performance. However, medical image analysis has its own unique challenges. This paper presents a review of state-of-the-art deep learning methods for brain tumor segmentation, clearly highlighting their building blocks and various strategies. We end with a critical discussion of open challenges in medical image analysis.
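Segmentation methods of the kind this review covers are conventionally evaluated with overlap metrics. A minimal sketch of the standard Dice similarity coefficient on binary masks (the flat 0/1 list representation is a simplification of real volumetric masks):

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks
    (given here as flat lists of 0/1), the standard overlap
    metric for evaluating tumor segmentation quality."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total

pred  = [1, 1, 0, 0, 1]
truth = [1, 0, 0, 0, 1]
print(dice_coefficient(pred, truth))  # -> 0.8
```

Dice is preferred over plain voxel accuracy here because tumor voxels are a small fraction of the volume, so a trivial all-background prediction would otherwise score well.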


Author(s):  
Adwait Patil

Abstract: Alzheimer’s disease is a neurodegenerative disorder. It initially starts with innocuous symptoms but gradually becomes severe. The disease is particularly dangerous because there is no cure, and it is typically detected only at a later stage; it is therefore important to detect Alzheimer’s early to counter the disease and give the patient a chance of recovery. Various approaches are currently used to detect symptoms of Alzheimer’s disease (AD) at an early stage. The fuzzy system approach is not widely used, as it depends heavily on expert knowledge, but it is quite efficient at detecting AD because it provides a mathematical foundation for interpreting human cognitive processes. Another more accurate and widely accepted approach is machine learning detection of AD stages, which uses algorithms such as Support Vector Machines (SVMs), decision trees, and random forests to detect the stage from the data provided. The final approach is deep learning using multi-modal data, which combines imaging, genetic, and patient data with deep models and then uses the concatenated representation to detect the AD stage more efficiently; this method is less widely adopted because it requires huge volumes of data. This paper elaborates on all three approaches and provides a comparative study of them, assessing which method is more efficient for AD detection. Keywords: Alzheimer’s Disease (AD), Fuzzy System, Machine Learning, Deep Learning, Multimodal Data
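The fuzzy system approach rests on membership functions that map a measurement to a degree of membership in a linguistic set. A minimal sketch of a triangular membership function (the "atrophy score" framing and all parameter values are hypothetical, for illustration only):

```python
def triangular_membership(x, a, b, c):
    """Triangular fuzzy membership function: the degree to which
    a measurement x belongs to a fuzzy set with support [a, c]
    and peak b -- e.g. a 'moderate atrophy' set defined over a
    hypothetical normalized atrophy score."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)   # rising edge
    return (c - x) / (c - b)       # falling edge

# Membership of score 0.5 in a set with support [0.2, 0.8], peak 0.5
print(triangular_membership(0.5, 0.2, 0.5, 0.8))  # -> 1.0
```

In a full fuzzy system, several such sets per variable feed expert-written if-then rules, which is why the approach depends so heavily on domain knowledge.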


Author(s):  
Yanteng Zhang,
Qizhi Teng,
Linbo Qing,
Yan Liu,
Xiaohai He

Alzheimer’s disease (AD) is a degenerative brain disease and the most common cause of dementia. In recent years, with the widespread application of artificial intelligence in the medical field, various deep learning-based methods have been applied to AD detection using sMRI images. Many of these networks achieved AD vs. HC (Healthy Control) classification accuracy of up to 90%, but at the cost of a large number of computational parameters and floating point operations (FLOPs). In this paper, we adopt a novel ghost module, which uses a series of cheap linear-transformation operations to generate more feature maps, embedded into our designed ResNet architecture for the task of AD vs. HC classification. In experiments on the OASIS dataset, our lightweight network achieves a promising accuracy of 97.92%, and its total parameter count is dozens of times smaller than that of state-of-the-art deep learning networks. Our proposed AD classification network achieves better performance while significantly reducing computational cost.
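The parameter savings of a ghost module can be illustrated by counting weights: a primary convolution produces only a fraction of the output feature maps, and cheap depthwise linear operations generate the remaining "ghost" maps. A sketch following the GhostNet-style formulation (the channel sizes, ratio `s`, and cheap-operation kernel size `d` below are illustrative assumptions, not the paper's exact configuration):

```python
def conv_params(c_in, c_out, k):
    """Weights of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def ghost_params(c_in, c_out, k, s=2, d=3):
    """Weights of a ghost module: a primary conv makes
    c_out // s intrinsic maps, then cheap d x d depthwise
    linear operations generate the remaining ghost maps.
    s and d here are illustrative defaults."""
    intrinsic = c_out // s
    primary = c_in * intrinsic * k * k      # ordinary convolution
    cheap = intrinsic * (s - 1) * d * d     # depthwise linear ops
    return primary + cheap

std = conv_params(64, 128, 3)
ghost = ghost_params(64, 128, 3)
print(std, ghost, round(std / ghost, 2))  # -> 73728 37440 1.97
```

With ratio `s`, the compression approaches a factor of `s` per layer, which compounds across a deep ResNet and explains the "dozens of times smaller" overall parameter count.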

