Doctor's Dilemma: Evaluating an Explainable Subtractive Spatial Lightweight Convolutional Neural Network for Brain Tumor Diagnosis

Author(s):  
Ambeshwar Kumar ◽  
Ramachandran Manikandan ◽  
Utku Kose ◽  
Deepak Gupta ◽  
Suresh C. Satapathy

In medicine, Deep Learning has become an essential tool for achieving outstanding diagnostic performance on image data. However, one critical problem is that Deep Learning produces complicated, black-box models, so their trustworthiness cannot be analyzed directly. Explainable Artificial Intelligence (XAI) methods are therefore used to build additional interfaces that explain how a model moves from the input data to its outputs. Assessing whether such methods are successful from the human point of view is, of course, another challenging problem. This paper therefore makes two main research contributions: (1) building an explainable deep learning model for medical image analysis, and (2) evaluating the trust level of this model through several evaluation studies that include human participation. Brain tumor classification was selected as the target problem, since it is a prominent, competitive medical imaging task for Deep Learning. In the study, pre-processed MR brain images were fed to the Subtractive Spatial Lightweight Convolutional Neural Network (SSLW-CNN) model, which includes additional operators to reduce the complexity of classification. To ensure an explainable background, the model also incorporated Class Activation Mapping (CAM). Because it is important to evaluate the trust level of a successful model, the numerical performance of the SSLW-CNN was assessed in terms of peak signal-to-noise ratio (PSNR), computation time, computation overhead, and brain tumor classification accuracy. The objective of the proposed SSLW-CNN model is to obtain fast and accurate tumor classification. The results show that, compared to state-of-the-art works, the SSLW-CNN model improves PSNR by 8%, improves classification accuracy by 33%, reduces computation time by 19%, decreases computation overhead by 23%, and minimizes classification time by 13%.
Because the model achieved good numerical results, it was then evaluated from an XAI perspective through doctor-in-the-loop studies, including feedback on CAM visualizations, usability assessments, expert surveys, comparisons of CAM with other XAI methods, and comparison against manual diagnosis. The results show that the SSLW-CNN performs well on brain tumor diagnosis and provides a trustworthy solution for doctors.
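CAM, the explainability component named above, produces a heatmap by taking the class-weighted sum of the final convolutional feature maps. A minimal NumPy sketch of that idea follows; the function name, toy shapes, and random inputs are illustrative, not taken from the paper:

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Class Activation Map: weight the last conv layer's feature
    maps by the classifier weights of the chosen class."""
    # feature_maps: (C, H, W) activations from the final conv layer
    # fc_weights:   (num_classes, C) weights of the final dense layer
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)  # (H, W)
    cam = np.maximum(cam, 0)          # keep only positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()         # normalize to [0, 1] for display
    return cam

# Toy example: 4 channels of 8x8 activations, 2 output classes
maps = np.random.rand(4, 8, 8)
weights = np.random.rand(2, 4)
heatmap = class_activation_map(maps, weights, class_idx=1)
print(heatmap.shape)  # (8, 8)
```

The resulting heatmap is upsampled to the input image size and overlaid on the MR slice so a doctor can see which regions drove the prediction.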

Healthcare ◽  
2021 ◽  
Vol 9 (2) ◽  
pp. 153 ◽  
Author(s):  
Francisco Javier Díaz-Pernas ◽  
Mario Martínez-Zarzuela ◽  
Míriam Antón-Rodríguez ◽  
David González-Ortega

In this paper, we present a fully automatic brain tumor segmentation and classification model using a Deep Convolutional Neural Network that includes a multiscale approach. One difference of our proposal with respect to previous works is that input images are processed at three spatial scales along different processing pathways. This mechanism is inspired by the inherent operation of the Human Visual System. The proposed neural model can analyze MRI images containing three types of tumors, meningioma, glioma, and pituitary tumor, over sagittal, coronal, and axial views, and does not need preprocessing of input images to remove skull or vertebral column parts in advance. The performance of our method on a publicly available MRI image dataset of 3064 slices from 233 patients is compared with previously published classical machine learning and deep learning methods. In the comparison, our method remarkably obtained a tumor classification accuracy of 0.973, higher than the other approaches using the same database.
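Feeding the same image at three spatial scales can be pictured as an image pyramid, one resolution per pathway. A small NumPy sketch of that preprocessing step follows; the 2x downsampling factors and function names are assumptions for illustration, not details from the paper:

```python
import numpy as np

def three_scale_pyramid(img):
    """Build inputs at three spatial scales, one per processing pathway."""
    def downsample(x):
        # Simple 2x2 average pooling as a stand-in for any resampling scheme
        h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
        x = x[:h, :w]
        return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    half = downsample(img)
    quarter = downsample(half)
    return img, half, quarter

full, half, quarter = three_scale_pyramid(np.zeros((512, 512)))
print(full.shape, half.shape, quarter.shape)  # (512, 512) (256, 256) (128, 128)
```

Each scale is then processed by its own convolutional pathway before the features are combined for segmentation and classification.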


2021 ◽  
Vol 13 (3) ◽  
pp. 335
Author(s):  
Yuhao Qing ◽  
Wenyi Liu

In recent years, image classification on hyperspectral imagery using deep learning algorithms has attained good results. Spurred by that finding, and to further improve deep learning classification accuracy, we propose a multi-scale residual convolutional neural network model fused with an efficient channel attention network (MRA-NET) that is appropriate for hyperspectral image classification. The proposed technique comprises a multi-stage architecture: first, the spectral information of the hyperspectral image is reduced to a two-dimensional tensor using a principal component analysis (PCA) scheme. The resulting low-dimensional image is then input to the proposed MRA-NET deep network, which exploits the advantages of its core components, i.e., a multi-scale residual structure and attention mechanisms. We evaluate the performance of the proposed MRA-NET on three publicly available hyperspectral datasets and demonstrate that the overall classification accuracy of our method is 99.82%, 99.81%, and 99.37%, respectively, higher than the corresponding accuracy of current networks such as the 3D convolutional neural network (CNN), the three-dimensional residual convolution structure (RES-3D-CNN), and the space–spectrum joint deep network (SSRN).
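The first stage described above, reducing the spectral dimension of the hyperspectral cube with PCA, can be sketched in a few lines of NumPy. The function name and toy cube shape are illustrative assumptions; the paper does not specify the number of retained components used here:

```python
import numpy as np

def pca_reduce(cube, n_components):
    """Reduce the spectral dimension of a hyperspectral cube (H, W, B)
    to n_components principal components."""
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(np.float64)
    x -= x.mean(axis=0)                       # center each spectral band
    cov = x.T @ x / (x.shape[0] - 1)          # band covariance matrix (B, B)
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :n_components]  # leading components first
    return (x @ top).reshape(h, w, n_components)

# Toy cube: 10x10 pixels with 30 spectral bands, kept to 3 components
reduced = pca_reduce(np.random.rand(10, 10, 30), n_components=3)
print(reduced.shape)  # (10, 10, 3)
```

The reduced cube keeps the spatial layout intact while shrinking the spectral axis, which is what makes the subsequent 2D convolutional stages tractable.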


2022 ◽  
Vol 10 (1) ◽  
pp. 0-0

A brain tumor is a severe cancer caused by uncontrollable and abnormal partitioning of cells. Timely detection and treatment planning increase patients' life expectancy. Automated detection and classification of brain tumors is a challenging process that otherwise depends on the clinician's knowledge and experience, which makes deep learning one of the most practical and important techniques for the task. Recent progress in deep learning has helped clinicians use medical imaging for the diagnosis of brain tumors. In this paper, we present a comparison of Deep Convolutional Neural Network models for the automatic binary classification of MRI images, with the goal of providing precise tools to health professionals, based on fine-tuned recent versions of DenseNet, Xception, NASNet-A, and VGGNet. The experiments were conducted using an open MRI dataset of 3,762 images. Other performance measures used in the study are the area under the curve, precision, recall, and specificity.
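The metrics named above for binary tumor/no-tumor classification have simple closed forms over the confusion-matrix counts. A minimal NumPy sketch, with made-up labels purely for illustration:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Precision, recall (sensitivity), and specificity for binary
    tumor (1) / no-tumor (0) predictions."""
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    tn = np.sum((y_true == 0) & (y_pred == 0))  # true negatives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return precision, recall, specificity

y_true = np.array([1, 1, 0, 0, 1])
y_pred = np.array([1, 0, 0, 1, 1])
print(binary_metrics(y_true, y_pred))  # precision 2/3, recall 2/3, specificity 1/2
```

In a medical setting recall (sensitivity) is usually weighted most heavily, since a missed tumor is costlier than a false alarm.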


Author(s):  
Nyoman Abiwinanda ◽  
Muhammad Hanif ◽  
S. Tafwida Hesaputra ◽  
Astri Handayani ◽  
Tati Rajab Mengko

2019 ◽  
Vol 11 (9) ◽  
pp. 1006 ◽  
Author(s):  
Quanlong Feng ◽  
Jianyu Yang ◽  
Dehai Zhu ◽  
Jiantao Liu ◽  
Hao Guo ◽  
...  

Coastal land cover classification is a significant yet challenging task in remote sensing because of the complex and fragmented nature of coastal landscapes. However, the availability of multitemporal and multisensor remote sensing data provides opportunities to improve classification accuracy. Meanwhile, the rapid development of deep learning has achieved astonishing results in computer vision tasks and has also become a popular topic in the field of remote sensing. Nevertheless, designing an effective and concise deep learning model for coastal land cover classification remains problematic. To tackle this issue, we propose a multibranch convolutional neural network (MBCNN) for the fusion of multitemporal and multisensor Sentinel data to improve coastal land cover classification accuracy. The proposed model leverages a series of deformable convolutional neural networks to extract representative features from each single-source dataset. Extracted features are aggregated through an adaptive feature fusion module to predict final land cover categories. Experimental results indicate that the proposed MBCNN shows good performance, with an overall accuracy of 93.78% and a Kappa coefficient of 0.9297. Inclusion of multitemporal data improves accuracy by an average of 6.85%, while multisensor data contributes a further 3.24% increase. Additionally, the feature fusion module in this study increases accuracy by about 2% compared with the feature-stacking method. The results demonstrate that the proposed method can effectively mine and fuse multitemporal and multisource Sentinel data, improving coastal land cover classification accuracy.
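The contrast drawn above, adaptive fusion versus plain feature stacking, can be illustrated with a simplified weighted-sum stand-in: instead of concatenating branch features, each branch is given a normalized weight and the features are summed. The weighting scheme below (softmax over per-branch mean activations) is an assumption for illustration, not the paper's learned module:

```python
import numpy as np

def adaptive_fusion(features):
    """Fuse per-branch feature vectors with softmax branch weights,
    a simplified stand-in for a learned adaptive fusion module."""
    features = np.stack(features)                # (branches, dim)
    logits = features.mean(axis=1)               # one score per branch (assumption)
    w = np.exp(logits) / np.exp(logits).sum()    # normalized branch weights
    return (w[:, None] * features).sum(axis=0)   # weighted sum, shape (dim,)

# Three toy branches with 4-dimensional features
fused = adaptive_fusion([np.ones(4), 2 * np.ones(4), 3 * np.ones(4)])
print(fused.shape)  # (4,)
```

Unlike stacking, which grows the feature dimension with each added branch, this fusion keeps the output dimension fixed and lets the weights emphasize the more informative source.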

