Medical Image Classification Algorithm Based on Visual Attention Mechanism-MCNN

2021 · Vol 2021 · pp. 1-12
Author(s): Fengping An, Xiaowei Li, Xingmin Ma

Due to the complexity of medical images, traditional medical image classification methods have been unable to meet actual application needs. In recent years, the rapid development of deep learning theory has provided a technical approach to medical image classification. However, deep learning faces the following problems in medical image classification. First, it is difficult to construct a deep learning model with excellent performance that matches the characteristics of medical images. Second, current deep learning network structures and training strategies are poorly adapted to medical images. Therefore, this paper first introduces the visual attention mechanism into the deep learning model, so that information relevant to the medical image problem can be extracted more effectively and reasoning can be performed at a finer granularity, which also increases the interpretability of the model. Additionally, to match the network structure and training strategy to medical images, this paper constructs a novel multiscale convolutional neural network model that automatically extracts high-level discriminative appearance features from the original image, and its loss function uses a Mahalanobis-distance optimization to obtain a better training strategy, which improves the robustness of the network model. The medical image classification task is completed by the above method. Based on these ideas, this paper proposes a medical image classification algorithm based on a visual attention mechanism and a multiscale convolutional neural network. Lung nodule and breast cancer images were classified with the proposed method. The experimental results show that the classification accuracy is not only higher than that of traditional machine learning methods but also improved over other deep learning methods, and the method exhibits good stability and robustness.
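The abstract does not spell out the Mahalanobis-based loss, but the distance itself is standard: it measures how far a feature vector lies from a class distribution while accounting for feature covariance. A minimal NumPy sketch, using hypothetical class statistics:

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of feature vector x from a class
    distribution with the given mean and covariance."""
    diff = x - mean
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

# hypothetical class statistics in a 3-D feature space
mean = np.zeros(3)
cov = np.diag([1.0, 4.0, 9.0])

# each coordinate of this point is exactly one class standard
# deviation from the mean, so the distance is sqrt(1 + 1 + 1)
d = mahalanobis(np.array([1.0, 2.0, 3.0]), mean, cov)
```

Unlike plain Euclidean distance, directions with high variance contribute less, which is why such a loss can yield a more robust training signal on heterogeneous features.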

2018 · Vol 2018 · pp. 1-13
Author(s): ZhiFei Lai, HuiFang Deng

Medical image classification is a key technique of Computer-Aided Diagnosis (CAD) systems. Traditional methods rely mainly on shape, color, and/or texture features and their combinations, most of which are problem-specific and have been shown to be complementary in medical images; this leads to systems that lack the ability to represent high-level problem-domain concepts and that generalize poorly. Recent deep learning methods provide an effective way to construct an end-to-end model that can compute final classification labels from the raw pixels of medical images. However, due to the high resolution of medical images and small dataset sizes, deep learning models suffer from high computational costs and limitations on model layers and channels. To solve these problems, in this paper, we propose a deep learning model that integrates a Coding Network with a Multilayer Perceptron (CNMP), combining high-level features extracted by a deep convolutional neural network with a set of selected traditional features. The construction of the proposed model includes the following steps. First, we train a deep convolutional neural network as a coding network in a supervised manner, so that it can code the raw pixels of medical images into feature vectors that represent high-level concepts for classification. Second, we extract a set of selected traditional features based on background knowledge of medical images. Finally, we design an efficient neural-network-based model to fuse the feature groups obtained in the first and second steps. We evaluate the proposed approach on two benchmark medical image datasets: HIS2828 and ISIC2017. We achieve overall classification accuracies of 90.1% and 90.2%, respectively, which are higher than those of current successful methods.
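The fusion step can be sketched as follows. This is a hedged illustration, not the paper's exact CNMP architecture: the feature dimensions, layer sizes, and random (untrained) weights are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# hypothetical feature groups
cnn_feat = rng.standard_normal(128)   # coding-network output (high-level concepts)
trad_feat = rng.standard_normal(16)   # handcrafted shape/color/texture features

# fuse by concatenation, then classify with a small MLP
fused = np.concatenate([cnn_feat, trad_feat])          # shape (144,)

W1 = rng.standard_normal((64, 144)) * 0.1              # hidden layer
b1 = np.zeros(64)
W2 = rng.standard_normal((2, 64)) * 0.1                # 2-class output
b2 = np.zeros(2)

probs = softmax(W2 @ relu(W1 @ fused + b1) + b2)       # class probabilities
```

In practice the MLP weights would be trained jointly on the fused vectors; the sketch only shows the forward pass that combines the two feature groups.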


Complexity · 2019 · Vol 2019 · pp. 1-15
Author(s): Feng-Ping An

Due to the complexity of medical images, traditional medical image classification methods have been unable to meet actual application needs. In recent years, the rapid development of deep learning theory has provided a technical approach to medical image classification tasks. However, deep learning faces the following problems in medical image classification. First, it is difficult to construct a deep learning model hierarchy suited to the properties of medical images; second, the initialization weights of deep learning networks are not well optimized. Therefore, this paper starts from the perspective of network optimization and improves the nonlinear modeling ability of the network through optimization methods. A new network weight initialization method is proposed that alleviates the limitation of existing initialization schemes to particular types of nonlinear unit and increases the potential of the neural network to handle different visual tasks. Moreover, through an in-depth study of the multicolumn convolutional neural network framework, this paper finds that the number of features and the convolution kernel size differ across levels of a convolutional neural network. Building on this observation, the proposed method constructs convolutional neural network models that adapt better to the characteristics of the medical images of interest and can thus better train the resulting heterogeneous multicolumn convolutional neural networks. Finally, using the adaptive sliding window fusion mechanism proposed in this paper, the two methods jointly complete the medical image classification task. Based on these ideas, this paper proposes a medical image classification algorithm based on weight initialization and sliding window fusion for multilevel convolutional neural networks. The proposed methods were applied to breast mass, brain tumor tissue, and medical image database classification experiments. The results show that the proposed method not only achieves higher average accuracy than traditional machine learning and other deep learning methods but is also more stable and more robust.
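The abstract does not give the paper's exact initialization formula, but the variance-scaling family it generalizes (Xavier/He-style, where a gain factor depends on the nonlinear unit) can be sketched as:

```python
import numpy as np

def scaled_init(fan_in, fan_out, gain, rng):
    """Variance-scaled Gaussian initialization: the standard deviation is
    chosen so activations keep roughly unit variance through the layer.
    `gain` depends on the nonlinearity (e.g. sqrt(2) for ReLU, 1.0 for
    tanh), which is exactly the unit-type dependence the paper targets."""
    std = gain / np.sqrt(fan_in)
    return rng.standard_normal((fan_out, fan_in)) * std

rng = np.random.default_rng(42)
# He-style initialization for a hypothetical 256 -> 128 ReLU layer
W = scaled_init(fan_in=256, fan_out=128, gain=np.sqrt(2.0), rng=rng)
```

The point of such schemes is that the empirical standard deviation of the weights matches the target `gain / sqrt(fan_in)`, so signal variance neither explodes nor vanishes as depth grows.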


Diagnostics · 2021 · Vol 11 (8) · pp. 1384
Author(s): Yin Dai, Yifan Gao, Fayu Liu

Over the past decade, convolutional neural networks (CNNs) have shown very competitive performance in medical image analysis tasks such as disease classification, tumor segmentation, and lesion detection. CNNs have great advantages in extracting local image features; however, due to the locality of the convolution operation, they cannot handle long-range relationships well. Recently, transformers have been applied to computer vision and have achieved remarkable success on large-scale datasets. Compared with natural images, multi-modal medical images have explicit and important long-range dependencies, and effective multi-modal fusion strategies can greatly improve the performance of deep models. This prompts us to study transformer-based structures and apply them to multi-modal medical images. Existing transformer-based network architectures require large-scale datasets to achieve good performance, but medical imaging datasets are relatively small, which makes it difficult to apply pure transformers to medical image analysis. Therefore, we propose TransMed for multi-modal medical image classification. TransMed combines the advantages of CNNs and transformers to efficiently extract low-level image features and establish long-range dependencies between modalities. We evaluated our model on two datasets: parotid gland tumor classification and knee injury classification. Combining our contributions, we achieve improvements of 10.1% and 1.9% in average accuracy, respectively, outperforming other state-of-the-art CNN-based models. The results of the proposed method are promising and have great potential to be applied to a wide range of medical image analysis tasks. To the best of our knowledge, this is the first work to apply transformers to multi-modal medical image classification.
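TransMed's precise architecture is not reproduced here; as a hedged sketch of the mechanism that supplies the long-range dependencies a plain CNN lacks, single-head self-attention over patch tokens can be written in NumPy as follows (the token count, dimension, and random projections are illustrative assumptions):

```python
import numpy as np

def self_attention(tokens, rng):
    """Single-head self-attention: every token attends to every other
    token, so dependencies span the whole sequence regardless of
    spatial distance."""
    n, d = tokens.shape
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = Q @ K.T / np.sqrt(d)                       # (n, n) pairwise affinities
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)             # row-wise softmax
    return attn @ V                                     # weighted mix of all tokens

rng = np.random.default_rng(0)
# pretend a small CNN stem already turned the multi-modal input
# into 8 patch tokens of dimension 16
tokens = rng.standard_normal((8, 16))
out = self_attention(tokens, rng)
```

In a CNN-transformer hybrid like the one described, the convolutional stem produces these tokens cheaply from raw pixels, and attention layers then model cross-patch and cross-modality relations.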


2019 · Vol 54 · pp. 10-19
Author(s): Jianpeng Zhang, Yutong Xie, Qi Wu, Yong Xia

2021
Author(s): Yulong Wang, Xiaofeng Liao, Dewen Qiao, Jiahui Wu

With the rapid development of modern medical science and technology, medical image classification has become an increasingly challenging problem. In most traditional classification methods, however, image feature extraction is difficult and classifier accuracy needs improvement. Therefore, this paper proposes a high-accuracy medical image classification method based on deep learning, called hybrid CQ-SVM. Specifically, we combine the advantages of the convolutional neural network (CNN) and the support vector machine (SVM) and integrate them into a novel hybrid model. In our scheme, the quantum-behaved particle swarm optimization (QPSO) algorithm is adopted to set the SVM parameters automatically, solving the SVM parameter-setting problem: the CNN works as a trainable feature extractor, and the QPSO-optimized SVM serves as the classifier. This method can automatically extract features from original medical images and generate predictions. The experimental results show that this method extracts better medical image features and achieves higher classification accuracy.
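A minimal QPSO sketch follows. The objective below is a toy quadratic standing in for the SVM cross-validation error over (C, gamma); the swarm size, iteration count, bounds, and contraction schedule are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def qpso(objective, dim, n_particles=20, iters=100, bounds=(-5.0, 5.0), seed=0):
    """Minimal quantum-behaved particle swarm optimization (QPSO).
    Particles are sampled around a per-particle attractor built from the
    personal bests and the global best, with a contracting spread."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    pbest = x.copy()
    pval = np.array([objective(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for t in range(iters):
        beta = 1.0 - 0.5 * t / iters               # contraction-expansion coefficient
        mbest = pbest.mean(axis=0)                 # mean of personal bests
        phi = rng.uniform(size=(n_particles, dim))
        u = rng.uniform(1e-12, 1.0, size=(n_particles, dim))
        p = phi * pbest + (1.0 - phi) * g          # local attractor per particle
        sign = np.where(rng.uniform(size=(n_particles, dim)) < 0.5, -1.0, 1.0)
        x = np.clip(p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u), lo, hi)
        val = np.array([objective(q) for q in x])
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

# toy stand-in for the SVM tuning objective: distance from an "ideal" (C, gamma)
best, best_val = qpso(lambda z: float(np.sum((z - np.array([1.0, 0.5])) ** 2)), dim=2)
```

In the actual scheme, `objective` would train and score an SVM on CNN-extracted features for each candidate parameter pair, which is far more expensive than this toy function but structurally identical.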


Information · 2020 · Vol 11 (6) · pp. 318
Author(s): Kamran Kowsari, Rasoul Sali, Lubaina Ehsan, William Adorno, Asad Ali, ...

Image classification is central to the big data revolution in medicine. Improved information processing methods for the diagnosis and classification of digital medical images have proven successful via deep learning approaches. As this field is explored, limitations emerge in the performance of traditional supervised classifiers. This paper outlines an approach that differs from current medical image classification work, which treats the issue as multi-class classification. We perform hierarchical classification using our Hierarchical Medical Image Classification (HMIC) approach. HMIC uses stacks of deep learning models to provide specialized understanding at each level of the medical image hierarchy. To test its performance, we use small bowel biopsy images comprising three categories at the parent level (Celiac Disease, Environmental Enteropathy, and histologically normal controls). At the child level, Celiac Disease severity is classified into four classes (I, IIIa, IIIb, and IIIc).
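The two-level routing logic described above can be sketched as follows. The stand-in models returning fixed probabilities are purely hypothetical; in HMIC each would be a trained deep network.

```python
import numpy as np

def argmax_label(probs, labels):
    """Pick the label with the highest predicted probability."""
    return labels[int(np.argmax(probs))]

def hierarchical_predict(image_feat, parent_model, child_models):
    """HMIC-style routing: a parent-level model picks the disease
    category; if that category has a dedicated child-level model,
    it refines the prediction (here, Celiac Disease severity)."""
    parent = argmax_label(
        parent_model(image_feat),
        ["Celiac Disease", "Environmental Enteropathy", "Normal"],
    )
    if parent in child_models:
        child = argmax_label(child_models[parent](image_feat),
                             ["I", "IIIa", "IIIb", "IIIc"])
        return parent, child
    return parent, None

# hypothetical stand-in models returning fixed probability vectors
parent_model = lambda f: np.array([0.7, 0.2, 0.1])
child_models = {"Celiac Disease": lambda f: np.array([0.1, 0.6, 0.2, 0.1])}

label = hierarchical_predict(np.zeros(8), parent_model, child_models)
# label == ("Celiac Disease", "IIIa")
```

Only the Celiac Disease branch carries a child model, mirroring the dataset structure: severity grading is meaningful only for that parent class.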


Author(s): Xiangbin Liu, Jiesheng He, Liping Song, Shuai Liu, Gautam Srivastava

With the rapid development of Artificial Intelligence (AI), deep learning has increasingly become a research hotspot in many fields, including medical image classification. Traditional deep learning models use bilinear interpolation when handling classification tasks on multi-size medical image datasets, which causes a loss of image information and degrades classification performance. In response to this problem, this work proposes an adaptive-size deep learning model. First, according to the characteristics of the multi-size medical image dataset, an optimal size set module is proposed in combination with the unpooling process. Next, an adaptive deep learning model module is proposed based on an existing deep learning model. Then, the model is fused with a size fine-tuning module used to process multi-size medical images, yielding the adaptive-size deep learning model. Finally, the proposed model is applied to a pneumonia CT medical image dataset. Experiments show that the model is robust and improves classification accuracy by about 4% over traditional algorithms.
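One standard way to handle multi-size inputs without bilinear resizing of the raw pixels is adaptive average pooling, which maps a feature map of any spatial size onto a fixed grid. This is conceptually related to, though not necessarily identical with, the paper's size fine-tuning module; the sketch below is a plain NumPy version of the well-known operation.

```python
import numpy as np

def adaptive_avg_pool2d(x, out_h, out_w):
    """Adaptive average pooling: partitions a 2-D feature map of ANY
    spatial size into an out_h x out_w grid of (possibly overlapping)
    windows and averages each one, so no pixel-level resampling (and
    hence no interpolation loss) is needed."""
    h, w = x.shape
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        r0 = (i * h) // out_h                       # floor of window start
        r1 = ((i + 1) * h + out_h - 1) // out_h     # ceil of window end
        for j in range(out_w):
            c0 = (j * w) // out_w
            c1 = ((j + 1) * w + out_w - 1) // out_w
            out[i, j] = x[r0:r1, c0:c1].mean()
    return out

# two inputs of very different sizes both map to a fixed 4x4 grid
small = adaptive_avg_pool2d(np.ones((13, 17)), 4, 4)
large = adaptive_avg_pool2d(np.ones((224, 224)), 4, 4)
```

Because the output size is fixed regardless of input size, the layers after the pool (typically the classifier head) see a constant shape, which is what lets a single model accept multi-size medical images.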

