Medical Image Classification Based on Deep Features Extracted by Deep Model and Statistic Feature Fusion with Multilayer Perceptron

2018
Vol 2018
pp. 1-13
Author(s):
ZhiFei Lai
HuiFang Deng

Medical image classification is a key technique in Computer-Aided Diagnosis (CAD) systems. Traditional methods rely mainly on shape, color, and/or texture features and their combinations, most of which are problem-specific and have been shown to be complementary in medical images; the result is a system that lacks the ability to represent high-level problem-domain concepts and that generalizes poorly. Recent deep learning methods provide an effective way to construct an end-to-end model that can compute final classification labels from the raw pixels of medical images. However, due to the high resolution of medical images and small dataset sizes, deep learning models suffer from high computational costs and limitations in the number of model layers and channels. To solve these problems, in this paper, we propose a deep learning model that integrates a Coding Network with a Multilayer Perceptron (CNMP), combining high-level features extracted by a deep convolutional neural network with selected traditional features. The construction of the proposed model includes the following steps. First, we train a deep convolutional neural network as a coding network in a supervised manner, so that it codes the raw pixels of medical images into feature vectors that represent high-level concepts for classification. Second, we extract a set of selected traditional features based on background knowledge of medical images. Finally, we design an efficient neural-network-based model to fuse the feature groups obtained in the first two steps. We evaluate the proposed approach on two benchmark medical image datasets, HIS2828 and ISIC2017, achieving overall classification accuracies of 90.1% and 90.2%, respectively, both higher than those of current successful methods.
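The fusion step described above — concatenating deep features with handcrafted ones and classifying with a small neural network — can be sketched as follows. This is a minimal illustration, not the authors' CNMP implementation: the feature dimensions, hidden size, and random weights are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_fuse(deep_feats, trad_feats, W1, b1, W2, b2):
    """Concatenate deep and traditional feature groups, then classify
    with a small two-layer MLP (feature-level fusion)."""
    x = np.concatenate([deep_feats, trad_feats])   # fused feature vector
    h = np.maximum(0.0, W1 @ x + b1)               # hidden ReLU layer
    logits = W2 @ h + b2
    e = np.exp(logits - logits.max())              # numerically stable softmax
    return e / e.sum()

# Hypothetical sizes: 128-d deep features, 16-d handcrafted features, 3 classes
deep = rng.standard_normal(128)
trad = rng.standard_normal(16)
W1 = rng.standard_normal((64, 144)) * 0.1
b1 = np.zeros(64)
W2 = rng.standard_normal((3, 64)) * 0.1
b2 = np.zeros(3)

probs = mlp_fuse(deep, trad, W1, b1, W2, b2)
```

In a real system the weights would be trained jointly with the coding network rather than drawn at random.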

Complexity
2019
Vol 2019
pp. 1-15
Author(s):
Feng-Ping An

Due to the complexity of medical images, traditional medical image classification methods have been unable to meet actual application needs. In recent years, the rapid development of deep learning theory has provided a technical approach for solving medical image classification tasks. However, deep learning faces two problems in medical image classification. First, it is difficult to construct a deep learning model hierarchy matched to the properties of medical images; second, the network initialization weights of deep learning models are not well optimized. This paper therefore starts from the perspective of network optimization and improves the nonlinear modeling ability of the network through optimization methods. A new network weight initialization method is proposed that alleviates the limitation of existing initialization schemes to particular types of nonlinear unit and increases the potential of the neural network to handle different visual tasks. Moreover, through an in-depth study of the multicolumn convolutional neural network framework, this paper observes that the number of features and the convolution kernel size differ across levels of a convolutional neural network. Exploiting this observation, the proposed method constructs different convolutional neural network models that better adapt to the characteristics of the medical images of interest and can thus better train the resulting heterogeneous multicolumn convolutional neural networks. Finally, using the adaptive sliding window fusion mechanism proposed in this paper, the two methods jointly complete the medical image classification task. Based on the above ideas, this paper proposes a medical image classification algorithm based on weight initialization and sliding window fusion for multilevel convolutional neural networks. The proposed methods were applied to breast mass, brain tumor tissue, and medical image database classification experiments. The results show that the proposed method not only achieves a higher average accuracy than traditional machine learning and other deep learning methods but is also more stable and more robust.
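Initialization schemes that are "limited by the type of nonlinear unit" usually refer to variance-scaling rules whose gain is derived for one specific activation. The paper's actual initializer is not given in the abstract; the sketch below shows the generic variance-scaling family (in the spirit of Xavier/He initialization) that such work typically builds on, with the gain table as an assumption.

```python
import numpy as np

def init_weights(fan_in, fan_out, nonlinearity="relu", rng=None):
    """Variance-scaling weight initialization: std = gain / sqrt(fan_in).
    The gain compensates for how the nonlinearity shrinks activation
    variance; the values below are the standard choices, not the
    paper's proposed scheme."""
    rng = rng or np.random.default_rng()
    gains = {"relu": np.sqrt(2.0), "tanh": 5.0 / 3.0, "linear": 1.0}
    std = gains[nonlinearity] / np.sqrt(fan_in)
    return rng.normal(0.0, std, size=(fan_out, fan_in))

# Initialize a 256 -> 64 layer feeding a ReLU unit
W = init_weights(256, 64, "relu", np.random.default_rng(0))
```

A unit-agnostic initializer, as the abstract describes, would replace the fixed gain table with a rule that holds across activation types.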


2021
Vol 22 (2)
pp. 234-248
Author(s):
Mohd Adli Md Ali
Mohd Radhwan Abidin
Nik Arsyad Nik Muhamad Affendi
Hafidzul Abdullah
Daaniyal R. Rosman
...  

The rapid advancement in pattern recognition via deep learning has made it possible to develop autonomous medical image classification systems. Such systems have proven robust and accurate in classifying most pathological features found in medical images, such as airspace opacity, masses, and broken bones. Conventionally, these systems take routine medical images with minimal pre-processing as the model's input; in this research, we investigate whether saliency maps can serve as an alternative model input. Recent research has shown that applying saliency maps increases deep learning model performance in image classification, object localization, and segmentation. However, conventional bottom-up saliency map algorithms regularly fail to localize salient or pathological anomalies in medical images, because most medical images are homogeneous and lack color and contrast variation. We therefore introduce the Xenafas algorithm in this paper. The algorithm creates a new kind of anomalous saliency map: the Intensity Probability Mapping and the Weighted Intensity Probability Mapping. We tested the proposed saliency maps on five deep learning models based on common convolutional neural network architectures. The experiment showed that using the proposed saliency maps instead of regular chest radiograph images increases the sensitivity of most models in identifying images with airspace opacities. Using the Grad-CAM algorithm, we showed how the proposed saliency maps shift model attention to the relevant region of chest radiograph images. In the qualitative study, the proposed saliency maps regularly highlighted anomalous features, including foreign objects and cardiomegaly, but were inconsistent in highlighting masses and nodules.
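The abstract does not define the Intensity Probability Mapping, but the name suggests scoring each pixel by how probable its intensity is in the image, so that rare intensities (foreign objects, bright opacities) stand out. The sketch below is one plausible reading of that idea; the published Xenafas algorithm may differ in detail, and the weighting scheme of the Weighted variant is not shown.

```python
import numpy as np

def intensity_probability_map(img, bins=256):
    """Map each pixel to the empirical probability of its intensity
    value, estimated from the image's own histogram (one guess at the
    paper's Intensity Probability Mapping)."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / img.size
    idx = np.clip(img.astype(int), 0, bins - 1)
    return p[idx]

def anomalous_saliency(img):
    """Invert the probability map so that rare intensities light up."""
    return 1.0 - intensity_probability_map(img)

# Toy radiograph: uniform background with a small bright "foreign object"
img = np.full((32, 32), 120, dtype=np.uint8)
img[10:12, 10:12] = 250
sal = anomalous_saliency(img)   # high values at the 4 anomalous pixels
```

Because the probability is estimated per image, the map adapts to each radiograph's intensity distribution rather than relying on color or contrast cues.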


2021
Vol 2021
pp. 1-12
Author(s):
Fengping An
Xiaowei Li
Xingmin Ma

Due to the complexity of medical images, traditional medical image classification methods have been unable to meet actual application needs. In recent years, the rapid development of deep learning theory has provided a technical approach for solving medical image classification. However, deep learning faces the following problems when applied to medical image classification. First, it is difficult to construct a deep learning model with excellent performance tailored to the characteristics of medical images. Second, current deep learning network structures and training strategies adapt poorly to medical images. Therefore, this paper first introduces a visual attention mechanism into the deep learning model so that information can be extracted more effectively for the medical image problem at hand and reasoning is realized at a finer granularity, which also increases the interpretability of the model. Additionally, to match the deep learning network structure and training strategy to medical images, this paper constructs a novel multiscale convolutional neural network model that automatically extracts high-level discriminative appearance features from the original image; the loss function uses a Mahalanobis distance optimization model to obtain a better training strategy, improving the robustness of the network model. The medical image classification task is completed by the above method. Based on these ideas, this paper proposes a medical image classification algorithm based on a visual attention mechanism and a multiscale convolutional neural network. Lung nodule and breast cancer images were classified with the proposed method. The experimental results show that the classification accuracy is not only higher than that of traditional machine learning methods but also improved over other deep learning methods, and the method has good stability and robustness.
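The Mahalanobis distance at the core of the loss mentioned above measures how far a feature vector lies from a class mean, scaled by the feature covariance. The abstract does not give the exact loss formulation, so the sketch below shows only the distance itself on a toy example.

```python
import numpy as np

def mahalanobis_distance(x, mu, cov):
    """Mahalanobis distance between feature vector x and class mean mu
    under covariance cov: sqrt((x - mu)^T cov^{-1} (x - mu)).
    How the paper turns this into a training loss is not specified
    in the abstract."""
    diff = x - mu
    return np.sqrt(diff @ np.linalg.inv(cov) @ diff)

# Toy example: a point 2 units along a direction with variance 4
x = np.array([2.0, 0.0])
mu = np.array([0.0, 0.0])
cov = np.diag([4.0, 1.0])
d = mahalanobis_distance(x, mu, cov)   # 2 / sqrt(4) = 1.0
```

Unlike Euclidean distance, the result discounts directions in which the class features naturally vary a lot, which is why it can give a better-shaped training signal than a plain L2 objective.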


Medical image classification plays a vital role in identifying and diagnosing diseases, which is very helpful to doctors. Conventional methods classify based on shape, color, and/or texture; most small problematic areas are not evident in medical images, which leads to less efficient classification and a poor ability to identify disease. Advanced deep learning algorithms provide an efficient way to construct an end-to-end model that can compute final classification labels from the raw pixels of medical images. However, for high-resolution images with small dataset sizes, deep learning models suffer from very high computational costs and limitations in the number of layers and channels. To overcome these limitations, we propose a new algorithm, Normalized Coding Network with Multi-scale Perceptron (NCNMP), which combines high-level features and traditional features. The architecture of the proposed model includes three stages: training, retrieval, and fusion. We evaluated the proposed algorithm on the medical image dataset NIH2626 and obtained an overall image classification accuracy of 91.35%, which is higher than that of existing methods.
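The three stages named above can be scaffolded as follows. Everything here is illustrative: the abstract does not define the stages, so random values stand in for the coding-network output, simple statistics stand in for the traditional features, and per-group L2 normalization is one plausible reading of the "normalized" in NCNMP.

```python
import numpy as np

rng = np.random.default_rng(1)

def retrieve_features(img):
    """Stage 2 (retrieval): stand-in feature extractors. A trained
    coding network would supply the deep features; simple intensity
    statistics stand in for the traditional features."""
    deep = rng.standard_normal(32)             # placeholder deep features
    trad = np.array([img.mean(), img.std()])   # handcrafted statistics
    return deep, trad

def fuse_normalized(deep, trad):
    """Stage 3 (fusion): L2-normalize each feature group before
    concatenation so neither group dominates by scale."""
    deep = deep / (np.linalg.norm(deep) + 1e-12)
    trad = trad / (np.linalg.norm(trad) + 1e-12)
    return np.concatenate([deep, trad])

img = rng.random((8, 8))
deep, trad = retrieve_features(img)
fused = fuse_normalized(deep, trad)   # input for the multi-scale perceptron
```

Stage 1 (training) would fit the coding network and the downstream perceptron on labeled images; it is omitted here.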


2020
Author(s):
Vishal Mellahalli Siddegowda

Deep learning has produced a powerful class of models with applications in image classification, video recognition, object recognition, natural language processing, and speech recognition. In particular, the deep convolutional neural network (DCNN) is a deep learning model used for image classification: it extracts features from images and uses these extracted features to classify the images (2D or 3D). In this paper, a DCNN is used to classify mammogram images obtained from medical imaging in order to detect benign and malignant cells. The outcome of the study is to demonstrate computing techniques incorporated into medical diagnostics, helping medical professionals take advantage of computer-aided diagnostics and reducing the time pathologists spend inspecting stained tissues, in turn increasing survival rates.
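The extract-then-classify pipeline of a DCNN can be illustrated with a single convolutional layer followed by pooling and a sigmoid output. This toy forward pass is only a stand-in for the mammogram classifier described above: the real network has many layers and learned weights, and the 16x16 input, 3x3 kernel, and random values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation, as in most DL libraries)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def dcnn_forward(img, kernel, w, b):
    """One conv layer -> ReLU -> global average pooling -> sigmoid,
    yielding a benign-vs-malignant probability."""
    fmap = np.maximum(0.0, conv2d(img, kernel))    # feature extraction
    feat = fmap.mean()                             # global average pooling
    return 1.0 / (1.0 + np.exp(-(w * feat + b)))   # classification head

img = rng.random((16, 16))          # toy grayscale "mammogram" patch
kernel = rng.standard_normal((3, 3))
p = dcnn_forward(img, kernel, 1.0, 0.0)
```

Stacking many such conv layers, with weights learned by backpropagation, is what lets a DCNN turn raw pixels into the discriminative features the paragraph describes.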

