Detection of Diabetic Eye Disease from Retinal Images Using a Deep Learning Based CenterNet Model

Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5283
Author(s):  
Tahira Nazir ◽  
Marriam Nawaz ◽  
Junaid Rashid ◽  
Rabbia Mahum ◽  
Momina Masood ◽  
...  

Diabetic retinopathy (DR) is an eye disease that alters the blood vessels of a person suffering from diabetes. Diabetic macular edema (DME) occurs when DR affects the macula, causing fluid to accumulate in it. Efficient screening systems require experts to manually analyze images to recognize diseases. However, due to the challenging nature of the screening method and the lack of trained human resources, devising effective screening-oriented treatment is an expensive task. Automated systems attempt to cope with these challenges; however, existing methods do not generalize well to multiple diseases and real-world scenarios. To solve these issues, we propose a new method comprising two main steps. The first involves dataset preparation and feature extraction, and the second trains a custom deep learning based CenterNet model for eye disease classification. Initially, we generate annotations for suspected samples to locate the precise region of interest; the second part of the proposed solution then trains the CenterNet model over the annotated images. Specifically, we use DenseNet-100 for feature extraction, on top of which the one-stage detector CenterNet is employed to localize and classify the disease lesions. We evaluated our method on the challenging APTOS-2019 and IDRiD datasets and attained average accuracies of 97.93% and 98.10%, respectively. We also performed cross-dataset validation on the benchmark EYEPACS and Diaretdb1 datasets. Both qualitative and quantitative results demonstrate that our proposed approach outperforms state-of-the-art methods thanks to the more effective localization power of CenterNet, which can recognize small lesions and resist over-fitting to the training data. Our proposed framework is proficient in correctly locating and classifying disease lesions. In comparison to existing DR and DME classification approaches, our method can extract representative key points from low-intensity and noisy images and accurately classify them. Hence, our approach can play an important role in the automated detection and recognition of DR and DME lesions.
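The decoding step of a CenterNet-style detector can be sketched in plain NumPy: the lesion-centre heatmap predicted by the backbone is reduced to peak coordinates by a 3×3 local-maximum test, the trick CenterNet uses in place of non-maximum suppression. The heatmap values and threshold below are illustrative only, not taken from the paper.

```python
import numpy as np

def decode_center_heatmap(heatmap, k=3, threshold=0.5):
    """Pick lesion-centre candidates from a CenterNet-style heatmap.

    A peak is a cell that equals the maximum of its 3x3 neighbourhood
    and exceeds `threshold`. Returns up to `k` (row, col, score)
    tuples, highest score first.
    """
    h, w = heatmap.shape
    padded = np.pad(heatmap, 1, mode="constant", constant_values=-np.inf)
    # 3x3 local maximum at every position, built from shifted views
    local_max = np.max(
        [padded[r:r + h, c:c + w] for r in range(3) for c in range(3)],
        axis=0,
    )
    peaks = (heatmap == local_max) & (heatmap >= threshold)
    rows, cols = np.nonzero(peaks)
    scores = heatmap[rows, cols]
    order = np.argsort(scores)[::-1][:k]
    return [(int(rows[i]), int(cols[i]), float(scores[i])) for i in order]

# Toy heatmap with two confident peaks
hm = np.zeros((8, 8))
hm[2, 3] = 0.9
hm[6, 6] = 0.7
print(decode_center_heatmap(hm, k=2))  # [(2, 3, 0.9), (6, 6, 0.7)]
```

In the full pipeline, the returned peak coordinates would be refined by the network's offset and size heads to produce lesion boxes.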

2021 ◽  
Vol 3 ◽  
Author(s):  
Tobias Tesch ◽  
Stefan Kollet ◽  
Jochen Garcke

A deep learning (DL) model learns a function relating a set of input variables with a set of target variables. While the representation of this function in the form of the DL model often lacks interpretability, several interpretation methods exist that provide descriptions of the function (e.g., measures of feature importance). On the one hand, these descriptions may build trust in the model or reveal its limitations. On the other hand, they may lead to new scientific understanding. In any case, a description is only useful if one is able to identify whether parts of it reflect spurious instead of causal relations (e.g., random associations in the training data instead of associations due to a physical process). However, this can be challenging even for experts because, in scientific tasks, causal relations between input and target variables are often unknown or extremely complex. Commonly, this challenge is addressed by training separate instances of the considered model on random samples of the training set and identifying differences between the obtained descriptions. Here, we demonstrate that this may not be sufficient and propose to additionally consider more general modifications of the prediction task. We refer to the proposed approach as the variant approach and demonstrate its usefulness and its superiority over pure sampling approaches with two illustrative prediction tasks from hydrometeorology. While being conceptually simple, to our knowledge the approach has not been formalized and systematically evaluated before.
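The contrast between the sampling approach and the variant approach can be illustrated with a deliberately tiny example: synthetic data, and an ordinary least-squares fit standing in for a DL model, with its coefficients playing the role of the "description". All names and numbers here are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic task: x0 is causal; x1 is an almost-exact copy of x0
# (a spurious association); y depends only on x0.
n = 200
x0 = rng.normal(size=n)
x1 = x0 + 0.01 * rng.normal(size=n)       # spurious near-duplicate
y = x0 + 0.1 * rng.normal(size=n)
X = np.column_stack([x0, x1])

def fit(X, y):
    """Least-squares 'model'; its coefficient vector is the description."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# (a) Sampling approach: refit on bootstrap samples of the SAME task.
boot = []
for _ in range(20):
    idx = rng.integers(0, n, size=n)
    boot.append(fit(X[idx], y[idx]))
boot = np.array(boot)

# (b) Variant approach: modify the prediction task itself, here by
# removing the redundant input x1 and refitting.
variant_coef = fit(X[:, :1], y)[0]

print("bootstrap coef spread on x0:", boot[:, 0].std())
print("variant-task coef on x0:   ", variant_coef)
```

The collinearity makes the sampled coefficients on x0 scatter widely, while the task variant cleanly recovers a coefficient near 1, separating the causal relation from the spurious one.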


2021 ◽  
Author(s):  
J. Annrose ◽  
N. Herald Anantha Rufus ◽  
C. R. Edwin Selva Rex ◽  
D. Godwin Immanuel

Abstract Bean, botanically called Phaseolus vulgaris L, belongs to the Fabaceae family. During bean disease identification, unnecessary economic losses occur due to delayed treatment, incorrect treatment, and lack of knowledge. Existing deep learning and machine learning techniques face several issues, such as high computational complexity, the high cost associated with training data, long execution times, noise, feature dimensionality, low accuracy, and low speed. To tackle these problems, we have proposed a hybrid deep learning model with an Archimedes optimization algorithm (HDL-AOA) for bean disease classification. In this work, there are five bean classes, of which one is a healthy class, whereas the remaining four classes indicate different diseases, namely Bean halo blight, Pythium diseases, Rhizoctonia root rot, and Anthracnose abnormalities, acquired from the Soybean (Large) Data Set. The hybrid deep learning technique combines wavelet packet decomposition (WPD) and long short-term memory (LSTM). Initially, the WPD decomposes the input images into four sub-series. For these sub-series, four LSTM networks were developed. During bean disease classification, the Archimedes optimization algorithm (AOA) enhances the classification accuracy of the multiple LSTM networks. The HDL-AOA model for bean disease classification is implemented in MATLAB. The proposed model accomplishes a lower MAPE than other existing methods. Finally, the proposed HDL-AOA model achieves excellent classification results across different evaluation measures such as accuracy, specificity, sensitivity, precision, recall, and F-score.
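A minimal sketch of the decomposition step, assuming a Haar basis (the abstract does not specify the wavelet): a depth-2 wavelet packet split, where both the approximation and the detail band are split again, yields the four sub-series that would each feed one LSTM in the HDL-AOA scheme.

```python
import numpy as np

def haar_step(x):
    """One level of the Haar transform: approximation and detail halves."""
    even, odd = x[0::2], x[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def wpd_two_levels(x):
    """Depth-2 wavelet PACKET decomposition: unlike the plain wavelet
    transform, the detail band is split again too, giving 4 sub-series."""
    a, d = haar_step(np.asarray(x, dtype=float))
    aa, ad = haar_step(a)
    da, dd = haar_step(d)
    return aa, ad, da, dd

signal = np.sin(np.linspace(0, 4 * np.pi, 16))
bands = wpd_two_levels(signal)
print([b.shape for b in bands])  # four sub-series, each of length 4
```

Because each Haar step is orthonormal, the four sub-series jointly preserve the signal's energy, so no information is lost before the LSTM stage.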


2021 ◽  
Author(s):  
Emir Akcin ◽  
Kemal Sami Isleyen ◽  
Enes Ozcan ◽  
Alaa Ali Hameed ◽  
Erdal Alimovski ◽  
...  


Author(s):  
Bhuvaneswari Chandran ◽  
P. Aruna ◽  
D. Loganathan

The purpose of the chapter is to present a novel method to classify lung diseases from computed tomography images, assisting physicians in the diagnosis of lung diseases. The method is based on a new approach that combines a proposed M2 feature extraction method and a novel hybrid genetic approach with different types of classifiers. The feature extraction methods performed in this work are moment invariants, the proposed multiscale filter method, and the proposed M2 feature extraction method. The essential features resulting from the feature extraction stage are selected by the novel hybrid genetic feature selection algorithm. Classification is performed by support vector machine, multilayer perceptron neural network, and Bayes Net classifiers. The results obtained demonstrate that the proposed technique is efficient and robust. The combination of the proposed M2 feature extraction with the proposed hybrid GA and the SVM classifier achieves the highest classification accuracy.
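The genetic feature selection stage can be sketched as a plain genetic algorithm over binary feature masks. Everything below is an illustrative assumption: the data are synthetic stand-ins for CT features, and the fitness function uses a nearest-centroid classifier as a lightweight stand-in for the chapter's SVM.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: 2 informative features out of 10, two classes.
n, d = 120, 10
y = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, d))
X[y == 1, :2] += 2.5                      # classes separate on features 0, 1

def fitness(mask):
    """Accuracy of a nearest-centroid classifier on the selected features."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = (np.linalg.norm(Xs - c1, axis=1) <
            np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return float((pred == y).mean())

pop = rng.random((20, d)) < 0.5           # population of binary feature masks
best_mask, best_fit = pop[0], 0.0
for _ in range(15):
    scores = np.array([fitness(m) for m in pop])
    if scores.max() > best_fit:           # keep the best mask seen so far
        best_fit, best_mask = scores.max(), pop[scores.argmax()].copy()
    # tournament selection of parents
    idx = [max(rng.integers(0, 20, 2), key=lambda i: scores[i])
           for _ in range(20)]
    parents = pop[idx]
    # one-point crossover in pairs, then bit-flip mutation
    children = parents.copy()
    for i in range(0, 20, 2):
        cut = int(rng.integers(1, d))
        children[i, cut:], children[i + 1, cut:] = (
            parents[i + 1, cut:], parents[i, cut:])
    children ^= rng.random((20, d)) < 0.05
    pop = children

print("selected features:", np.flatnonzero(best_mask), "accuracy:", best_fit)
```

A hybrid GA, as in the chapter, would additionally apply a local refinement step to each mask; the skeleton of selection, crossover, and mutation stays the same.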


2020 ◽  
Vol 10 (16) ◽  
pp. 5582
Author(s):  
Xiaochen Yuan ◽  
Tian Huang

In this paper, a novel approach that uses a deep learning technique is proposed to detect and identify a variety of image operations. First, we propose the spatial domain-based nonlinear residual (SDNR) feature extraction method, constructing residual values from locally supported filters in the spatial domain. By applying minimum and maximum operators, diversity and nonlinearity are introduced; moreover, this construction brings asymmetry to the distribution of SDNR samples. Then, we propose applying a deep learning technique to the extracted SDNR features to detect and classify a variety of image operations. Many experiments have been conducted to verify the performance of the proposed approach, and the results indicate that the proposed method performs well in detecting and identifying the various common image postprocessing operations. Furthermore, comparisons between the proposed approach and existing methods show the superiority of the proposed approach.
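The residual construction can be sketched as follows: several small locally supported high-pass filters each produce a residual map, and the element-wise minimum and maximum across the filter bank introduce the nonlinearity and asymmetry the abstract describes. The three difference filters below are assumed for illustration; the paper's actual filter bank may differ.

```python
import numpy as np

# Illustrative locally supported high-pass filters (3x3)
FILTERS = [
    np.array([[0, 0, 0], [1, -1, 0], [0, 0, 0]]),   # horizontal difference
    np.array([[0, 1, 0], [0, -1, 0], [0, 0, 0]]),   # vertical difference
    np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]]),   # diagonal difference
]

def filter3x3_valid(img, kern):
    """'Valid'-mode 3x3 filtering written with shifted views only."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for r in range(3):
        for c in range(3):
            out += kern[r, c] * img[r:r + h - 2, c:c + w - 2]
    return out

def sdnr_features(img):
    """Spatial-domain nonlinear residuals: one residual map per filter,
    reduced by element-wise min and max across the filter bank."""
    residuals = np.stack([filter3x3_valid(img, k) for k in FILTERS])
    return residuals.min(axis=0), residuals.max(axis=0)

img = np.arange(25, dtype=float).reshape(5, 5)   # toy gradient image
r_min, r_max = sdnr_features(img)
print(r_min.shape, r_max.shape)  # (3, 3) (3, 3)
```

The min/max pair is what breaks the symmetry: for a simple gradient image the two maps differ everywhere, whereas a single linear filter would yield one symmetric residual distribution.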


2019 ◽  
Vol 9 (18) ◽  
pp. 3930 ◽  
Author(s):  
Jaehyun Yoo ◽  
Jongho Park

This paper studies indoor localization based on the Wi-Fi received signal strength indicator (RSSI). In addition to position estimation, this study examines the expansion of applications using Wi-Fi RSSI data sets in three areas: (i) feature extraction, (ii) mobile fingerprinting, and (iii) mapless localization. First, the features of Wi-Fi RSSI observations are extracted with respect to different floor levels and designated landmarks. Second, the mobile fingerprinting method is proposed to allow a trainer to collect training data efficiently, which is faster and more efficient than the conventional static fingerprinting method. Third, when the map is unknown, the trajectory learning method is suggested to learn map information from crowdsourced data. All of these parts are interconnected, from feature extraction and mobile fingerprinting to map learning and estimation. Based on the experimental results, we observed (i) data points clearly classified by the feature extraction method as regards floors and landmarks, (ii) efficient mobile fingerprinting compared to conventional static fingerprinting, and (iii) improvement of the positioning accuracy owing to the trajectory learning.
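The estimation step that fingerprinting enables can be sketched as weighted k-nearest-neighbour matching in RSSI space: a query RSSI vector is compared with the fingerprint database, and the positions of the closest fingerprints are averaged. The database and query below are entirely hypothetical.

```python
import numpy as np

# Hypothetical fingerprint database: RSSI vectors (dBm) from 3 access
# points, each row collected at a known (x, y) position.
fingerprints = np.array([
    [-40., -70., -80.],
    [-55., -55., -75.],
    [-70., -45., -60.],
    [-80., -60., -42.],
])
positions = np.array([[0., 0.], [2., 0.], [4., 1.], [6., 3.]])

def knn_locate(rssi, k=2):
    """Weighted k-nearest-neighbour position estimate in signal space."""
    dists = np.linalg.norm(fingerprints - rssi, axis=1)
    nearest = np.argsort(dists)[:k]
    w = 1.0 / (dists[nearest] + 1e-9)      # closer fingerprints weigh more
    return (w[:, None] * positions[nearest]).sum(0) / w.sum()

est = knn_locate(np.array([-48., -62., -78.]))
print("estimated position:", est)
```

Mobile fingerprinting changes only how the database rows are collected (along a trajectory rather than at static points); the matching step above stays the same.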

