A Novel Filtered Segmentation-Based Bayesian Deep Neural Network Framework on Large Diabetic Retinopathy Databases

2020 ◽  
Vol 34 (6) ◽  
pp. 683-692
Author(s):  
Shaik Akbar ◽  
Divya Midhunchakkaravarthy

Image thresholding-based segmentation models play a vital role in the detection of diabetic retinopathy (DR) on large databases. Most conventional segmentation-based classification models are independent of over-segmented regions and outliers. These models also have a lower true-positive rate and a higher error rate on different DR feature sets. In order to overcome these problems, a novel filter-based segmentation framework is designed and implemented on the large DR feature space. In this work, a novel image filtering approach, an optimal image segmentation approach, and a hybrid Bayesian deep learning framework are developed on large DR image databases. Experimental results prove that the proposed filtered segmentation-based Bayesian deep neural network achieves better accuracy and runtime than conventional models on different DR variation databases.
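As a concrete illustration of the thresholding-based segmentation such pipelines build on, here is a minimal sketch of Otsu's histogram method on a synthetic image; it is not the authors' filtering framework, and the image, sizes, and intensities are invented for the example.

```python
import numpy as np

def otsu_threshold(pixels):
    """Return the gray level maximizing between-class variance (Otsu's method)."""
    hist = np.bincount(pixels.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic "fundus-like" image: dark background plus a bright lesion patch.
rng = np.random.default_rng(0)
img = rng.normal(60, 10, (64, 64)).clip(0, 255).astype(np.uint8)
img[20:30, 20:30] = rng.normal(200, 10, (10, 10)).clip(0, 255).astype(np.uint8)
t = otsu_threshold(img)
mask = img >= t  # segmented candidate-lesion region
```

Real DR pipelines add filtering before and region post-processing after this step; the sketch only shows the histogram-split core.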

Author(s):  
Yasir Eltigani Ali Mustaf ◽  
Bashir Hassan Ismail
Diagnosis of diabetic retinopathy (DR) from colour fundus images requires experienced clinicians to determine the presence and significance of a large number of small features. This work proposes a novel deep learning framework for DR, named Adapted Stacked Auto Encoder (ASAE-DNN), in which three hidden layers are used to extract features, followed by a softmax layer for classification. The proposed models are evaluated on the Messidor dataset, comprising 800 training images and 150 test images. Accuracy, precision, recall, and computation time are assessed for the proposed models. The results of these studies show that the ASAE-DNN model was 97% accurate.
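The two-stage idea (hidden layers learn features, a softmax layer classifies them) can be sketched on toy data. This is not the paper's ASAE-DNN or the Messidor images: just a single autoencoder layer plus a two-class logistic (softmax) head in NumPy, with all data and dimensions invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two Gaussian blobs in 8-D standing in for image feature vectors.
n = 200
X = np.vstack([rng.normal(-1.0, 0.5, (n, 8)), rng.normal(1.0, 0.5, (n, 8))])
y = np.array([0] * n + [1] * n)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stage 1: train one autoencoder layer (8 -> 3) by plain gradient descent.
W_enc = rng.normal(0, 0.1, (8, 3))
W_dec = rng.normal(0, 0.1, (3, 8))
lr = 0.05
for _ in range(300):
    H = np.tanh(X @ W_enc)          # hidden code
    R = H @ W_dec                   # linear reconstruction
    err = R - X
    W_dec -= lr * H.T @ err / len(X)
    W_enc -= lr * X.T @ ((err @ W_dec.T) * (1 - H ** 2)) / len(X)

# Stage 2: a softmax head (two-class logistic) trained on the learned code.
H = np.tanh(X @ W_enc)
w = np.zeros(3)
for _ in range(500):
    p = sigmoid(H @ w)
    w -= 0.5 * H.T @ (p - y) / len(H)

acc = ((sigmoid(H @ w) > 0.5) == y).mean()
```

A stacked version would repeat stage 1 layer by layer before attaching the classifier.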


Mathematics ◽  
2020 ◽  
Vol 8 (9) ◽  
pp. 1620 ◽  
Author(s):  
Ganjar Alfian ◽  
Muhammad Syafrudin ◽  
Norma Latif Fitriyani ◽  
Muhammad Anshari ◽  
Pavel Stasa ◽  
...  

Extracting information from individual risk factors provides an effective way to identify diabetes risk and associated complications, such as retinopathy, at an early stage. Deep learning and machine learning algorithms are being utilized to extract information from individual risk factors to improve early-stage diagnosis. This study proposes a deep neural network (DNN) combined with recursive feature elimination (RFE) to provide early prediction of diabetic retinopathy (DR) based on individual risk factors. The proposed model uses RFE to remove irrelevant features and a DNN to classify the diseases. A publicly available dataset was utilized to predict DR during its initial stages, for the proposed model and several current best-practice models. The proposed model achieved 82.033% prediction accuracy, significantly better than the current models. Thus, important risk factors for retinopathy can be successfully extracted using RFE. In addition, to evaluate the proposed prediction model's robustness and generalization, we compared it with other machine learning models on other datasets (nephropathy and hypertension–diabetes). The proposed prediction model will help improve early-stage retinopathy diagnosis based on individual risk factors.
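The RFE step can be sketched with a plain least-squares model: fit, drop the feature with the smallest absolute weight, repeat. This is a generic illustration on synthetic risk-factor data, not the paper's DNN-based pipeline; a full RFE would wrap whatever estimator the final model uses.

```python
import numpy as np

def rfe(X, y, n_keep):
    """Recursive feature elimination with a least-squares linear model:
    repeatedly fit, then drop the feature with the smallest |weight|."""
    idx = list(range(X.shape[1]))
    while len(idx) > n_keep:
        w, *_ = np.linalg.lstsq(X[:, idx], y, rcond=None)
        idx.pop(int(np.argmin(np.abs(w))))
    return idx

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 5))
# Outcome depends only on features 0 and 2; 1, 3, 4 are irrelevant "risk factors".
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(0, 0.1, 300)
kept = rfe(X, y, n_keep=2)  # expected to retain features 0 and 2
```

Comparing raw weights is only fair here because the synthetic features share a common scale; standardize first in practice.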


2017 ◽  
Vol 7 (2) ◽  
pp. 16-41 ◽  
Author(s):  
Naghmeh Moradpoor Sheykhkanloo

Structured Query Language injection (SQLi) is a code injection attack in which hackers inject SQL commands into a database via a vulnerable web application. Injected SQL commands can modify the back-end SQL database and thus compromise the security of a web application. In previous publications, the author proposed a Neural Network (NN)-based model for detection and classification of SQLi attacks. The proposed model was built from three elements: 1) a Uniform Resource Locator (URL) generator, 2) a URL classifier, and 3) a NN model. The proposed model successfully: 1) detected each generated URL as either benign or malicious, and 2) identified the type of SQLi attack for each malicious URL. The published results proved the effectiveness of the proposal. In this paper, the author re-evaluates the performance of the proposal through two scenarios using controversial datasets. The results of the experiments are presented in order to demonstrate the effectiveness of the proposed model in terms of accuracy, true-positive rate, and false-positive rate.
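A toy version of the URL-classification step might look like the following; the token list, example URLs, and threshold rule are invented stand-ins for the paper's trained NN, which would learn such patterns from labelled URLs instead.

```python
import re
from urllib.parse import unquote

# Hypothetical list of suspicious SQL fragments (illustrative only).
SQLI_TOKENS = ["union select", "or 1=1", "' or '", "--", "sleep(", "xp_cmdshell"]

def extract_features(url):
    """Map a URL to counts of suspicious SQL fragments, a stand-in
    for the feature step feeding the NN classifier."""
    decoded = unquote(url).lower()
    return [len(re.findall(re.escape(tok), decoded)) for tok in SQLI_TOKENS]

def classify(url):
    # A trained NN would replace this simple threshold on the feature sum.
    return "malicious" if sum(extract_features(url)) > 0 else "benign"

print(classify("http://shop.example/item?id=42"))
print(classify("http://shop.example/item?id=42%27%20OR%20%271%27=%271"))
```

Percent-decoding before matching matters: the second URL only reveals its `' OR '1'='1` payload after `unquote`.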


2017 ◽  
Author(s):  
Luís Dias ◽  
Rosalvo Neto

In November 2015, Google released Tensorflow, an open-source machine learning framework that can be used to implement Deep Neural Network algorithms, a class of algorithms that shows great potential in solving complex problems. Considering the importance of usability to software success, this research aims to perform a usability analysis of Tensorflow and to compare it with another widely used framework, R. The evaluation was performed through usability tests with university students. The study indicates that Tensorflow's usability is equal to or better than the usability of traditional frameworks used by the scientific community.


2021 ◽  
Author(s):  
Sara Saleh Alfozan ◽  
Mohamad Mahdi Hassan

Infection of agricultural plants is a serious threat to food safety, as it can severely damage plants' yielding capacity. Farmers are the primary victims of this threat. With the advancement of AI, image-based intelligent apps can play a vital role in mitigating this threat through quick and early detection of plant infections. In this paper, we present such a mobile app. We have developed MajraDoc to detect some common diseases in local agricultural plants. We created a dataset of 10,886 images covering ten classes of plant diseases to train the deep neural network. The VGG-19 network model was modified and trained using transfer learning techniques. The model achieved high accuracy, and the application performed well in predicting all ten classes of infections.
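Transfer learning here means freezing pretrained layers and training only a new classification head. The sketch below mimics that pattern with a fixed random projection standing in for VGG-19's pretrained convolutional stack; the data, dimensions, and two-class setup are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Frozen "backbone": a fixed random projection standing in for pretrained
# convolutional layers (which are likewise not updated during fine-tuning).
W_frozen = rng.normal(0, 0.3, (16, 8))

def features(x):
    return np.maximum(0.0, x @ W_frozen)  # ReLU features from the frozen stack

# Toy two-class "leaf image" vectors.
n = 150
X = np.vstack([rng.normal(-1, 0.6, (n, 16)), rng.normal(1, 0.6, (n, 16))])
y = np.array([0] * n + [1] * n)

# Train only the new head (logistic regression) on the frozen features.
F = features(X)
w = np.zeros(8)
for _ in range(400):
    p = 1.0 / (1.0 + np.exp(-(F @ w)))
    w -= 0.1 * F.T @ (p - y) / len(F)

acc = (((1.0 / (1.0 + np.exp(-(F @ w)))) > 0.5) == y).mean()
```

With real VGG-19 the same recipe applies: load pretrained weights, freeze them, replace the final dense layers with a ten-class head, and train only the head.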


Sensors ◽  
2020 ◽  
Vol 20 (19) ◽  
pp. 5695
Author(s):  
Maciej Stanuch ◽  
Marek Wodzinski ◽  
Andrzej Skalski

Devices and systems secured by biometric factors have become part of our lives because they are convenient, easy to use, reliable, and secure. They use information about unique features of our bodies in order to authenticate a user. It is possible to enhance the security of these devices by adding a supplementary modality while keeping the user experience at the same level. Palm vein systems are based on infrared wavelengths used for capturing images of users' veins. They are both convenient for the user and among the most secure biometric solutions. The proposed system uses IR and UV wavelengths; the images are then processed by a deep convolutional neural network for extraction of biometric features and authentication of users. We tested the system in a verification scenario that consisted of checking whether the images collected from the user contained the same biometric features as those in the database. The True Positive Rate (TPR) achieved by the system when the information from the two modalities was combined was 99.5%, with the acceptance threshold set at the Equal Error Rate (EER).
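The EER operating point mentioned above can be found by sweeping the acceptance threshold until the false-accept and false-reject rates meet. The score distributions below are synthetic stand-ins for fused IR/UV match scores, not the paper's data.

```python
import numpy as np

def eer_threshold(genuine, impostor):
    """Sweep candidate thresholds; return the one where the false-accept
    rate (FAR) and false-reject rate (FRR) are approximately equal."""
    candidates = np.sort(np.concatenate([genuine, impostor]))
    best_t, best_gap = candidates[0], np.inf
    for t in candidates:
        far = (impostor >= t).mean()  # impostors wrongly accepted
        frr = (genuine < t).mean()    # genuine users wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, best_t = abs(far - frr), t
    return best_t

rng = np.random.default_rng(4)
# Fused match scores, e.g. an average of the per-modality similarities.
genuine = rng.normal(0.9, 0.05, 500)
impostor = rng.normal(0.4, 0.10, 500)
t = eer_threshold(genuine, impostor)
tpr = (genuine >= t).mean()  # true-positive rate at the EER operating point
```

Reporting TPR "at the EER threshold", as the abstract does, fixes the operating point so that systems can be compared fairly.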


2019 ◽  
Vol 11 (1) ◽  
pp. 1-17
Author(s):  
Pinki Sharma ◽  
Jyotsna Sengupta ◽  
P. K. Suri

Cloud computing is an internet-based technique in which users utilize online resources for computing services. Attacks or intrusions into cloud services are a major issue in the cloud environment since they degrade performance. In this article, we propose an adaptive lion-based neural network (ALNN) to detect intrusion behaviour. Initially, the cloud network generates clusters using a WLI fuzzy clustering mechanism, which yields different numbers of clusters in which the data objects are grouped together. The clustered data are then fed into the newly designed adaptive lion-based neural network. The proposed method combines the Levenberg-Marquardt algorithm for neural network training with an adaptive lion algorithm, in which female lions update the weights adaptively using the lion optimization algorithm. The proposed method then detects malicious activity through a training process: the different clustered data are given to the proposed ALNN model, aggregated once trained, and the aggregated data are fed into the ALNN, where intrusion behaviour is detected. Finally, the performance of the proposed method is analysed through simulation in terms of accuracy, false-positive rate, and true-positive rate. The proposed ALNN algorithm attains 96.46% accuracy, which ensures better detection performance.
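As a rough sketch of the clustering stage, the following implements standard fuzzy c-means on synthetic traffic-like data; the paper's WLI mechanism (which also determines the number of clusters) is replaced here by a fixed cluster count for brevity.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=5):
    """Plain fuzzy c-means: soft memberships U (rows sum to 1) and
    cluster centers, alternated until convergence."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))  # initial fuzzy memberships
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        inv = d ** -p
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers

rng = np.random.default_rng(6)
# Two record blobs standing in for benign vs. intrusive traffic flows.
X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])
U, centers = fuzzy_c_means(X)
labels = U.argmax(axis=1)
```

In the full pipeline each resulting cluster's data would then be passed to the ALNN for training and detection.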


2021 ◽  
Author(s):  
Rakesh Kumar Jha ◽  
Preety D Swami

Time-frequency analysis plays a vital role in the fault diagnosis of nonstationary vibration signals acquired from mechanical systems. However, practical applications face the challenges of continuous variation in speed and load, and the disturbances introduced by noise are inevitable. This paper aims to develop a robust method for fault identification in bearings under varying speed, load, and noisy conditions. An Optimal Wavelet Subband Deep Neural Network (OWS-DNN) technique is proposed that automatically extracts features from an optimal wavelet subband selected on the basis of Shannon entropy. After denoising, the optimal subband is dimensionally reduced by the encoder section of an autoencoder, whose output can be considered the data features. Finally, a softmax classifier is employed to classify the encoder output. Vibration signals were recorded on a machinery fault simulator for various combinations of speed and load, for healthy and faulty bearings, and the deep neural network was trained on these signals under various noise levels. The experimental results reveal high fault-classification accuracy compared with the other techniques under comparison.
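The subband-selection idea can be sketched with a one-level Haar split and a Shannon entropy score over normalized subband energies. This is a simplified stand-in for the paper's optimal wavelet subband search: the wavelet, decomposition depth, signal, and selection rule are all assumptions for illustration.

```python
import numpy as np

def haar_subbands(x):
    """One-level Haar wavelet split of a 1-D signal into
    approximation (low-pass) and detail (high-pass) subbands."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def shannon_entropy(sub):
    """Shannon entropy of the subband's normalized energy distribution."""
    p = sub ** 2 / (sub ** 2).sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(7)
t = np.arange(1024) / 1024.0
# Noisy vibration-like signal: a low-frequency tone buried in noise.
x = np.sin(2 * np.pi * 8 * t) + 0.3 * rng.normal(size=1024)
a, d = haar_subbands(x)
scores = {"approx": shannon_entropy(a), "detail": shannon_entropy(d)}
best = min(scores, key=scores.get)  # lower entropy = more concentrated energy
```

A multi-level decomposition would score every subband this way and keep the one whose entropy indicates the most structured (least noise-like) content.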

