An Investigation into the Application of Deep Learning in the Detection and Mitigation of DDOS Attack on SDN Controllers

Technologies, 2021, Vol 9 (1), pp. 14
Author(s): James Dzisi Gadze, Akua Acheampomaa Bamfo-Asante, Justice Owusu Agyemang, Henry Nunoo-Mensah, Kwasi Adu-Boahen Opare

Software-Defined Networking (SDN) is a new paradigm that revolutionizes the idea of a software-driven network through the separation of the control and data planes. It addresses the problems of traditional network architecture. Nevertheless, this architecture is exposed to several security threats, e.g., the distributed denial-of-service (DDoS) attack, which is hard to contain in such software-based networks. The centralized controller in SDN makes it a single point of attack as well as a single point of failure. In this paper, two deep learning models, long short-term memory (LSTM) and convolutional neural network (CNN), are investigated to illustrate their feasibility and efficiency for detecting and mitigating DDoS attacks. The paper focuses on TCP, UDP, and ICMP flood attacks that target the controller. The performance of the models was evaluated based on accuracy, recall, and true negative rate, and compared with that of classical machine learning models. We further provide details on the time taken to detect and mitigate the attack. Our results show that the RNN LSTM is a viable deep learning algorithm for the detection and mitigation of DDoS on the SDN controller. Our proposed model produced an accuracy of 89.63%, outperforming linear models such as SVM (86.85%) and Naive Bayes (82.61%). Although KNN outperformed our proposed model (achieving an accuracy of 99.4%), our model provides a good trade-off between precision and recall, which makes it suitable for DDoS classification. In addition, we found that the split ratio of the training and testing datasets can change the measured performance of a deep learning algorithm: the model achieved its best performance with a 70/30 split, compared with 80/20 and 60/40 split ratios.
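For reference, the three evaluation metrics named above follow directly from the binary confusion matrix; a minimal sketch with illustrative counts (not the paper's data):

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, recall, and true negative rate from binary
    confusion-matrix counts (attack = positive class)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)        # share of attacks correctly flagged
    tnr = tn / (tn + fp)           # share of benign traffic correctly passed
    return accuracy, recall, tnr

# Hypothetical counts for a DDoS test set, for illustration only
acc, rec, tnr = classification_metrics(tp=880, tn=910, fp=90, fn=120)
print(acc, rec, tnr)   # → 0.895 0.88 0.91
```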

2021, Vol 2021, pp. 1-13
Author(s): G. Kothai, E. Poovammal, Gaurav Dhiman, Kadiyala Ramana, Ashutosh Sharma, ...

The vehicular ad hoc network (VANET) is an emerging research topic in intelligent transportation systems that furnishes essential information to the vehicles in the network. Nearly 150 thousand people are affected by road accidents, a toll that must be minimized, so improving safety is required in VANET. The prediction of traffic congestion plays a momentous role in minimizing road accidents and improving traffic management. However, the dynamic behavior of the vehicles in the network degrades the performance of deep learning models in predicting road congestion. To overcome this problem, this paper proposes a new hybrid model that ensembles a boosted long short-term memory ensemble (BLSTME) with a convolutional neural network (CNN) to cope with the dynamic behavior of the vehicles and to predict traffic congestion effectively. The CNN extracts features from traffic images, and the proposed BLSTME trains and strengthens weak classifiers for the prediction of congestion. The proposed model is developed using TensorFlow Python libraries and is tested in a real traffic scenario simulated using SUMO and OMNeT++. Extensive experiments are carried out, and the model is measured with performance metrics such as prediction accuracy, precision, and recall. The experimental results show 98% accuracy, 96% precision, and 94% recall, implying that the proposed model outperforms the other existing algorithms, furnishing 10% higher stability and performance than comparable deep learning models.
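The "boosted ensemble of weak classifiers" idea at the heart of BLSTME can be illustrated with classic AdaBoost over decision stumps on a toy one-dimensional feature. This is a hedged sketch of the boosting principle only; the paper's BLSTME boosts LSTM learners on traffic data, which is far heavier than this illustration:

```python
import math

def train_boosted_stumps(X, y, rounds=3):
    """AdaBoost-style boosting: each round fits the decision stump
    (threshold test) with the lowest weighted error, then up-weights
    the samples it misclassified so the next stump focuses on them."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        best = None
        for t in sorted(set(X)):
            for pol in (1, -1):
                pred = [pol if x > t else -pol for x in X]
                err = sum(wi for wi, p, yi in zip(w, pred, y) if p != yi)
                if best is None or err < best[0]:
                    best = (err, t, pol)
        err, t, pol = best
        alpha = 0.5 * math.log((1 - err) / max(err, 1e-10))
        ensemble.append((alpha, t, pol))
        # Re-weight: misclassified samples gain weight for the next round
        pred = [pol if x > t else -pol for x in X]
        w = [wi * math.exp(-alpha * p * yi) for wi, p, yi in zip(w, pred, y)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    """Weighted vote of all stumps."""
    vote = sum(a * (pol if x > t else -pol) for a, t, pol in ensemble)
    return 1 if vote > 0 else -1

# Toy data no single stump can fit: "congestion" (+1) only in the middle range
X, y = [0, 1, 2, 3, 4, 5], [-1, -1, 1, 1, -1, -1]
model = train_boosted_stumps(X, y)
correct = sum(predict(model, x) == yi for x, yi in zip(X, y))
```

No single threshold test classifies this toy set better than 4 of 6; the weighted combination of three stumps recovers the middle interval.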


Medical imaging is an emerging field in engineering. MRI scanning is the traditional way to identify a brain tumor. The core drawback of manual MRI studies conducted by surgeons is manual visual error, which can lead to false identification of tumor boundaries. To avoid such human errors, modern engineering has adopted deep learning as a new technique for brain tumor segmentation. A deep learning convolutional network can be further developed by means of various deep learning models for better performance. Hence, we propose a new deep learning algorithm that can more efficiently identify types of brain tumor in terms of imaging level, such as T1, T2, and T1ce. The proposed system identifies tumors using a convolutional neural network (CNN) that works with the proposed algorithm "Sculptor DeepCNet". The proposed model can be used by surgeons to identify post-surgical remains (if any) of brain tumors, and thus the proposed research can be useful for neurosurgical image assessments. This paper discusses the newly developed algorithm and its testing results.


Information, 2020, Vol 11 (5), pp. 279
Author(s): Bambang Susilo, Riri Fitri Sari

The internet has become an inseparable part of human life, and the number of devices connected to it is increasing sharply. In particular, Internet of Things (IoT) devices have become a part of everyday human life. However, challenges are multiplying, and their solutions are not well defined. More and more security challenges concerning the IoT are arising. Many methods have been developed to secure IoT networks, but many more can still be developed. One proposed way to improve IoT security is to use machine learning. This research discusses several machine-learning and deep-learning strategies, as well as standard datasets, for improving the security performance of the IoT. We developed an algorithm for detecting denial-of-service (DoS) attacks using a deep-learning algorithm. This research used the Python programming language with packages such as scikit-learn, TensorFlow, and Seaborn. We found that a deep-learning model could increase accuracy so that the mitigation of attacks that occur on an IoT network is as effective as possible.
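As a self-contained illustration of the idea (not the paper's actual network or dataset), a tiny feed-forward classifier can separate attack flows from benign ones using plain NumPy on synthetic, standardized traffic features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, standardized flow features (e.g., packet rate, source-IP spread);
# attack traffic clusters away from benign traffic. Illustrative only.
benign = rng.normal(loc=[-1.0, -1.0], scale=0.3, size=(200, 2))
attack = rng.normal(loc=[1.0, 1.0], scale=0.3, size=(200, 2))
X = np.vstack([benign, attack])
y = np.hstack([np.zeros(200), np.ones(200)])   # 1 = attack

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# One hidden layer, trained with full-batch gradient descent on cross-entropy
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 1.0
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                      # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()              # P(attack)
    g_out = ((p - y) / len(y))[:, None]           # d(loss)/d(logit)
    g_h = (g_out @ W2.T) * (1.0 - h**2)           # backprop through tanh
    W2 -= lr * (h.T @ g_out); b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * (X.T @ g_h);   b1 -= lr * g_h.sum(axis=0)

pred = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2).ravel() > 0.5
accuracy = float((pred == y).mean())
```

A production detector would of course use a framework such as TensorFlow and a real intrusion dataset; the sketch only shows the shape of the training loop.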


2021, Vol 5 (4), pp. 73
Author(s): Mohamed Chetoui, Moulay A. Akhloufi, Bardia Yousefi, El Mostafa Bouattane

The coronavirus pandemic is spreading around the world. Medical imaging modalities such as radiography play an important role in the fight against COVID-19. Deep learning (DL) techniques have been able to improve medical imaging tools and help radiologists to make clinical decisions for the diagnosis, monitoring and prognosis of different diseases. Computer-Aided Diagnostic (CAD) systems can improve work efficiency by precisely delineating infections in chest X-ray (CXR) images, thus facilitating subsequent quantification. CAD can also help automate the scanning process and reshape the workflow with minimal patient contact, providing the best protection for imaging technicians. The objective of this study is to develop a deep learning algorithm to detect COVID-19, pneumonia and normal cases on CXR images. We propose two classification problems: (i) a binary classification to distinguish COVID-19 from normal cases and (ii) a multiclass classification covering COVID-19, pneumonia and normal. Nine datasets and more than 3200 COVID-19 CXR images are used to assess the efficiency of the proposed technique. The model is trained on a subset of the National Institutes of Health (NIH) dataset using the swish activation function, thus improving the training accuracy to detect COVID-19 and other pneumonia. The models are tested on eight merged datasets and on individual test sets in order to confirm the degree of generalization of the proposed algorithms. An explainability algorithm is also developed to visually show the location of the lung-infected areas detected by the model. Moreover, we provide a detailed analysis of the misclassified images. The obtained results achieve high performance, with an Area Under the Curve (AUC) of 0.97 for multiclass classification (COVID-19 vs. other pneumonia vs. normal) and 0.98 for the binary model (COVID-19 vs. normal). The average sensitivity and specificity are 0.97 and 0.98, respectively, and the sensitivity for the COVID-19 class reaches 0.99. The results outperform comparable state-of-the-art models for the detection of COVID-19 on CXR images. The explainability model shows that our model can efficiently identify the signs of COVID-19.
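The swish activation referred to above has a one-line definition, x · sigmoid(βx); a minimal sketch:

```python
import numpy as np

def swish(x, beta=1.0):
    """Swish activation: x * sigmoid(beta * x). Smooth, non-monotonic
    just below zero, and close to the identity for large positive inputs."""
    return x / (1.0 + np.exp(-beta * x))

x = np.array([-2.0, 0.0, 2.0])
print(swish(x))   # swish(0) == 0; swish(2) ≈ 1.7616
```

Unlike ReLU, swish lets a small gradient flow for negative inputs, which is one reason it can improve training accuracy in deep networks.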


2018, Vol 2018, pp. 1-10
Author(s): Chia-Yen Lee, Guan-Lin Chen, Zhong-Xuan Zhang, Yi-Hong Chou, Chih-Chung Hsu

The sonogram is currently an effective way to screen for and diagnose cancer owing to its convenience and harmlessness to humans. Traditionally, lesion boundary segmentation is performed first and classification is then conducted to reach a judgment of benign or malignant tumor. In addition, sonograms often contain much speckle noise and intensity inhomogeneity. This study proposes a novel benign-or-malignant tumor classification system that comprises intensity inhomogeneity correction and a stacked denoising autoencoder (SDAE) and is suitable for small datasets. A classifier is established by extracting features in the multilayer training of the SDAE; automatic analysis of imaging features by the deep learning algorithm is applied to image classification, allowing the system to achieve high efficiency and robust discrimination. In this study, two datasets (private and public) are used to train the deep learning models. For each dataset, two groups of test images are compared: the original images and the images after intensity inhomogeneity correction. The results show that when the deep learning algorithm is applied to sonograms after intensity inhomogeneity correction, there is a significant increase in tumor discrimination accuracy. This study demonstrates that it is important to use preprocessing to highlight image features before giving them to deep learning models; doing so yields better classification accuracy than using the original images alone.
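One common retrospective form of intensity inhomogeneity correction (a hedged sketch of the general approach, not necessarily the authors' exact method) is to estimate the slowly varying bias field with a heavy blur and divide it out:

```python
import numpy as np

def box_blur(img, k):
    """Mean filter with an odd k x k window, edge-replicated borders."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def correct_inhomogeneity(img, k=15, eps=1e-6):
    """Divide out a low-frequency bias-field estimate."""
    field = box_blur(img.astype(float), k)
    return img / (field + eps)

# Uniform 'tissue' corrupted by a smooth intensity gradient
img = np.outer(np.linspace(0.5, 1.5, 64), np.ones(64))
corrected = correct_inhomogeneity(img)   # ≈ 1 away from the borders
```

The corrected image has far less low-frequency intensity drift, which is exactly the property that helps a downstream classifier focus on lesion texture rather than scanner artifacts.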


2020
Author(s): Luna Zhang, Yang Zou, Ningning He, Yu Chen, Zhen Chen, ...

Abstract: As a novel type of post-translational modification, lysine 2-hydroxyisobutyrylation (Khib) plays an important role in gene transcription and signal transduction. To understand its regulatory mechanism, the essential step is the recognition of Khib sites. Thousands of Khib sites have been experimentally verified across five different species. However, only a couple of traditional machine-learning algorithms have been developed to predict Khib sites, and only for limited species; a general prediction algorithm is lacking. We constructed a deep-learning algorithm based on a convolutional neural network with the one-hot encoding approach, dubbed CNNOH. It compares favorably with traditional machine-learning models and other deep-learning models across different species, in terms of cross-validation and independent tests. The area under the ROC curve (AUC) values for CNNOH ranged from 0.82 to 0.87 for different organisms, which is superior to the currently available Khib predictors. Moreover, we developed a general model based on the integrated data from multiple species, and it showed great universality and effectiveness, with AUC values in the range of 0.79 to 0.87. Accordingly, we constructed the online prediction tool DeepKhib for easily identifying Khib sites, which includes both the species-specific and general models. DeepKhib is available at http://www.bioinfogo.org/DeepKhib.
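The one-hot encoding scheme used by CNNOH can be sketched generically (an illustration of the encoding idea, not the authors' code): each residue of a sequence window centered on the candidate lysine becomes a 21-dimensional indicator vector, covering the 20 amino acids plus a padding symbol for positions beyond the termini:

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY-"   # 20 residues plus '-' for padding

def one_hot_window(seq, center, half=3):
    """One-hot encode the (2*half+1)-residue window around `center`
    (e.g. a candidate Khib lysine), padding past the sequence ends."""
    window = [seq[i] if 0 <= i < len(seq) else "-"
              for i in range(center - half, center + half + 1)]
    return [[1 if res == aa else 0 for aa in AMINO_ACIDS] for res in window]

# Window around the lysine at index 1 of a short illustrative sequence
matrix = one_hot_window("MKVLAKQ", center=1)
```

A CNN then treats the resulting window-length × 21 matrix much like a one-channel image, letting its filters learn local sequence motifs around the modified lysine.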


2021, Vol 2021, pp. 1-7
Author(s): Kai Ma

To solve the problems of invalid resource recommendation data and poor recommendation effect in basketball teaching network course resource recommendation, a recommendation method based on a deep learning algorithm is proposed. An objective function is applied to eliminate noise in the basketball teaching network course resource data. The prominent characteristics of the course resources are extracted using a kernel function and combined into a feature set. A convolutional neural network (CNN) is employed to realize the recommendation model. The model was assessed in terms of computation time and recommendation error. To validate its performance, the proposed model was compared with two well-known recommendation models: a learning resource recommendation method based on transfer learning and a personalized learning resource recommendation method based on three-dimensional feature collaborative domination. Experimental results show that the proposed model achieved the lowest computation time of 15 s and a recommendation error of less than 0.4% compared with the existing models.


2021, Vol 7, pp. e345
Author(s): Mojtaba Mohammadpoor, Mehran Sheikhi karizaki, Mina Sheikhi karizaki

Background: The COVID-19 pandemic has imposed lockdowns across the world in recent months, and researchers and scientists around the globe have made serious efforts toward its detection and treatment. Methods: Pathogenic laboratory testing is the gold standard, but it is time-consuming. Lung CT scans and X-rays are other common methods applied by researchers to detect COVID-19-positive cases. In this paper, we propose a deep neural network-based model as an alternative fast screening method that can be used for detecting COVID-19 cases by analyzing CT scans. Results: Applying the proposed method to a publicly available dataset of positive and negative cases showed its ability to distinguish them by analyzing each individual CT image. The effect of different parameters on the performance of the proposed model was studied and tabulated. By selecting random train and test images, the overall accuracy and ROC-AUC of the proposed model can easily exceed 95% and 90%, respectively, without any image pre-selection or preprocessing.
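The ROC-AUC figure quoted above has a handy rank-statistic interpretation: it is the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal sketch with hypothetical scores:

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney form: P(pos score > neg score),
    counting ties as half a win."""
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model scores for six CT images (1 = COVID-positive)
auc = roc_auc([1, 1, 1, 0, 0, 0], [0.9, 0.8, 0.4, 0.6, 0.3, 0.1])
print(auc)   # 8 of 9 positive/negative pairs ranked correctly ≈ 0.889
```

Because it depends only on the ranking of scores, AUC is insensitive to the 0.5 decision threshold, which is why it is reported alongside accuracy.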


2020
Author(s): Sandip S Panesar, Vishwesh Nath, Sudhir K Pathak, Walter Schneider, Bennett A. Landman, ...

Background: Diffusion tensor imaging (DTI) is a commonly utilized pre-surgical tractography technique. Despite widespread use, DTI suffers from several critical limitations, including an inability to replicate crossing fibers and a low angular resolution, which affect the quality of results. More advanced, non-tensor methods have been devised to address DTI's shortcomings, but they remain clinically underutilized due to lack of awareness and logistical and cost factors. Objective: Nath et al. (2020) described a method of transforming DTI data into non-tensor high-resolution data suitable for tractography using a deep learning technique. This study aims to apply this technique to real-life tumor cases. Methods: The deep learning model utilizes a residual convolutional neural network architecture to yield a spherical harmonic representation of the diffusion-weighted MR signal. The model was trained on normal subject data. DTI data from clinical cases were used for testing: Subject 1 had a right-sided anaplastic oligodendroglioma; Subject 2 had a right-sided glioblastoma. We conducted deterministic fiber tractography on both the DTI data and the deep-learning-processed datasets. Results: In general, all tracts generated from the deep learning dataset were qualitatively and quantitatively (in terms of tract volume) superior to those created from the DTI data. This was true for both test cases. Conclusions: We successfully utilized a deep learning technique to convert standard DTI data into data capable of high-angular-resolution tractography. The method dispenses with specialized hardware or dedicated acquisition protocols, presenting an economical and logistically feasible way to increase clinical access to high-definition tractography.

