Attention-Based Deep Learning Models for Detection of Fake News in Social Networks

Automatic fake news detection is a challenging problem in deception detection. When all of the deep learning models under evaluation achieve high accuracy on a test dataset, it becomes harder to differentiate their performance, so a more complex problem is needed to validate them. LIAR is one such complex, recent, labeled benchmark dataset, publicly available for research on fake news detection and for modeling statistical and machine learning approaches to combating fake news. In this work, a novel fake news detection system is implemented using deep neural network models such as CNN, LSTM, and BiLSTM, and the contribution of their attention mechanisms is evaluated by analyzing performance in terms of accuracy, precision, recall, and F1-score on the training, validation, and test splits of LIAR.
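The abstract does not specify the attention variant used; a minimal numpy sketch of the additive attention pooling commonly placed on top of an LSTM/BiLSTM encoder, with the weight names `w`, `b`, `u` as illustrative placeholders:

```python
import numpy as np

def attention_pool(hidden, w, b, u):
    """Additive attention over per-timestep hidden states.

    hidden: (T, d) encoder outputs; w: (d, d), b: (d,), u: (d,).
    Returns the attention weights and the weighted context vector.
    """
    scores = np.tanh(hidden @ w + b) @ u      # (T,) unnormalized scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax over timesteps
    context = weights @ hidden                # (d,) weighted sum of states
    return weights, context

rng = np.random.default_rng(0)
T, d = 6, 4
h = rng.normal(size=(T, d))
w_, b_, u_ = rng.normal(size=(d, d)), np.zeros(d), rng.normal(size=d)
alpha, ctx = attention_pool(h, w_, b_, u_)
print(alpha.sum())  # weights sum to 1
```

The context vector replaces the usual "last hidden state" summary, letting the classifier weight informative words in a statement more heavily.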

Entropy ◽  
2021 ◽  
Vol 23 (3) ◽  
pp. 344
Author(s):  
Jeyaprakash Hemalatha ◽  
S. Abijah Roseline ◽  
Subbiah Geetha ◽  
Seifedine Kadry ◽  
Robertas Damaševičius

Recently, there has been a huge rise in malware, which creates a significant security threat to organizations and individuals. Despite the incessant efforts of cybersecurity research to defend against malware threats, malware developers discover new ways to evade these defense techniques. Traditional static and dynamic analysis methods are ineffective in identifying new malware and incur high overhead in terms of memory and time. Typical machine learning approaches that train a classifier on handcrafted features are also not sufficiently potent against these evasive techniques and require substantial feature-engineering effort. Recent malware detectors also suffer performance degradation due to class imbalance in malware datasets. To resolve these challenges, this work adopts a visualization-based method, where malware binaries are depicted as two-dimensional images and classified by a deep learning model. We propose an efficient malware detection system based on deep learning. The system uses a reweighted class-balanced loss function in the final classification layer of the DenseNet model to achieve significant performance improvements in classifying malware by handling imbalanced data issues. Comprehensive experiments performed on four benchmark malware datasets show that the proposed approach can detect new malware samples with higher accuracy (98.23% for the Malimg dataset, 98.46% for the BIG 2015 dataset, 98.21% for the MaleVis dataset, and 89.48% for the unseen Malicia dataset) and reduced false-positive rates when compared with conventional malware mitigation techniques, while maintaining low computational time. The proposed malware detection solution is also reliable and effective against obfuscation attacks.
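The abstract does not give the exact reweighting formula; assuming the widely used "effective number of samples" class-balanced weighting, a small numpy sketch of how per-class loss weights would be derived from class counts:

```python
import numpy as np

def class_balanced_weights(counts, beta=0.999):
    """Per-class loss weights via the effective number of samples:
    E_c = (1 - beta**n_c) / (1 - beta);  weight_c is proportional to 1 / E_c.
    Weights are normalized so they average to 1 across classes.
    """
    counts = np.asarray(counts, dtype=float)
    effective = (1.0 - np.power(beta, counts)) / (1.0 - beta)
    w = 1.0 / effective
    return w * len(counts) / w.sum()

# Hypothetical imbalanced malware-family counts: common, uncommon, rare.
w = class_balanced_weights([5000, 500, 50])
print(w)  # rarer families receive larger weights
```

These weights would multiply the per-class terms of the softmax cross-entropy in the final classification layer, so rare malware families contribute more to the gradient.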


2019 ◽  
Vol 1 (1) ◽  
pp. 450-465 ◽  
Author(s):  
Abhishek Sehgal ◽  
Nasser Kehtarnavaz

Deep learning solutions are being increasingly used in mobile applications. Although there are many open-source software tools for the development of deep learning solutions, there is no unified, single-source guideline for using these tools toward real-time deployment of such solutions on smartphones. From the variety of available deep learning tools, the most suitable ones are used in this paper to enable real-time deployment of deep learning inference networks on smartphones. A uniform flow of implementation is devised for both Android and iOS smartphones. The advantage of using multi-threading to achieve or improve real-time throughput is also showcased. A benchmarking framework consisting of accuracy, CPU/GPU consumption, and real-time throughput is used for validation. The developed deployment approach allows deep learning models to be turned into real-time smartphone apps with ease, based on publicly available deep learning and smartphone software tools. This approach is applied to six popular or representative convolutional neural network models, and the validation results based on the benchmarking metrics are reported.
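The paper's multi-threading is implemented with platform-specific smartphone tools; as a platform-neutral illustration only, a producer-consumer sketch in Python showing why overlapping capture and inference on separate threads improves throughput:

```python
import queue
import threading

def run_pipeline(frames, infer):
    """Capture thread enqueues frames while the inference thread dequeues
    and processes them, so I/O and model execution overlap in time."""
    q, results = queue.Queue(maxsize=4), []

    def producer():
        for f in frames:
            q.put(f)          # blocks when the queue is full (backpressure)
        q.put(None)           # sentinel: no more frames

    def consumer():
        while (f := q.get()) is not None:
            results.append(infer(f))

    t1 = threading.Thread(target=producer)
    t2 = threading.Thread(target=consumer)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return results

# `infer` is a stand-in for a real model's forward pass.
out = run_pipeline(range(5), lambda x: x * x)
print(out)  # [0, 1, 4, 9, 16]
```

The bounded queue caps memory use: if inference falls behind, the capture thread blocks instead of accumulating stale frames.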


2021 ◽  
Vol 11 (15) ◽  
pp. 7147
Author(s):  
Jinmo Gu ◽  
Jinhyuk Na ◽  
Jeongeun Park ◽  
Hayoung Kim

Outbound telemarketing is an efficient direct marketing method wherein telemarketers solicit potential customers by phone to purchase or subscribe to products or services. However, those who are not interested in the information or offers provided by outbound telemarketing generally experience such interactions negatively because they perceive telemarketing as spam. In this study, therefore, we investigate the use of deep learning models to predict the success of outbound telemarketing for insurance policy loans. We propose an explainable multiple-filter convolutional neural network model called XmCNN that can alleviate overfitting and extract various high-level features using hundreds of input variables. To enable the practical application of the proposed method, we also examine ensemble models to further improve its performance. We experimentally demonstrate that the proposed XmCNN significantly outperformed conventional deep neural network models and machine learning models. Furthermore, a deep learning ensemble model constructed using the XmCNN architecture achieved the lowest false positive rate (4.92%) and the highest F1-score (87.47%). We identified important variables influencing insurance policy loan prediction through the proposed model, suggesting that these factors should be considered in practice. The proposed method may increase the efficiency of outbound telemarketing and reduce the spam problems caused by calling non-potential customers.
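The exact XmCNN architecture is not specified in the abstract; a toy numpy sketch of the general multiple-filter CNN idea it builds on (parallel 1-D filters of several widths, each globally max-pooled, with the pooled responses concatenated as features):

```python
import numpy as np

def multi_filter_features(x, kernels):
    """Convolve a 1-D input with filters of several widths; global
    max-pooling over positions gives one feature per filter, and the
    concatenation captures patterns at multiple scales."""
    feats = []
    T = len(x)
    for k in kernels:
        w = len(k)
        conv = np.array([x[i:i + w] @ k for i in range(T - w + 1)])
        feats.append(conv.max())  # global max pooling for this filter
    return np.array(feats)

x = np.array([1., 2., 3., 4., 3., 2.])
# Illustrative sum filters of widths 2, 3, and 4.
f = multi_filter_features(x, [np.ones(2), np.ones(3), np.ones(4)])
print(f)
```

In a real model the kernels are learned and there are many per width; max pooling over positions is one common way such models keep the feature count fixed as the number of input variables grows.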


PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0245177
Author(s):  
Xing Han Lu ◽  
Aihua Liu ◽  
Shih-Chieh Fuh ◽  
Yi Lian ◽  
Liming Guo ◽  
...  

Motivation: Recurrent neural networks (RNNs) are powerful frameworks for modeling medical time series records. Recent studies showed improved accuracy in predicting future medical events (e.g., readmission, mortality) by leveraging large amounts of high-dimensional data. However, very few studies have explored the ability of RNNs to predict long-term trajectories of recurrent events, which is more informative than predicting a single event for directing medical intervention. Methods: In this study, we focus on heart failure (HF), the leading cause of death among cardiovascular diseases. We present a novel RNN framework named the Deep Heart-failure Trajectory Model (DHTM) for modeling the long-term trajectories of recurrent HF. DHTM auto-regressively predicts the future HF onsets of each patient, using the predicted HF as input to predict the HF event at the next time point. Furthermore, we propose an augmented DHTM named DHTM+C (where “C” stands for co-morbidities), which jointly predicts both HF and a set of acute co-morbidity diagnoses. To efficiently train the DHTM+C model, we devised a novel RNN architecture to model the disease progression implicated in the co-morbidities. Results: Our deep learning models confer higher prediction accuracy for both next-step HF prediction and HF trajectory prediction compared to the baseline non-neural-network models and the baseline RNN model. Compared to DHTM, DHTM+C outputs a higher probability of HF for high-risk patients, even when given less than 2 years of data to predict over 5 years of trajectory. We illustrate multiple non-trivial real patient examples of complex HF trajectories, indicating a promising path toward highly accurate and scalable longitudinal deep learning models for chronic disease.
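The auto-regressive rollout described above (each prediction fed back as the next input) can be sketched in a few lines; the `step` function below is a toy stand-in for DHTM's learned recurrent cell, not the paper's model:

```python
import numpy as np

def rollout(model_step, h0, x0, steps):
    """Auto-regressive trajectory prediction: the predicted event
    probability at each step becomes the input for the next step."""
    h, x, preds = h0, x0, []
    for _ in range(steps):
        h, p = model_step(h, x)
        preds.append(p)
        x = p  # feed the prediction back in as the next observation
    return preds

def step(h, x):
    """Toy recurrent cell with hand-picked constants, for illustration."""
    h_new = np.tanh(0.5 * h + 0.8 * x)
    p = 1.0 / (1.0 + np.exp(-h_new))   # probability of an event this step
    return h_new, p

traj = rollout(step, h0=0.0, x0=1.0, steps=5)
print(len(traj))  # a 5-step predicted trajectory
```

This closed-loop structure is what lets a model trained on short observation windows extrapolate multi-year trajectories, at the cost of compounding its own prediction errors.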


Author(s):  
Ying Qu ◽  
Hairong Qi ◽  
Chiman Kwan

There are two mast cameras (Mastcam) onboard the Mars rover Curiosity. Both Mastcams are multispectral imagers with nine bands in each. The right Mastcam has three times higher resolution than the left. In this chapter, we apply some recently developed deep neural network models to enhance the left Mastcam images with help from the right Mastcam images. Actual Mastcam images were used to demonstrate the performance of the proposed algorithms.


Author(s):  
E.Yu. Silantieva ◽  
V.A. Zabelina ◽  
G.A. Savchenko ◽  
I.M. Chernenky

This study presents an analysis of autoencoder models for detecting anomalies in network traffic. Training results were assessed using open-source software on the UNB ICS IDS 2017 dataset. As deep learning models, we considered the standard and variational autoencoders, and Deep SAD approaches for a normal autoencoder (AE-SAD) and a variational autoencoder (VAE-SAD). The constructed deep learning models demonstrated different levels of anomaly detection accuracy; the best result in terms of the AUC metric, 98%, was achieved with the VAE-SAD model. In the future, it is planned to continue the analysis of the characteristics of neural network models in cybersecurity problems. One direction is to study the influence of the structure of network traffic on the performance of deep learning models. Based on the results, it is planned to develop an approach for robust identification of security events based on deep learning methods.
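The core autoencoder anomaly-detection idea is thresholding reconstruction error. A closed-form sketch using a linear autoencoder (whose optimum is PCA), with synthetic data standing in for benign and anomalous traffic features:

```python
import numpy as np

def anomaly_scores(train, test, k=2):
    """Linear-autoencoder (PCA) sketch: fit on normal data only, project
    test samples onto the top-k components, reconstruct, and score each
    sample by its reconstruction error."""
    mu = train.mean(axis=0)
    _, _, vt = np.linalg.svd(train - mu, full_matrices=False)
    enc = vt[:k]                                   # top-k principal axes
    recon = (test - mu) @ enc.T @ enc + mu         # decode after encoding
    return np.linalg.norm(test - recon, axis=1)

rng = np.random.default_rng(1)
# Synthetic "benign traffic": 5 features lying on a 2-D latent subspace.
normal = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 5))
# Probe set: three benign samples plus one large off-subspace anomaly.
probe = np.vstack([normal[:3], rng.normal(size=(1, 5)) * 10])
scores = anomaly_scores(normal, probe)
print(scores[-1] > scores[:3].max())  # True: the anomaly reconstructs worst
```

Deep (nonlinear) autoencoders and the SAD variants refine this recipe, but the detection principle is the same: the model learns to reconstruct normal traffic well, so anomalies stand out by their error.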


Electronics ◽  
2020 ◽  
Vol 9 (7) ◽  
pp. 1151 ◽  
Author(s):  
Wooyeon Jo ◽  
Sungjin Kim ◽  
Changhoon Lee ◽  
Taeshik Shon

The proliferation of various connected platforms, including the Internet of things, industrial control systems (ICSs), connected cars, and in-vehicle networks, has resulted in the simultaneous use of multiple protocols and devices. The chaotic situations caused by different protocols and diverse device types, such as heterogeneous networks implemented differently by vendors, make it difficult to adopt a flexible security solution such as the deep learning-based intrusion detection systems (IDSs) of recent studies. These studies optimized their deep learning models for specific environments to improve performance, but the basic principle of the deep learning model itself was unchanged, so the result can hardly be called a next-generation IDS whose model imposes few or no requirements. Some studies proposed IDSs based on unsupervised learning, which does not require labeled data. However, not using available assets, such as network packet data, is a waste of resources. If a security solution accounts for the role and importance of the devices constituting the network and for the protocol standard's security areas as defined by experts, the assets can be well used, but it will no longer be flexible. Most deep learning model-based IDS studies used the recurrent neural network (RNN), a supervised learning model, because the characteristics of the RNN, especially when long short-term memory (LSTM) is incorporated, are better suited to reflect the flow of the packet data stream over time, and thus perform better than other supervised learning models such as the convolutional neural network (CNN). However, if proper preprocessing lets the input data drive the CNN's kernels to sufficiently reflect the network characteristics, a CNN could outperform other deep learning models in a network IDS.
Hence, we propose the first preprocessing method, called “direct”, for network IDS that can use the characteristics of the kernel by using the minimum protocol information, field size, and offset. In addition to direct, we propose two more preprocessing techniques called “weighted” and “compressed”. Each requires additional network information; therefore, direct conversion was compared with related studies. Including direct, the proposed preprocessing methods are based on field-to-pixel philosophy, which can reflect the advantages of CNN by extracting the convolutional features of each pixel. Direct is the most intuitive method of applying field-to-pixel conversion to reflect an image’s convolutional characteristics in the CNN. Weighted and compressed are conversion methods used to evaluate the direct method. Consequently, the IDS constructed using a CNN with the proposed direct preprocessing method demonstrated meaningful performance in the NSL-KDD dataset.
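The paper's "direct" preprocessing maps protocol fields to pixels using minimal protocol information (field offsets and sizes). The exact layout rules are not given in the abstract, so the field list and grid size below are hypothetical; the sketch only illustrates the field-to-pixel idea of giving each field a stable spatial position a CNN kernel can learn:

```python
import numpy as np

def fields_to_image(packet, fields, side=4):
    """Hypothetical field-to-pixel sketch: copy each protocol field,
    given as an (offset, size) pair, byte-by-byte into a fixed square
    grid, so every field always lands at the same pixel positions."""
    img = np.zeros(side * side, dtype=np.uint8)
    pos = 0
    for off, size in fields:
        chunk = packet[off:off + size]
        img[pos:pos + size] = list(chunk)
        pos += size
    return img.reshape(side, side)

pkt = bytes(range(16))                       # toy 16-byte packet
# Illustrative field map: three fields at offsets 0, 4, and 10.
img = fields_to_image(pkt, fields=[(0, 2), (4, 4), (10, 2)])
print(img.shape)  # (4, 4)
```

Because field positions are fixed, a convolutional kernel sliding over the image repeatedly sees the same field boundaries, which is the property the "direct" method exploits.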


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Nida Aslam ◽  
Irfan Ullah Khan ◽  
Farah Salem Alotaibi ◽  
Lama Abdulaziz Aldaej ◽  
Asma Khaled Aldubaikil

The pervasive use and growth of social media networks have provided a platform for fake news to spread quickly among people. Fake news often misleads people and creates distorted perceptions in society. The spread of low-quality news in social media has negatively affected individuals and society. In this study, we propose an ensemble-based deep learning model to classify news as fake or real using the LIAR dataset. Due to the nature of the dataset attributes, two deep learning models were used: for the textual attribute “statement,” a Bi-LSTM-GRU-dense deep learning model, and for the remaining attributes, a dense deep learning model. Experimental results showed that the proposed study achieved an accuracy of 0.898, recall of 0.916, precision of 0.913, and F-score of 0.914 using only the statement attribute. Moreover, the outcome of the proposed models is remarkable when compared with that of previous studies on fake news detection using the LIAR dataset.
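The abstract does not state how the two branches are combined; assuming a simple soft-voting ensemble, a sketch of averaging the fake-news probabilities from the textual (statement) branch and the metadata branch before thresholding:

```python
import numpy as np

def soft_vote(prob_text, prob_meta, w_text=0.5):
    """Soft-voting sketch: weighted average of two branches' predicted
    probabilities, then a 0.5 threshold for the fake/real label."""
    p = w_text * np.asarray(prob_text) + (1 - w_text) * np.asarray(prob_meta)
    return p, (p >= 0.5).astype(int)

# Illustrative branch outputs for three news items.
p, label = soft_vote([0.9, 0.2, 0.6], [0.7, 0.4, 0.3])
print(label)  # [1, 0, 0]
```

The branch weight `w_text` is a free parameter here; in practice it would be tuned on the validation split.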


2018 ◽  
Vol 246 ◽  
pp. 03004
Author(s):  
Yaqiong Qin ◽  
Zhaohui Ye ◽  
Conghui Zhang

Traditional methods of dividing petroleum reservoirs are inefficient, and the accuracy of a one-hidden-layer BP neural network is not ideal when applied to dividing reservoirs. This paper proposes using deep learning models to solve the reservoir division problem. We apply multiple-hidden-layer BP neural network and convolutional neural network models, and adjust the network structures according to the characteristics of the reservoir problem. The results show that the deep learning models outperform the one-hidden-layer BP neural network, and the performance of the convolutional neural network is very close to that of manual expert work.
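The layer sizes below are illustrative, not the paper's; a minimal numpy sketch of the forward pass of a multiple-hidden-layer (BP-style) network, the structural change that separates it from the one-hidden-layer baseline:

```python
import numpy as np

def mlp_forward(x, layers):
    """Forward pass of a multi-hidden-layer network: each hidden layer is
    an affine map followed by tanh; the final layer is left linear so a
    softmax or regression head can be applied to its output."""
    for i, (w, b) in enumerate(layers):
        x = x @ w + b
        if i < len(layers) - 1:
            x = np.tanh(x)
    return x

rng = np.random.default_rng(2)
sizes = [8, 16, 16, 3]  # hypothetical: 8 log features, two hidden layers, 3 classes
params = [(rng.normal(size=(m, n)), np.zeros(n))
          for m, n in zip(sizes, sizes[1:])]
out = mlp_forward(rng.normal(size=(5, 8)), params)
print(out.shape)  # (5, 3): class scores for 5 depth samples
```

Backpropagation would then fit `params` to labeled well-log data; adding hidden layers is what gives the model capacity beyond the one-hidden-layer baseline.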

