Logical-Combinatorial Approaches in Dynamic Recognition Problems

2020 ◽  
pp. 96-107
Author(s):  
Levon Aslanyan ◽  
Viktor Krasnoproshin ◽  
Vladimir Ryazanov ◽  
Hasmik Sahakyan

A pattern recognition scenario in which, instead of classifying objects into classes by the learning set, the algorithm aims to move all objects into the same, so-called "normal" class, is the research objective. Given the learning set L, the class K0 is called "normal", and the remaining l classes K1, K2, ..., Kl of the environment K are "deviated". The classification algorithm is intended for recurrent use in a "classification, action" format. An action Ai is defined for each "deviated" class Ki; applied to an object x ∈ Ki, the action delivers an update Ai(x) of the object. The goal is to construct a classification algorithm A that, applied repeatedly (a small number of times) to the objects of L, moves the objects (correspondingly, the elements of K) to the "normal" class. In this way, the static recognition task is transferred to a dynamic domain. This paper continues the discussion of the "normal" class classification problem, its theoretical postulations, possible use cases, and the advantages of logical-combinatorial approaches in solving these dynamic recognition problems. Brief connections to topics such as reinforcement learning and recurrent neural networks are also discussed.
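The recurrent "classification, action" scheme described above can be sketched as a simple loop: classify an object, and while it falls into a "deviated" class Ki, apply the corresponding action Ai and reclassify. The toy classifier, the scalar objects, and the concrete actions below are illustrative assumptions, not the paper's construction.

```python
# Minimal sketch of the "classification, action" loop: a classifier maps
# objects to classes K0 (normal), K1, K2 (deviated); each deviated class
# has an action that updates the object; iterate until K0 is reached.
# Classifier, actions, and data are toy assumptions for illustration.

def classify(x):
    """Toy classifier: K0 ("normal") when x is within tolerance of 0."""
    if abs(x) < 0.5:
        return 0                    # K0: normal
    return 1 if x > 0 else 2        # K1, K2: deviated

# Actions A1, A2 attached to the deviated classes: each nudges the object.
actions = {1: lambda x: x - 1.0, 2: lambda x: x + 1.0}

def normalize(x, max_steps=10):
    """Apply 'classification, action' repeatedly until x reaches K0."""
    for step in range(max_steps):
        k = classify(x)
        if k == 0:
            return x, step          # reached the normal class in `step` moves
        x = actions[k](x)
    return x, max_steps

print(normalize(3.2))               # three actions bring 3.2 into K0
```

The interesting question in the paper is how to learn such an A so that the number of iterations stays small for all elements of K; the loop itself is the easy part.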

2020 ◽  
Author(s):  
Dean Sumner ◽  
Jiazhen He ◽  
Amol Thakkar ◽  
Ola Engkvist ◽  
Esben Jannik Bjerrum

SMILES randomization, a form of data augmentation, has previously been shown to increase the performance of deep learning models compared to non-augmented baselines. Here, we propose a novel data augmentation method we call "Levenshtein augmentation", which considers local SMILES sub-sequence similarity between reactants and their respective products when creating training pairs. The performance of Levenshtein augmentation was tested using two state-of-the-art models: a transformer and a sequence-to-sequence recurrent neural network with attention. Levenshtein augmentation demonstrated increased performance over both non-augmented data and data augmented by conventional SMILES randomization when used for training baseline models. Furthermore, Levenshtein augmentation seemingly results in what we define as attentional gain: an enhancement in the pattern recognition capabilities of the underlying network with respect to molecular motifs.
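A core ingredient of the method above is measuring sub-sequence similarity between reactant and product SMILES strings with Levenshtein (edit) distance. The sketch below scores several randomized SMILES of a reactant against a product and keeps the closest pair; the SMILES strings and the simple "pick the minimum-distance candidate" rule are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch: rank candidate randomized SMILES of a reactant by
# Levenshtein distance to the product, keeping the closest as a training
# pair. Strings and pairing rule are toy assumptions for illustration.

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between strings a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Hand-written randomized SMILES for the same reactant (illustrative only)
candidates = ["CC(=O)Oc1ccccc1", "c1ccccc1OC(C)=O", "O=C(C)Oc1ccccc1"]
product = "CC(=O)Oc1ccccc1C(=O)O"   # hypothetical product string

best = min(candidates, key=lambda s: levenshtein(s, product))
print(best, levenshtein(best, product))
```

Choosing the candidate whose atom ordering most closely matches the product keeps the reactant-to-product mapping nearly monotone, which is plausibly what makes attention easier to learn.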


Sensors ◽  
2020 ◽  
Vol 20 (20) ◽  
pp. 5875
Author(s):  
Waleed Nazih ◽  
Yasser Hifny ◽  
Wail S. Elkilani ◽  
Habib Dhahri ◽  
Tamer Abdelkader

Many companies have transformed their telephone systems into Voice over IP (VoIP) systems. Although implementation is simple, VoIP is vulnerable to different types of attacks. The Session Initiation Protocol (SIP) is a widely used protocol for handling VoIP signaling functions. SIP is unprotected against attacks because it is a text-based protocol and lacks defense against the growing security threats. The Distributed Denial of Service (DDoS) attack is a harmful attack, because it drains resources and prevents legitimate users from using the available services. In this paper, we formulate the detection of DDoS attacks as a classification problem and propose an approach using token embedding to enhance the features extracted from SIP messages. We discuss a deep learning model based on Recurrent Neural Networks (RNNs) developed to detect DDoS attacks of both low- and high-rate intensity. For validation, a balanced real-traffic dataset was built containing three attack scenarios with different attack durations and intensities. Experiments show that the system achieves high detection accuracy and low detection time. For low-rate attacks, the detection accuracy was higher than that of traditional machine learning.
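The pipeline described above, tokenizing a SIP message, mapping tokens to embedding vectors, and feeding the sequence to a recurrent network, can be sketched minimally. The vocabulary, embedding table, single-unit recurrent cell, and fixed weights below are toy assumptions; the paper's model is a trained deep RNN.

```python
# Hedged sketch of the feature pipeline: tokenize a SIP message, look up
# token embeddings, and run a minimal recurrent cell over the sequence.
# Vocabulary, weights, and dimensions are illustrative assumptions.
import math
import re

VOCAB = {"INVITE": 0, "SIP/2.0": 1, "Via:": 2, "From:": 3, "To:": 4}
EMB = [[0.1 * (i + j) for j in range(3)] for i in range(len(VOCAB))]  # 3-d embeddings

def tokenize(msg):
    """Whitespace tokenization, keeping only in-vocabulary tokens."""
    return [t for t in re.split(r"\s+", msg.strip()) if t in VOCAB]

def rnn_score(msg, w_in=0.5, w_rec=0.3):
    """Single-unit tanh RNN over token embeddings; returns the final state."""
    h = 0.0
    for tok in tokenize(msg):
        x = sum(EMB[VOCAB[tok]])             # collapse embedding to a scalar input
        h = math.tanh(w_in * x + w_rec * h)  # recurrent state update
    return h  # a real detector would threshold a learned readout of h

msg = "INVITE sip:bob@example.com SIP/2.0 Via: From: To:"
print(round(rnn_score(msg), 4))
```

Because SIP is text-based, the token stream is directly available from the wire format; embedding the tokens (rather than hand-crafting features) is what lets the network learn which header patterns distinguish flood traffic from legitimate signaling.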


2001 ◽  
Vol 40 (05) ◽  
pp. 386-391 ◽  
Author(s):  
H. R. Doyle ◽  
B. Parmanto

Summary
Objectives: This paper investigates a version of recurrent neural network with the backpropagation through time (BPTT) algorithm for predicting liver transplant graft failure based on a time series sequence of clinical observations. The objective is to improve upon current approaches to liver transplant outcome prediction by developing a more complete model that takes into account not only the preoperative risk assessment, but also the early postoperative history.
Methods: A 6-fold cross-validation procedure was used to measure the performance of the networks. The data set was divided into a learning set and a test set while maintaining the same proportion of positive and negative cases as in the original set. The effects of network complexity on overfitting were investigated by constructing two types of networks with different numbers of hidden units. For each type of network, 10 individual networks were trained on the learning set and used to form a committee. The performance of the networks was measured exhaustively with respect to both the entire training and test sets.
Results: The networks were capable of learning the time series problem and achieved good performance: 90% correct classification on the learning set and 78% on the test set. The prediction accuracy increases as more information becomes progressively available after the operation, with a daily improvement of 10% on the learning set and 5% on the test set.
Conclusions: Recurrent neural networks trained with the BPTT algorithm are capable of learning to represent the temporal behavior of the time series prediction task. This model is an improvement upon the current model, which does not take into account postoperative temporal information.
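The stratified split described in Methods, dividing the data while preserving the proportion of positive and negative cases, can be sketched as follows. The labels are toy data; only the 6-fold structure matches the paper's procedure.

```python
# Hedged sketch of a stratified k-fold split: each fold keeps the same
# positive/negative proportion as the original set. Labels are toy data;
# the fold count (6) matches the paper's cross-validation procedure.
import random

def stratified_folds(labels, k=6, seed=0):
    """Return k folds of indices, each preserving class proportions."""
    rng = random.Random(seed)
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        rng.shuffle(idxs)
        for i, idx in enumerate(idxs):
            folds[i % k].append(idx)     # deal indices round-robin per class
    return folds

labels = [1] * 30 + [0] * 90             # 25% positive, as in an imbalanced set
folds = stratified_folds(labels)
print([sum(labels[i] for i in f) for f in folds])   # positives per fold: [5, 5, 5, 5, 5, 5]
```

With graft failure being the rare outcome, a plain random split could leave a fold with almost no positive cases; stratification keeps every learning/test split representative, which is why the paper maintains class proportions.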

