Class Retrieval of Detected Adversarial Attacks

2021 ◽  
Vol 11 (14) ◽  
pp. 6438
Author(s):  
Jalal Al-afandi ◽  
Horváth András

Adversarial attacks are a genuine threat that compromises the safety of many intelligent systems and hinders the adoption of neural networks in security-critical applications. Since the emergence of adversarial attacks, the research community has worked relentlessly to avert the damage these attacks can cause. Here, we present a new, additional and necessary element in mitigating adversarial attacks: recovery of the original class after an attack has been detected. Recovering the original class of an adversarial sample, without taking any precautions, is an uncharted concept which we introduce with our novel class retrieval algorithm. As case studies, we demonstrate the validity of our approach on the MNIST, CIFAR10 and ImageNet datasets, where recovery rates were 72%, 65% and 65%, respectively.
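The abstract does not describe the class retrieval algorithm itself, so the following is only a minimal illustrative sketch of one possible recovery heuristic, under the assumption that the detected attack corrupted the top-1 prediction: discard that class and return the runner-up from the model's output scores.

```python
# Hypothetical sketch only: not the authors' class retrieval algorithm,
# which the abstract does not detail. Assumes the detected attack pushed
# the model toward a wrong top-1 class.
import numpy as np

def retrieve_class(scores: np.ndarray) -> int:
    """Return the runner-up class, treating the top-1 prediction as
    compromised by the detected adversarial perturbation."""
    ranked = np.argsort(scores)[::-1]   # classes sorted by descending score
    return int(ranked[1])               # recovered label: second-best class

# Example: a 10-class output where class 3 was forced by an attack.
scores = np.array([0.01, 0.02, 0.05, 0.60, 0.25, 0.02, 0.01, 0.02, 0.01, 0.01])
print(retrieve_class(scores))  # -> 4
```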

Author(s):  
Branko Latinović ◽  
Dragan Vasiljević

Models used for creating intelligent systems based on artificial neural networks indicate to teachers which educational and teaching activities should be corrected. The activities that may require correction are carried out within established distance learning systems and include lectures, assignments, tests, grading, competitions, directed leisure activities, and case studies. The results of data processing in the artificial neural networks point to the specific activity that needs to be maintained, promoted, or changed in order to improve students’ abilities and achievements. The developed models are also very useful to students, who can better understand their achievements and develop skills for future competencies. These models indicate that abilities are far more developed in students who use some of the mentioned distance learning systems than in students taught through the traditional classroom system.


Author(s):  
Rebecca PRICE ◽  
Christine DE LILLE ◽  
Cara WRIGLEY ◽  
Kees DORST

There is an increasing need for organizations to adapt to rapid changes in society. This need requires organizations, and the leaders within them, to explore, recognize, build and exploit new capabilities. Researching such capabilities has drawn attention from the design management research community in recent years. Research contributions have predominantly focused on perspectives of innovation and the strategic application of design, with the researcher distanced from context. Descriptive and evaluative case studies of past organizational leadership have been vital in building momentum for the design movement. However, there is now a need to progress toward prescriptive and explorative research perspectives that embrace context through practice and the simultaneous research of design. Therefore, the aim of this track is to lead and progress discussion on research methodologies that support the research community in developing explorative and prescriptive research methodologies for context-oriented organizational research. This track brings together a group of diverse international researchers and practitioners to fuel discussion on design approaches and the subsequent outcomes of prescriptive and explorative research methodologies.


Symmetry ◽  
2021 ◽  
Vol 13 (3) ◽  
pp. 428
Author(s):  
Hyun Kwon ◽  
Jun Lee

This paper presents research focusing on visualization and pattern recognition based on computer science. Although deep neural networks demonstrate satisfactory performance in image and voice recognition, pattern analysis, and intrusion detection, they perform poorly on adversarial examples. Adding a small amount of noise to the original data can lead deep neural networks to misclassify adversarial examples, even though humans still perceive them as normal. In this paper, a robust diversity adversarial training method against adversarial attacks is demonstrated. In this approach, the target model becomes more robust to unknown adversarial examples because it is trained on a variety of adversarial samples. In the experiments, TensorFlow was employed as the deep learning framework, while MNIST and Fashion-MNIST were used as the experimental datasets. The results show that the diversity training method lowered the attack success rate by an average of 27.2% and 24.3% for various adversarial examples, while maintaining accuracy rates of 98.7% and 91.5% on the original MNIST and Fashion-MNIST data, respectively.
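As a rough illustration of the adversarial training idea described above, here is a minimal sketch in TensorFlow (the framework named in the abstract) that fits a model on clean batches together with FGSM-perturbed batches. The single-attack, single-epsilon setup is an assumption for brevity and does not reproduce the paper's diversity scheme, which mixes several kinds of adversarial examples.

```python
# Minimal adversarial training sketch (FGSM only); illustrative, not the
# paper's diversity method.
import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

def fgsm(model, x, y, eps=0.1):
    """Generate FGSM adversarial examples for a batch (x, y)."""
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x, training=False))
    grad = tape.gradient(loss, x)
    return tf.clip_by_value(x + eps * tf.sign(grad), 0.0, 1.0)

@tf.function
def train_step(model, optimizer, x, y):
    """One training step on clean and adversarially perturbed batches."""
    x_adv = fgsm(model, x, y)                       # attack generated outside the outer tape
    with tf.GradientTape() as tape:
        loss = (loss_fn(y, model(x, training=True)) +
                loss_fn(y, model(x_adv, training=True)))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```

Training on several attack types and strengths, rather than the single FGSM attack shown here, is the kind of variation the diversity method relies on.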


2003 ◽  
Vol 7 (5) ◽  
pp. 693-706 ◽  
Author(s):  
E. Gaume ◽  
R. Gosset

Abstract. Feed-Forward Artificial Neural Networks (FNNs) have recently been gaining popularity for stream flow forecasting. However, despite the promising results presented in recent papers, their use is questionable. In theory, their "universal approximator" property guarantees that, if a sufficient number of neurons is selected, good model performance for interpolation purposes can be achieved. But choosing a more complex model does not ensure a better prediction: models with many parameters have a high capacity to fit the noise and the particularities of the calibration dataset, at the cost of diminished generalisation capacity. In support of the principle of model parsimony, a model selection method based on the validation performance of the models, traditionally used in the context of conceptual rainfall-runoff modelling, was adapted to the choice of an FNN structure. This method was applied to two different case studies: river flow prediction based on knowledge of upstream flows, and rainfall-runoff modelling. The predictive powers of the selected neural networks are compared to the results obtained with a linear model and a conceptual model (GR4j). In both case studies, the method leads to the selection of neural network structures with a limited number of neurons in the hidden layer (two or three). Moreover, the validation results of the selected FNNs and of the linear model are very close. The conceptual model, specifically dedicated to rainfall-runoff modelling, appears to outperform the other two approaches. These conclusions, drawn from specific case studies using a particular evaluation method, add to the debate on the usefulness of Artificial Neural Networks in hydrology.
Keywords: forecasting; stream-flow; rainfall-runoff; Artificial Neural Networks
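A minimal sketch of the validation-based structure selection described above, under the assumption that candidate networks differ only in the number of hidden neurons; scikit-learn's MLPRegressor stands in for the paper's FNN, so this is illustrative rather than the authors' exact protocol.

```python
# Parsimony-driven structure selection: pick the hidden-layer size whose
# model performs best on held-out validation data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

def select_hidden_size(x_cal, y_cal, x_val, y_val, candidates=(1, 2, 3, 5, 8)):
    """Return the candidate hidden-layer size with the lowest validation error."""
    best_n, best_err = None, np.inf
    for n in candidates:
        model = MLPRegressor(hidden_layer_sizes=(n,), max_iter=2000, random_state=0)
        model.fit(x_cal, y_cal)                         # fit on the calibration set
        err = mean_squared_error(y_val, model.predict(x_val))  # score on validation set
        if err < best_err:
            best_n, best_err = n, err
    return best_n, best_err
```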


2002 ◽  
Vol 124 (3) ◽  
pp. 364-374 ◽  
Author(s):  
Alexander G. Parlos ◽  
Sunil K. Menon ◽  
Amir F. Atiya

On-line filtering of stochastic variables that are difficult or expensive to measure directly has been widely studied. In this paper a practical algorithm is presented for adaptive state filtering when the underlying nonlinear state equations are partially known. The unknown dynamics are constructively approximated using neural networks. The proposed algorithm is based on the two-step prediction-update approach of the Kalman Filter. The algorithm accounts for the unmodeled nonlinear dynamics and makes no assumptions regarding the system noise statistics. The proposed filter is implemented using static and dynamic feedforward neural networks. Both off-line and on-line learning algorithms are presented for training the filter networks. Two case studies are considered and comparisons with Extended Kalman Filters (EKFs) are performed. For one of the case studies, the EKF converges but results in higher state estimation errors than the equivalent neural filter with on-line learning. For the other, more complex case study, the developed EKF does not converge. For both case studies, the off-line trained neural state filters converge quite rapidly and exhibit acceptable performance. On-line training further enhances filter performance, decoupling the eventual filter accuracy from the accuracy of the assumed system model.
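A minimal sketch of the two-step prediction-update structure mentioned above, with the partially known dynamics replaced by a learned model; the fixed scalar gain and the `dynamics_net`/`h` stand-ins are simplifying assumptions, not the paper's filter design.

```python
# Predict-update filtering with neural-network-approximated dynamics
# (illustrative sketch; the gain here is a constant, not a Kalman gain).
import numpy as np

def filter_step(x_est, z_meas, dynamics_net, h, gain=0.5):
    """One predict-update cycle: propagate the estimate through the learned
    dynamics, then correct it with the measurement residual."""
    x_pred = dynamics_net(x_est)        # prediction via the learned state model
    residual = z_meas - h(x_pred)       # innovation: measurement minus predicted output
    return x_pred + gain * residual     # update step

# Toy usage with a linear stand-in "network" and identity measurement model.
dynamics_net = lambda x: 0.9 * x
h = lambda x: x
x = np.array([1.0])
for z in [0.8, 0.7, 0.65]:
    x = filter_step(x, np.array([z]), dynamics_net, h)
```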


2020 ◽  
Vol 34 (07) ◽  
pp. 10901-10908 ◽  
Author(s):  
Abdullah Hamdi ◽  
Matthias Mueller ◽  
Bernard Ghanem

One major factor impeding more widespread adoption of deep neural networks (DNNs) is their lack of robustness, which is essential for safety-critical applications such as autonomous driving. This has motivated much recent work on adversarial attacks for DNNs, which mostly focuses on pixel-level perturbations devoid of semantic meaning. In contrast, we present a general framework for adversarial attacks on trained agents, which covers semantic perturbations to the environment of the agent performing the task as well as pixel-level attacks. To do this, we re-frame the adversarial attack problem as learning a distribution of parameters that always fools the agent. In the semantic case, our proposed adversary (denoted BBGAN) is trained to sample parameters that describe the environment with which the black-box agent interacts, such that the agent performs its dedicated task poorly in that environment. We apply BBGAN to three different tasks, primarily targeting aspects of autonomous navigation: object detection, self-driving, and autonomous UAV racing. On these tasks, BBGAN can generate failure cases that consistently fool a trained agent.
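To make the idea of learning an agent-fooling parameter distribution concrete, here is a minimal sketch that shifts a Gaussian over environment parameters toward low agent scores using a cross-entropy-method style update. This is only a stand-in for the paper's GAN-based adversary (BBGAN), and `run_agent` is a hypothetical black-box evaluation function returning a task score for a given parameter vector.

```python
# Illustrative adversarial parameter-distribution search (CEM-style),
# not the BBGAN training procedure from the paper.
import numpy as np

def fit_adversarial_distribution(run_agent, dim, iters=50, pop=64, elite=8):
    """Shift a Gaussian over environment parameters toward low agent scores."""
    mu, sigma = np.zeros(dim), np.ones(dim)
    for _ in range(iters):
        params = mu + sigma * np.random.randn(pop, dim)    # sample candidate environments
        scores = np.array([run_agent(p) for p in params])  # black-box agent evaluation
        worst = params[np.argsort(scores)[:elite]]         # keep the most damaging samples
        mu, sigma = worst.mean(axis=0), worst.std(axis=0) + 1e-3
    return mu, sigma
```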

