Two Improved Methods of Generating Adversarial Examples against Faster R-CNNs for Tram Environment Perception Systems

Complexity (2020), Vol. 2020, pp. 1-10
Author(s): Shize Huang, Xiaowen Liu, Xiaolu Yang, Zhaoxin Zhang, Lingyu Yang

Trams have increasingly deployed object detectors to perceive running conditions, and deep learning networks have been widely adopted by those detectors. The growing use of neural networks has invited severe attacks, such as adversarial example attacks, that threaten tram safety. Only when adversarial attacks are studied thoroughly can researchers devise better defence methods against them. However, most existing methods of generating adversarial examples have been devoted to classification, and none of them target tram environment perception systems. In this paper, we propose an improved projected gradient descent (PGD) algorithm and an improved Carlini and Wagner (C&W) algorithm to generate adversarial examples against Faster R-CNN object detectors. Experiments verify that both algorithms can successfully conduct nontargeted and targeted white-box digital attacks while trams are running. We also compare the performance of the two methods, including attack effects, similarity to clean images, and generation time. The results show that both algorithms can generate adversarial examples within 220 seconds, a much shorter time, without any decrease in the success rate.
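
For readers who want the flavor of the attack setup, a minimal PGD sketch against a torchvision Faster R-CNN follows. It assumes images are 3xHxW float tensors in [0, 1] and targets follow torchvision's detection format; the step size, budget, and iteration count are illustrative defaults, not the paper's improved settings.

```python
import torch
import torchvision

# Minimal PGD sketch against a torchvision Faster R-CNN (an illustrative
# reconstruction of the attack setting, not the authors' improved code).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.train()  # training mode so the model returns its loss dict

def pgd_attack(images, targets, eps=8 / 255, alpha=2 / 255, steps=40):
    originals = [img.clone() for img in images]
    adv = [img.clone() for img in images]
    for _ in range(steps):
        adv = [img.detach().requires_grad_(True) for img in adv]
        loss = sum(model(adv, targets).values())   # total detection loss
        grads = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            # Ascend the loss, then project back into the eps-ball and [0, 1]
            adv = [
                (orig + (img + alpha * g.sign() - orig).clamp(-eps, eps)).clamp(0, 1)
                for img, g, orig in zip(adv, grads, originals)
            ]
    return adv
```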

Mathematics (2020), Vol. 8 (9), pp. 1446
Author(s): Yueyun Shang, Shunzhi Jiang, Dengpan Ye, Jiaqing Huang

Steganography is a collection of techniques for concealing the existence of information by embedding it within a cover. With the development of deep learning, novel steganography methods based on autoencoders or generative adversarial networks have appeared. While deep learning based steganography methods have the advantages of automatic generation and high capacity, the security of these algorithms needs improvement. In this paper, we take advantage of the linear behavior of deep learning networks in high-dimensional space and propose a novel steganography scheme that enhances security by means of adversarial examples. The system is trained with different training settings on two datasets. The experimental results show that the proposed scheme can escape detection by deep learning steganalyzers. In addition, the secret image can be extracted from the produced stego image with little distortion.
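
The key mechanism, exploiting the network's linear behavior with a gradient-sign perturbation, can be sketched as follows. The steganalyzer network, the label convention (0 = cover, 1 = stego), and the step size eps are assumptions for illustration, not the paper's exact scheme.

```python
import torch
import torch.nn.functional as F

# Hedged sketch of the core idea: nudge a stego image with an FGSM-style
# step so a CNN steganalyzer classifies it as "cover".
def adversarial_stego(steganalyzer, stego, eps=2 / 255):
    stego = stego.clone().requires_grad_(True)
    logits = steganalyzer(stego)                      # shape [N, 2]
    cover_label = torch.zeros(stego.size(0), dtype=torch.long)
    loss = F.cross_entropy(logits, cover_label)
    loss.backward()
    # Step *down* the loss toward the "cover" decision (a targeted step)
    adv = (stego - eps * stego.grad.sign()).clamp(0, 1)
    return adv.detach()
```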


2020, Vol. 34 (10), pp. 13867-13868
Author(s): Xiao Liu, Jing Zhao, Shiliang Sun

Adversarial attack on graph neural networks (GNNs) is distinctive in that it often jointly exploits the available nodes to generate a graph as an adversarial example. Existing attack approaches usually assume that the entire training set is available, which may be impractical. In this paper, we propose a novel Bayesian adversarial attack approach based on projected gradient descent optimization, called the Bayesian PGD attack, which yields more general adversarial examples than deterministic attack approaches. The adversarial examples generated by our approach, using the same partial dataset as deterministic attack approaches, cause the GNN to have a higher misclassification rate on graph node classification. Specifically, in our approach the edge perturbation Z used for generating adversarial examples is viewed as a random variable with a scale constraint, and the optimization target of the edge perturbation is to maximize the KL divergence between its true posterior distribution p(Z|D) and its approximate variational distribution qθ(Z). We experimentally find that the attack performance decreases as the number of available nodes is reduced, and that the effect of attacking with different nodes varies greatly, especially when the number of nodes is small. Through experimental comparison with state-of-the-art attack approaches on GNNs, our approach is demonstrated to have better and more robust attack performance.
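
The deterministic PGD-over-edges backbone that the Bayesian formulation builds on might look like the sketch below. The gnn call signature, inputs, and budget are assumptions for illustration, and the variational treatment of Z is omitted for brevity.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch: Z is a relaxed edge-flip variable in [0, 1] with
# an L1 budget; the paper additionally treats Z as a random variable in
# a Bayesian formulation, which this sketch does not implement.
def pgd_edge_attack(gnn, adj, feats, labels, budget=20.0, steps=100, lr=0.1):
    z = torch.zeros_like(adj, requires_grad=True)
    for _ in range(steps):
        perturbed = adj + z * (1 - 2 * adj)       # z -> 1 flips an edge
        loss = F.cross_entropy(gnn(perturbed, feats), labels)
        grad = torch.autograd.grad(loss, z)[0]
        with torch.no_grad():
            z += lr * grad                        # ascend the loss
            z.clamp_(0, 1)
            if z.sum() > budget:                  # crude L1-budget projection
                z *= budget / z.sum()
    return (z > 0.5).float().detach()
```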


2021, Vol. 2021, pp. 1-10
Author(s): Hyun Kwon, Jang-Woon Baek

Deep learning technology has been used to develop improved license plate recognition (LPR) systems. In particular, deep neural networks have brought significant improvements to LPR systems. However, deep neural networks are vulnerable to adversarial examples. In existing LPR systems, adversarial example studies have targeted specific spots that are easily identified by humans or have required human feedback. In this paper, we propose a method of generating adversarial examples on the license plate that requires no human feedback and is difficult for humans to identify. In the proposed method, adversarial noise is added only to the license plate region of the image, creating an adversarial example that is misrecognized by the LPR system without being noticed by humans. Experiments were performed using the baza silka dataset, and TensorFlow was used as the machine learning library. When epsilon is 0.6 for the first type of attack, and alpha and the number of iterations for the second type are 0.4 and 1000, respectively, the adversarial examples generated by the first and second generation methods reduce the LPR system's accuracy to 20% and 15%, respectively.
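
The core idea, gradient noise confined to a plate mask, can be sketched roughly as follows. The abstract confirms only that TensorFlow was used and that epsilon = 0.6; the lpr_model, its inputs, and the mask here are illustrative assumptions.

```python
import tensorflow as tf

# Hedged sketch: FGSM-style noise applied only inside a license-plate
# mask (1 on the plate, 0 elsewhere), leaving the rest of the scene
# untouched. Not the paper's exact generation method.
def plate_masked_fgsm(lpr_model, image, label, plate_mask, eps=0.6):
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)
        loss = tf.keras.losses.sparse_categorical_crossentropy(label, lpr_model(image))
    grad = tape.gradient(loss, image)
    adv = image + eps * tf.sign(grad) * plate_mask   # perturb plate pixels only
    return tf.clip_by_value(adv, 0.0, 1.0)
```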


2021, pp. 1-17
Author(s): Hania H. Farag, Lamiaa A. A. Said, Mohamed R. M. Rizk, Magdy Abd ElAzim Ahmed

COVID-19 has been considered a global pandemic. Recently, researchers have been using deep learning networks for the diagnosis of medical diseases. Some of this research focuses on optimizing deep learning neural networks to enhance network accuracy. Optimizing a convolutional neural network involves testing various networks obtained by manually configuring their hyperparameters and then implementing the configuration with the highest accuracy. Each time a different database is used, a different combination of hyperparameters is required. This paper introduces two COVID-19 diagnosis systems, using a Residual Network and an Xception Network, both optimized by random search with the aim of finding models that give better diagnosis rates for COVID-19. The proposed systems showed that tuning the hyperparameters of the ResNet and the Xception Net using random search optimization gives more accurate results than other techniques, with accuracies of 99.27536% and 100%, respectively. We conclude that hyperparameter tuning using random search optimization, for either the tuned Residual Network or the tuned Xception Network, yields better accuracy than other techniques in diagnosing COVID-19.
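
Random search itself is simple enough to sketch in a few lines. The search space and the build_and_train trainer below are illustrative assumptions, not the paper's exact space or budget.

```python
import random

# Minimal random-search sketch for CNN hyperparameter tuning: sample
# configurations at random and keep the one with the best validation
# accuracy.
space = {
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "batch_size": [16, 32, 64],
    "optimizer": ["adam", "sgd", "rmsprop"],
}

def random_search(build_and_train, n_trials=20):
    best_acc, best_cfg = 0.0, None
    for _ in range(n_trials):
        cfg = {k: random.choice(v) for k, v in space.items()}
        acc = build_and_train(cfg)   # train a model, return validation accuracy
        if acc > best_acc:
            best_acc, best_cfg = acc, cfg
    return best_cfg, best_acc
```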


2021
Author(s): Ghassan Mohammed Halawani

The main purpose of this project is to modify a convolutional neural network for image classification, based on a deep-learning framework. A transfer learning technique is applied through the MATLAB interface to AlexNet to train and modify the parameters in the last two fully connected layers of AlexNet with a new dataset, in order to classify thousands of images. First, the general architecture common to most neural networks and its benefits are presented. The mathematical models and the role of each part of the neural network are explained in detail. Second, different neural networks are studied in terms of architecture, application, and working method to highlight the strengths and weaknesses of each network. The final part conducts a detailed study of one of the most powerful deep-learning networks in image classification, the convolutional neural network, and how it can be modified to suit different classification tasks by using the transfer learning technique in MATLAB.
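
The project works in MATLAB, which is not sketched here; as a rough Python analogue of the same recipe, one might freeze a pretrained AlexNet and retrain only its last two fully connected layers. The num_classes value is an assumed placeholder.

```python
import torch
import torchvision

# Rough Python analogue of the MATLAB transfer-learning recipe: freeze
# the pretrained weights and replace/retrain only the last two fully
# connected layers on a new dataset.
model = torchvision.models.alexnet(weights="DEFAULT")
for p in model.parameters():
    p.requires_grad = False                  # freeze pretrained weights
num_classes = 10                             # placeholder for the new dataset
# The last two fully connected layers sit at classifier[4] and classifier[6]
model.classifier[4] = torch.nn.Linear(4096, 4096)
model.classifier[6] = torch.nn.Linear(4096, num_classes)
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3, momentum=0.9
)
```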


Author(s): Anibal Pedraza, Oscar Deniz, Gloria Bueno

The phenomenon of adversarial examples has become one of the most intriguing topics associated with deep learning. So-called adversarial attacks have the ability to fool deep neural networks with inappreciable perturbations. While the effect is striking, it has been suggested that such carefully selected injected noise does not necessarily appear in real-world scenarios. In contrast, some authors have looked for ways to generate adversarial noise in physical scenarios (traffic signs, shirts, etc.), thus showing that attackers can indeed fool the networks. In this paper we go beyond that and show that adversarial examples also appear in the real world without any attacker or maliciously selected noise involved. We show this using images from tasks related to microscopy and also general object recognition with the well-known ImageNet dataset. A comparison between these natural adversarial examples and artificially generated ones is performed using distance metrics and image quality metrics. We also show that the natural adversarial examples are in fact at a greater distance from the originals than artificially generated adversarial examples are.
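
The comparison methodology can be illustrated with a short sketch. The specific metrics chosen here (L2 distance, SSIM, and PSNR via scikit-image) are assumptions about typical distance and image-quality measures, not necessarily the paper's exact set.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

# Hedged sketch of comparing an original image with its (natural or
# artificial) adversarial counterpart. Inputs are HxWx3 float arrays
# in [0, 1].
def compare(original, adversarial):
    l2 = np.linalg.norm(original - adversarial)      # distance metric
    ssim = structural_similarity(
        original, adversarial, channel_axis=-1, data_range=1.0
    )
    psnr = peak_signal_noise_ratio(original, adversarial, data_range=1.0)
    return {"L2": l2, "SSIM": ssim, "PSNR": psnr}    # quality metrics
```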


Deep learning is a subset of the field of machine learning, which is a subfield of AI. The features that differentiate deep learning networks from "canonical" feedforward multilayer networks are more neurons than previous networks, more complex ways of connecting layers, a "Cambrian explosion" of computing power for training, and automatic feature extraction. Deep learning is defined as neural networks with a large number of parameters and layers in fundamental network architectures. Some of these architectures are Convolutional Neural Networks, Recurrent Neural Networks, Recursive Neural Networks, R-CNN (Region-Based CNN), Fast R-CNN, GoogLeNet, YOLO (You Only Look Once), Single Shot Detectors, SegNet, and GANs (Generative Adversarial Networks). Different architectures work well with different types of datasets. Object detection is an important computer vision problem with a variety of applications. The tasks involved are classification, object localisation, and instance segmentation. This paper discusses how the different architectures are used to detect objects.


Author(s): Chunlong Fan, Cailong Li, Jici Zhang, Yiping Teng, Jianzhong Qiao

Neural network technology has achieved good results in many tasks, such as image classification. However, for some inputs to a neural network, adding designed, imperceptible perturbations produces adversarial examples that change the network's output for the original examples. For image classification problems, we derive low-dimensional attack perturbation solutions on multidimensional linear classifiers and extend them to multidimensional nonlinear neural networks. Based on this, a new adversarial example generation algorithm is designed that modifies a specified number of pixels. The algorithm adopts a greedy iterative strategy, gradually determining the importance and attack range of pixels. Finally, experiments demonstrate that the adversarial examples generated by the algorithm are of good quality, and the effects of the key parameters of the algorithm are also analyzed.
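
A much-simplified, one-shot variant of the pixel-budgeted idea is sketched below: rank pixels by gradient magnitude and perturb only the top k. The model, inputs, k, and eps are illustrative assumptions, and the paper's full algorithm is greedy and iterative rather than one-shot.

```python
import torch
import torch.nn.functional as F

# Hedged sketch of a pixel-budgeted attack: perturb only the k pixels
# with the largest gradient magnitude. `image` is 1xCxHxW in [0, 1].
def pixel_budget_attack(model, image, label, k=10, eps=0.3):
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    grad = image.grad[0]                               # CxHxW
    # Pixel importance: gradient magnitude summed over channels
    importance = grad.abs().sum(dim=0).flatten()
    mask = torch.zeros_like(importance)
    mask[importance.topk(k).indices] = 1.0             # keep only top-k pixels
    mask = mask.view(1, 1, *image.shape[-2:])
    adv = image + eps * grad.sign().unsqueeze(0) * mask
    return adv.clamp(0, 1).detach()
```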


2021, pp. 1-11
Author(s): Tianshi Mu, Kequan Lin, Huabing Zhang, Jian Wang

Deep learning is gaining significant traction in a wide range of areas. However, recent studies have demonstrated that deep learning exhibits a fatal weakness to adversarial examples. Due to the black-box nature and opacity of deep learning, it is difficult to explain why adversarial examples exist and hard to defend against them. This study focuses on improving the adversarial robustness of convolutional neural networks. We first explore how adversarial examples behave inside the network through visualization. We find that adversarial examples produce perturbations in hidden activations, which form an amplification effect that fools the network. Motivated by this observation, we propose an approach, termed sanitizing hidden activations, that helps the network correctly recognize adversarial examples by eliminating or reducing the perturbations in hidden activations. To demonstrate the effectiveness of our approach, we conduct experiments on three widely used datasets: MNIST, CIFAR-10, and ImageNet, and also compare with state-of-the-art defense techniques. The experimental results show that our sanitizing approach generalizes better in defending against different kinds of attacks and can effectively improve the adversarial robustness of convolutional neural networks.
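
One plausible way to realize such sanitization is to bound a layer's activations to ranges estimated on clean data, suppressing the amplified perturbations an adversarial input induces. The clamping rule and the forward-hook mechanics below are illustrative assumptions, not the paper's exact operator.

```python
import torch

# Hedged sketch: clamp a layer's activations to a range measured on
# clean inputs. Returning a tensor from a forward hook replaces the
# layer's output in PyTorch.
def add_sanitizer(layer, clean_min, clean_max):
    def hook(module, inputs, output):
        return output.clamp(min=clean_min, max=clean_max)
    return layer.register_forward_hook(hook)

# Example usage with assumed clean-activation bounds:
# handle = add_sanitizer(model.layer1, clean_min=0.0, clean_max=4.0)
# handle.remove()  # detach the sanitizer when no longer needed
```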

