Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness Against Adversarial Attack

Author(s):  
Zhezhi He ◽  
Adnan Siraj Rakin ◽  
Deliang Fan
2020 ◽  
Vol 24 (4) ◽  
pp. 145-148
Author(s):  
Haruki Masuda ◽  
Tsunato Nakai ◽  
Kota Yoshida ◽  
Takaya Kubota ◽  
Mitsuru Shiozaki ◽  
...  

Author(s):  
Sai Kiran Cherupally ◽  
Jian Meng ◽  
Adnan Siraj Rakin ◽  
Shihui Yin ◽  
Injune Yeo ◽  
...  

Abstract We present a novel deep neural network (DNN) training scheme and an RRAM in-memory computing (IMC) hardware evaluation aimed at achieving high robustness to RRAM device/array variations and adversarial input attacks. We present improved IMC inference accuracy results evaluated on state-of-the-art DNNs, including ResNet-18, AlexNet, and VGG, with binary, 2-bit, and 4-bit activation/weight precision on the CIFAR-10 dataset. These DNNs are evaluated with measured noise data obtained from three different RRAM-based IMC prototype chips. Across these DNNs and IMC chip measurements, we show that our proposed hardware noise-aware DNN training consistently improves inference accuracy on actual IMC hardware, by up to 8% on the CIFAR-10 dataset. We also analyze the impact of our proposed noise injection scheme on the adversarial robustness of ResNet-18 DNNs with 1-bit, 2-bit, and 4-bit activation/weight precision. Our results show up to a 6% improvement in robustness to black-box adversarial input attacks.
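The core idea of hardware noise-aware training can be sketched in miniature: perturb the weights during the forward pass so that training sees the same kind of variation the IMC hardware will introduce at inference time. The toy below is an illustrative assumption, not the authors' implementation — it uses multiplicative Gaussian weight noise on a single linear layer, whereas the paper injects noise measured from RRAM prototype chips into full DNNs.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_forward(x, w, sigma=0.05):
    # Multiplicative Gaussian weight noise standing in for RRAM
    # device/array variation (assumption: the paper uses noise
    # measured from prototype chips, not a Gaussian model).
    w_noisy = w * (1.0 + sigma * rng.standard_normal(w.shape))
    return x @ w_noisy

# Toy linear layer trained under injected noise with a squared-error
# objective, so the learned weights tolerate perturbation at inference.
x = rng.standard_normal((8, 4))
w = rng.standard_normal((4, 2))
target = rng.standard_normal((8, 2))

loss_before = float(np.mean((x @ w - target) ** 2))
lr = 0.01
for _ in range(200):
    y = noisy_forward(x, w)
    # Gradient of the squared-error loss w.r.t. w (up to a constant factor)
    grad = 2.0 * x.T @ (y - target) / len(x)
    w -= lr * grad
loss_after = float(np.mean((x @ w - target) ** 2))
```

Because every gradient step is computed through a noisy forward pass, the weights that emerge minimize the loss in expectation over the perturbation, which is the intuition behind the accuracy gains reported on real IMC hardware.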


2021 ◽  
Vol 51 (9) ◽  
pp. 1411
Author(s):  
萌霏 夏 ◽  
子鹏 叶 ◽  
旺 赵 ◽  
冉 易 ◽  
永进 刘

2021 ◽  
Author(s):  
Eduardo Soares ◽  
Plamen Angelov

This paper presents the RADNN algorithm, a method robust to imperceptible adversarial attacks that uses the concepts of data density and similarity to detect attacks in real time. Unlike traditional deep learning models, which must be trained on the attacks in order to detect them, RADNN has a mechanism that detects changes in data patterns. To evaluate the proposed method, we considered PerC attacks and 1000 images from the ImageNet dataset; RADNN correctly identified 97.2% of the attacks.
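The detection idea described here — flag an input when its data density relative to clean data drops, rather than training on known attacks — can be sketched as follows. The Cauchy-type density, the threshold value, and all variable names are illustrative assumptions for a minimal example, not RADNN's actual mechanism.

```python
import numpy as np

rng = np.random.default_rng(1)

def density(x, reference):
    # Cauchy-type data density: high when x sits near the bulk of the
    # reference (clean) data, low for outliers. A simplified stand-in
    # for the paper's density/similarity computation (assumption).
    mu = reference.mean(axis=0)
    scatter = np.mean(np.sum((reference - mu) ** 2, axis=1))
    return 1.0 / (1.0 + np.sum((x - mu) ** 2) / (scatter + 1e-12))

# Clean reference features vs. a strongly perturbed ("attacked") input.
clean = rng.standard_normal((200, 16))
benign = rng.standard_normal(16) * 0.5
attacked = benign + 10.0  # large shift in feature space

threshold = 0.2  # hypothetical cut-off; would be tuned on clean data
d_benign = density(benign, clean)
d_attacked = density(attacked, clean)
flag_attack = d_attacked < threshold  # low density => data pattern change
```

No attack examples are needed at any point: the check only compares an incoming input against the density of previously seen clean data, which is what lets this style of detector run on attacks it was never trained on.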




Author(s):  
Xiang Li ◽  
Yuchen Jiang ◽  
Chenglin Liu ◽  
Shaochong Liu ◽  
Hao Luo ◽  
...  

Author(s):  
David T. Wang ◽  
Brady Williamson ◽  
Thomas Eluvathingal ◽  
Bruce Mahoney ◽  
Jennifer Scheler
