Title: Adversarial parameter defense by multi-step risk minimization
Authors: Zhang, Zhiyuan
Luo, Ruixuan
Ren, Xuancheng
Su, Qi
Li, Liangyou
Sun, Xu
Affiliation: Peking Univ, Sch EECS, MOE Key Lab Computat Linguist, Beijing, Peoples R China
Peking Univ, Ctr Data Sci, Beijing, Peoples R China
Peking Univ, Sch Foreign Languages, Beijing, Peoples R China
Huawei Noah's Ark Lab, Hong Kong, Peoples R China
Issue Date: Dec-2021
Abstract: Previous studies demonstrate that deep neural networks (DNNs) are vulnerable to adversarial examples and that adversarial training can establish a defense against them. In addition, recent studies show that deep neural networks are also vulnerable to parameter corruptions. The vulnerability of model parameters is of crucial value to the study of model robustness and generalization. In this work, we introduce the concept of parameter corruption and propose to leverage the loss change indicator to measure the flatness of the loss basin and the robustness of neural network parameters. On this basis, we analyze parameter corruptions and propose the multi-step adversarial corruption algorithm. To enhance neural networks, we propose the adversarial parameter defense algorithm, which minimizes the average risk over multiple adversarial parameter corruptions. Experimental results show that the proposed algorithm improves both the parameter robustness and the accuracy of neural networks. (C) 2021 Elsevier Ltd. All rights reserved.
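The abstract's two ideas, multi-step adversarial corruption of the parameters and a defense that minimizes the average risk over several such corruptions, can be illustrated with a minimal sketch. This is not the paper's implementation: a toy linear-regression loss stands in for a DNN, and every hyperparameter (the L2 radius `eps`, step size `alpha`, numbers of steps and corruptions) is an illustrative assumption.

```python
import numpy as np

# Toy regression problem standing in for a neural network.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=64)

def loss(w):
    r = X @ w - y
    return 0.5 * float(np.mean(r ** 2))

def grad(w):
    # Gradient of the loss with respect to the PARAMETERS w.
    return X.T @ (X @ w - y) / len(y)

def adversarial_corruption(w, eps=0.5, alpha=0.1, steps=5):
    """Multi-step gradient ASCENT on the loss in parameter space,
    projected back into an L2 ball of radius eps around w."""
    delta = 0.01 * rng.normal(size=w.shape)  # random start avoids the zero-gradient fixed point
    for _ in range(steps):
        delta = delta + alpha * grad(w + delta)  # ascend the loss
        norm = np.linalg.norm(delta)
        if norm > eps:
            delta = delta * (eps / norm)         # project onto the ball
    return delta

# Defense: descend the AVERAGE loss over several corrupted parameter copies.
w = np.zeros(3)
for _ in range(200):
    g = np.zeros_like(w)
    for _ in range(3):  # multiple adversarial corruptions per update
        g = g + grad(w + adversarial_corruption(w))
    w = w - 0.1 * (g / 3)

clean = loss(w)                                   # loss at the defended parameters
corrupted = loss(w + adversarial_corruption(w))   # loss after one more corruption
```

Because the surrogate loss is quadratic, the defended parameters settle near a point where even an adversarially chosen perturbation inside the ball changes the loss little, which is the flat-basin behavior the loss change indicator is meant to measure.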
Appears in Collections: School of Electronics Engineering and Computer Science

Files in This Work
There are no files associated with this item.

Web of Science®

Google Scholar™

License: See PKU IR operational policies.