Title: Exploring the Vulnerability of Deep Neural Networks: A Study of Parameter Corruption
Authors: Sun, Xu
Zhang, Zhiyuan
Ren, Xuancheng
Luo, Ruixuan
Li, Liangyou
Affiliation: Peking Univ, Sch EECS, MOE Key Lab Computat Linguist, Beijing, Peoples R China
Peking Univ, Ctr Data Sci, Beijing, Peoples R China
Huawei Noah's Ark Lab, Shenzhen, Guangdong, Peoples R China
Issue Date: 2021
Publisher: THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE
Abstract: We argue that the vulnerability of model parameters is of crucial value to the study of model robustness and generalization but little research has been devoted to understanding this matter. In this work, we propose an indicator to measure the robustness of neural network parameters by exploiting their vulnerability via parameter corruption. The proposed indicator describes the maximum loss variation in the non-trivial worst-case scenario under parameter corruption. For practical purposes, we give a gradient-based estimation, which is far more effective than random corruption trials that can hardly induce the worst accuracy degradation. Equipped with theoretical support and empirical validation, we are able to systematically investigate the robustness of different model parameters and reveal vulnerability of deep neural networks that has been rarely paid attention to before. Moreover, we can enhance the models accordingly with the proposed adversarial corruption-resistant training, which not only improves the parameter robustness but also translates into accuracy elevation.
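The gradient-based estimation described in the abstract can be illustrated with a minimal sketch. The idea, as the abstract states it, is that stepping the parameters along the loss gradient approximates the worst-case corruption under a norm budget, whereas random corruption trials of the same magnitude rarely come close. The toy model, variable names, and budget `eps` below are illustrative assumptions, not the paper's actual experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: linear regression with loss L(w) = mean((Xw - y)^2).
X = rng.normal(size=(100, 10))
w_true = rng.normal(size=10)
y = X @ w_true
w = w_true + 0.1 * rng.normal(size=10)  # imperfectly "trained" parameters

def loss(w):
    r = X @ w - y
    return float(np.mean(r * r))

def grad(w):
    return 2.0 * X.T @ (X @ w - y) / len(y)

eps = 0.05  # corruption budget: ||delta||_2 <= eps

# Gradient-based corruption: a step along the normalized gradient,
# the first-order maximizer of the loss under an L2 budget.
g = grad(w)
loss_grad = loss(w + eps * g / np.linalg.norm(g))

# Baseline: random corruption trials with the same norm.
random_losses = []
for _ in range(1000):
    d = rng.normal(size=10)
    d *= eps / np.linalg.norm(d)
    random_losses.append(loss(w + d))

base = loss(w)
print(f"clean loss:            {base:.6f}")
print(f"gradient corruption:   {loss_grad:.6f}")
print(f"random (mean of 1000): {np.mean(random_losses):.6f}")
```

On this toy problem the gradient-direction corruption raises the loss well above the average random trial of the same norm, matching the abstract's claim that random trials "can hardly induce the worst accuracy degradation."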
URI: http://hdl.handle.net/20.500.11897/623187
ISBN: 978-1-57735-866-4
ISSN: 2159-5399
Indexed: CPCI-S (ISTP)
Appears in Collections: School of Electronics Engineering and Computer Science
MOE Key Laboratory of Computational Linguistics
Other Research Institutes

Files in This Work
There are no files associated with this item.

License: See PKU IR operational policies.