Title | Exploring the Vulnerability of Deep Neural Networks: A Study of Parameter Corruption |
Authors | Sun, Xu; Zhang, Zhiyuan; Ren, Xuancheng; Luo, Ruixuan; Li, Liangyou |
Affiliation | Peking Univ, Sch EECS, MOE Key Lab Computat Linguist, Beijing, Peoples R China; Peking Univ, Ctr Data Sci, Beijing, Peoples R China; Huawei Noahs Ark Lab, Shenzhen, Guangdong, Peoples R China |
Issue Date | 2021 |
Publisher | Thirty-Fifth AAAI Conference on Artificial Intelligence / Thirty-Third Conference on Innovative Applications of Artificial Intelligence / Eleventh Symposium on Educational Advances in Artificial Intelligence |
Abstract | We argue that the vulnerability of model parameters is of crucial value to the study of model robustness and generalization, yet little research has been devoted to understanding this matter. In this work, we propose an indicator that measures the robustness of neural network parameters by exploiting their vulnerability via parameter corruption. The proposed indicator describes the maximum loss variation in the non-trivial worst-case scenario under parameter corruption. For practical purposes, we give a gradient-based estimation, which is far more effective than random corruption trials, which can hardly induce the worst-case accuracy degradation. Equipped with theoretical support and empirical validation, we are able to systematically investigate the robustness of different model parameters and reveal vulnerabilities of deep neural networks that have rarely received attention before. Moreover, we can enhance the models accordingly with the proposed adversarial corruption-resistant training, which not only improves parameter robustness but also translates into improved accuracy. |
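The abstract describes a gradient-based estimation of the worst-case loss change under bounded parameter corruption. Below is a minimal PyTorch sketch of how such an estimate could be computed, not the authors' released code: the function name `gradient_corruption_estimate`, the L2 corruption budget `eps`, and the single-batch setup are illustrative assumptions.

```python
# Minimal sketch (assumed interface, not the authors' implementation):
# estimate the worst-case loss increase under an eps-bounded (L2) parameter
# corruption by perturbing parameters along the normalized loss gradient.
import torch


def gradient_corruption_estimate(model, loss_fn, data, target, eps=1e-3):
    """Return the loss variation when parameters are corrupted by an
    eps-scaled step in the gradient direction (gradient-based estimate)."""
    model.zero_grad()
    loss = loss_fn(model(data), target)
    loss.backward()

    # Collect gradients and compute a global L2 norm for normalization.
    params = [p for p in model.parameters() if p.grad is not None]
    grads = [p.grad.detach().clone() for p in params]
    grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12

    with torch.no_grad():
        # Apply the corruption: theta <- theta + eps * grad / ||grad||.
        for p, g in zip(params, grads):
            p.add_(eps * g / grad_norm)
        corrupted_loss = loss_fn(model(data), target)
        # Restore the original parameters.
        for p, g in zip(params, grads):
            p.sub_(eps * g / grad_norm)

    # Indicator: loss variation under the estimated worst-case corruption.
    return (corrupted_loss - loss).item()
```

Under the same assumptions, the adversarial corruption-resistant training mentioned in the abstract would go one step further and minimize the loss evaluated at such corrupted parameters during training, rather than only measuring the variation.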
URI | http://hdl.handle.net/20.500.11897/623187 |
ISBN | 978-1-57735-866-4 |
ISSN | 2159-5399 |
Indexed | CPCI-S(ISTP) |
Appears in Collections: | School of Electronics Engineering and Computer Science; MOE Key Laboratory of Computational Linguistics; Other Research Institutes |