Title | Memorized sparse backpropagation |
Authors | Zhang, Zhiyuan; Yang, Pengcheng; Ren, Xuancheng; Su, Qi; Sun, Xu
Affiliation | Peking Univ, Sch EECS, Beijing 100871, Peoples R China; Peking Univ, Sch Foreign Languages, Beijing 100871, Peoples R China
Issue Date | 20-Nov-2020 |
Publisher | NEUROCOMPUTING |
Abstract | Neural network learning is usually time-consuming since backpropagation needs to compute full gradients and backpropagate them across multiple layers. Despite the success of existing works in accelerating propagation through sparseness, the relevant theoretical characteristics remain under-researched, and empirical studies have found that these methods suffer from the loss of information contained in unpropagated gradients. To tackle these problems, this paper presents a unified sparse backpropagation framework and provides a detailed analysis of its theoretical characteristics. The analysis reveals that, when applied to a multilayer perceptron, our framework essentially performs gradient descent using an estimated gradient close enough to the true gradient, resulting in convergence in probability under certain conditions. Furthermore, a simple yet effective algorithm named memorized sparse backpropagation (MSBP) is proposed to remedy the problem of information loss by storing unpropagated gradients in memory for learning in subsequent steps. Experimental results demonstrate that the proposed MSBP effectively alleviates the information loss in traditional sparse backpropagation while achieving comparable acceleration. (C) 2020 Elsevier B.V. All rights reserved.
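Note | The abstract describes MSBP as propagating only part of the gradient at each step and storing the unpropagated remainder in memory for later steps. The following is a minimal illustrative sketch of that idea, not the authors' implementation; the function names (msbp_step, topk_mask) and the top-k parameter k are hypothetical choices made for this example.

import numpy as np

def topk_mask(v, k):
    # Boolean mask keeping the k entries of v with the largest magnitude.
    idx = np.argsort(np.abs(v))[-k:]
    mask = np.zeros_like(v, dtype=bool)
    mask[idx] = True
    return mask

def msbp_step(grad, memory, k):
    # One memorized sparse backpropagation step for a gradient vector.
    # grad   : full gradient at the current step (1-D array)
    # memory : accumulated unpropagated gradient from previous steps
    # k      : number of components to propagate this step
    total = grad + memory                  # re-inject stored gradient information
    mask = topk_mask(total, k)             # keep only the k largest components
    propagated = np.where(mask, total, 0.0)
    memory = np.where(mask, 0.0, total)    # store what was not propagated
    return propagated, memory

# Toy usage: a 10-dimensional gradient, propagating 3 components per step.
rng = np.random.default_rng(0)
memory = np.zeros(10)
for step in range(3):
    grad = rng.normal(size=10)
    sparse_grad, memory = msbp_step(grad, memory, k=3)
    print(step, np.count_nonzero(sparse_grad), np.abs(memory).sum())

In this sketch, plain top-k sparse backpropagation would simply drop the unselected components; the memory buffer is what distinguishes the MSBP idea, since dropped information is carried forward rather than lost.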
URI | http://hdl.handle.net/20.500.11897/592963 |
ISSN | 0925-2312 |
DOI | 10.1016/j.neucom.2020.08.055 |
Indexed | SCI(E) |
Appears in Collections: | School of Electronics Engineering and Computer Science; School of Foreign Languages