Title: Memorized sparse backpropagation
Authors: Zhang, Zhiyuan
Yang, Pengcheng
Ren, Xuancheng
Su, Qi
Sun, Xu
Affiliation: Peking Univ, Sch EECS, Beijing 100871, Peoples R China
Peking Univ, Sch Foreign Languages, Beijing 100871, Peoples R China
Issue Date: 20-Nov-2020
Abstract: Neural network learning is usually time-consuming, since backpropagation needs to compute full gradients and backpropagate them across multiple layers. Despite the success of existing works in accelerating propagation through sparseness, the relevant theoretical characteristics remain under-researched, and empirical studies have found that such methods suffer from the loss of information contained in unpropagated gradients. To tackle these problems, this paper presents a unified sparse backpropagation framework and provides a detailed analysis of its theoretical characteristics. The analysis reveals that, when applied to a multilayer perceptron, the framework essentially performs gradient descent using an estimated gradient close enough to the true gradient, resulting in convergence in probability under certain conditions. Furthermore, a simple yet effective algorithm named memorized sparse backpropagation (MSBP) is proposed to remedy the information loss by storing unpropagated gradients in memory for learning in subsequent steps. Experimental results demonstrate that MSBP effectively alleviates the information loss of traditional sparse backpropagation while achieving comparable acceleration. (C) 2020 Elsevier B.V. All rights reserved.
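The abstract describes MSBP as top-k sparse backpropagation with a memory that accumulates the unpropagated gradient entries and feeds them back in later steps. The following is a minimal NumPy sketch of that idea for a single gradient vector; the function name `msbp_step` and the exact accumulation scheme are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def msbp_step(grad, memory, k):
    """One illustrative memorized-sparse-backprop step (hypothetical sketch).

    Adds the memory of previously unpropagated gradients to the current
    gradient, propagates only the k entries with the largest magnitude,
    and stores the remainder back into memory for later steps.
    """
    total = grad + memory                    # recover previously dropped information
    idx = np.argsort(np.abs(total))[-k:]     # indices of the top-k magnitudes
    propagated = np.zeros_like(total)
    propagated[idx] = total[idx]             # sparse gradient actually backpropagated
    new_memory = total - propagated          # unpropagated part, kept in memory
    return propagated, new_memory

# Toy usage: a 6-dimensional gradient, propagating the top 2 entries per step.
g = np.array([0.5, -0.1, 0.05, 0.9, -0.3, 0.2])
mem = np.zeros_like(g)
sparse_g, mem = msbp_step(g, mem, k=2)
```

By construction, `sparse_g + mem` equals the dense gradient, so no information is discarded; entries too small to be propagated now grow in memory until they cross the top-k threshold in a later step.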
Appears in Collections: School of Electronics Engineering and Computer Science

Files in This Work
There are no files associated with this item.


License: See PKU IR operational policies.