Title: ADA-Tucker: Compressing deep neural networks via adaptive dimension adjustment Tucker decomposition
Authors: Zhong, Zhisheng; Wei, Fangyin; Lin, Zhouchen; Zhang, Chao
Affiliation: Peking University, School of EECS, Key Laboratory of Machine Perception (MOE), Beijing, People's Republic of China
Keywords: Convolutional neural network; Compression; Tucker decomposition; Dimension adjustment
Issue Date: 2019
Publisher: NEURAL NETWORKS
Abstract: Despite the recent success of deep learning models in numerous applications, their widespread use on mobile devices is seriously impeded by storage and computational requirements. In this paper, we propose a novel network compression method called Adaptive Dimension Adjustment Tucker decomposition (ADA-Tucker). With learnable core tensors and transformation matrices, ADA-Tucker performs Tucker decomposition of arbitrary-order tensors. Furthermore, we propose that weight tensors in networks with proper order and balanced dimensions are easier to compress. This high flexibility in decomposition choice distinguishes ADA-Tucker from all previous low-rank models. To compress further, we extend the model to Shared Core ADA-Tucker (SCADA-Tucker) by defining a shared core tensor for all layers. Our methods require no overhead for recording indices of non-zero elements. Without loss of accuracy, our methods reduce the storage of LeNet-5 and LeNet-300 by ratios of 691x and 233x, respectively, significantly outperforming the state of the art. The effectiveness of our methods is also evaluated on three other benchmarks (CIFAR-10, SVHN, ILSVRC12) and modern deep networks (ResNet, Wide-ResNet). (C) 2018 Elsevier Ltd. All rights reserved.
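
The mechanism the abstract describes (reshape a layer's weight tensor to a balanced shape, then factorize it into a small learnable core tensor multiplied by one transformation matrix per mode) can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the layer shape, reshaped dimensions, and core size below are hypothetical values chosen only to show the storage savings, and ada_tucker_reconstruct / mode_n_product are names introduced here for the sketch.

import numpy as np

def mode_n_product(tensor, matrix, mode):
    # Multiply a tensor by a matrix along the given mode:
    # move that mode to the front, flatten, multiply, restore.
    t = np.moveaxis(tensor, mode, 0)
    shape = t.shape
    t = matrix @ t.reshape(shape[0], -1)
    return np.moveaxis(t.reshape((matrix.shape[0],) + shape[1:]), 0, mode)

def ada_tucker_reconstruct(core, factors, weight_shape):
    # Rebuild a layer's weights from the learnable core tensor and
    # transformation matrices, then reshape back to the layer's
    # original weight shape (the dimension-adjustment step).
    w = core
    for mode, u in enumerate(factors):
        w = mode_n_product(w, u, mode)
    return w.reshape(weight_shape)

# Hypothetical example: a conv kernel of shape (64, 32, 3, 3)
# (18,432 parameters) reshaped to a balanced 3rd-order tensor of
# shape (24, 24, 32), compressed with a (6, 6, 8) core plus three
# transformation matrices.
rng = np.random.default_rng(0)
core = rng.standard_normal((6, 6, 8))
factors = [rng.standard_normal((24, 6)),
           rng.standard_normal((24, 6)),
           rng.standard_normal((32, 8))]
weight = ada_tucker_reconstruct(core, factors, (64, 32, 3, 3))
print(weight.shape)  # (64, 32, 3, 3)

In this toy setting only the core and the factor matrices are stored (6*6*8 + 24*6 + 24*6 + 32*8 = 832 values versus 18,432 in the dense kernel), which is the kind of saving behind the compression ratios quoted above; the "adaptive dimension adjustment" corresponds to choosing the balanced reshape before decomposition, and SCADA-Tucker would additionally share one core tensor across layers.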
URI: http://hdl.handle.net/20.500.11897/551197
ISSN: 0893-6080
DOI: 10.1016/j.neunet.2018.10.016
Indexed: SCI(E); EI
Appears in Collections: School of Electronics Engineering and Computer Science; Key Laboratory of Machine Perception (Ministry of Education)

Files in This Work: There are no files associated with this item.

License: See PKU IR operational policies.