Title: Dynamic Knowledge Distillation for Pre-trained Language Models
Authors: Li, Lei
Lin, Yankai
Ren, Shuhuai
Li, Peng
Zhou, Jie
Sun, Xu
Affiliation: Peking Univ, Sch EECS, MOE Key Lab Computat Linguist, Beijing, Peoples R China
Tencent Inc, Pattern Recognit Ctr, WeChat AI, Shenzhen, Peoples R China
Issue Date: 2021
Publisher: 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021)
Abstract: Knowledge distillation (KD) has proven effective for compressing large-scale pre-trained language models. However, existing methods conduct KD statically, e.g., the student model aligns its output distribution to that of a selected teacher model on a pre-defined training dataset. In this paper, we explore whether dynamic knowledge distillation, which empowers the student to adjust the learning procedure according to its competency, is beneficial in terms of student performance and learning efficiency. We explore dynamic adjustment along three aspects: teacher model adoption, data selection, and KD objective adaptation. Experimental results show that (1) proper selection of the teacher model can boost the performance of the student model; (2) conducting KD with only 10% of the most informative instances achieves comparable performance while greatly accelerating training; (3) the student's performance can be further boosted by adjusting the supervision contributions of the different alignment objectives. We find dynamic knowledge distillation promising and provide discussions on potential future directions towards more efficient KD methods.
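The abstract outlines two of the dynamic adjustments concretely enough to illustrate: selecting only the most informative (e.g., ~10%) instances for distillation, and adapting the weight of the KD alignment objective. The sketch below is a minimal illustration of these two ideas under assumed conventions, not the authors' implementation; all names (select_uncertain, dynamic_kd_loss, keep_ratio, temperature) and the particular confidence-based weighting rule are hypothetical.

```python
# Hypothetical sketch of two dynamic-KD ideas mentioned in the abstract:
# (a) keep only the most uncertain (informative) instances for distillation,
# (b) re-weight the soft-label (KD) vs. hard-label (CE) objectives on the fly.
# Not the authors' code; names and hyperparameters are assumptions.

import torch
import torch.nn.functional as F


def select_uncertain(student_logits, keep_ratio=0.1):
    """Indices of the keep_ratio fraction of instances with the highest
    student prediction entropy (used here as a proxy for informativeness)."""
    probs = F.softmax(student_logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    k = max(1, int(keep_ratio * student_logits.size(0)))
    return entropy.topk(k).indices


def dynamic_kd_loss(student_logits, teacher_logits, labels,
                    temperature=2.0, keep_ratio=0.1):
    """Distillation loss on the selected subset, with the KD/CE mixing
    weight tied to the student's current confidence."""
    idx = select_uncertain(student_logits, keep_ratio)
    s, t, y = student_logits[idx], teacher_logits[idx], labels[idx]

    # Soft alignment of the student's output distribution to the teacher's.
    kd = F.kl_div(F.log_softmax(s / temperature, dim=-1),
                  F.softmax(t / temperature, dim=-1),
                  reduction="batchmean") * temperature ** 2
    # Hard-label cross-entropy on the ground-truth labels.
    ce = F.cross_entropy(s, y)

    # One possible dynamic weighting: the less confident the student,
    # the more it relies on the teacher's soft labels.
    confidence = F.softmax(s, dim=-1).max(dim=-1).values.mean().detach()
    alpha = 1.0 - confidence
    return alpha * kd + (1.0 - alpha) * ce
```

In practice such a loss would be computed per batch, so the selected subset and the KD/CE weighting change as the student's competency evolves during training; teacher model adoption, the third aspect named in the abstract, is not sketched here.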
URI: http://hdl.handle.net/20.500.11897/654798
ISBN: 978-1-955917-09-4
Indexed: CPCI-SSH (ISSHP); CPCI-S (ISTP)
Appears in Collections: School of Electronics Engineering and Computer Science (信息科学技术学院)
MOE Key Laboratory of Computational Linguistics (计算语言学教育部重点实验室)

Files in This Work
There are no files associated with this item.

License: See PKU IR operational policies.