Title: Multi-View Feature Representation for Dialogue Generation with Bidirectional Distillation
Authors: Feng, Shaoxiong
Ren, Xuancheng
Li, Kan
Sun, Xu
Affiliation: Beijing Inst Technol, Sch Comp Sci & Technol, Beijing, Peoples R China
Peking Univ, Sch EECS, MOE Key Lab Computat Linguist, Beijing, Peoples R China
Peking Univ, Ctr Data Sci, Beijing, Peoples R China
Issue Date: 2021
Publisher: Thirty-Fifth AAAI Conference on Artificial Intelligence, Thirty-Third Conference on Innovative Applications of Artificial Intelligence and the Eleventh Symposium on Educational Advances in Artificial Intelligence
Abstract: Neural dialogue models suffer from low-quality responses when interacting with users in practice, demonstrating difficulty in generalizing beyond the training data. Recently, knowledge distillation has been used to successfully regularize the student by transferring knowledge from the teacher. However, the teacher and the student are trained on the same dataset and tend to learn similar feature representations, whereas the most general knowledge should be found through the differences between them. The discovery of general knowledge is further hindered by unidirectional distillation, as the student must obey the teacher and may discard knowledge that is truly general but refuted by the teacher. To this end, we propose a novel training framework in which the learning of general knowledge is more in line with the idea of reaching consensus, i.e., finding common knowledge that is beneficial to all of the different datasets through diversified learning partners. Concretely, the training task is divided into a group of subtasks, one per student. Each student is not only optimized on its allocated subtask but also imitates the multi-view feature representation aggregated from the other students (i.e., its student peers), which induces the students to capture common knowledge across the different subtasks and alleviates over-fitting on the allocated subtasks. To further enhance generalization, we extend unidirectional distillation to bidirectional distillation, which encourages each student and its peers to co-evolve by exchanging complementary knowledge with each other. Empirical results and analysis demonstrate that our training framework effectively improves model generalization without sacrificing training efficiency.
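The framework in the abstract can be summarized in a short sketch: one student per subtask, each optimized on its own data while also imitating a feature representation aggregated from its peers, with gradients flowing in both directions. The following PyTorch sketch is illustrative only; the toy encoder, the MSE imitation loss on mean-pooled hidden states, and the weight alpha are assumptions rather than the authors' exact formulation.

# Minimal sketch of the multi-view peer distillation described in the abstract.
# The encoder architecture, the MSE imitation loss, and `alpha` are
# illustrative assumptions, not the authors' exact formulation.
import torch
import torch.nn as nn

class StudentEncoder(nn.Module):
    """Toy stand-in for one dialogue student model."""
    def __init__(self, vocab_size=1000, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return h.mean(dim=1), self.out(h)   # (feature view, generation logits)

def peer_distillation_step(students, optimizer, subtask_batches, alpha=0.5):
    """One joint step: each student fits its own subtask and imitates the
    feature view aggregated from its peers. Peer features are not detached,
    so the imitation term updates both sides (bidirectional distillation)."""
    ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()
    feats, total = [], 0.0
    for student, (src, tgt) in zip(students, subtask_batches):
        feat, logits = student(src)                    # student's own view
        feats.append(feat)
        total = total + ce(logits.reshape(-1, logits.size(-1)), tgt.reshape(-1))
    for i, feat in enumerate(feats):
        # Multi-view representation aggregated from all peers except student i.
        peer_view = torch.stack([f for j, f in enumerate(feats) if j != i]).mean(0)
        total = total + alpha * mse(feat, peer_view)
    optimizer.zero_grad()
    total.backward()
    optimizer.step()

# Usage with three students, each holding a batch from its allocated subtask.
students = [StudentEncoder() for _ in range(3)]
optimizer = torch.optim.Adam([p for s in students for p in s.parameters()], lr=1e-3)
batches = [(torch.randint(0, 1000, (4, 10)), torch.randint(0, 1000, (4, 10)))
           for _ in students]
peer_distillation_step(students, optimizer, batches)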
URI: http://hdl.handle.net/20.500.11897/623191
ISBN: 978-1-57735-866-4
ISSN: 2159-5399
Indexed: EI; CPCI-S (ISTP)
Appears in Collections: School of Electronics Engineering and Computer Science
MOE Key Laboratory of Computational Linguistics
Other Research Institutes

Files in This Work
There are no files associated with this item.

License: See PKU IR operational policies.