Title: Autoencoder as Assistant Supervisor: Improving Text Representation for Chinese Social Media Text Summarization
Authors: Ma, Shuming
Sun, Xu
Lin, Junyang
Wang, Houfeng
Affiliation: Peking Univ, Sch EECS, Key Lab Computat Linguist, MOE, Beijing, Peoples R China.
Peking Univ, Beijing Inst Big Data Res, Deep Learning Lab, Beijing, Peoples R China.
Peking Univ, Sch Foreign Languages, Beijing, Peoples R China.
Issue Date: 2018
Publisher: PROCEEDINGS OF THE 56TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 2
Citation: PROCEEDINGS OF THE 56TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 2. 2018, 725-731.
Abstract: Most current abstractive text summarization models are based on the sequence-to-sequence model (Seq2Seq). The source content of social media is long and noisy, so it is difficult for Seq2Seq to learn an accurate semantic representation. Compared with the source content, the annotated summary is short and well written; moreover, it shares the same meaning as the source content. In this work, we supervise the learning of the representation of the source content with that of the summary. In implementation, we regard a summary autoencoder as an assistant supervisor of Seq2Seq. Following previous work, we evaluate our model on a popular Chinese social media dataset. Experimental results show that our model achieves state-of-the-art performance on the benchmark dataset.
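The abstract describes supervising the source-side Seq2Seq representation with the representation produced by a summary autoencoder. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's actual implementation: both encoders are stood in for by mean-pooled word embeddings, the Seq2Seq cross-entropy term is replaced by a placeholder constant, and the weighting coefficient `lambda_` is an assumed hyperparameter.

```python
import numpy as np

def encode(embeddings, ids):
    # Mean-pool word embeddings as a toy stand-in for an encoder's
    # final hidden state (the paper uses recurrent encoders).
    return embeddings[ids].mean(axis=0)

def supervision_loss(src_repr, sum_repr):
    # Penalize the distance between the source representation and the
    # summary-autoencoder representation, since source and summary are
    # assumed to share the same meaning.
    return float(np.sum((src_repr - sum_repr) ** 2))

rng = np.random.default_rng(0)
emb = rng.normal(size=(10, 4))   # toy vocabulary of 10 words, embedding dim 4

src_ids = [1, 2, 3, 4, 5, 6]     # long, noisy source content
sum_ids = [2, 3, 4]              # short summary with the same meaning

loss_sup = supervision_loss(encode(emb, src_ids), encode(emb, sum_ids))

# Hypothetical joint objective: Seq2Seq loss plus weighted supervision term.
seq2seq_loss = 1.0               # placeholder for the cross-entropy term
lambda_ = 0.5                    # assumed weighting hyperparameter
total_loss = seq2seq_loss + lambda_ * loss_sup
```

In training, minimizing the supervision term pulls the source encoder's representation toward the cleaner representation learned by the summary autoencoder.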
URI: http://hdl.handle.net/20.500.11897/575182
Indexed: CPCI-S (ISTP)
Appears in Collections: School of Electronics Engineering and Computer Science
MOE Key Laboratory of Computational Linguistics
School of Foreign Languages

Files in This Work
There are no files associated with this item.

License: See PKU IR operational policies.