Title: Key Fact as Pivot: A Two-Stage Model for Low-Resource Table-to-Text Generation
Authors: Ma, Shuming
Yang, Pengcheng
Liu, Tianyu
Li, Peng
Zhou, Jie
Sun, Xu
Affiliation: Peking Univ, Sch EECS, MOE Key Lab Computat Linguist, Beijing, Peoples R China
Peking Univ, Beijing Inst Big Data Res, Deep Learning Lab, Beijing, Peoples R China
Tencent Inc, WeChat AI, Pattern Recognit Ctr, Shanghai, Peoples R China
Issue Date: 2019
Publisher: 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019)
Abstract: Table-to-text generation aims to translate structured data into unstructured text. Most existing methods adopt the encoder-decoder framework to learn the transformation, which requires large-scale training samples. However, the lack of large parallel corpora is a major practical problem in many domains. In this work, we consider the scenario of low-resource table-to-text generation, where only limited parallel data is available. We propose a novel model that separates generation into two stages: key fact prediction and surface realization. It first predicts the key facts from the table, and then generates the text from those key facts. Training the key fact predictor requires far less annotated data, while the surface realizer can be trained on a pseudo-parallel corpus. We evaluate our model on a biography generation dataset. Our model achieves a BLEU score of 27.34 with only 1,000 parallel examples, while the baseline model obtains only 9.71.
URI: http://hdl.handle.net/20.500.11897/552789
Indexed: ISSHP; CPCI-S (ISTP)
Appears in Collections: School of Electronics Engineering and Computer Science
MOE Key Laboratory of Computational Linguistics

Files in This Work
There are no files associated with this item.

License: See PKU IR operational policies.