Title: Cross-Modal Commentator: Automatic Machine Commenting Based on Cross-Modal Information
Authors: Yang, Pengcheng
Zhang, Zhihan
Luo, Fuli
Li, Lei
Huang, Chengyang
Sun, Xu
Affiliation: Peking Univ, Beijing Inst Big Data Res, Deep Learning Lab, Beijing, Peoples R China
Peking Univ, Sch EECS, MOE Key Lab Computat Linguist, Beijing, Peoples R China
Beijing Univ Posts & Telecommun, Beijing, Peoples R China
Issue Date: 2019
Publisher: 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019)
Abstract: Automatic commenting of online articles can provide additional opinions and facts to the reader, which improves user experience and engagement on social media platforms. Previous work focuses on automatic commenting based solely on textual content. However, in real scenarios, online articles usually contain content in multiple modalities. For instance, graphic news contains many images in addition to text. Non-textual content is also vital: it is not only more attractive to the reader but may also provide critical information. To remedy this, we propose a new task, cross-modal automatic commenting (CMAC), which aims to generate comments by integrating content from multiple modalities. We construct a large-scale dataset for this task and explore several representative methods. Going a step further, an effective co-attention model is presented to capture the dependency between textual and visual information. Evaluation results show that our proposed model can achieve better performance than competitive baselines.
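The abstract mentions a co-attention model over textual and visual features but does not specify its design here. The following is a minimal, hypothetical sketch of a generic text-image co-attention layer in PyTorch, intended only to illustrate the kind of cross-modal dependency the abstract describes; the class name, bilinear affinity, and tensor shapes are assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoAttention(nn.Module):
    """Illustrative text-image co-attention layer (not the paper's exact model)."""

    def __init__(self, dim):
        super().__init__()
        # Bilinear affinity between word features and image-region features.
        self.affinity = nn.Linear(dim, dim, bias=False)

    def forward(self, text, image):
        # text:  (batch, n_words,   dim) word representations
        # image: (batch, n_regions, dim) image-region representations
        # Affinity score for every (word, region) pair.
        affinity = torch.bmm(self.affinity(text), image.transpose(1, 2))  # (batch, n_words, n_regions)
        # Attend to image regions for each word, and to words for each region.
        attn_over_regions = F.softmax(affinity, dim=2)
        attn_over_words = F.softmax(affinity, dim=1)
        image_context = torch.bmm(attn_over_regions, image)                 # (batch, n_words, dim)
        text_context = torch.bmm(attn_over_words.transpose(1, 2), text)     # (batch, n_regions, dim)
        return image_context, text_context

# Example usage with random features (shapes are assumptions).
layer = CoAttention(dim=512)
img_ctx, txt_ctx = layer(torch.randn(2, 30, 512), torch.randn(2, 49, 512))
```

The attended contexts can then be fused with the original features and fed to a comment decoder; that fusion and decoding step is left out of this sketch.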
URI: http://hdl.handle.net/20.500.11897/552792
Indexed: ISSHP
CPCI-S (ISTP)
Appears in Collections: School of Electronics Engineering and Computer Science
MOE Key Laboratory of Computational Linguistics

License: See PKU IR operational policies.