Title: DiMBERT: Learning Vision-Language Grounded Representations with Disentangled Multimodal-Attention
Authors: Liu, Fenglin
Wu, Xian
Ge, Shen
Ren, Xuancheng
Fan, Wei
Sun, Xu
Zou, Yuexian
Affiliation: Peking Univ, Sch ECE, ADSPLAB, 2199 Lishui Rd, Shenzhen 100871, Guangdong, Peoples R China
Tencent, Yinke Bldg,38 Haidian St, Beijing 100080, Peoples R China
Peking Univ, Sch EECS, MOE Key Lab Computat Linguist, 5 YiHeYuan Rd, Beijing 100871, Peoples R China
Peking Univ, Sch EECS, 5 YiHeYuan Rd, Beijing 100871, Peoples R China
Peking Univ, Ctr Data Sci, 5 YiHeYuan Rd, Beijing 100871, Peoples R China
Peng Cheng Lab, Shenzhen, Peoples R China
Peking Univ, Peng Cheng Lab, Sch ECE, ADSPLAB, 2199 Lishui Rd, Shenzhen 100871, Guangdong, Peoples R China
Issue Date: Jun-2021
Abstract: Vision-and-language (V-L) tasks require a system to understand both visual content and natural language, so learning fine-grained joint representations of vision and language (a.k.a. V-L representations) is of paramount importance. Recently, various pre-trained V-L models have been proposed to learn V-L representations and have achieved improved results on many tasks. However, mainstream models process both vision and language inputs with the same set of attention matrices. As a result, the generated V-L representations are entangled in one common latent space. To tackle this problem, we propose DiMBERT (short for Disentangled Multimodal-Attention BERT), a novel framework that applies separate attention spaces to vision and language, so that the representations of the two modalities can be disentangled explicitly. To enhance the correlation between vision and language in the disentangled spaces, we introduce visual concepts into DiMBERT, which represent visual information in textual format. In this manner, visual concepts help to bridge the gap between the two modalities. We pre-train DiMBERT on a large number of image-sentence pairs with two tasks: bidirectional language modeling and sequence-to-sequence language modeling. After pre-training, DiMBERT is further fine-tuned for downstream tasks. Experiments show that DiMBERT sets new state-of-the-art performance on three tasks (over four datasets), including both generation tasks (image captioning and visual storytelling) and classification tasks (referring expressions). The proposed DiM (short for Disentangled Multimodal-Attention) module can be easily incorporated into existing pre-trained V-L models to boost their performance, yielding up to a 5% improvement on a representative task. Finally, we conduct a systematic analysis and demonstrate the effectiveness of DiM and the introduced visual concepts.
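The abstract's core idea, separate attention spaces per modality rather than one shared set of attention matrices, can be illustrated with a minimal sketch. This is not the paper's actual implementation; all function and parameter names (e.g. `Wq_v`, `Wq_l`) are assumptions chosen for illustration, and real models would use multi-head attention with learned, trained weights.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def disentangled_attention(vision, language, params):
    """Attend over the joint vision+language sequence, but project each
    modality with its own (disentangled) attention matrices.

    vision:   (n_v, d) array of visual-region features
    language: (n_l, d) array of token embeddings
    params:   dict of per-modality projection matrices (hypothetical names)
    """
    # Vision tokens use the vision-specific Q/K/V projections ...
    qv = vision @ params["Wq_v"]
    kv = vision @ params["Wk_v"]
    vv = vision @ params["Wv_v"]
    # ... while language tokens use a separate set of projections.
    ql = language @ params["Wq_l"]
    kl = language @ params["Wk_l"]
    vl = language @ params["Wv_l"]

    # Cross-modal interaction still happens: attention runs over the
    # concatenated sequence, only the projections are disentangled.
    Q = np.concatenate([qv, ql], axis=0)
    K = np.concatenate([kv, kl], axis=0)
    V = np.concatenate([vv, vl], axis=0)

    d = Q.shape[-1]
    attn = softmax(Q @ K.T / np.sqrt(d))  # (n_v+n_l, n_v+n_l)
    return attn @ V                        # (n_v+n_l, d)
```

In this sketch the two modalities still attend to each other (the scores span the full concatenated sequence), but each is embedded through its own projection space, which is the disentanglement the abstract describes.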
Appears in Collections: School of Information Engineering (信息工程学院)

Files in This Work
There are no files associated with this item.


License: See PKU IR operational policies.