Multi-Modality Global Fusion Attention Network for Visual Question Answering

Electronics, 2020, Vol 9 (11), pp. 1882
Author(s): Cheng Yang, Weijia Wu, Yuxing Wang, Hong Zhou

Visual question answering (VQA) requires a high-level understanding of both questions and images, along with visual reasoning, to predict the correct answer. It is therefore important to design an effective attention model that associates key regions in an image with key words in a question. To date, most attention-based approaches model only the relationships between individual regions in an image and individual words in a question. This is not sufficient for predicting the correct answer, as human beings reason with global information, not only local information. In this paper, we propose a novel multi-modality global fusion attention network (MGFAN) consisting of stacked global fusion attention (GFA) blocks, which capture information from a global perspective. Our method computes co-attention and self-attention simultaneously, rather than computing them separately. We validate the proposed method on the most commonly used benchmark, the VQA-v2 dataset. Experimental results show that it outperforms the previous state-of-the-art. Our best single model achieves 70.67% accuracy on the test-dev set of VQA-v2.
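To illustrate the idea of computing self-attention and co-attention in one pass, here is a minimal pure-Python sketch: attending over the concatenation of image-region and question-word features lets every element attend to both modalities at once, covering intra-modal (self) and inter-modal (co) relations jointly. This is a sketch of the general principle only, not the authors' GFA block; the function names and toy feature vectors are illustrative assumptions.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of feature vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        # each output is a weighted (convex) combination of the value vectors
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

def global_fusion_attention(image_feats, word_feats):
    """One attention pass over the concatenated sequence computes
    intra-modal (self) and inter-modal (co) attention jointly."""
    fused = image_feats + word_feats           # concatenate the two modalities
    attended = attention(fused, fused, fused)  # every element attends to all others
    n = len(image_feats)
    return attended[:n], attended[n:]          # split back into modalities

# toy example: two image regions, one question word, 2-d features
img = [[1.0, 0.0], [0.0, 1.0]]
txt = [[0.5, 0.5]]
img_out, txt_out = global_fusion_attention(img, txt)
```

In a real model the queries, keys, and values would first pass through learned linear projections; they are omitted here to keep the sketch dependency-free.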

2021
Author(s): Pufen Zhang, Hong Lan

Abstract In recent years, several visual question answering (VQA) methods that emphasize the simultaneous understanding of both image and question context have been proposed. Despite their effectiveness, these methods fail to explore a more comprehensive and generalized context-learning strategy. To address this issue, we propose a novel Multiple Context Learning Network (MCLN) that models multiple contexts for VQA. Three kinds of context are investigated: visual context, textual context, and a special visual-textual context that is ignored by previous methods. Three corresponding context learning modules are proposed. These modules endow image and text representations with context-aware information based on a uniform context learning strategy, and together they form a multiple context learning layer (MCL). MCL layers can be stacked in depth to describe high-level context information by associating intra-modal contexts with the inter-modal context. On the VQA v2.0 dataset, the proposed model achieves 71.05% and 71.48% accuracy on the test-dev and test-std sets, respectively, outperforming previous state-of-the-art methods. In addition, extensive ablation studies examine the effectiveness of the proposed method.
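The three kinds of context described above can be sketched as plain attention passes: intra-modal self-attention yields the visual and textual contexts, and cross-attention between the two modalities yields the visual-textual context, with layers stackable in depth. This is a minimal pure-Python sketch under those assumptions, not the authors' MCLN; function names and the fusion details are invented for the example.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(queries, keys, values):
    """Scaled dot-product attention over lists of feature vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d) for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

def mcl_layer(visual, textual):
    """One multiple-context-learning layer (illustrative):
    intra-modal self-attention for the visual and textual contexts,
    then cross-attention for the visual-textual context."""
    v_ctx = attend(visual, visual, visual)     # visual context
    t_ctx = attend(textual, textual, textual)  # textual context
    v_out = attend(v_ctx, t_ctx, t_ctx)        # visual-textual context (image side)
    t_out = attend(t_ctx, v_ctx, v_ctx)        # visual-textual context (text side)
    return v_out, t_out

def stacked_mcl(visual, textual, depth=2):
    # layers stack in depth: each layer's output feeds the next
    for _ in range(depth):
        visual, textual = mcl_layer(visual, textual)
    return visual, textual
```

A real implementation would add learned projections and residual connections per layer; the sketch keeps only the attention structure.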


2021, Vol 11 (7), pp. 3009
Author(s): Sungjin Park, Taesun Whang, Yeochan Yoon, Heuiseok Lim

Visual dialog is a challenging vision-language task in which a series of questions, visually grounded by a given image, are answered. To solve the visual dialog task, a high-level understanding of various multimodal inputs (e.g., question, dialog history, and image) is required. Specifically, an agent must (1) determine the semantic intent of the question and (2) align question-relevant textual and visual content across heterogeneous modality inputs. In this paper, we propose the Multi-View Attention Network (MVAN), which leverages multiple views of the heterogeneous inputs via attention mechanisms. MVAN effectively captures question-relevant information from the dialog history with two complementary modules (i.e., Topic Aggregation and Context Matching) and builds multimodal representations through sequential alignment processes (i.e., Modality Alignment). Experimental results on the VisDial v1.0 dataset show the effectiveness of our proposed model, which outperforms previous state-of-the-art methods in both single-model and ensemble settings.
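The sequential process described above can be sketched in two attention steps: first summarize the dialog history from the question's point of view (a stand-in for topic aggregation), then align the fused textual representation with image regions (a stand-in for modality alignment). A minimal pure-Python sketch; it is not the authors' MVAN, and the function names, the element-wise fusion, and the toy vectors are illustrative assumptions.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(queries, keys, values):
    """Scaled dot-product attention over lists of feature vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d) for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

def topic_summary(question, history):
    """Summarize dialog history from the question's point of view:
    the question vector attends over past-round vectors."""
    return attend([question], history, history)[0]

def align_with_image(question, history, regions):
    """Fuse the question with its history summary, then attend
    the fused textual view over image-region features."""
    topic = topic_summary(question, history)
    fused = [q + t for q, t in zip(question, topic)]  # element-wise sum as a stand-in fusion
    return attend([fused], regions, regions)[0]

# toy example: one question, two history rounds, two image regions (2-d features)
question = [1.0, 0.0]
history = [[0.5, 0.5], [0.0, 1.0]]
regions = [[2.0, 0.0], [0.0, 2.0]]
grounded = align_with_image(question, history, regions)
```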


IEEE Access, 2019, Vol 7, pp. 40771-40781
Author(s): Chao Yang, Mengqi Jiang, Bin Jiang, Weixin Zhou, Keqin Li

2020, Vol 1624, pp. 022022
Author(s): Jianing Zhang, Zhaochang Wu, Huajie Zhang, Yunfang Chen

2018, Vol 78 (3), pp. 3843-3858
Author(s): Liang Peng, Yang Yang, Yi Bin, Ning Xie, Fumin Shen, ...
