Co-Attention Network With Question Type for Visual Question Answering

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 40771-40781 ◽  
Author(s):  
Chao Yang ◽  
Mengqi Jiang ◽  
Bin Jiang ◽  
Weixin Zhou ◽  
Keqin Li

Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1882
Author(s):  
Cheng Yang ◽  
Weijia Wu ◽  
Yuxing Wang ◽  
Hong Zhou

Visual question answering (VQA) requires a high-level understanding of both the question and the image, along with visual reasoning to predict the correct answer. It is therefore important to design an effective attention model that associates key regions in the image with key words in the question. To date, most attention-based approaches only model the relationships between individual image regions and individual question words. Modeling such local relationships alone is not sufficient to predict the correct answer, because humans reason over global information, not only local information. In this paper, we propose a novel multi-modality global fusion attention network (MGFAN) consisting of stacked global fusion attention (GFA) blocks, which capture information from a global perspective. The proposed method computes co-attention and self-attention jointly, rather than computing them separately. We validate the method on the widely used VQA-v2 benchmark. Experimental results show that it outperforms the previous state of the art, and our best single model achieves 70.67% accuracy on the VQA-v2 test-dev set.
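The key architectural idea is that each GFA block attends over question words and image regions jointly, so intra-modality relations (self-attention) and inter-modality relations (co-attention) are computed in a single pass. Below is a minimal PyTorch sketch of such a block; the class name GFABlock, the concatenation-based fusion, and all dimensions are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a global-fusion-attention-style block (assumed design):
# word features and region features are concatenated into one sequence, so a
# single attention pass covers both within-modality and cross-modality pairs.
import torch
import torch.nn as nn


class GFABlock(nn.Module):
    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, q_feats: torch.Tensor, v_feats: torch.Tensor):
        # Fuse question tokens and image regions into one sequence so that
        # self-attention and co-attention are computed at the same time.
        fused = torch.cat([q_feats, v_feats], dim=1)
        attn_out, _ = self.attn(fused, fused, fused)
        fused = self.norm1(fused + attn_out)
        fused = self.norm2(fused + self.ffn(fused))
        # Split back into the two modalities for the next stacked block.
        n_words = q_feats.size(1)
        return fused[:, :n_words], fused[:, n_words:]


# Toy usage: 14 question tokens and 36 image regions, both projected to 512-d.
q = torch.randn(2, 14, 512)
v = torch.randn(2, 36, 512)
q_out, v_out = GFABlock()(q, v)
print(q_out.shape, v_out.shape)  # torch.Size([2, 14, 512]) torch.Size([2, 36, 512])
```

Stacking several such blocks lets later layers refine both modalities with context that is already globally fused, which is the motivation for the "global perspective" claim above.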


Author(s):  
Yangyang Guo ◽  
Liqiang Nie ◽  
Zhiyong Cheng ◽  
Feng Ji ◽  
Ji Zhang ◽  
...  

A number of studies have pointed out that current Visual Question Answering (VQA) models are severely affected by the language prior problem, i.e., they blindly make predictions based on language shortcuts. Some efforts have been devoted to overcoming this issue with carefully designed models. However, no prior work addresses it from the perspective of answer feature space learning, despite the fact that existing VQA methods all cast VQA as a classification task. Motivated by this, we attempt to tackle the language prior problem from the viewpoint of feature space learning. An adapted margin cosine loss is designed to properly discriminate between the frequent and the sparse answer feature spaces under each question type. In this way, the limited patterns within the language modality are largely suppressed, alleviating the language priors. We apply this loss function to several baseline models and evaluate its effectiveness on two VQA-CP benchmarks. Experimental results demonstrate that the adapted margin cosine loss improves the baseline models by an absolute 15% on average, strongly verifying the potential of tackling the language prior problem in VQA from the angle of answer feature space learning.
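For intuition, a margin cosine loss of this family classifies over cosine similarities between the fused question-image feature and learnable per-answer embeddings, subtracting a margin from the ground-truth answer's score before the softmax. The sketch below is a hedged illustration rather than the paper's implementation: the per-answer margin vector (which the paper adapts, e.g. from answer frequency under each question type), the scale factor, and all names are assumptions.

```python
# Sketch of a margin cosine loss over answer embeddings (assumed formulation).
import torch
import torch.nn.functional as F


def margin_cosine_loss(features: torch.Tensor,
                       answer_weights: torch.Tensor,
                       targets: torch.Tensor,
                       margins: torch.Tensor,
                       scale: float = 16.0) -> torch.Tensor:
    """features: (B, D) fused question-image features.
    answer_weights: (C, D) learnable answer embeddings, one per answer class.
    targets: (B,) ground-truth answer indices.
    margins: (C,) per-answer margin, e.g. derived from answer frequency.
    """
    # Cosine similarity between each sample and each answer embedding.
    cos = F.normalize(features, dim=1) @ F.normalize(answer_weights, dim=1).t()
    # Subtract the margin only from the ground-truth answer's cosine score,
    # widening the decision boundary in the answer feature space.
    one_hot = F.one_hot(targets, num_classes=answer_weights.size(0)).float()
    logits = scale * (cos - one_hot * margins[targets].unsqueeze(1))
    return F.cross_entropy(logits, targets)


# Toy usage: 4 samples, 128-d features, 10 candidate answers, uniform margins.
feats = torch.randn(4, 128)
ans_emb = torch.randn(10, 128, requires_grad=True)
tgt = torch.tensor([0, 3, 3, 7])
m = torch.full((10,), 0.2)
print(margin_cosine_loss(feats, ans_emb, tgt, m))
```

Because the loss operates on normalized features and answer embeddings, frequent and rare answers can be pushed apart by angle rather than by magnitude, which is the sense in which the answer feature space itself is being shaped.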


2018 ◽  
Vol 78 (3) ◽  
pp. 3843-3858 ◽  
Author(s):  
Liang Peng ◽  
Yang Yang ◽  
Yi Bin ◽  
Ning Xie ◽  
Fumin Shen ◽  
...  
