NExT-OOD: Overcoming Dual Multiple-Choice VQA Biases

In recent years, multiple-choice Visual Question Answering (VQA) has attracted considerable attention and achieved remarkable progress. However, most existing multiple-choice VQA models are heavily driven by statistical correlations in the datasets, and therefore fall short of genuine multimodal understanding and generalize poorly. In this paper, we identify two kinds of spurious correlations, i.e., a Vision-Answer bias (VA bias) and a Question-Answer bias (QA bias). To study these biases systematically, we construct a new video question answering (videoQA) benchmark, NExT-OOD, in an out-of-distribution (OOD) setting, and propose a graph-based cross-sample method for bias reduction. Specifically, NExT-OOD is designed to quantify models' generalizability and comprehensively measure their reasoning ability. It contains three sub-datasets, NExT-OOD-VA, NExT-OOD-QA, and NExT-OOD-VQA, which target the VA bias, the QA bias, and the combined VA and QA biases, respectively. We evaluate several existing multiple-choice VQA models on NExT-OOD and show that their performance degrades significantly compared with the results obtained on the original multiple-choice VQA dataset. In addition, to mitigate the VA bias and the QA bias, our approach explicitly exploits cross-sample information through a contrastive graph matching loss, which provides debiasing guidance from the perspective of the whole dataset and encourages the model to focus on multimodal content instead of spurious statistical regularities. Extensive experimental results show that our method significantly outperforms other bias reduction strategies, demonstrating the effectiveness and generalizability of the proposed approach.

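The abstract mentions a contrastive graph matching loss that contrasts graph representations across samples. The paper's exact formulation is not given here; the following is a minimal, hypothetical PyTorch sketch of a cross-sample contrastive objective over graph-level embeddings. All names and shapes (contrastive_graph_matching_loss, anchor, positives, negatives) are illustrative assumptions, not the authors' implementation.

# A minimal, hypothetical sketch (assuming PyTorch) of a cross-sample contrastive
# objective over graph-level embeddings, in the spirit of the contrastive graph
# matching loss mentioned above. Names and shapes are illustrative assumptions,
# not the authors' implementation.
import torch
import torch.nn.functional as F

def contrastive_graph_matching_loss(anchor, positives, negatives, temperature=0.1):
    # anchor:    (d,)   graph-level embedding of the current sample
    # positives: (p, d) embeddings of matching cross-sample graphs
    # negatives: (n, d) embeddings of non-matching cross-sample graphs
    anchor = F.normalize(anchor, dim=-1)
    positives = F.normalize(positives, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    pos_logits = positives @ anchor / temperature    # (p,)
    neg_logits = negatives @ anchor / temperature    # (n,)

    # InfoNCE-style objective: each positive is contrasted against all negatives.
    logits = torch.cat(
        [pos_logits.unsqueeze(1),
         neg_logits.unsqueeze(0).expand(pos_logits.size(0), -1)], dim=1)  # (p, 1 + n)
    targets = torch.zeros(pos_logits.size(0), dtype=torch.long)  # index 0 = positive
    return F.cross_entropy(logits, targets)

# Illustrative call with random embeddings:
# loss = contrastive_graph_matching_loss(torch.randn(256), torch.randn(4, 256), torch.randn(32, 256))
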
Media type:

Electronic article

Year of publication:

2024

Published:

2024

Contained in:

IEEE Transactions on Pattern Analysis and Machine Intelligence - 46 (2024), no. 4, 01 March, pages 1913-1931

Language:

English

Contributors:

Zhang, Xi [Author]
Zhang, Feifei [Author]
Xu, Changsheng [Author]

Links:

Full text

Topics:

Journal Article

Notes:

Date Revised 07.03.2024

published: Print-Electronic

Citation Status PubMed-not-MEDLINE

DOI:

10.1109/TPAMI.2023.3269429

Funding institution / project title:

PPN (catalog ID):

NLM35597424X