
CVPR 2020: Counterfactual Samples Synthesizing for Robust VQA

This repo contains the code for our paper "Counterfactual Samples Synthesizing for Robust Visual Question Answering". The code is modified from here — many thanks!

Prerequisites

Make sure you are on a machine with an NVIDIA GPU and Python 2.7, with about 100 GB of free disk space. The following packages are required:
h5py==2.10.0
pytorch==1.1.0
Click==7.0
numpy==1.16.5
tqdm==4.35.0
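The pinned versions above can be installed in one step. This is a sketch assuming a pip-based Python 2.7 environment; note that on PyPI the PyTorch package is named torch, not pytorch:

```shell
# Install the pinned dependencies listed above (Python 2.7 environment assumed).
pip install h5py==2.10.0 torch==1.1.0 Click==7.0 numpy==1.16.5 tqdm==4.35.0
```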

Data Setup

You can use

bash tools/download.sh

to download part of the data. The rest of the data and the trained model can be obtained from BaiduYun (passwd: 3jot) or MEGADrive. Unzip feature1.zip and feature2.zip and merge their contents into data/rcnn_feature/. Then use

bash tools/process.sh

to process the data.

Training

Run

CUDA_VISIBLE_DEVICES=0 python main.py --dataset cpv2 --mode q_v_debias --debias learned_mixin --topq 1 --topv -1 --qvp 5 --output [] --seed 0

to train a model. The [] after --output is a placeholder; replace it with a name for your output directory.
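The flags in the training command can be sketched as an argparse parser. This is a hypothetical reconstruction inferred from the command line above, not the actual parser in main.py; the flag semantics in the comments are assumptions based on the paper's Q-CSS/V-CSS sample synthesis:

```python
import argparse

def build_parser():
    # Hypothetical sketch of the CLI implied by the training command above;
    # the real main.py may define these flags with different defaults/choices.
    parser = argparse.ArgumentParser(description="CSS-VQA training (sketch)")
    parser.add_argument("--dataset", choices=["cpv2", "v2"], default="cpv2")
    parser.add_argument("--mode", default="q_v_debias",
                        help="which counterfactual branches to use (assumed)")
    parser.add_argument("--debias", default="learned_mixin",
                        help="debiasing method, e.g. learned_mixin")
    parser.add_argument("--topq", type=int, default=1,
                        help="presumably: top-K critical question words to mask")
    parser.add_argument("--topv", type=int, default=-1,
                        help="presumably: top-K critical objects; -1 = dynamic")
    parser.add_argument("--qvp", type=int, default=5,
                        help="presumably: ratio between Q-CSS and V-CSS samples")
    parser.add_argument("--output", default="", help="output directory name")
    parser.add_argument("--seed", type=int, default=0)
    return parser

# Parse the exact argument string used in the README's training command.
args = build_parser().parse_args(
    "--dataset cpv2 --mode q_v_debias --debias learned_mixin "
    "--topq 1 --topv -1 --qvp 5 --output css --seed 0".split())
```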

Testing

Run

CUDA_VISIBLE_DEVICES=0 python eval.py --dataset cpv2 --debias learned_mixin --model_state []

to evaluate a model. Replace the [] after --model_state with the path to your trained model checkpoint.
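If you trained with several seeds, the evaluation command can be wrapped in a small loop. This is a dry-run sketch that only prints the commands (remove the leading echo to execute them); the css_seed${seed} checkpoint names are assumptions, so adjust them to wherever main.py actually saved your model_state files:

```shell
# Dry-run: print one evaluation command per seed; drop "echo" to run for real.
for seed in 0 1 2; do
    echo CUDA_VISIBLE_DEVICES=0 python eval.py --dataset cpv2 \
        --debias learned_mixin --model_state "css_seed${seed}"
done
```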

Citation

If you find this code useful, please cite the following paper:

@inproceedings{chen2020counterfactual,
  title={Counterfactual Samples Synthesizing for Robust Visual Question Answering},
  author={Chen, Long and Yan, Xin and Xiao, Jun and Zhang, Hanwang and Pu, Shiliang and Zhuang, Yueting},
  booktitle={CVPR},
  year={2020}
}
