
Relation-Associated Instructions & Hallucination Benchmark

Citation Author(s):
Zhiyang Chen, Yousong Zhu, Yufei Zhan, Zhaowen Li, Chaoyang Zhao, Jinqiao Wang, Ming Tang
Submitted by:
Zhiyang Chen
Last updated:
Mon, 07/08/2024 - 01:14
DOI:
10.21227/33jh-2m65
License:

Abstract 

Large vision-language models (LVLMs) suffer from hallucination, occasionally generating responses that contradict the image content. The key problem lies in their weak ability to comprehend detailed content in multi-modal contexts, which can be mainly attributed to their training data: existing vision instruction datasets focus primarily on global descriptions that are highly relevant to the image, with few samples covering image details. Therefore, we construct a fine-grained vision instruction dataset, RAI-30k, by generating image-text pairs from the detailed relationship annotations in the panoptic scene graph dataset (PSG). These conversations pay more attention to detailed facts in the image, encouraging the model to answer questions based on the multi-modal context. Moreover, to provide a deeper evaluation of hallucination in LVLMs, we propose a new benchmark, RAH-Bench. It divides vision hallucination into three types that contradict the image with wrong categories, attributes, or relations, and introduces the False Positive Rate as a detailed sub-metric for each type. We hope the provided dataset and benchmark will benefit future research on large vision-language models.
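The release itself does not fix an evaluation script in this description. As a rough, non-authoritative illustration of what a per-type False Positive Rate sub-metric could look like, the sketch below assumes each negative (image-contradicting) sample carries a hallucination type field ("category", "attribute", or "relation") and that a model response counts as a false positive when it affirms such a statement. The field name and the yes/no parsing are assumptions for illustration only.

```python
from collections import defaultdict

def false_positive_rate(negative_samples, responses):
    """Sketch of a per-type FPR: the fraction of negative (image-contradicting)
    questions of each hallucination type that the model wrongly affirms.
    `negative_samples` and `responses` are parallel lists; field names are assumed."""
    fp = defaultdict(int)      # false positives per hallucination type
    total = defaultdict(int)   # negative samples per hallucination type
    for sample, response in zip(negative_samples, responses):
        h_type = sample["type"]  # assumed field: "category", "attribute", or "relation"
        total[h_type] += 1
        if response.strip().lower().startswith("yes"):  # model affirms a false statement
            fp[h_type] += 1
    return {t: fp[t] / total[t] for t in total if total[t] > 0}
```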

Instructions: 

We provide JSON Lines (jsonl) files for the dataset and the benchmark.
Each line is one sample containing the image identifier, the question, and the answer.
The images come from the COCO 2017 dataset.
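As a minimal sketch of how such a file could be loaded, assuming the per-sample keys are named image, question, and answer, and that COCO 2017 images are stored locally; the exact key names and file names are not specified above, so treat them as assumptions.

```python
import json
from pathlib import Path

DATA_FILE = Path("rai_30k.jsonl")          # assumed file name
COCO_ROOT = Path("coco2017/train2017")     # assumed local COCO 2017 image directory

samples = []
with DATA_FILE.open("r", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)  # one JSON object per line
        samples.append({
            "image_path": COCO_ROOT / record["image"],  # key name assumed
            "question": record["question"],             # key name assumed
            "answer": record["answer"],                  # key name assumed
        })

print(f"Loaded {len(samples)} samples; first question: {samples[0]['question']}")
```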

Comments

Enjoy!

Submitted by Zhiyang Chen on Mon, 07/08/2024 - 01:15