VIGC: Visual Instruction Generation and Correction

Authors

  • Bin Wang, Shanghai Artificial Intelligence Laboratory
  • Fan Wu, Shanghai Artificial Intelligence Laboratory
  • Xiao Han, Shanghai Artificial Intelligence Laboratory
  • Jiahui Peng, Shanghai Artificial Intelligence Laboratory
  • Huaping Zhong, SenseTime Research
  • Pan Zhang, Shanghai Artificial Intelligence Laboratory
  • Xiaoyi Dong, Shanghai Artificial Intelligence Laboratory; The Chinese University of Hong Kong
  • Weijia Li, Sun Yat-sen University
  • Wei Li, Shanghai Artificial Intelligence Laboratory
  • Jiaqi Wang, Shanghai Artificial Intelligence Laboratory
  • Conghui He, Shanghai Artificial Intelligence Laboratory

DOI:

https://doi.org/10.1609/aaai.v38i6.28338

Keywords:

CV: Language and Vision

Abstract

The integration of visual encoders and large language models (LLMs) has driven recent progress in multimodal large language models (MLLMs). However, the scarcity of high-quality instruction-tuning data for vision-language tasks remains a challenge. The current leading paradigm, exemplified by LLaVA, relies on language-only GPT-4 to generate data; it requires pre-annotated image captions and detection bounding boxes and therefore struggles to capture fine-grained image details. A practical alternative is to use the available multimodal large language models themselves to generate instruction data for vision-language tasks. However, currently accessible MLLMs are not as capable as their LLM counterparts: they tend to produce inadequate responses and fabricate information. To address this issue, this paper proposes the Visual Instruction Generation and Correction (VIGC) framework, which enables multimodal large language models to generate instruction-tuning data and progressively enhance its quality on the fly. Specifically, Visual Instruction Generation (VIG) guides the vision-language model to generate diverse instruction-tuning data. To ensure generation quality, Visual Instruction Correction (VIC) adopts an iterative update mechanism to correct inaccuracies in the data produced by VIG, effectively reducing the risk of hallucination. Leveraging the diverse, high-quality data generated by VIGC, we fine-tune mainstream models and validate data quality on various evaluations. Experimental results demonstrate that VIGC not only compensates for the shortcomings of language-only data generation methods but also effectively enhances benchmark performance. The models, datasets, and code are available at https://opendatalab.github.io/VIGC
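To make the generate-then-correct idea in the abstract concrete, here is a minimal Python sketch of the overall loop: a VIG step drafts a question-answer pair for an image, and a VIC step iteratively revises the answer. The function names (vig_generate, vic_correct), the data fields, and the number of correction rounds are hypothetical stand-ins for an MLLM tuned on the VIG/VIC objectives; this is not the authors' released implementation or API.

```python
# Conceptual sketch of the VIG -> VIC loop described in the abstract.
# The two model calls are stubbed out; in practice each would query an
# MLLM fine-tuned for generation (VIG) or correction (VIC).

from dataclasses import dataclass


@dataclass
class InstructionSample:
    image_id: str
    question: str
    answer: str


def vig_generate(image_id: str) -> InstructionSample:
    """Hypothetical VIG step: draft a question-answer pair for the image."""
    return InstructionSample(image_id, "What is happening in the image?", "draft answer")


def vic_correct(sample: InstructionSample) -> str:
    """Hypothetical VIC step: re-examine the image and revise the answer,
    removing details the generation step may have hallucinated."""
    return sample.answer + " (revised)"


def generate_instruction_data(image_ids, correction_rounds: int = 3):
    """Generate instruction-tuning data, then iteratively correct each answer."""
    dataset = []
    for image_id in image_ids:
        sample = vig_generate(image_id)
        for _ in range(correction_rounds):  # iterative update mechanism
            sample.answer = vic_correct(sample)
        dataset.append(sample)
    return dataset


if __name__ == "__main__":
    for s in generate_instruction_data(["coco_000001", "coco_000002"]):
        print(s)
```

The resulting samples would then serve as instruction-tuning data for fine-tuning mainstream MLLMs, as described in the abstract.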

Published

2024-03-24

How to Cite

Wang, B., Wu, F., Han, X., Peng, J., Zhong, H., Zhang, P., Dong, X., Li, W., Li, W., Wang, J., & He, C. (2024). VIGC: Visual Instruction Generation and Correction. Proceedings of the AAAI Conference on Artificial Intelligence, 38(6), 5309-5317. https://doi.org/10.1609/aaai.v38i6.28338

Issue

Vol. 38 No. 6 (2024)
Section

AAAI Technical Track on Computer Vision V