👫 TPO

Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment

Hugging Face Model

💡 Introduction

Task Preference Optimization (TPO) is a method for improving how multimodal large language models (MLLMs) handle visual tasks. Although current MLLMs are applied to a wide range of vision problems, they still struggle with precise, fine-grained visual understanding. TPO addresses this by integrating differentiable task preferences from fine-grained visual tasks, introducing learnable task tokens that bridge task-specific heads and the MLLM. The result is stronger overall multimodal capability and better task-specific performance, with significant improvements demonstrated across multiple benchmarks and tasks.

Figure 1: TPO uses differentiable task preferences from dense visual supervision via task-specific heads to enhance MLLMs in fine-grained understanding.

  • Enhanced Multimodal Performance: Achieves an average 14.6% improvement in multimodal performance compared to baseline models on various image and video tasks, and demonstrates scalability across different MLLM architectures such as VideoChat and LLaVA.
  • Robust Zero-Shot Capabilities: Performs comparably to state-of-the-art supervised models in zero-shot scenarios across various vision tasks.
  • Synergistic Training: Multi-task co-training within TPO leads to mutual benefits, enhancing individual task performance beyond single-task training.

Figure 2: Overall pipeline of TPO. The architecture of Task Preference Optimization (TPO) consists of four main components: (1) a vision encoder, (2) a connector, (3) a large language model, and (4) a set of visual task heads. Differently colored flame symbols indicate which components are unfrozen at each stage of training.
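The pipeline can be summarized with a minimal, schematic sketch. The class and argument names, the task-token shape, and the way the task-token hidden state is routed into a head below are assumptions for illustration only (the LLM is assumed to expose a Hugging Face-style `inputs_embeds` interface); they are not the code in this repository.

```python
# Schematic sketch of the TPO pipeline (illustrative only; names, shapes,
# and routing are assumptions, not this repository's implementation).
import torch
import torch.nn as nn


class TPOSketch(nn.Module):
    """Illustrative wiring of the four TPO components."""

    def __init__(self, vision_encoder, connector, llm, task_heads, hidden_dim=4096):
        super().__init__()
        self.vision_encoder = vision_encoder          # (1) vision encoder
        self.connector = connector                    # (2) projects visual features into the LLM space
        self.llm = llm                                # (3) large language model (assumed to accept inputs_embeds)
        self.task_heads = nn.ModuleDict(task_heads)   # (4) visual task heads, e.g. {"grounding": ..., "tracking": ...}
        # Learnable task tokens bridging the LLM and the task-specific heads
        # (one embedding per task; the initialization here is an arbitrary choice).
        self.task_tokens = nn.ParameterDict({
            name: nn.Parameter(torch.randn(1, 1, hidden_dim) * 0.02)
            for name in task_heads
        })

    def forward(self, pixel_values, text_embeds, task_name=None):
        # Encode the image/video and project the features into the LLM input space.
        visual_embeds = self.connector(self.vision_encoder(pixel_values))
        inputs_embeds = torch.cat([visual_embeds, text_embeds], dim=1)
        if task_name is not None:
            # Append the task token so the LLM emits a hidden state for the head to consume.
            token = self.task_tokens[task_name].expand(inputs_embeds.size(0), -1, -1)
            inputs_embeds = torch.cat([inputs_embeds, token], dim=1)
        hidden = self.llm(inputs_embeds=inputs_embeds).last_hidden_state
        if task_name is not None:
            # The hidden state at the task-token position feeds the task-specific head, keeping
            # the path differentiable so dense task supervision flows back into the MLLM.
            return self.task_heads[task_name](hidden[:, -1])
        return hidden
```

In this reading, the flame-marked components in Figure 2 correspond to the parameters left unfrozen at a given training stage, while everything else stays fixed.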

🏃 Installation

  1. Clone the repository:
     git clone https://github.com/OpenGVLab/TPO.git
  2. Navigate to the project directory:
     cd TPO
  3. Install the required dependencies:
     pip install -r requirements.txt
  4. Try the demo:
     python app.py

🤖 Model Zoo

MLLM          | Link        | MVBench
VideoChat-TPO | huggingface | 66.8
LLaVA-OV-TPO  | TBD         | 64.8
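To try the released weights, the VideoChat-TPO checkpoint can be fetched from the Hugging Face Hub. A minimal sketch, assuming the repo id is OpenGVLab/VideoChat-TPO; check the Hugging Face link in the table above for the authoritative id and loading instructions.

```python
# Minimal download sketch; the repo id below is an assumption -- use the
# Hugging Face link in the Model Zoo table for the authoritative one.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="OpenGVLab/VideoChat-TPO")
print(f"Checkpoint files downloaded to: {local_dir}")
# The weights can then be loaded with this repository's own code,
# e.g. by pointing the demo (python app.py) at the downloaded directory.
```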

Citation

@article{yan2024tpo,
  title={Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment},
  author={Yan, Ziang and Li, Zhilin and He, Yinan and Wang, Chenting and Li, Kunchang and Li, Xinhao and Zeng, Xiangyu and Wang, Zilei and Wang, Yali and Qiao, Yu and Wang, Limin and Wang, Yi},
  journal={arXiv preprint arXiv:2412.19326},
  year={2024}
}

Acknowledgement

TPO is built with reference to the following projects: VideoChat, LLaVA-OV, UMT, InternVideo2, CG-DETR, and SAM2. Thanks for their work!
