An Empirical Study of CLIP for Text-Based Person Search

Authors

  • Min Cao, Soochow University
  • Yang Bai, Soochow University
  • Ziyin Zeng, Soochow University
  • Mang Ye, Wuhan University
  • Min Zhang, Harbin Institute of Technology, Shenzhen

DOI:

https://doi.org/10.1609/aaai.v38i1.27801

Keywords:

CMS: Simulating Human Behavior, CV: Image and Video Retrieval, CV: Multi-modal Vision

Abstract

Text-based Person Search (TBPS) aims to retrieve images of a person using natural language descriptions. Recently, Contrastive Language-Image Pre-training (CLIP), a universal large-scale cross-modal vision-language pre-training model, has performed remarkably well on various cross-modal downstream tasks thanks to its powerful cross-modal semantic learning capacity. TBPS, as a fine-grained cross-modal retrieval task, is likewise witnessing a rise in CLIP-based research. To explore the potential of vision-language pre-training models for downstream TBPS tasks, this paper makes the first attempt at a comprehensive empirical study of CLIP for TBPS and thereby contributes a straightforward, incremental, yet strong TBPS-CLIP baseline to the TBPS community. We revisit critical design considerations under CLIP, including data augmentation and the loss function. The model, with the aforementioned designs and practical training tricks, attains satisfactory performance without any sophisticated modules. We also conduct probing experiments on TBPS-CLIP in terms of model generalization and model compression, demonstrating the effectiveness of TBPS-CLIP from various aspects. This work is expected to provide empirical insights and highlight future CLIP-based TBPS research.
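For readers unfamiliar with how CLIP is applied to this kind of retrieval, the sketch below illustrates generic CLIP text-to-image matching, which CLIP-based TBPS methods build on: encode the text query and the gallery images, then rank the gallery by cosine similarity of the normalized embeddings. It is a minimal illustration using OpenAI's open-source "clip" package, not the paper's TBPS-CLIP implementation; the gallery paths and query caption are placeholders.

    # Minimal sketch of CLIP-style text-to-image person retrieval.
    # Generic illustration only, not the paper's TBPS-CLIP model;
    # gallery paths and the query caption are placeholders.
    import torch
    import clip
    from PIL import Image

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)

    gallery_paths = ["person_001.jpg", "person_002.jpg", "person_003.jpg"]
    query = "a woman in a red coat carrying a black backpack"

    with torch.no_grad():
        images = torch.stack([preprocess(Image.open(p)) for p in gallery_paths]).to(device)
        image_feats = model.encode_image(images)
        text_feats = model.encode_text(clip.tokenize([query]).to(device))

        # Cosine similarity between L2-normalized embeddings ranks the gallery.
        image_feats = image_feats / image_feats.norm(dim=-1, keepdim=True)
        text_feats = text_feats / text_feats.norm(dim=-1, keepdim=True)
        scores = (text_feats @ image_feats.T).squeeze(0)

    for rank, idx in enumerate(scores.argsort(descending=True).tolist(), start=1):
        print(f"rank {rank}: {gallery_paths[idx]} (score {scores[idx]:.3f})")

The paper's empirical study keeps this retrieval backbone and instead revisits the surrounding training choices (data augmentation, loss function, and practical training tricks) rather than adding sophisticated modules on top.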

Published

2024-03-25

How to Cite

Cao, M., Bai, Y., Zeng, Z., Ye, M., & Zhang, M. (2024). An Empirical Study of CLIP for Text-Based Person Search. Proceedings of the AAAI Conference on Artificial Intelligence, 38(1), 465-473. https://doi.org/10.1609/aaai.v38i1.27801

Issue

Section

AAAI Technical Track on Cognitive Modeling & Cognitive Systems