
Fine-Grained Image-Text Retrieval via Complementary Feature Learning

  • Conference paper
MultiMedia Modeling (MMM 2021)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12572)


Abstract

The fine-grained image-text retrieval task aims to retrieve, given a query in one modality (e.g., an image), samples of the same fine-grained subcategory in another modality (e.g., text). The key is to learn effective feature representations and to align images and texts. This paper proposes a novel Complementary Feature Learning (CFL) method for fine-grained image-text retrieval. First, CFL encodes images with a Convolutional Neural Network (CNN) and texts with Bidirectional Encoder Representations from Transformers (BERT); with the help of a Frequent Pattern Mining technique (for images) and BERT's special classification token (for texts), a stronger fine-grained feature is learned. Second, the image and text representations are aligned in a common latent space by pairwise dictionary learning. Finally, a score function is learned to measure the relevance of image-text pairs. We verify our method on two specific fine-grained image-text retrieval tasks, and extensive experiments demonstrate the effectiveness of CFL.
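The alignment and scoring stages described above can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: it assumes ridge-regularised alternating least squares as the dictionary-learning solver, and cosine similarity of latent codes as the score function; the function names, hyperparameters (`k`, `lam`, `n_iter`), and update rules are illustrative assumptions.

```python
import numpy as np

def pairwise_dictionary_learning(X, Y, k=8, n_iter=20, lam=0.1, seed=0):
    """Sketch of pairwise dictionary learning: find dictionaries D_x, D_y and
    shared codes A such that X ~ A @ D_x and Y ~ A @ D_y.

    X : (n, d_img) image features (e.g., from a CNN) -- assumed precomputed.
    Y : (n, d_txt) text features (e.g., BERT [CLS] embeddings) -- assumed precomputed.
    The alternating ridge updates below are an illustrative solver choice.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    A = rng.standard_normal((n, k))  # shared latent codes, one row per pair
    for _ in range(n_iter):
        # Update each dictionary by ridge regression, holding the codes fixed.
        G = A.T @ A + lam * np.eye(k)
        D_x = np.linalg.solve(G, A.T @ X)          # (k, d_img)
        D_y = np.linalg.solve(G, A.T @ Y)          # (k, d_txt)
        # Update the shared codes from both modalities jointly
        # (stacked least squares over the concatenated features).
        D = np.hstack([D_x, D_y])                  # (k, d_img + d_txt)
        Z = np.hstack([X, Y])                      # (n, d_img + d_txt)
        A = Z @ D.T @ np.linalg.inv(D @ D.T + lam * np.eye(k))
    return D_x, D_y, A

def score(x, y, D_x, D_y, lam=0.1):
    """Relevance of one image-text pair: project each feature onto its
    dictionary, then take the cosine similarity of the latent codes."""
    k = D_x.shape[0]
    a_x = x @ D_x.T @ np.linalg.inv(D_x @ D_x.T + lam * np.eye(k))
    a_y = y @ D_y.T @ np.linalg.inv(D_y @ D_y.T + lam * np.eye(k))
    return float(a_x @ a_y / (np.linalg.norm(a_x) * np.linalg.norm(a_y) + 1e-9))
```

On synthetic paired features generated from a common latent factor, matched image-text pairs should on average score higher than mismatched ones, which is the behaviour the retrieval score function needs; the real method would additionally train the score function and use mined frequent patterns to strengthen the image features.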




Author information

Correspondence to Min Zheng.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Zheng, M., Jia, Y., Jiang, H. (2021). Fine-Grained Image-Text Retrieval via Complementary Feature Learning. In: Lokoč, J., et al. MultiMedia Modeling. MMM 2021. Lecture Notes in Computer Science, vol 12572. Springer, Cham. https://doi.org/10.1007/978-3-030-67832-6_48


  • DOI: https://doi.org/10.1007/978-3-030-67832-6_48


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-67831-9

  • Online ISBN: 978-3-030-67832-6

  • eBook Packages: Computer Science (R0)
