Abstract
The fine-grained image-text retrieval task aims to retrieve, given a query in one modality (e.g., an image), samples of the same fine-grained subcategory in another modality (e.g., text). The key is to learn effective feature representations and to align images and texts. This paper proposes a novel Complementary Feature Learning (CFL) method for fine-grained image-text retrieval. First, CFL encodes images with a Convolutional Neural Network and texts with Bidirectional Encoder Representations from Transformers (BERT). Stronger fine-grained features are then learned with the help of a Frequent Pattern Mining technique (for images) and BERT's special classification token (for texts). Second, the image and text representations are aligned in a common latent space by pairwise dictionary learning. Finally, a score function is learned to measure the relevance between image-text pairs. We verify our method on two specific fine-grained image-text retrieval tasks, and extensive experiments demonstrate the effectiveness of CFL.
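For the alignment step, the abstract names pairwise dictionary learning; a generic coupled formulation of that kind (an assumption for illustration, the paper's exact objective may differ) is

$$\min_{D_v, D_t, A}\; \lVert V - D_v A\rVert_F^2 + \lVert T - D_t A\rVert_F^2 + \lambda \lVert A\rVert_1,$$

where $V$ and $T$ stack the image and text features, $D_v$ and $D_t$ are modality-specific dictionaries, and the shared sparse codes $A$ place both modalities in a common latent space.

The overall pipeline (CNN image encoder, BERT text encoder with its [CLS] token, a shared latent space, and a pairwise score) can also be sketched in code. The sketch below is a minimal approximation, not the authors' implementation: the backbone choice, dimensions, and the use of linear projections in place of the paper's pairwise dictionary learning are all assumptions made to keep the example short and runnable.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models
from transformers import BertModel, BertTokenizer


class CFLSketch(nn.Module):
    """Two-branch encoder with a shared latent space and a pairwise score."""

    def __init__(self, latent_dim=256):
        super().__init__()
        # Image branch: ResNet-50 backbone; drop the classifier head to get
        # a globally pooled 2048-d feature. (Backbone choice is an assumption.)
        cnn = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.image_encoder = nn.Sequential(*list(cnn.children())[:-1])
        # Text branch: BERT; the [CLS] token embedding (768-d) serves as the
        # sentence-level text feature, as described in the abstract.
        self.text_encoder = BertModel.from_pretrained("bert-base-uncased")
        # Stand-in for the common latent space: the paper aligns modalities
        # with pairwise dictionary learning; plain linear projections are
        # used here only for illustration.
        self.image_proj = nn.Linear(2048, latent_dim)
        self.text_proj = nn.Linear(768, latent_dim)

    def forward(self, images, input_ids, attention_mask):
        v = self.image_encoder(images).flatten(1)            # (B, 2048)
        t = self.text_encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state[:, 0]                            # [CLS]: (B, 768)
        v = F.normalize(self.image_proj(v), dim=-1)
        t = F.normalize(self.text_proj(t), dim=-1)
        return v @ t.T   # cosine-similarity scores for all image-text pairs


# Usage: score a toy batch of image-text pairs.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = CFLSketch().eval()
images = torch.randn(2, 3, 224, 224)                        # dummy image batch
texts = tokenizer(
    ["a small yellow bird with black wings", "a red vintage car"],
    padding=True, return_tensors="pt",
)
with torch.no_grad():
    scores = model(images, texts["input_ids"], texts["attention_mask"])
print(scores.shape)  # torch.Size([2, 2])
```

Running this prints a 2x2 score matrix: one relevance score per image-text pair, from which retrieval rankings can be read off row-wise (image-to-text) or column-wise (text-to-image).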
Cite this paper
Zheng, M., Jia, Y., Jiang, H. (2021). Fine-Grained Image-Text Retrieval via Complementary Feature Learning. In: Lokoč, J., et al. (eds.) MultiMedia Modeling. MMM 2021. Lecture Notes in Computer Science, vol. 12572. Springer, Cham. https://doi.org/10.1007/978-3-030-67832-6_48