
Multimodal Representations for Teacher-Guided Compositional Visual Reasoning

  • Conference paper
  • In: Advanced Concepts for Intelligent Vision Systems (ACIVS 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14124)


Abstract

Neural Module Networks (NMN) are a compelling method for visual question answering: they translate a question into a program of reasoning sub-tasks that are executed sequentially on the image to produce an answer. Compared to integrated models, NMNs offer greater explainability, giving a clearer view of the underlying reasoning process. To improve the effectiveness of NMNs, we propose to exploit features obtained by a large-scale cross-modal encoder. Moreover, the standard NMN training approach propagates each module's output to the subsequent modules, so prediction errors accumulate and can lead to false answers. To mitigate this, we introduce an NMN learning strategy with scheduled teacher guidance: the model is initially fully guided by the ground-truth intermediate outputs and gradually transitions to autonomous behavior as training progresses. This reduces error accumulation, thereby improving training efficiency and final performance. We demonstrate that by incorporating cross-modal features and employing a more effective training technique for NMNs, we achieve a favorable balance between performance and transparency in the reasoning process.
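The scheduled teacher guidance described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear decay schedule, the per-step random choice, and all function and variable names (`guidance_probability`, `execute_program`, `gt_outputs`) are assumptions introduced here for clarity; the actual schedule and module interfaces may differ.

```python
import random

def guidance_probability(epoch: int, total_epochs: int) -> float:
    """Probability of feeding a module the ground-truth intermediate
    output of its predecessor. Decays linearly from 1.0 (fully
    teacher-guided) to 0.0 (fully autonomous) over training.
    Linear decay is an assumption; other schedules are possible."""
    return max(0.0, 1.0 - epoch / total_epochs)

def execute_program(modules, image_features, gt_outputs, p_teacher):
    """Execute a sequence of reasoning modules on the image features.

    At each step, the next module receives either the ground-truth
    intermediate output (with probability p_teacher) or the current
    module's own prediction, limiting error accumulation early in
    training while weaning the model off guidance later on."""
    prev = None
    pred = None
    for step, module in enumerate(modules):
        pred = module(image_features, prev)
        if gt_outputs is not None and random.random() < p_teacher:
            # Teacher-guided: propagate the ground-truth output.
            prev = gt_outputs[step]
        else:
            # Autonomous: propagate the module's own prediction.
            prev = pred
    return pred
```

With `p_teacher = 1.0` the execution matches classic teacher forcing; with `p_teacher = 0.0` it matches standard NMN inference, where each module consumes its predecessor's prediction.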



Acknowledgments

We thank Souheil Hanoune for his insightful comments. This work was partly supported by the French Cifre fellowship 2018/1601 granted by ANRT, and by XXII Group.

Author information

Correspondence to Wafa Aissa.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Aissa, W., Ferecatu, M., Crucianu, M. (2023). Multimodal Representations for Teacher-Guided Compositional Visual Reasoning. In: Blanc-Talon, J., Delmas, P., Philips, W., Scheunders, P. (eds) Advanced Concepts for Intelligent Vision Systems. ACIVS 2023. Lecture Notes in Computer Science, vol 14124. Springer, Cham. https://doi.org/10.1007/978-3-031-45382-3_30


  • DOI: https://doi.org/10.1007/978-3-031-45382-3_30

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-45381-6

  • Online ISBN: 978-3-031-45382-3

  • eBook Packages: Computer Science (R0)
