Abstract
This paper studies Visual Question Answering (VQA), a task that combines Computer Vision (CV), Natural Language Processing (NLP) and Knowledge Representation & Reasoning (KR&R) to automatically provide natural language answers to questions asked by users about images. We first review the state of the art for this technology. Among the different approaches, we select the model known as Pythia to build upon, because it is one of the most popular and successful methods in the public VQA Challenge. The Pythia code was recently refactored in depth by Facebook AI Research (FAIR) into what is now the MMF framework. We choose this updated framework after confirming that the two implementations have analogous characteristics. We introduce the different modules of the FAIR implementation and how to train our model, and we propose some improvements with respect to the baseline. Several fine-tuned models are trained, obtaining an accuracy of 66.22% in the best case on the test set of the public VQA-v2 dataset. Quantitative results for the most important experiments are compared and discussed together with qualitative results. This experimentation is carried out with the aim of applying VQA to eCommerce and store observation use cases in further research.
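As an illustration of the training procedure summarized above, the sketch below shows how a comparable Pythia fine-tuning run on VQA-v2 could be launched through the current MMF framework (https://github.com/facebookresearch/mmf). It is a minimal sketch, not the exact recipe of the paper: the config path follows the layout of the public MMF repository, and the override values (training budget, output directory) are illustrative assumptions.

    # Minimal sketch: launching a Pythia fine-tuning run on VQA-v2 with MMF.
    # Assumes MMF is installed (pip install mmf) and provides the mmf_run CLI;
    # the config path and override values below are illustrative, not the
    # paper's exact settings.
    import subprocess

    overrides = [
        "config=projects/pythia/configs/vqa2/defaults.yaml",  # Pythia on VQA-v2
        "model=pythia",
        "dataset=vqa2",
        "run_type=train_val",               # train, then evaluate on the val split
        "training.max_updates=22000",       # assumed training budget
        "env.save_dir=./save/pythia_vqa2",  # where checkpoints and logs are written
    ]

    # mmf_run parses key=value pairs as OmegaConf-style config overrides.
    subprocess.run(["mmf_run", *overrides], check=True)

The same overrides can be passed directly on the command line to mmf_run; the subprocess wrapper is only a convenience for scripting experiments.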
Acknowledgments
The authors want to thank Nielsen Connect for its support and funding of this project. This work has also been funded in part by the Spanish MICINN/FEDER through the Techs4AgeCar project (RTI2018-099263-B-C21) and by the RoboCity2030-DIH-CM project (P2018/NMT-4331), funded by Programas de Actividades I+D (CAM) and co-funded by EU Structural Funds.
Copyright information
© 2021 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Ortiz, M.E., Bergasa, L.M., Arroyo, R., Álvarez, S., Aller, A. (2021). Towards Fine-Tuning of VQA Models in Public Datasets. In: Bergasa, L.M., Ocaña, M., Barea, R., López-Guillén, E., Revenga, P. (eds) Advances in Physical Agents II. WAF 2020. Advances in Intelligent Systems and Computing, vol 1285. Springer, Cham. https://doi.org/10.1007/978-3-030-62579-5_18
Print ISBN: 978-3-030-62578-8
Online ISBN: 978-3-030-62579-5