Abstract
The deployment of autonomous robots across diverse domains has raised significant concerns about their trustworthiness and accountability. This study explores the potential of Large Language Models (LLMs) for analyzing the ROS 2 logs generated by autonomous robots, and proposes a framework that categorizes log files into distinct aspects for analysis. The study evaluates the performance of three language models in answering questions about StartUp, Warning, and PDDL logs. The results suggest that GPT-4, a transformer-based model, outperforms the other models; however, the models' verbosity is insufficient to answer why or how questions for all kinds of actors involved in the interaction.
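The paper's own pipeline is not reproduced on this page; as a rough illustration of the idea the abstract describes, the following minimal Python sketch buckets ROS 2 console log lines into the three aspects named above and assembles a question-answering prompt for an LLM. The regular expression targets the default ROS 2 console format (`[LEVEL] [timestamp] [node]: message`); the keyword heuristics and the prompt template are illustrative assumptions, not the authors' actual classification rules.

```python
import re
from collections import defaultdict

# A typical ROS 2 console log line looks like:
# [INFO] [1679568000.123456789] [nav2_planner]: Planning to goal ...
LOG_LINE = re.compile(
    r"\[(?P<level>[A-Z]+)\] \[(?P<stamp>[\d.]+)\] \[(?P<node>[^\]]+)\]: (?P<msg>.*)"
)

def categorize(lines):
    """Bucket raw log lines into the three aspects studied in the paper.

    The keyword rules below are illustrative guesses, not the authors'
    actual categorization logic.
    """
    buckets = defaultdict(list)
    for line in lines:
        match = LOG_LINE.match(line.strip())
        if not match:
            continue  # skip lines that are not standard console output
        if match["level"] in ("WARN", "ERROR", "FATAL"):
            buckets["Warning"].append(line.strip())
        elif "pddl" in match["msg"].lower() or "plan" in match["msg"].lower():
            buckets["PDDL"].append(line.strip())
        else:
            buckets["StartUp"].append(line.strip())
    return buckets

def build_prompt(aspect, entries, question):
    """Assemble the text that would be sent to an LLM for question answering."""
    context = "\n".join(entries)
    return (
        f"The following are {aspect} log entries produced by an autonomous "
        f"robot running ROS 2:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

if __name__ == "__main__":
    logs = [
        "[INFO] [1679568000.1] [robot_state_publisher]: got segment base_link",
        "[WARN] [1679568001.2] [nav2_planner]: Planner loop missed its rate",
        "[INFO] [1679568002.3] [plansys2]: PDDL plan computed with 4 actions",
    ]
    buckets = categorize(logs)
    print(build_prompt("Warning", buckets["Warning"],
                       "Why did the planner miss its desired rate?"))
```

Separating categorization from prompt construction mirrors the framework's division of logs into aspects: each bucket can be queried independently, which keeps the context passed to the model small and focused on one aspect at a time.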
Notes
- Release notes (March 23): https://help.openai.com/en/articles/6825453-chatgpt-release-notes.
Acknowledgments
This work has been partially funded by an FPU fellowship provided by the Spanish Ministry of Universities (FPU21/01438) and by Grant PID2021-126592OB-C21 funded by MCIN/AEI/10.13039/501100011033.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
González-Santamarta, M.Á., Fernández-Becerra, L., Sobrín-Hidalgo, D., Guerrero-Higueras, Á.M., González, I., Lera, F.J.R.: Using Large Language Models for Interpreting Autonomous Robots Behaviors. In: García Bringas, P., et al. (eds.) Hybrid Artificial Intelligent Systems. HAIS 2023. Lecture Notes in Computer Science, vol. 14001. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-40725-3_45
DOI: https://doi.org/10.1007/978-3-031-40725-3_45
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-40724-6
Online ISBN: 978-3-031-40725-3