
Using Large Language Models for Interpreting Autonomous Robots Behaviors

  • Conference paper
  • Published in Hybrid Artificial Intelligent Systems (HAIS 2023)

Abstract

The deployment of autonomous robots in various domains has raised significant concerns about their trustworthiness and accountability. This study explores the potential of Large Language Models (LLMs) for analyzing the ROS 2 logs generated by autonomous robots and proposes a framework for log analysis that categorizes log files into different aspects. The study evaluates the performance of three language models in answering questions about StartUp, Warning, and PDDL logs. The results suggest that GPT-4, a transformer-based model, outperforms the other models; however, its responses are not verbose enough to answer why or how questions for all the actors involved in the interaction.
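
As a rough illustration of the pipeline the abstract describes, the sketch below sorts ROS 2 log lines into the three aspects studied (StartUp, Warning, PDDL) and assembles the resulting context into a question-answering prompt for an LLM. The regular expressions, function names, and sample log lines are illustrative assumptions, not the authors' actual implementation.

```python
import re
from collections import defaultdict

# Hypothetical patterns for the three log aspects named in the abstract.
# Real ROS 2 (rcl) log lines follow "[severity] [node]: message", but the
# keyword choices here are guesses, not the paper's classification rules.
PATTERNS = {
    "StartUp": re.compile(r"\[INFO\].*(node|lifecycle|configur|activat)", re.I),
    "Warning": re.compile(r"\[(WARN|ERROR|FATAL)\]"),
    "PDDL": re.compile(r"pddl|plan|action|goal", re.I),
}

def categorize(log_lines):
    """Assign each log line to the first aspect whose pattern matches."""
    buckets = defaultdict(list)
    for line in log_lines:
        for aspect, pattern in PATTERNS.items():
            if pattern.search(line):
                buckets[aspect].append(line)
                break
    return buckets

def build_prompt(aspect, lines, question):
    """Pack one aspect's log excerpt and a user question into an LLM prompt."""
    context = "\n".join(lines)
    return (f"The following {aspect} log entries were produced by an "
            f"autonomous robot running ROS 2:\n{context}\n\n"
            f"Question: {question}")

if __name__ == "__main__":
    sample = [
        "[INFO] [rclcpp]: node 'navigator' configured",
        "[WARN] [controller]: goal tolerance exceeded, replanning",
        "[INFO] [planner]: PDDL plan found: (move robot room1 room2)",
    ]
    buckets = categorize(sample)
    print(build_prompt("Warning", buckets["Warning"],
                       "Why did the robot replan its path?"))
```

The resulting prompt would then be sent to whichever model is under evaluation (e.g. GPT-4 via its API); the call itself is omitted to keep the sketch self-contained.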



Notes

  1. https://github.com/ros2/rcl_logging/tree/humble.

  2. Release notes (March 23): https://help.openai.com/en/articles/6825453-chatgpt-release-notes.



Acknowledgments

This work has been partially funded by an FPU fellowship provided by the Spanish Ministry of Universities (FPU21/01438) and by Grant PID2021-126592OB-C21 funded by MCIN/AEI/10.13039/501100011033.

Author information

Corresponding author

Correspondence to Miguel Á. González-Santamarta.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

González-Santamarta, M.Á., Fernández-Becerra, L., Sobrín-Hidalgo, D., Guerrero-Higueras, Á.M., González, I., Lera, F.J.R. (2023). Using Large Language Models for Interpreting Autonomous Robots Behaviors. In: García Bringas, P., et al. (eds.) Hybrid Artificial Intelligent Systems. HAIS 2023. Lecture Notes in Computer Science, vol. 14001. Springer, Cham. https://doi.org/10.1007/978-3-031-40725-3_45


  • DOI: https://doi.org/10.1007/978-3-031-40725-3_45

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-40724-6

  • Online ISBN: 978-3-031-40725-3

  • eBook Packages: Computer Science, Computer Science (R0)
