Abstract
The need for social robot systems has become even more critical as a result of the ongoing pandemic. Labour shortages in the services sector and public health concerns around infection transmission combine to favour the deployment of autonomous systems in a number of traditional roles, including server robots in restaurants, companion robots in long-term care homes and security robots in public spaces, to name but a few examples. To be successful, social robots must communicate with a wide range of individuals across a wide range of scenarios. Understanding and reacting to the sentiment being expressed by an individual is key in human-human interaction, especially in critical situations that require de-escalation. This paper takes as a starting point that user sentiment is also critical to the successful deployment of social robot systems. Although much can be learned from experiments performed in simulation, real-world experimentation in the development of sentiment-aware social robots requires an infrastructure upon which to explore questions related to the role of sentiment in social robotics, including the development of an appropriate robot morphology and user/robot interface. This paper reports early results on the sentiment and display technologies developed for Sentrybot, a sentiment-informed autonomous social robot intended for deployment in the security domain.
Notes
1. [4], p. 11.
References
Breazeal, C., Scassellati, B.: How to build robots that make friends and influence people. In: IEEE/RSJ IROS, Kyongju, Korea (1999)
Daily, S.B., et al.: Affective computing: historical foundations, current applications, and future trends. In: Jeon, M. (ed.) Emotions and Affect in Human Factors and Human-Computer Interaction, pp. 213–231. Academic Press, San Diego (2017)
Henschel, A., Laban, G., Cross, E.S.: What makes a robot social? a review of social robots from science fiction to a home or hospital near you. Cogn. Robot. 2, 9–19 (2021)
Sarrica, M., Brondi, S., Fortunati, L.: How many facets does a “social robot” have? a review of scientific and popular definitions online. Inf. Technol. People 33, 1–21 (2020)
Inbar, O., Meyer, J.: Manners matter: trust in robotic peacekeepers. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 59, pp. 185–189 (2016)
Lyons, J.B., Vo, T., Wynne, K.T., Mahoney, S., Nam, C.S., Gallimore, D.: Trusting autonomous security robots: the role of reliability and stated social intent. J. Hum. Factors Ergon. Soc. 63(4), 603–618 (2020)
Mavandadi, V., Bieling, P.J., Madsen, V.: Effective ingredients of verbal de-escalation: validating an English modified version of the ‘de-escalating aggressive behaviour scale’. J. Psychiatr. Ment. Health Nurs. 23(6–7), 357–368 (2016)
Hallett, N., Dickens, G.L.: De-escalation of aggressive behaviour in healthcare settings: concept analysis. Int. J. Nurs. Stud. 75, 10–20 (2017)
Rabenschlag, F., Cassidy, C., Steinauer, R.: Nursing perspectives: reflecting history and informal coercion in de-escalation strategies. Front. Psychiatry 10, 231 (2019)
Goodman, H., Papastavrou Brooks, C., Price, O., Barley, E.A.: Barriers and facilitators to the effective de-escalation of conflict behaviours in forensic high-secure settings: a qualitative study. Int. J. Ment. Health Syst. 14, 1–16 (2020)
Toichoa Eyam, A., Mohammed, W.M., Martinez Lastra, J.L.: Emotion-driven analysis and control of human-robot interactions in collaborative applications. Sensors 21, 4626 (2021)
Clearpath Robotics: Dingo indoor mobile robot. https://clearpathrobotics.com/dingo-indoor-mobile-robot/
Das, S.: Robot localization in a mapped environment using adaptive Monte Carlo algorithm. Int. J. Sci. Eng. Res. 9, 10 (2018)
Yang, X.: SLAM and navigation of indoor robot based on ROS and LIDAR. J. Phys. 1748, 1 (2021)
Altarawneh, E., Jenkin, M., MacKenzie, I.S.: An extensible cloud based avatar: implementation and evaluation. In: Brooks, A.L., Brahnam, S., Kapralos, B., Nakajima, A., Tyerman, J., Jain, L.C. (eds.) Recent Advances in Technologies for Inclusive Well-Being. ISRL, vol. 196, pp. 503–522. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-59608-8_27
Huggins-Daines, D., Kumar, M., Chan, A., Black, A., Ravishankar, M., Rudnicky, A.: PocketSphinx: a free, real-time continuous speech recognition system for hand-held devices. In: 2006 IEEE International Conference on Acoustics, Speech and Signal Processing Proceedings, May 2006
Ravulavaru, A.: Google Cloud AI Services Quick Start Guide: Build Intelligent Applications with Google Cloud AI Services. Packt Publishing, Birmingham (2018)
Packowski, S., Lakhana, A.: Using IBM WATSON cloud services to build natural language processing solutions to leverage chat tools. In: Proceedings of the 27th Annual International Conference on Computer Science and Software Engineering (CASCON), Markham, Ontario, Canada, pp. 211–218 (2017)
Larsen, L.: Learning Microsoft Cognitive Services: Use Cognitive Services APIs to Add AI Capabilities to Your Applications, 3rd edn. Packt Publishing, Birmingham (2018)
Biswas, M.: Wit.ai and Dialogflow. Apress, Berkeley, pp. 67–100 (2018). https://doi.org/10.1007/978-1-4842-3754-0_3
Aronsson, J., Lu, P., Strüber, D., Berger, T.: A maturity assessment framework for conversational AI development platforms, pp. 1736–1745. Association for Computing Machinery, New York (2021). https://doi.org/10.1145/3412841.3442046
Altarawneh, E., Jenkin, M.: System and method for rendering of an animated avatar. U.S. Patent 10,580,187 B2, 7 March 2020
Altarawneh, E., Jenkin, M.: Leveraging cloud-based tools to talk with robots. In: Proceedings of the 16th International Conference on Informatics in Control, Automation and Robotics (ICINCO), July 2019
Valenza, E.: Blender Cycles: Materials and Textures Cookbook, 3rd edn. Packt Publishing, Birmingham (2015)
Paradis, D.J., Segee, B.: Remote rendering and rendering in virtual machines. In: 2016 International Conference on Computational Science and Computational Intelligence (CSCI), pp. 218–221 (2016)
Doshi, U., Barot, V., Gavhane, S.: Emotion detection and sentiment analysis of static images. In: IEEE International Conference on Convergence to Digital World, Mumbai, India (2020)
Rajesh, K.M., Naveenkumar, M.: A robust method for face recognition and face emotion detection system using support vector machines. In: 2016 International Conference on Electrical, Electronics, Communication, Computer and Optimization Techniques (ICEECCOT), pp. 1–5 (2016)
Reney, D., Tripathi, N.: An efficient method to face and emotion detection. In: 2015 Fifth International Conference on Communication Systems and Network Technologies, pp. 493–497 (2015)
Li, W., Xu, H.: Text-based emotion classification using emotion cause extraction. Expert Syst. Appl. 41(4), 1742–1749 (2014)
Agrawal, A., An, A.: Unsupervised emotion detection from text using semantic and syntactic relations. In: 2012 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology, vol. 1, pp. 346–353. IEEE (2012)
Abdi, A., Shamsuddin, S.M., Hasan, S., Piran, J.: Deep learning-based sentiment classification of evaluative text based on multi-feature fusion. Inf. Process. Manag. 56(4), 1245–1259 (2019)
Demszky, D., Movshovitz-Attias, D., Ko, J., Cowen, A., Nemade, G., Ravi, S.: GoEmotions: a dataset of fine-grained emotions. arXiv preprint arXiv:2005.00547 (2020)
Fersini, E., Messina, E., Arosio, G., Archetti, F.: Audio-based emotion recognition in judicial domain: a multilayer support vector machines approach. In: Perner, P. (ed.) MLDM 2009. LNCS (LNAI), vol. 5632, pp. 594–602. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-03070-3_45
Lalitha, S., Geyasruti, D., Narayanan, R., Shravani, M.: Emotion detection using MFCC and cepstrum features. Procedia Comput. Sci. 70, 29–35 (2015)
Sayedelahl, A., Fewzee, P., Kamel, M.S., Karray, F.: Audio-based emotion recognition from natural conversations based on co-occurrence matrix and frequency domain energy distribution features. In: D’Mello, S., Graesser, A., Schuller, B., Martin, J.-C. (eds.) ACII 2011. LNCS, vol. 6975, pp. 407–414. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-24571-8_52
Chernykh, V., Sterling, G., Prihodko, P.: Emotion recognition from speech with recurrent neural networks, CoRR, vol. abs/1701.08071 (2017)
Cai, L., Hu, Y., Dong, J., Zhou, S.: Audio-textual emotion recognition based on improved neural networks. Math. Prob. Eng. 2019, 1–9 (2019). https://www.hindawi.com/journals/mpe/2019/2593036/
Ren, M., Nie, W., Liu, A., Su, Y.: Multi-modal correlated network for emotion recognition in speech. Vis. Inf. 3(3), 150–155 (2019)
Sebe, N., Cohen, I., Huang, T.S.: Multimodal emotion recognition. In: Handbook of Pattern Recognition and Computer Vision. World Scientific, pp. 387–409 (2005)
Soleymani, M., Garcia, D., Jou, B., Schuller, B., Chang, S.-F., Pantic, M.: A survey of multimodal sentiment analysis. Image Vis. Comput. 65, 3–14 (2017)
Busso, C., et al.: IEMOCAP: interactive emotional dyadic motion capture database. Lang. Resour. Eval. 42(4), 335–359 (2008)
Tripathi, S., Beigi, H.S.M.: Multi-modal emotion recognition on IEMOCAP dataset using deep learning, CoRR, vol. abs/1804.05788 (2018). http://arxiv.org/abs/1804.05788
Chernykh, V., Prihodko, P.: Emotion recognition from speech with recurrent neural networks (2018)
Poria, S., Majumder, N., Hazarika, D., Cambria, E., Hussain, A., Gelbukh, A.: Multimodal sentiment analysis: addressing key issues and setting up the baselines. IEEE Intell. Syst. 33, 17–25 (2018)
Acheampong, F.A., Wenyu, C., Nunoo-Mensah, H.: Text-based emotion detection: advances, challenges, and opportunities. Eng. Rep. 2(7), e12189 (2020)
Xu, D., Tian, Z., Lai, R., Kong, X., Tan, Z., Shi, W.: Deep learning based emotion analysis of microblog texts. Inf. Fusion 64, 1–11 (2020)
Rashid, U., Iqbal, M.W., Skiandar, M.A., Raiz, M.Q., Naqvi, M.R., Shahzad, S.K.: Emotion detection of contextual text using deep learning. In: 2020 4th International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT), pp. 1–5. IEEE (2020)
Acheampong, F.A., Nunoo-Mensah, H., Chen, W.: Transformer models for text-based emotion detection: a review of BERT-based approaches. Artif. Intell. Rev. 54(8), 5789–5829 (2021). https://doi.org/10.1007/s10462-021-09958-2
Su, M.-H., Wu, C.-H., Huang, K.-Y., Hong, Q.-B.: LSTM-based text emotion recognition using semantic and emotional word vectors. In: 2018 First Asian Conference on Affective Computing and Intelligent Interaction (ACII Asia), pp. 1–6 (2018)
Luo, L., Wang, Y.: EmotionX-HSU: adopting pre-trained BERT for emotion classification. CoRR, vol. abs/1907.09669 (2019)
Majumder, N., Poria, S., Hazarika, D., Mihalcea, R., Gelbukh, A., Cambria, E.: DialogueRNN: an attentive RNN for emotion detection in conversations. In: AAAI, pp. 6818–6825 (2019)
Ghosal, D., Majumder, N., Poria, S., Chhaya, N., Gelbukh, A.: DialogueGCN: a graph convolutional neural network for emotion recognition in conversation. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, Association for Computational Linguistics, pp. 154–164. November 2019
Ghosal, D., Majumder, N., Gelbukh, A., Mihalcea, R., Poria, S.: COSMIC: commonsense knowledge for emotion identification in conversations. In: Findings of the Association for Computational Linguistics: EMNLP 2020, Association for Computational Linguistics, pp. 2470–2481, November 2020
Pennington, J., Socher, R., Manning, C.: GloVe: global vectors for word representation. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, Association for Computational Linguistics, pp. 1532–1543, October 2014. https://aclanthology.org/D14-1162
Spezialetti, M., Placidi, G., Rossi, S.: Emotion recognition for human-robot interaction: recent advances and future perspectives. Front. Robot. AI 7, 532279 (2020)
Ishiguro, H., Ono, T., Imai, M., Maeda, T., Kanda, T., Nakatsu, R.: Robovie: an interactive humanoid robot. Int. J. Ind. Robot 28(6), 498–504 (2001)
Tian, Z., et al.: Emotion-aware multimodal pre-training for image-grounded emotional response generation. In: International Conference on Database Systems for Advanced Applications, vol. 13247, pp. 3–19. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-00129-1_1
Mao, Y., Cai, F., Guo, Y., Chen, H.: Incorporating emotion for response generation in multi-turn dialogues. Appl. Intell. 52(7), 7218–7229 (2022)
Cox, G.: ChatterBot. https://pypi.org/project/ChatterBot/
Malle, B.F., Ullman, D.: A multi-dimensional conception and measure of human-robot trust. In: Nam, C.S., Lyons, J.B. (eds.) Trust in Human-Robot Interaction: Research and Applications, Elsevier, pp. 3–2 (2021)
Schaefer, K.E., Sanders, T.L., Yordon, R.E., Billings, D.R., Hancock, P.: Classification of robot form: factors predicting perceived trustworthiness. In: Proceedings of the Human Factors and Ergonomics Society 56th Annual Meeting, pp. 1548–1552 (2012)
Acknowledgments
The development of the infrastructure detailed in this paper was funded by the Innovation for Defence Excellence and Security (IDEaS) program of the Department of National Defence of the Government of Canada, in support of the Canadian Armed Forces. The support of the NSERC Canadian Robotics Network is gratefully acknowledged. The authors are solely responsible for the content of this publication. The authors thank Helio Perroni Filho for his helpful comments and suggestions.
Copyright information
© 2023 Springer Nature Switzerland AG
Cite this paper
Tarawneh, E. et al. (2023). An Infrastructure for Studying the Role of Sentiment in Human-Robot Interaction. In: Rousseau, JJ., Kapralos, B. (eds) Pattern Recognition, Computer Vision, and Image Processing. ICPR 2022 International Workshops and Challenges. ICPR 2022. Lecture Notes in Computer Science, vol 13646. Springer, Cham. https://doi.org/10.1007/978-3-031-37745-7_7