
In the shades of the uncanny valley: An experimental study of human–chatbot interaction

Published: 01 March 2019

Abstract

This project was carried out in the context of recent major developments in botics and the more widespread use of virtual agents in the personal and professional spheres. The general purpose of the experiment was to thoroughly examine the character of the human–non-human interaction process. Thus, in this paper, we present a study of human–chatbot interaction, focusing on the affective responses of users to the different types of interfaces with which they interact. The experiment consisted of two parts: measurement of the psychophysiological reactions of chatbot users and a detailed questionnaire that focused on assessing interactions and willingness to collaborate with a bot. In the first, quantitative stage, participants interacted with a chatbot: either a simple text chatbot (control group) or an avatar that read its responses aloud in addition to presenting them on the screen (experimental group). We gathered the following psychophysiological data from participants: electromyography (EMG), respirometer (RSP), electrocardiography (ECG), and electrodermal activity (EDA). In the last, declarative stage, participants filled out a series of questionnaires related to the experience of interacting with (chat)bots and to the overall human–(chat)bot collaboration assessment. The theory of planned behaviour survey investigated attitudes towards cooperation with chatbots in the future. The social presence survey checked how much the chatbot was considered to be a "real" person. The anthropomorphism scale measured the extent to which the chatbot seemed humanlike. Our particular focus was on the so-called uncanny valley effect: the feeling of eeriness and discomfort towards a given medium or technology that frequently appears in various kinds of human–machine interactions. Our results show that participants experienced a weaker uncanny effect and less negative affect in cooperation with the simpler text chatbot than with the more complex, animated avatar chatbot. The simple chatbot also induced less intense psychophysiological reactions. Despite major developments in botics, users' affective responses towards bots have frequently been neglected. In our view, understanding the user's side may be crucial for designing better chatbots in the future and can thus contribute to advancing the field of human–computer interaction.
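To make the group comparison concrete: a standard way to test whether a psychophysiological measure such as mean electrodermal response differs between the control (text chatbot) and experimental (avatar chatbot) groups is Welch's t-test, which does not assume equal variances across groups. The sketch below is a hypothetical illustration with invented numbers, not the study's actual data or analysis pipeline:

```python
# Hypothetical sketch: Welch's t-test on per-participant mean EDA responses
# for two independent groups. All values below are invented for illustration.
from scipy import stats

control_eda = [0.21, 0.18, 0.25, 0.19, 0.22, 0.20, 0.17, 0.23]       # text chatbot
experimental_eda = [0.31, 0.35, 0.28, 0.40, 0.33, 0.29, 0.37, 0.34]  # avatar chatbot

# equal_var=False selects Welch's t-test rather than Student's t-test
t_stat, p_value = stats.ttest_ind(experimental_eda, control_eda, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

With these illustrative numbers the avatar group shows the higher arousal, consistent in direction with the reported result that the animated chatbot induced more intense psychophysiological reactions.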

Highlights

A two-stage experiment focusing on human–chatbot interaction was conducted.
Methodology: psychophysiology and questionnaires.
We found an increased uncanny valley effect for the more human-like chatbot.
Participants declared future interest in cooperation with both types of chatbots.
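The psychophysiological side of the methodology typically begins with basic signal conditioning before any group comparison. As a hypothetical illustration (not the authors' actual pipeline), a raw facial EMG trace is commonly full-wave rectified and smoothed into an amplitude envelope; the sampling rate and smoothing window below are assumed values:

```python
# Illustrative EMG preprocessing: rectify-and-smooth amplitude envelope.
# Assumptions: 1 kHz sampling, 50 ms moving-average window (not from the paper).
import numpy as np

def emg_envelope(signal: np.ndarray, fs: int = 1000, window_ms: int = 50) -> np.ndarray:
    """Remove the DC offset, rectify, and smooth with a moving average."""
    rectified = np.abs(signal - np.mean(signal))
    win = max(1, int(fs * window_ms / 1000))
    kernel = np.ones(win) / win
    return np.convolve(rectified, kernel, mode="same")

# Synthetic example: a burst of muscle activity in the middle of a quiet trace
rng = np.random.default_rng(0)
sig = rng.normal(0, 0.05, 3000)
sig[1000:2000] += rng.normal(0, 0.5, 1000)   # simulated activation burst
env = emg_envelope(sig)
```

The envelope rises during the simulated burst and stays low elsewhere, which is the property exploited when comparing muscle activity across experimental conditions.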




          Published In

Future Generation Computer Systems, Volume 92, Issue C (March 2019), 1192 pages

          Publisher

Elsevier Science Publishers B. V., Netherlands


          Author Tags

          1. Human–computer interaction
          2. Chatbots
          3. Affective computing
          4. Psychophysiology
          5. Uncanny valley

          Qualifiers

          • Research-article


          Cited By

• (2024) Computers as Bad Social Actors: Dark Patterns and Anti-Patterns in Interfaces that Act Socially. Proceedings of the ACM on Human-Computer Interaction 8(CSCW1), 1–25. https://doi.org/10.1145/3653693 (26 Apr 2024)
• (2024) Passive Haptics and Conversational Avatars for Interacting with Ancient Egypt Remains in High-Fidelity Virtual Reality Experiences. Journal on Computing and Cultural Heritage 17(2), 1–28. https://doi.org/10.1145/3648003 (17 Apr 2024)
• (2024) "This Chatbot Would Never...": Perceived Moral Agency of Mental Health Chatbots. Proceedings of the ACM on Human-Computer Interaction 8(CSCW1), 1–28. https://doi.org/10.1145/3637410 (26 Apr 2024)
• (2024) Developing conversational Virtual Humans for social emotion elicitation based on large language models. Expert Systems with Applications 246(C). https://doi.org/10.1016/j.eswa.2024.123261 (15 Jul 2024)
• (2024) EmoBot. Cognitive Systems Research 83(C). https://doi.org/10.1016/j.cogsys.2023.101168 (4 Mar 2024)
• (2024) Impact of Digital Assistant Attributes on Millennials' Purchasing Intentions: A Multi-Group Analysis using PLS-SEM, Artificial Neural Network and fsQCA. Information Systems Frontiers 26(3), 943–966. https://doi.org/10.1007/s10796-022-10339-5 (1 Jun 2024)
• (2024) Design Implications for Next Generation Chatbots with Education 5.0. New Technology in Education and Training, 1–12. https://doi.org/10.1007/978-981-97-3883-0_1 (5 Jan 2024)
• (2023) Determinants Affecting Consumer Trust in Communication With AI Chatbots. Journal of Organizational and End User Computing 35(1), 1–24. https://doi.org/10.4018/JOEUC.328089 (11 Aug 2023)
• (2023) Examining Customer Experience in Using a Chatbot. International Journal of Asian Business and Information Management 14(1), 1–16. https://doi.org/10.4018/IJABIM.322438 (23 May 2023)
• (2023) I Know This Looks Bad, But I Can Explain: Understanding When AI Should Explain Actions In Human-AI Teams. ACM Transactions on Interactive Intelligent Systems 14(1), 1–23. https://doi.org/10.1145/3635474 (2 Dec 2023)
