Understanding users’ responses to disclosed vs. undisclosed customer service chatbots: a mixed methods study

Published: 05 January 2024

Abstract

Due to major advances in natural language processing (NLP) and machine learning, chatbots are gaining significance in the field of customer service. Users may find it hard to distinguish whether they are communicating with a human or a chatbot. This raises ethical issues, as users have the right to know who or what they are interacting with (European Commission in Regulatory framework proposal on artificial intelligence. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai, 2022). One solution is to include a disclosure at the start of the interaction (e.g., “this is a chatbot”). However, companies are reluctant to use disclosures, as consumers may perceive artificial agents as less knowledgeable and empathetic than their human counterparts (Luo et al. in Market Sci 38(6):937–947, 2019). The current mixed methods study, combining qualitative interviews (n = 8) and a quantitative experiment (n = 194), examines users’ responses to a disclosed vs. undisclosed customer service chatbot, focusing on source orientation, anthropomorphism, and social presence. The qualitative interviews reveal that what matters to users is the willingness to help the customer and a friendly tone of voice, regardless of the artificial status of the customer care representative. The experiment did not show significant effects of the disclosure (vs. non-disclosure). Implications for research, legislators, and businesses are discussed.

References

[1]
Amazon Mechanical Turk (2021) Qualifications and worker task quality: happenings at MTurk. MTurk. https://blog.mturk.com/qualifications-and-worker-task-quality-best-practices-886f1f4e03fc. Accessed 9 Jan 2023
[2]
Ameen N, Tarhini A, Reppel A, and Anand A Customer experiences in the age of artificial intelligence Comput Hum Behav 2021
[3]
Araujo T Living up to the chatbot hype: the influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions Comput Hum Behav 2018 85 183-189
[4]
Araujo T Conversational agent research toolkit: an alternative for creating and managing chatbots for experimental research Comput Commun Res 2020 2 1 25-51
[5]
Biocca F, Harms C, and Burgoon JK Toward a more robust theory and measure of social presence: review and suggested criteria Presence 2003 12 5 456-480
[6]
Borau S, Otterbring T, Laporte S, and Fosso Wamba S The most human bot: female gendering increases humanness perceptions of bots and acceptance of AI Psychol Mark 2021 38 7 1052-1068
[7]
Brandtzaeg PB and Følstad A Chatbots: changing user needs and motivations Interactions 2018 25 5 38-43
[8]
California Legislative Information (2018) SB-1001 Bots: disclosure. https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180SB1001. Accessed 9 Jan 2023
[9]
Charmaz K Constructing grounded theory: a practical guide through qualitative analysis 2006 London Sage
[10]
De Cicco R, Silva SC, Palumbo R (2021) Should a chatbot disclose itself? Implications for an online conversational retailer. In: Følstad A et al. (eds) Conversations 2020. LNCS, vol 12604. Springer, Cham, pp 190–204
[11]
European Commission (2022) Regulatory framework proposal on artificial intelligence. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai. Accessed 9 Jan 2023
[12]
European Parliament (2023) AI Act: a step closer to the first rules on Artificial Intelligence. https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence. Accessed 9 Jan 2023
[13]
Federal Trade Commission (2020) Using artificial intelligence and algorithms. https://www.ftc.gov/business-guidance/blog/2020/04/using-artificial-intelligence-algorithms. Accessed 9 Jan 2023
[14]
Go E and Sundar SS Humanizing chatbots: the effects of visual, identity and conversational cues on humanness perceptions Comput Hum Behav 2019 97 304-316
[15]
Greenbaum T Moderating focus groups: a practical guide for group facilitation 2000 Thousand Oaks Sage
[16]
Guzman AL Voices in and of the machine: source orientation toward mobile virtual assistants Comput Hum Behav 2019 90 343-350
[17]
Hu P, Lu Y, and Gong Y Dual humanness and trust in conversational AI: a person-centered approach Comput Hum Behav 2021
[18]
Ischen C, Araujo T, van Noort G, Voorveld H, and Smit E “I am here to assist you today”: the role of entity, interactivity and experiential perceptions in chatbot persuasion J Broadcast Electron Media 2020 64 4 615-639
[19]
Kim Y and Sundar SS Anthropomorphism of computers: is it mindful or mindless? Comput Hum Behav 2012 28 241-250
[20]
Klowait NO The quest for appropriate models of human-likeness: anthropomorphism in media equation research AI Soc 2018 33 4 527-536
[21]
Lee KM Presence, explicated Commun Theory 2004 14 1 27-50
[22]
Lee KM, Jung Y, Kim J, and Kim SR Are physically embodied social agents better than disembodied social agents? The effects of physical embodiment, tactile interaction, and people’s loneliness in human–robot interaction Int J Hum Comput Stud 2006 64 10 962-973
[23]
Luo X, Tong S, Fang Z, and Qu Z Frontiers: machines vs. humans: the impact of artificial intelligence chatbot disclosure on customer purchases Market Sci 2019 38 6 937-947
[24]
Menold N and Tausch A Measurement of latent variables with different rating scales: testing reliability and measurement equivalence by varying the verbalization and number of categories Sociol Methods Res 2016 45 4 678-699
[25]
Mozafari N, Weiger W, Hammerschmidt M (2020) The chatbot disclosure dilemma: desirable and undesirable effects of disclosing the non-human identity of chatbots. In: Proceedings of the 41st international conference on information systems
[26]
Mozafari N, Weiger W, Hammerschmidt M (2021a) Resolving the chatbot disclosure dilemma: leveraging selective self-presentation to mitigate the negative effect of chatbot disclosure. In: Proceedings of the 54th Hawaii conference on system sciences
[27]
Mozafari N, Weiger W, and Hammerschmidt M Trust me, I’m a bot: repercussions of chatbot disclosure in different service frontline settings J Serv Manag 2021 33 2 221-245
[28]
Nass C and Moon Y Machines and mindlessness: social responses to computers J Soc Issues 2000 56 1 81-103
[29]
Nißen M, Selimi D, Janssen A, Cardona DR, Breitner MH, Kowatsch T, and von Wangenheim F See you soon again, chatbot? A design taxonomy to characterize user-chatbot relationships with different time horizons Comput Hum Behav 2022
[30]
Powers A, Kiesler S (2006) The advisor robot: tracing people’s mental model from a robot’s physical attributes. In: Proceedings of the 1st ACM SIGCHI/SIGART conference on human-robot interaction, pp 218–225.
[31]
Rhim J, Kwak M, Gong Y, and Gweon G Application of humanization to survey chatbots: change in chatbot perception, interaction experience, and survey data quality Comput Hum Behav 2022 126 107034
[32]
Sheehan KB Crowdsourcing research: data collection with Amazon’s mechanical Turk Commun Monogr 2018 85 1 140-156
[33]
Shumanov M and Johnson L Making conversations with chatbots more personalized Comput Hum Behav 2021
[34]
Toader DC, Boca G, Toader R, Macelaru M, Toader C, Ighian D, and Radulescu AT The effect of social presence and chatbot errors on trust Sustainability 2020 12 1 256
[35]
Van der Goot MJ, Hafkamp L, Dankfort Z (2021) Customer service chatbots: a qualitative interview study into the communication journey of customers. In: Følstad A et al. (eds) Conversations 2020. LNCS, vol 12604. Springer, Cham, pp 190–204
[36]
Van Dis EAM, Bollen J, Zuidema W, Van Rooij R, and Bockting CL ChatGPT: five priorities for research Nature 2023 614 7947 224-226
[37]
Verhagen T, van Nes J, Feldberg F, and van Dolen W Virtual customer service agents: using social presence and personalization to shape online service encounters J Comput-Mediat Commun 2014 19 3 529-545
[38]
Wulf AJ and Seizov O “Please understand we cannot provide further information”: evaluating content and transparency of GDPR-mandated AI disclosures AI Soc 2022
[39]
Youn S and Jin SV “In A.I. we trust?” The effects of parasocial interaction and technopian versus luddite ideological views on chatbot-based customer relationship management in the emerging “feeling economy” Comput Hum Behav 2021
[40]
Zalando (n.d.). Alle veelgestelde vragen [All frequently asked questions]. https://www.zalando.nl/faq/. Accessed 9 Jan 2023
[41]
Zarouali B, Makhortykh M, Bastian M, and Araujo T Overcoming polarization with chatbot news? Investigating the impact of news content containing opposing views on agreement and credibility Eur J Commun 2021 36 1 53-68
[42]
Zemčík T Failure of chatbot Tay was evil, ugliness and uselessness in its nature or do we judge it through cognitive shortcuts and biases? AI Soc 2021 36 361-367

Published In

AI & Society, Volume 39, Issue 6
Dec 2024
408 pages

Publisher

Springer-Verlag

Berlin, Heidelberg

Publication History

Published: 05 January 2024
Accepted: 06 November 2023
Received: 17 May 2023

Author Tags

  1. Anthropomorphism
  2. Chatbots
  3. Disclosure
  4. Online customer care
  5. Social presence
  6. Source orientation

Author Tags

  1. Information and Computing Sciences
  2. Artificial Intelligence and Image Processing

Qualifiers

  • Research-article
