Abstract
With the desired outcome of social good within the wider robotics ecosystem, trust is identified as the central adhesive of the human–robot interaction (HRI) interface. However, building trust between humans and robots involves more than improving the machine's technical reliability or functional trustworthiness. This paper presents a holistic, community-based approach to trust-building, in which trust is understood as a multifaceted, multi-staged, and looped relation that depends heavily on context and human perceptions. Building on past literature that identifies dispositional and learned stages of trust, our proposed decision-to-trust model considers more extensively the human and situational factors that influence how trust manifests within social relations. Priority is given to the human user of technology, the initiator of human–robot trust relations, at all stages of decision-making. Forming the optimal conditions in which trust emerges requires the collective participation of practitioners, policymakers, and members of the community. With trust facilitating the smooth transition of robots into more socially embedded roles, even the best-engineered project depends for its positive reception on harmonious human–robot trust relations in community spaces.
Data availability
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
Notes
The scope of our discussion does not cover such industrial robots, since they are designed primarily to serve functional purposes, and human interaction is not essential to their operation.
Notably, emotions do not necessarily equate to irrationality that hinders optimal decision-making [Lerner et al. 2015]. They can serve as an efficient tool in decision-making, helping to avoid excessively effortful cognitive processes.
On the other hand, humans also tend to underestimate the likelihood of low-probability, high-risk events, in part because people lack direct experience with such risks, which makes them less salient in the mind. For instance, despite road accidents being among the leading causes of death in the U.S., motorists tend to perceive their risk of suffering a severe accident as very low [Camerer and Kunreuther 1989].
References
A*STAR (2019) Making robots smarter with AI. (October 2019). Retrieved 30 Nov 2022 from https://www.a-star.edu.sg/News/a-star-news/news/visits/making-robots-smarter-with-ai
A*STAR (2021) Why robots are a game-changer during and post-pandemic. (November 2021). Retrieved 30 Nov 2022 from https://www.a-star.edu.sg/News/a-star-news/news/features/why-robots-are-a-game-changer-during-and-post-pandemic
Alaiad A, Zhou L (2014) The determinants of home healthcare robots adoption: an empirical investigation. Int J Med Inform 83(11):825–840
Barberis NC (2013) Thirty years of prospect theory in economics: a review and assessment. J Econ Perspect 27(1):173–196
Broadbent E (2017) Interactions with robots: the truths we reveal about ourselves. Annu Rev Psychol 68(1):627–652
Camerer CF, Kunreuther H (1989) Decision processes for low probability events: policy implications. J Policy Anal Manage 8(4):565–592
Campbell TD, Cotterrell R (2008) Community as a legal concept? Some uses of a law-and-community approach in legal theory. Living Law, Routledge
Cave S, Craig C, Dihal K, Dillon S, Montgomery J, Singler B, Taylor L (2018) Portrayals and perceptions of AI and why they matter. The Royal Society. https://doi.org/10.17863/CAM.34502
Chi OH, Jia S, Li Y, Gursoy D (2021) Developing a formative scale to measure consumers’ trust toward interaction with artificially intelligent (AI) social robots in service delivery. Comput Hum Behav 118:106700
Cotterrell R (2018) Law emotion and affective community. Social Science Research Network, Rochester
DellaVigna S (2009) Psychology and economics: evidence from the field. J Econ Literature 47(2):315–372
Dolan P, Hallsworth M, Halpern D, King D, Metcalfe R, Vlaev I (2012) Influencing behaviour: the mindspace way. J Econ Psychol 33(1):264–277
Dolan P, Kavetsos G, Krekel C et al (2019) Quantifying the intangible impact of the Olympics using subjective well-being data. J Public Econ 177:104043
Duffy BR (2003) Anthropomorphism and the social robot. Robot Auton Syst 42(3):177–190
Dunn JR, Schweitzer ME (2005) Feeling and believing: the influence of emotion on trust. J Pers Soc Psychol 88(5):736–748
Eren O, Mocan N (2018) Emotional judges and unlucky juveniles. Am Econ J Appl Econ 10(3):171–205
Fabris A (2020) Can we trust machines? The role of trust in technological environments. In: Fabris A (ed) Trust: a philosophical approach. Studies in Applied Philosophy, Epistemology and Rational Ethics, vol 54, pp 123–135
Findlay M, Seah J (2020) An ecosystem approach to ethical AI and data use: experimental reflections. (May 2020). Retrieved 19 Jan 2022 from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3597912
Findlay M, Wong W (2021) Trust and regulation: an analysis of emotion. Social Science Research Network, Rochester
Fung A (2006) Varieties of participation in complex governance. Public Adm Rev 66(s1):66–75
Gallon J (2021) Ganz So Einfach Ist es Nicht: Zur Frage, ob die Bundesregierung die Pandemiemaßnahmen Auch Selbst Erlassen Kann. (April 2021). Retrieved 19 Jan 2022 from https://verfassungsblog.de/aenderung-ifschg/
Griva K, Tan KYK, Chan FHF et al (2021) Evaluating rates and determinants of COVID-19 vaccine hesitancy for adults and children in the Singapore population: strengthening our community's resilience against threats from emerging infections (SOCRATEs) cohort. Vaccines 9(12):1415
Hals T (2021) Factbox: COVID-19 and the U.S. courts: challenges to vaccine requirements. Reuters. (August 2021). Retrieved 19 Jan 2022 from https://www.reuters.com/world/us/covid-19-us-courts-challenges-vaccine-requirements-2021-08-06/
Hancock PA, Billings DR, Schaefer KE, Chen JYC, de Visser EJ, Parasuraman R (2011) A meta-analysis of factors affecting trust in human-robot interaction. Hum Factors 53(5):517–527
Hoff KA, Bashir M (2015) Trust in automation: integrating empirical evidence on factors that influence trust. Hum Factors 57(3):407–434
Holländer K, Wintersberger P, Butz A (2019) Overtrust in external cues of automated vehicles: an experimental investigation. Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Association for Computing Machinery, 211–221
Huerta E, Glandon T, Petrides Y (2012) Framing, decision-aid systems, and culture: exploring influences on fraud investigations. Int J Account Inf Syst 13(4):316–333
Israelsen BW, Ahmed NR (2019) "Dave... I can assure you... that it's going to be all right..." A definition, case for, and survey of algorithmic assurances in human-autonomy trust relationships. ACM Comput Surveys 51(6):1–37
Joshi S, Šabanović S (2017) A communal perspective on shared robots as social catalysts. 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 732–738
Kahneman D, Knetsch JL, Thaler RH (1991) Anomalies: the endowment effect, loss aversion, and status quo bias. J Econ Perspect 5(1):193–206
Kahneman D (2011) Thinking, fast and slow. Penguin Books
Karl J (2021) How far are the unvaccinated willing to go? (September 2021). Retrieved 19 Jan 2022 from https://www.bloomberg.com/opinion/articles/2021-09-12/covid-19-vaccine-s-merits-are-simple-and-effective-kth60xpt
Kidd CD, Taggart W, Turkle S (2006) A sociable robot to encourage social interaction among the elderly. Proceedings 2006 IEEE International Conference on Robotics and Automation, 2006. ICRA 2006, pp 3972–3976
Kirby R, Forlizzi J, Simmons R (2010) Affective social robots. Robot Auton Syst 58(3):322–332
Kok BC, Soh H (2020) Trust in robots: challenges and opportunities. Curr Robot Rep 1(4):297–309
Kraus J (2020) Psychological processes in the formation and calibration of trust in automation. Ph.D. dissertation. Ulm University, Ulm. Retrieved 19 Feb 2023 from https://oparu.uni-ulm.de/xmlui/handle/123456789/32645
Kurohi R (2021) S’pore to invest $50m over 5 years to build digital trust. (July 2021). Retrieved 19 Jan 2022 from https://www.straitstimes.com/tech/spore-to-invest-50m-over-5-years-to-build-digital-trust
Küster D, Swiderska A, Gunkel D (2021) I saw it on YouTube! How online videos shape perceptions of mind, morality, and fears about robots. New Media Soc 23(11):3312–3331
Kwon M, Jung MF, Knepper RA (2016) Human expectations of social robots. 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 463–464
Landwehr JR, McGill AL, Herrmann A (2011) It’s got the look: the effect of friendly and aggressive “facial” expressions on product liking and sales. J Mark 75(3):132–146
Langer A, Feingold-Polak R, Mueller O, Kellmeyer P, Levy-Tzedek S (2019) Trust in socially assistive robots: considerations for use in rehabilitation. Neurosci Biobehav Rev 104:231–239
Lee J, Moray N (1992) Trust, control strategies and allocation of function in human-machine systems. Ergonomics 35(10):1243–1270
Lee JD, See KA (2004) Trust in automation: designing for appropriate reliance. Hum Factors 46(1):50–80
Lerner JS, Li Y, Valdesolo P, Kassam KS (2015) Emotion and decision making. Annu Rev Psychol 66(1):799–823
Lewis M, Sycara K, Walker P (2018) The role of trust in human-robot interaction. In: Abbass HA, Scholz J, Reid DJ (eds) Foundations of trusted autonomy. Springer International Publishing, Cham, pp 135–159
Lin H, Chi OH, Gursoy D (2020) Antecedents of customers’ acceptance of artificially intelligent robotic device use in hospitality services. J Hosp Market Manag 29(5):530–549
Madhavan P, Wiegmann DA (2007) Similarities and differences between human-human and human-automation trust: an integrative review. Theor Issues Ergon Sci 8(4):277–301
Malhotra P (2020) A relationship-centered and culturally informed approach to studying misinformation on COVID-19. Social Media + Society 6(3):1–4
Mayer RC, Davis JH, Schoorman FD (1995) An integrative model of organizational trust. Acad Manag Rev 20(3):709–734
Meltzoff AN, Prinz W (2002) The imitative mind: development, evolution and brain bases. Cambridge University Press, Cambridge
Merritt SM, Ilgen DR (2008) Not all trust is created equal: dispositional and history-based trust in human-automation interactions. Hum Factors 50(2):194–210
Miller L, Kraus J, Babel F, Baumann M (2021) More than a feeling—interrelation of trust layers in human-robot interaction and the role of user dispositions and state anxiety. Front Psychol 12:1–18
Möller J, Trilling D, Helberger N, van Es B (2018) Do not blame it on the algorithm: an empirical assessment of multiple recommender systems and their impact on content diversity. Inf Commun Soc 21(7):959–977
Mubin O, Wadibhasme K, Jordan P, Obaid M (2019) Reflecting on the presence of science fiction robots in computing literature. ACM Transact Hum-Robot Interaction 8(1):1–25
Nichols M (2019) Safety is a priority when designing collaborative robots. (March 2019). Retrieved 19 Jan 2022 from https://www.robotshop.com/community/blog/show/safety-is-a-priority-when-designing-collaborative-robots
O’Neill O (2013) What we don’t understand about trust. Video. (June 2013). Retrieved 31 Dec 2021 from https://www.ted.com/talks/onora_o_neill_what_we_don_t_understand_about_trust?language=eo
Olhede S, Wolfe PJ (2020) Blame the algorithm? Significance 17(5):12
Ostrowski AK, DiPaola D, Partridge E, Park HW, Breazeal C (2019) Older adults living with social robots: promoting social connectedness in long-term communities. IEEE Robot Autom Mag 26(2):59–70
Pfeiffer J, Gabriel A, Gandorfer M (2021) Understanding the public attitudinal acceptance of digital farming technologies: a nationwide survey in Germany. Agric Hum Values 38(1):107–128
Portugal D, Araújo AG, Couceiro MS (2021) Improving the robustness of a service robot for continuous indoor monitoring: an incremental approach. Int J Adv Rob Syst 18(3):1–15
PytlikZillig LM, Kimbrough CD (2016) Consensus on conceptualizations and definitions of trust: are we there yet? In: Shockley E, Neal TMS, PytlikZillig LM, Bornstein BH (eds) Interdisciplinary perspectives on trust: towards theoretical and methodological integration. Springer International Publishing, Cham, pp 17–47
Reuters Staff (2020) Singapore about-turns on masks, making them compulsory in virus fight. (April 2020). Retrieved 19 Feb 2023 from https://www.reuters.com/article/us-health-coronavirus-singapore/singapore-about-turns-on-masks-making-them-compulsory-in-virus-fight-idUSKCN21W1EW
Rosenthal-von der Pütten AM, Schulte FP, Eimler SC et al (2014) Investigations on empathy towards humans and robots using fMRI. Comput Hum Behav 33:201–212
Rousseau DM, Sitkin SB, Burt RS, Camerer C (1998) Not so different after all: a cross-discipline view of trust. Acad Manag Rev 23(3):393–404
Ryan S (2018) Trust: a recipe. Think 17(50):113–125
Sakura O (2022) Robot and ukiyo-e: implications to cultural varieties in human–robot relationships. AI & Soc 37(4):1563–1573
Samani H, Saadatian E, Pang N et al (2013) Cultural robotics: the culture of robotics and robotics in culture. Int J Adv Rob Syst 10(12):400
Schaefer KE, Chen JYC, Szalma JL, Hancock PA (2016) A meta-analysis of factors influencing the development of trust in automation: implications for understanding autonomy in future systems. Hum Factors 58(3):377–400
Seah J, Findlay M (2021) Communicating ethics across the AI ecosystem. Social Science Research Network, Rochester
Sharma VM, Klein A (2020) Consumer perceived value, involvement, trust, susceptibility to interpersonal influence, and intention to participate in online group buying. J Retail Consum Serv 52:101946
Shaw K (2018) Affectiva, SoftBank Robotics team up to broaden Pepper's emotional intelligence. (August 2021). Retrieved 19 Jan 2022 from https://www.roboticsbusinessreview.com/news/emotional-intelligence-affectiva-softbank-robotics-team-up/
Sheridan TB (1989) Trustworthiness of command and control systems. In: Ranta J (ed) Analysis, design and evaluation of man-machine systems 1988. Pergamon, Amsterdam, pp 427–431
Shields M, Neghaiwi BH (2021) Swiss will need COVID certificates to go to bars, restaurants. (September 2021). Retrieved 19 Jan 2022 from https://www.reuters.com/world/europe/swiss-cabinet-tightens-coronavirus-curbs-protect-hospitals-2021-09-08/
Shneiderman B (2020) Human-centered artificial intelligence: reliable, safe & trustworthy. Int J Hum-Comput Interact 36(6):495–504
Simon HA (1955) A behavioral model of rational choice. Q J Econ 69(1):99–118
Slovic P, Finucane ML, Peters E, MacGregor DG (2007) The affect heuristic. Eur J Oper Res 177(3):1333–1352
Soh H, Xie Y, Chen M, Hsu D (2020) Multi-task trust transfer for human–robot interaction. Int J Robot Res 39(2–3):233–249
Stokes CK, Lyons JB, Littlejohn K, Natarian J, Case E, Speranza N (2010) Accounting for the human in cyberspace: effects of mood on trust in automation. 2010 International Symposium on Collaborative Technologies and Systems, 180–187
Sullins JP (2008) Friends by design: a design philosophy for personal robotics technology. In: Kroes P, Vermaas PE, Light A, Moore SA (eds) Philosophy and design: from engineering to architecture. Springer, Netherlands, pp 143–157
Szollosy M (2015) Why are we afraid of robots? The role of projection in the popular conception of robots. In: Romportl J, Zackova E, Kelemen J (eds) Beyond artificial intelligence: the disappearing human-machine divide. Springer International Publishing, Cham, pp 121–131
Taddeo M (2020) On the risks of trusting artificial intelligence: the case of cybersecurity. Social Science Research Network, Rochester
Tapus A, Matarić MJ, Scassellati B (2007) The grand challenges in socially assistive robotics. IEEE Robot Autom Mag 14(1):35
Tversky A, Kahneman D (1973) Availability: a heuristic for judging frequency and probability. Cogn Psychol 5(2):207–232
Tversky A, Kahneman D (1992) Advances in prospect theory: cumulative representation of uncertainty. J Risk Uncertain 5(4):297–323
van der Linden S, Roozenbeek J, Compton J (2020) Inoculating against fake news about COVID-19. Front Psychol 11:566790
Vandemeulebroucke T, de Casterlé BD, Gastmans C (2018) How do older adults experience and perceive socially assistive robots in aged care: a systematic review of qualitative evidence. Aging Ment Health 22(2):149–167
Wagner AR, Robinette P, Howard A (2018) Modelling the human-robot trust phenomenon: a conceptual framework based on risk. ACM Transact Interact Intell Syst 8(4):1–24
Wang X, Elkin D, Chan EKF, Lam WF, Chan KN (2021) Ethical surveillance: developing smart city surveillance through civic participation. (September 2021). Presented at the Ethics in AI Research Initiative for the Asia Pacific Roundtable
Wee A, Findlay MJ (2020) AI and data use: surveillance technology and community disquiet in the age of COVID-19. (November 2020). Retrieved 19 Jan 2022 from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3715993
Welge J, Hassenzahl M (2016) Better than human: about the psychological superpowers of robots. Social Robotics, Springer International Publishing, pp 993–1002
Winkle K, Caleb-Solly P, Turton A, Bremner P (2018) Social robots for engagement in rehabilitative therapies: design implications from a study with therapists. Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery, pp 289–297
Workman M (2005) Expert decision support system use, disuse, and misuse: a study using the theory of planned behavior. Comput Hum Behav 21(2):211–231
Wright J (2021) Suspect AI: vibraimage, emotion recognition technology and algorithmic opacity. Sci Technol Soc 1–20
Yagoda RE, Gillan DJ (2012) You want me to trust a ROBOT? The development of a human-robot interaction trust scale. Int J Soc Robot 4(3):235–248
Złotowski J, Proudfoot D, Yogeeswaran K, Bartneck C (2015) Anthropomorphism: opportunities and challenges in human-robot interaction. Int J Soc Robot 7(3):347–360
Funding
This research is supported by the National Research Foundation, Singapore under its Emerging Areas Research Projects (EARP) Funding Initiative. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore.
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Zhang, W., Wong, W. & Findlay, M. Trust and robotics: a multi-staged decision-making approach to robots in community. AI & Soc 39, 2463–2478 (2024). https://doi.org/10.1007/s00146-023-01705-1