Abstract
Recent advances in artificial intelligence and deep learning have made it possible for bots to pass as humans, as is the case with the recent Google Duplex, an automated voice assistant capable of generating realistic speech that can fool humans into thinking they are talking to another human. Such technologies have drawn sharp criticism due to their ethical implications, and have fueled a push towards transparency in human–machine interactions. Despite the legitimacy of these concerns, it remains unclear whether bots would compromise their efficiency by disclosing their true nature. Here, we conduct a behavioural experiment with participants playing a repeated prisoner's dilemma game with a human or a bot, after being given either true or false information about the nature of their associate. We find that bots do better than humans at inducing cooperation, but that disclosing their true nature negates this superior efficiency. Human participants do not recover from their prior bias against bots despite experiencing cooperative attitudes exhibited by bots over time. These results highlight the need to set standards for the efficiency cost we are willing to pay in order for machines to be transparent about their non-human nature.
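For readers unfamiliar with the game, the sketch below illustrates the round structure of a repeated prisoner's dilemma between a bot and a simulated partner. The payoff values and the tit-for-tat bot policy are illustrative assumptions only; they are not the payoffs or the bot algorithm used in the experiment reported here.

```python
# A minimal sketch of the round structure of a repeated prisoner's dilemma.
# The payoffs and the bot's tit-for-tat policy are illustrative assumptions,
# not the values or algorithm used in the study.
import random

# Payoffs indexed by (player_move, partner_move); 'C' = cooperate, 'D' = defect.
# Values follow the standard prisoner's dilemma ordering T > R > P > S.
PAYOFFS = {
    ('C', 'C'): (3, 3),
    ('C', 'D'): (0, 5),
    ('D', 'C'): (5, 0),
    ('D', 'D'): (1, 1),
}

def tit_for_tat(partner_moves):
    """Stand-in bot policy: cooperate first, then mirror the partner's last move."""
    return 'C' if not partner_moves else partner_moves[-1]

def random_partner(_bot_moves):
    """Placeholder for a human participant: chooses at random."""
    return random.choice(['C', 'D'])

def play_repeated_game(rounds=20):
    """Play a fixed number of rounds and return cumulative payoffs (bot, partner)."""
    bot_moves, partner_moves = [], []
    bot_total = partner_total = 0
    for _ in range(rounds):
        bot_move = tit_for_tat(partner_moves)
        partner_move = random_partner(bot_moves)
        bot_pay, partner_pay = PAYOFFS[(bot_move, partner_move)]
        bot_total += bot_pay
        partner_total += partner_pay
        bot_moves.append(bot_move)
        partner_moves.append(partner_move)
    return bot_total, partner_total

if __name__ == '__main__':
    print(play_repeated_game())
```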
Data availability
The data that support the findings of this study have been deposited in the Open Science Framework (https://doi.org/10.17605/OSF.IO/AK3TF).
Code availability
The software and all code used to generate the findings of this study have been deposited in the Open Science Framework (https://doi.org/10.17605/OSF.IO/AK3TF).
Acknowledgements
We thank E. Awad for his help running the experiments on MTurk. J.-F.B. acknowledges support from the ANR-Labex Institute for Advanced Study in Toulouse, the ANR-3IA Artificial and Natural Intelligence Toulouse Institute, and the grant ANR-17-EURE-0010 Investissements d'Avenir.
Author information
Authors and Affiliations
Contributions
All authors conceived and designed the experiments. F.I.-O. and Z.S. conducted the experiments. F.I.-O. and J.-F.B. analysed the data and produced the figures and tables. F.I.-O., J.-F.B., J.C., I.R. and T.R. wrote the manuscript.
Corresponding authors
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Rights and permissions
About this article
Cite this article
Ishowo-Oloko, F., Bonnefon, J.-F., Soroye, Z. et al. Behavioural evidence for a transparency–efficiency tradeoff in human–machine cooperation. Nat. Mach. Intell. 1, 517–521 (2019). https://doi.org/10.1038/s42256-019-0113-5
Received:
Accepted:
Published:
Issue Date:
DOI: https://doi.org/10.1038/s42256-019-0113-5
This article is cited by
- Perception of experience influences altruism and perception of agency influences trust in human–machine interactions. Scientific Reports (2024)
- A new sociology of humans and machines. Nature Human Behaviour (2024)
- Quantifying the use and potential benefits of artificial intelligence in scientific research. Nature Human Behaviour (2024)
- Engineering Optimal Cooperation Levels with Prosocial Autonomous Agents in Hybrid Human-Agent Populations: An Agent-Based Modeling Approach. Computational Economics (2024)
- The effects of social presence on cooperative trust with algorithms. Scientific Reports (2023)