
Behavioural evidence for a transparency–efficiency tradeoff in human–machine cooperation

Abstract

Recent advances in artificial intelligence and deep learning have made it possible for bots to pass as humans, as is the case with Google Duplex—an automated voice assistant capable of generating realistic speech that can fool humans into thinking they are talking to another human. Such technologies have drawn sharp criticism due to their ethical implications, and have fuelled a push towards transparency in human–machine interactions. Despite the legitimacy of these concerns, it remains unclear whether bots would compromise their efficiency by disclosing their true nature. Here, we conduct a behavioural experiment with participants playing a repeated prisoner’s dilemma game with a human or a bot, after being given either true or false information about the nature of their associate. We find that bots do better than humans at inducing cooperation, but that disclosing their true nature negates this superior efficiency. Human participants do not recover from their prior bias against bots despite experiencing cooperative attitudes exhibited by bots over time. These results highlight the need to set standards for the efficiency cost we are willing to pay in order for machines to be transparent about their non-human nature.
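To make the game structure concrete, the sketch below simulates a repeated prisoner’s dilemma between a bot and a simulated partner. The payoff values, the tit-for-tat bot and the “wary partner” model (including the distrust parameter) are illustrative assumptions chosen for exposition only; they are not the payoff matrix, bot algorithm or participant behaviour used in the experiment.

import random

# Standard prisoner's dilemma payoffs (illustrative values, not the study's):
# keys are (bot_move, partner_move), values are (bot_payoff, partner_payoff).
PAYOFFS = {
    ('C', 'C'): (3, 3),
    ('C', 'D'): (0, 5),
    ('D', 'C'): (5, 0),
    ('D', 'D'): (1, 1),
}

def tit_for_tat(partner_moves):
    # Illustrative bot: cooperate first, then mirror the partner's last move.
    return partner_moves[-1] if partner_moves else 'C'

def wary_partner(bot_moves, distrust=0.3):
    # Toy stand-in for a participant who retaliates after being exploited
    # and otherwise defects with some baseline probability (an assumption).
    if bot_moves and bot_moves[-1] == 'D':
        return 'D'
    return 'D' if random.random() < distrust else 'C'

def play_repeated_game(rounds=50, seed=0):
    random.seed(seed)
    bot_moves, partner_moves = [], []
    bot_score = partner_score = 0
    for _ in range(rounds):
        b = tit_for_tat(partner_moves)   # bot reacts to the partner's history
        p = wary_partner(bot_moves)      # partner reacts to the bot's history
        b_pay, p_pay = PAYOFFS[(b, p)]
        bot_score += b_pay
        partner_score += p_pay
        bot_moves.append(b)
        partner_moves.append(p)
    return bot_score, partner_score

print(play_repeated_game())

In this toy model, one way to read the transparency–efficiency tradeoff is that disclosing the bot’s nature raises the partner’s distrust parameter while leaving the bot’s strategy unchanged, so the same cooperative strategy earns less once it is labelled as a machine.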


Fig. 1: Prejudice against purported bots early in the game.
Fig. 2: The tradeoff between efficiency and transparency.
Fig. 3: Bots learn to expect less from humans, especially when they are transparent.


Data availability

The data that support the findings of this study have been deposited in the Open Science Framework (https://doi.org/10.17605/OSF.IO/AK3TF).

Code availability

The software and all code used to generate the findings of this study have been deposited in the Open Science Framework (https://doi.org/10.17605/OSF.IO/AK3TF).


Acknowledgements

We thank E. Awad for his help running the experiments on MTurk. J.-F.B. acknowledges support from the ANR-Labex Institute for Advanced Study in Toulouse, the ANR-3IA Artificial and Natural Intelligence Toulouse Institute, and the grant ANR-17-EURE-0010 Investissements d’Avenir.

Author information


Contributions

All authors conceived and designed the experiments. F.I.-O. and Z.S. conducted the experiments. F.I.-O. and J.-F.B. analysed the data and produced the figures and tables. F.I.-O., J.-F.B., J.C., I.R. and T.R. wrote the manuscript.

Corresponding authors

Correspondence to Iyad Rahwan or Talal Rahwan.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information


About this article


Cite this article

Ishowo-Oloko, F., Bonnefon, JF., Soroye, Z. et al. Behavioural evidence for a transparency–efficiency tradeoff in human–machine cooperation. Nat Mach Intell 1, 517–521 (2019). https://doi.org/10.1038/s42256-019-0113-5

