
Calibrating Human-AI Collaboration: Impact of Risk, Ambiguity and Transparency on Algorithmic Bias

  • Conference paper
  • First Online:
  • Published in: Machine Learning and Knowledge Extraction (CD-MAKE 2020)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12279)

Abstract

Transparent Machine Learning (ML) is often argued to increase trust in algorithmic predictions; however, the growth of new interpretability approaches has not been accompanied by a corresponding growth in studies investigating how the interaction between humans and Artificial Intelligence (AI) systems benefits from transparency. The right level of transparency can increase trust in an AI system, while inappropriate levels can lead to algorithmic bias. In this study we demonstrate that, depending on certain personality traits, humans exhibit different susceptibilities to algorithmic bias. Our main finding is that susceptibility to algorithmic bias depends significantly on annotators' affinity to risk. These findings shed light on the previously underexamined role of human personality in human-AI interaction. We believe that taking these aspects into account when building transparent AI systems can help ensure more responsible use of such systems.

P. Schmidt and F. Biessmann—Equal contribution.
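To make the main finding concrete, the sketch below is a minimal, hypothetical illustration (not the authors' protocol or analysis) of how a dependence between annotators' risk affinity and their susceptibility to algorithmic bias could be tested. The simulated data, the variable names, and the direction of the effect are assumptions made purely for illustration.

```python
# Hypothetical sketch: test whether susceptibility to algorithmic bias
# depends on annotators' risk affinity. All data below are simulated;
# this is not the paper's experimental protocol or result.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
n_annotators = 200

# Assumed per-annotator risk-affinity score in [0, 1] (e.g., elicited
# with a lottery-choice task).
risk_affinity = rng.uniform(0.0, 1.0, n_annotators)

# Assumed susceptibility: fraction of trials in which an annotator adopted
# an incorrect AI suggestion. Simulated so that risk-averse annotators
# (low risk_affinity) defer to the AI more often, plus Gaussian noise;
# this direction of effect is an assumption of the sketch.
susceptibility = np.clip(
    0.6 - 0.3 * risk_affinity + rng.normal(0.0, 0.1, n_annotators),
    0.0, 1.0,
)

# Simple linear test of the dependence between the two quantities.
result = stats.linregress(risk_affinity, susceptibility)
print(f"slope={result.slope:.3f}, r={result.rvalue:.3f}, p={result.pvalue:.3g}")
```

On real annotation logs, one would replace the simulated arrays with measured scores; a regression of this kind (or a mixed-effects model over individual trials) then quantifies how strongly risk affinity predicts deference to the model's suggestions.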



Notes

  1. https://www.mturk.com/.
  2. https://www.imdb.com/conditions.


Author information

Corresponding author

Correspondence to Philipp Schmidt.



Copyright information

© 2020 IFIP International Federation for Information Processing

About this paper


Cite this paper

Schmidt, P., Biessmann, F. (2020). Calibrating Human-AI Collaboration: Impact of Risk, Ambiguity and Transparency on Algorithmic Bias. In: Holzinger, A., Kieseberg, P., Tjoa, A., Weippl, E. (eds) Machine Learning and Knowledge Extraction. CD-MAKE 2020. Lecture Notes in Computer Science, vol 12279. Springer, Cham. https://doi.org/10.1007/978-3-030-57321-8_24

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-57321-8_24

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-57320-1

  • Online ISBN: 978-3-030-57321-8

  • eBook Packages: Computer Science, Computer Science (R0)
