Abstract
Transparent machine learning (ML) is often argued to increase trust in the predictions of algorithms. However, the growth of new interpretability approaches has not been accompanied by a growth in studies investigating how the interaction between humans and Artificial Intelligence (AI) systems benefits from transparency. The right level of transparency can increase trust in an AI system, while inappropriate levels of transparency can lead to algorithmic bias. In this study we demonstrate that, depending on certain personality traits, humans exhibit different susceptibilities to algorithmic bias. Our main finding is that susceptibility to algorithmic bias significantly depends on annotators' affinity to risk. These findings help to shed light on the previously underrepresented role of human personality in human-AI interaction. We believe that taking these aspects into account when building transparent AI systems can help to ensure more responsible usage of AI systems.
P. Schmidt and F. Biessmann contributed equally.
Copyright information
© 2020 IFIP International Federation for Information Processing
Cite this paper
Schmidt, P., Biessmann, F. (2020). Calibrating Human-AI Collaboration: Impact of Risk, Ambiguity and Transparency on Algorithmic Bias. In: Holzinger, A., Kieseberg, P., Tjoa, A., Weippl, E. (eds) Machine Learning and Knowledge Extraction. CD-MAKE 2020. Lecture Notes in Computer Science(), vol 12279. Springer, Cham. https://doi.org/10.1007/978-3-030-57321-8_24
Print ISBN: 978-3-030-57320-1
Online ISBN: 978-3-030-57321-8