Abstract
The debate on whether or not humans have free will has raged for centuries. Although our current understanding of the laws of nature offers good arguments against the possibility of human free will, most people believe they have it. This discrepancy calls for an explanation. If we accept that we do not have free will, we face two problems: (1) freedom is a very commonly used concept that everyone intuitively understands, so what are we actually referring to when we say that an action or choice is “free” or not? And (2) why is the belief in free will so common? Where does this belief come from, and what is its purpose, if any? In this paper, we examine these questions from the perspective of reinforcement learning (RL). RL is a framework originally developed for training artificial intelligence agents, but it can also serve as a computational model of human decision making and learning. From this perspective, we propose that the first problem can be answered by observing that people’s common-sense understanding of freedom is closely related to the information entropy of an RL agent’s normalized action values, and that the second can be explained by the necessity for agents, when dealing with the temporal credit assignment problem, to model themselves as if they could have taken decisions other than those they actually took. Put simply, we suggest that applying the RL framework as a model of human learning makes it evident that, in order to learn efficiently and be intelligent, we need to view ourselves as if we have free will.
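To make the first claim concrete, the following is a minimal Python sketch of the proposed measure: the information entropy of an agent's softmax-normalized action values. It is not taken from the paper; the function name `action_entropy`, the temperature parameter, and the example values are illustrative assumptions.

```python
import numpy as np

def action_entropy(q_values, temperature=1.0):
    """Entropy of the softmax-normalized action values for one state.

    High entropy: the values barely distinguish the actions, which the
    paper relates to the everyday sense of a 'free' choice.
    Low entropy: one action clearly dominates, and the choice feels forced.
    """
    q = np.asarray(q_values, dtype=float) / temperature
    q -= q.max()                       # shift for numerical stability
    p = np.exp(q) / np.exp(q).sum()    # softmax normalization
    return -np.sum(p * np.log(p + 1e-12))

# A near-uniform preference looks "free"; a dominated one does not.
print(action_entropy([1.0, 1.1, 0.9]))   # close to log(3) ≈ 1.10
print(action_entropy([10.0, 0.0, 0.0]))  # close to 0
```

Under these assumptions, the entropy is maximal when all actions are valued equally and approaches zero when a single action dominates, which is the intuition the abstract appeals to.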
Notes
1. For the sake of brevity, we here only give a high-level description of model-based RL. For a more complete formulation, see [12].
2. The exact method of normalization is not important. We here ignore the problem that the softmax function is not scale invariant; see the sketch following these notes for an illustration.
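As an illustration of the scale-invariance issue mentioned in Note 2, the short Python sketch below (the `softmax` helper and the example values are our own, not from the paper) shows that multiplying all action values by a constant changes the resulting softmax distribution, and hence any entropy computed from it.

```python
import numpy as np

def softmax(x):
    """Softmax normalization of a vector of action values."""
    x = np.asarray(x, dtype=float)
    x = x - x.max()            # shift for numerical stability
    e = np.exp(x)
    return e / e.sum()

# Scaling all action values by a constant changes the softmax output
# (and therefore its entropy): softmax is not scale invariant.
print(softmax([1.0, 2.0, 3.0]))     # ≈ [0.09, 0.24, 0.67]
print(softmax([10.0, 20.0, 30.0]))  # ≈ [2.1e-09, 4.5e-05, 1.0]
```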
References
Alquist, J.L., Ainsworth, S.E., Baumeister, R.F., Daly, M., Stillman, T.F.: The making of might-have-beens: effects of free will belief on counterfactual thinking. Pers. Soc. Psychol. Bull. 41(2), 268–283 (2015). https://doi.org/10.1177/0146167214563673
Deery, O.: Why people believe in indeterminist free will. Philos. Stud. 172(8), 2033–2054 (2014). https://doi.org/10.1007/s11098-014-0396-7
Deutschländer, R., Pauen, M., Haynes, J.D.: Probing folk-psychology: do Libet-style experiments reflect folk intuitions about free action? Conscious. Cogn. 48, 232–245 (2017). https://doi.org/10.1016/j.concog.2016.11.004
Dennett, D.C.: Elbow Room: The Varieties of Free Will Worth Wanting. MIT Press, Cambridge (1984)
Guez, A., et al.: Value-driven Hindsight Modelling (2020)
Harutyunyan, A., et al.: Hindsight credit assignment. In: Advances in Neural Information Processing Systems (2019)
Lai, T., Robbins, H.: Asymptotically efficient adaptive allocation rules. Adv. Appl. Math. 6(1), 4–22 (1985)
McKenna, M., Coates, D.J.: Compatibilism. In: Zalta, E.N. (ed.) The Stanford Encyclopedia of Philosophy (Fall 2021 Edition) (2021)
Mesnard, T., et al.: Counterfactual Credit Assignment in Model-Free Reinforcement Learning (2020)
Minsky, M.L.: Steps toward artificial intelligence. In: Feigenbaum, E.A., Feldman, J. (eds.) Computers and Thought, pp. 406–450. McGraw-Hill, New York (1963)
Pereboom, D.: Living without Free Will. Cambridge University Press, Cambridge (2001)
Rehn, E.M.: Free will belief as a consequence of model-based reinforcement learning. Preprint (2021). https://doi.org/10.48550/arXiv.2111.08435
Silver, D., Singh, S., Precup, D., Sutton, R.S.: Reward is enough. Artif. Intell. 299, 103535 (2021). https://doi.org/10.1016/j.artint.2021.103535
Thompson, W.R.: On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika 25(3/4), 285–294 (1933)
Wisniewski, D., Deutschländer, R., Haynes, J.D.: Free will beliefs are better predicted by dualism than determinism beliefs across different cultures. PLoS ONE 14(9), e0221617 (2019)