DOI: 10.1145/3514094.3534139
Research article | Open access

Artificial Moral Advisors: A New Perspective from Moral Psychology

Published: 27 July 2022
    Abstract

    Philosophers have recently put forward the possibility of achieving moral enhancement through artificial intelligence (e.g., Giubilini and Savulescu's version [32]), proposing various forms of "artificial moral advisor" (AMA) to help people make moral decisions without the drawbacks of human cognitive limitations. In this paper, we provide a new perspective on the AMA, drawing on empirical evidence from moral psychology to point out several challenges to these proposals that have been largely neglected by AI ethicists. In particular, we suggest that the AMA as currently conceived is fundamentally misaligned with human moral psychology: it incorrectly assumes a static framework of moral values underpinning the AMA's attunement to individual users, and people's reactions and subsequent (in)actions in response to the AMA's suggestions will likely diverge substantially from expectations. As such, we note the necessity of a coherent understanding of human moral psychology for the future development of AMAs.

    Supplementary Material

    MP4 File (AIES22-aies044.mp4)
    Presentation video for "Artificial Moral Advisors: A New Perspective from Moral Psychology". Caption autogenerated by Microsoft PowerPoint for accessibility.

    References

    [1] Anderson, R.A. et al. 2020. A theory of moral praise. Trends in Cognitive Sciences. 24, 9 (Sep. 2020), 694--703.
    [2] Anderson, S.L. and Anderson, M. 2011. A prima facie duty approach to machine ethics. Machine ethics. M. Anderson and S.L. Anderson, eds. Cambridge University Press. 476--492.
    [3] Baron, J. 1995. A psychological view of moral intuition. The Harvard Review of Philosophy. 5, 1 (1995), 36--40.
    [4] Baron, J. 1992. The effect of normative beliefs on anticipated emotions. Journal of Personality and Social Psychology. 63, 2 (1992), 320--330.
    [5] Baron, J. and Spranca, M. 1997. Protected values. Organizational Behavior and Human Decision Processes. 70, 1 (Apr. 1997), 1--16.
    [6] Beauchamp, T.L. and Childress, J.F. 2019. Principles of biomedical ethics. Oxford University Press.
    [7] Bhui, R. and Gershman, S.J. 2018. Decision by sampling implements efficient coding of psychoeconomic functions. Psychological Review. 125, 6 (Nov. 2018), 985--1001.
    [8] Bigman, Y.E. and Gray, K. 2018. People are averse to machines making moral decisions. Cognition. 181, (Dec. 2018), 21--34.
    [9] Biller-Andorno, N. and Biller, A. 2019. Algorithm-aided prediction of patient preferences: An ethics sneak peek. New England Journal of Medicine. 381, 15 (Oct. 2019), 1480--1485.
    [10] Bonnefon, J.-F. et al. 2016. The social dilemma of autonomous vehicles. Science. 352, 6293 (Jun. 2016), 1573--1576.
    [11] Bostyn, D.H. and Roets, A. 2016. The morality of action: The asymmetry between judgments of praise and blame in the action--omission effect. Journal of Experimental Social Psychology. 63, (Mar. 2016), 19--25.
    [12] Brewer, P.R. 2012. Polarisation in the USA: Climate change, party politics, and public opinion in the Obama era. European Political Science. 11, 1 (Mar. 2012), 7--17.
    [13] Calvillo, D.P. et al. 2020. Political ideology predicts perceptions of the threat of COVID-19 (and susceptibility to fake news about it). Social Psychological and Personality Science. 11, 8 (Nov. 2020), 1119--1128.
    [14] Castelo, N. et al. 2019. Task-dependent algorithm aversion. Journal of Marketing Research. 56, 5 (Oct. 2019), 809--825.
    [15] Cervantes, J.-A. et al. 2020. Artificial moral agents: A survey of the current status. Science and Engineering Ethics. 26, 2 (Apr. 2020), 501--532.
    [16] Costello, M. and Hawdon, J. 2020. Hate speech in online spaces. The Palgrave handbook of international cybercrime and cyberdeviance. T.J. Holt and A.M. Bossler, eds. Springer International Publishing. 1397--1416.
    [17] Critcher, C.R. and Dunning, D. 2011. No good deed goes unquestioned: Cynical reconstruals maintain belief in the power of self-interest. Journal of Experimental Social Psychology. 47, 6 (Nov. 2011), 1207--1213.
    [18] Crockett, M.J. 2013. Models of morality. Trends in Cognitive Sciences. 17, 8 (Aug. 2013), 363--366.
    [19] Crockett, M.J. et al. 2021. The relational logic of moral inference. Advances in experimental social psychology. Elsevier. 1--64.
    [20] Cushman, F. 2013. Action, outcome, and value: A dual-system framework for morality. Personality and Social Psychology Review. 17, 3 (Aug. 2013), 273--292.
    [21] Cushman, F. et al. 2006. The role of conscious reasoning and intuition in moral judgment: Testing three principles of harm. Psychological Science. 17, 12 (Dec. 2006), 1082--1089.
    [22] Dancy, J. 2017. Moral particularism. The Stanford Encyclopedia of Philosophy. E.N. Zalta, ed. Metaphysics Research Lab, Stanford University.
    [23] Daw, N.D. et al. 2005. Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nature Neuroscience. 8, 12 (Dec. 2005), 1704--1711.
    [24] Dietrich, E. 2011. Homo sapiens 2.0. Machine ethics. M. Anderson and S.L. Anderson, eds. Cambridge University Press. 531--538.
    [25] Douglas, T. 2008. Moral enhancement. Journal of Applied Philosophy. 25, 3 (Aug. 2008), 228--245.
    [26] Drummond, C. and Fischhoff, B. 2017. Individuals with greater science literacy and education have more polarized beliefs on controversial science topics. Proceedings of the National Academy of Sciences. 114, 36 (Sep. 2017), 9587--9592.
    [27] Evans, C. and Kasirzadeh, A. 2021. User tampering in reinforcement learning recommender systems. arXiv:2109.04083 [cs]. (Sep. 2021).
    [28] Fenton, E. 2010. The perils of failing to enhance: A response to Persson and Savulescu. Journal of Medical Ethics. 36, 3 (Mar. 2010), 148--151.
    [29] Firth, R. 1952. Ethical absolutism and the ideal observer. Philosophy and Phenomenological Research. 12, 3 (Mar. 1952), 317.
    [30] Formosa, P. and Ryan, M. 2021. Making moral machines: Why we need artificial moral agents. AI & Society. 36, 3 (Sep. 2021), 839--851.
    [31] Gershman, S.J. et al. 2014. Retrospective revaluation in sequential decision making: A tale of two systems. Journal of Experimental Psychology: General. 143, 1 (2014), 182--194.
    [32] Giubilini, A. and Savulescu, J. 2018. The artificial moral advisor. The "ideal observer" meets artificial intelligence. Philosophy & Technology. 31, 2 (Jun. 2018), 169--188.
    [33] Glinitzer, K. et al. 2021. Learning facts about migration: Politically motivated learning of polarizing information about refugees. Political Psychology. 42, 6 (Mar. 2021), 1053--1069.
    [34] Graham, J. et al. 2011. Mapping the moral domain. Journal of Personality and Social Psychology. 101, 2 (2011), 366--385.
    [35] Greene, J.D. et al. 2001. An fMRI investigation of emotional engagement in moral judgment. Science. 293, 5537 (Sep. 2001), 2105--2108.
    [36] Greene, J.D. 2016. Our driverless dilemma. Science. 352, 6293 (Jun. 2016), 1514--1515.
    [37] Greene, J.D. 2015. The cognitive neuroscience of moral judgment and decision making. The moral brain: A multidisciplinary perspective. J. Decety and T. Wheatley, eds. The MIT Press. 197--220.
    [38] Greene, J.D. et al. 2004. The neural bases of cognitive conflict and control in moral judgment. Neuron. 44, 2 (Oct. 2004), 389--400.
    [39] Guglielmo, S. and Malle, B.F. 2019. Asymmetric morality: Blame is more differentiated and more extreme than praise. PLOS ONE. 14, 3 (Mar. 2019), e0213544.
    [40] Haidt, J. 2001. The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review. 108, 4 (2001), 814--834.
    [41] Haidt, J. and Joseph, C. 2004. Intuitive ethics: How innately prepared intuitions generate culturally variable virtues. Daedalus. 133, 4 (Sep. 2004), 55--66.
    [42] Hainmueller, J. and Hiscox, M.J. 2007. Educated preferences: Explaining attitudes toward immigration in Europe. International Organization. 61, 2 (Apr. 2007).
    [43] Hauskeller, M. 2015. Being good enough to prevent the worst. Journal of Medical Ethics. 41, 4 (Apr. 2015), 289--290.
    [44] Herzog, C. 2021. Three risks that caution against a premature implementation of artificial moral agents for practical and economical use. Science and Engineering Ethics. 27, 1 (Feb. 2021), 3.
    [45] Hofmann, W. et al. 2014. Morality in everyday life. Science. 345, 6202 (Sep. 2014), 1340--1343.
    [46] Hubbard, R. and Greenblum, J. 2020. Surrogates and artificial intelligence: Why AI trumps family. Science and Engineering Ethics. 26, 6 (Dec. 2020), 3217--3227.
    [47] Jenni, K. and Loewenstein, G. 1997. Explaining the identifiable victim effect. Journal of Risk and Uncertainty. 14, 3 (1997), 235--257.
    [48] Jiang, L. et al. 2021. Delphi: Towards machine ethics and norms. arXiv:2110.07574 [cs]. (Oct. 2021).
    [49] Kahan, D.M. et al. 2017. Motivated numeracy and enlightened self-government. Behavioural Public Policy. 1, 1 (May 2017), 54--86.
    [50] Kahan, D.M. et al. 2012. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change. 2, 10 (Oct. 2012), 732--735.
    [51] Kahne, J. et al. 2016. Redesigning civic education for the digital age: Participatory politics and the pursuit of democratic engagement. Theory & Research in Social Education. 44, 1 (Jan. 2016), 1--35.
    [52] Kahneman, D. et al. eds. 1982. Judgment under uncertainty: Heuristics and biases. Cambridge University Press.
    [53] Kalmoe, N.P. 2020. Uses and abuses of ideology in political psychology. Political Psychology. 41, 4 (Aug. 2020), 771--793.
    [54] Klincewicz, M. 2019. Robotic nudges for moral improvement through Stoic practice. Techné: Research in Philosophy and Technology. 23, 3 (2019), 425--455.
    [55] Kool, W. et al. 2018. Competition and cooperation between multiple reinforcement learning systems. Goal-directed decision making. Elsevier. 153--178.
    [56] Kool, W. et al. 2018. Planning complexity registers as a cost in metacontrol. Journal of Cognitive Neuroscience. 30, 10 (Oct. 2018), 1391--1404.
    [57] Kool, W. et al. 2016. When does model-based control pay off? PLOS Computational Biology. 12, 8 (Aug. 2016), e1005090.
    [58] Kurdi, B. et al. 2019. Model-free and model-based learning processes in the updating of explicit and implicit evaluations. Proceedings of the National Academy of Sciences. 116, 13 (Mar. 2019), 6035--6044.
    [59] Lamanna, C. and Byrne, L. 2018. Should artificial intelligence augment medical decision making? The case for an autonomy algorithm. AMA Journal of Ethics. 20, 9 (Sep. 2018), E902--910.
    [60] Lara, F. 2021. Why a virtual assistant for moral enhancement when we could have a Socrates? Science and Engineering Ethics. 27, 4 (Aug. 2021), 42.
    [61] Lara, F. and Deckers, J. 2020. Artificial intelligence as a Socratic assistant for moral enhancement. Neuroethics. 13, 3 (Oct. 2020), 275--287.
    [62] Lee, M.K. 2018. Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society. 5, 1 (Jan. 2018), 205395171875668.
    [63] Levy, R. 2021. Social media, news consumption, and polarization: Evidence from a field experiment. American Economic Review. 111, 3 (Mar. 2021), 831--870.
    [64] Lewandowsky, S. and Oberauer, K. 2016. Motivated rejection of science. Current Directions in Psychological Science. 25, 4 (Aug. 2016), 217--222.
    [65] Liu, Y. and Moore, A. 2022. A Bayesian multilevel analysis of belief alignment effect predicting human moral intuitions of artificial intelligence judgements [Forthcoming]. Proceedings of the 44th Annual Meeting of the Cognitive Science Society (Toronto, Canada, 2022).
    [66] Lohr, S. 2021. What ever happened to IBM's Watson? The New York Times.
    [67] Malle, B.F. 2016. Integrating robot ethics and machine morality: The study and design of moral competence in robots. Ethics and Information Technology. 18, 4 (Dec. 2016), 243--256.
    [68] Martinho, A. et al. 2021. Perspectives about artificial moral agents. AI and Ethics. 1, 4 (Nov. 2021), 477--490.
    [69] Mathew, B. et al. 2019. Spread of hate speech in online social media. Proceedings of the 10th ACM Conference on Web Science (Boston, Massachusetts, USA, Jun. 2019), 173--182.
    [70] McDougall, R.J. 2019. Computer knows best? The need for value-flexibility in medical AI. Journal of Medical Ethics. 45, 3 (Mar. 2019), 156--160.
    [71] Monroe, A.E. and Malle, B.F. 2019. People systematically update moral judgments of blame. Journal of Personality and Social Psychology. 116, 2 (Feb. 2019), 215--236.
    [72] Moor, J.H. 2006. The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems. 21, 4 (Jul. 2006), 18--21.
    [73] Moore, A. et al. 2021. Trust in information, political identity and the brain: An interdisciplinary fMRI study. Philosophical Transactions of the Royal Society B: Biological Sciences. 376, 1822 (Apr. 2021), 20200140.
    [74] Moore, A.B. et al. 2011. In defense of the personal/impersonal distinction in moral psychology research: Cross-cultural validation of the dual process model of moral judgment. Judgment and Decision Making. 6, 3 (2011), 10.
    [75] Moore, A.B. et al. 2008. Who shalt not kill? Individual differences in working memory capacity, executive control, and moral judgment. Psychological Science. 19, 6 (Jun. 2008), 549--557.
    [76] Pascale, C.-M. 2019. The weaponization of language: Discourses of rising right-wing authoritarianism. Current Sociology. 67, 6 (Oct. 2019), 898--917.
    [77] Patzelt, E.H. et al. 2019. Incentives boost model-based control across a range of severity on several psychiatric constructs. Biological Psychiatry. 85, 5 (Mar. 2019), 425--433.
    [78] Perkowitz, S. 2021. The bias in the machine: Facial recognition technology and racial disparities. MIT Case Studies in Social and Ethical Responsibilities of Computing. (Feb. 2021).
    [79] Persson, I. and Savulescu, J. 2013. Getting moral enhancement right: The desirability of moral bioenhancement. Bioethics. 27, 3 (Mar. 2013), 124--131.
    [80] Persson, I. and Savulescu, J. 2008. The perils of cognitive enhancement and the urgent imperative to enhance the moral character of humanity. Journal of Applied Philosophy. 25, 3 (Aug. 2008), 162--177.
    [81] Persson, I. and Savulescu, J. 2011. Unfit for the future? Human nature, scientific progress, and the need for moral enhancement. Enhancing human capacities. J. Savulescu et al., eds. Blackwell Publishing Ltd. 486--500.
    [82] Pizarro, D.A. and Bloom, P. 2003. The intelligence of the moral intuitions: A comment on Haidt (2001). Psychological Review. 110, 1 (2003), 193--196.
    [83] Rakic, V. 2014. Voluntary moral enhancement and the survival-at-any-cost bias. Journal of Medical Ethics. 40, 4 (Apr. 2014), 246--250.
    [84] Ross, C. and Swetlitz, I. 2017. IBM pitched its Watson supercomputer as a revolution in cancer care. It's nowhere close. STAT.
    [85] Sartre, J.-P. 1946. Existentialism is a humanism.
    [86] Savulescu, J. et al. 2015. The moral imperative to continue gene editing research on human embryos. Protein & Cell. 6, 7 (Jul. 2015), 476--479.
    [87] Savulescu, J. and Maslen, H. 2015. Moral enhancement and artificial intelligence: Moral AI? Beyond artificial intelligence. J. Romportl et al., eds. Springer International Publishing. 79--95.
    [88] Savulescu, J. and Singer, P. 2019. An ethical pathway for gene editing. Bioethics. 33, 2 (Feb. 2019), 221--222.
    [89] Schepman, A. and Rodway, P. 2020. Initial validation of the general attitudes towards artificial intelligence scale. Computers in Human Behavior Reports. 1, (Jan. 2020), 100014.
    [90] Scheuerman, M.K. et al. 2020. How we've taught algorithms to see identity: Constructing race and gender in image databases for facial analysis. Proceedings of the ACM on Human-Computer Interaction. 4, CSCW1 (May 2020), 1--35.
    [91] Schwartz, S.H. and Bilsky, W. 1990. Toward a theory of the universal content and structure of values: Extensions and cross-cultural replications. Journal of Personality and Social Psychology. 58, 5 (May 1990), 878--891.
    [92] Schwartz, S.H. and Bilsky, W. 1987. Toward a universal psychological structure of human values. Journal of Personality and Social Psychology. 53, 3 (1987), 550--562.
    [93] Serafimova, S. 2020. Whose morality? Which rationality? Challenging artificial intelligence as a remedy for the lack of moral enhancement. Humanities and Social Sciences Communications. 7, 1 (Dec. 2020), 119.
    [94] Shariff, A. et al. 2017. Psychological roadblocks to the adoption of self-driving vehicles. Nature Human Behaviour. 1, 10 (Oct. 2017), 694--696.
    [95] Shou, Y. et al. 2020. Impact of uncertainty and ambiguous outcome phrasing on moral decision-making. PLOS ONE. 15, 5 (May 2020), e0233127.
    [96] Siegel, J.Z. et al. 2017. Inferences about moral character moderate the impact of consequences on blame and praise. Cognition. 167, (Oct. 2017), 201--211.
    [97] Sinnott-Armstrong, W. and Skorburg, J.A. 2021. How AI can aid bioethics. Journal of Practical Ethics. 9, 1 (Dec. 2021).
    [98] Slovic, P. et al. 2007. The affect heuristic. European Journal of Operational Research. 177, 3 (Mar. 2007), 1333--1352.
    [99] Smith, K.B. et al. 2017. Intuitive ethics and political orientations: Testing moral foundations as a theory of political ideology. American Journal of Political Science. 61, 2 (Apr. 2017), 424--437.
    [100] Sparrow, R. 2014. Better living through chemistry? A reply to Savulescu and Persson on 'moral enhancement'. Journal of Applied Philosophy. 31, 1 (Feb. 2014), 23--32.
    [101] Sparrow, R. 2021. Why machines cannot be moral. AI & Society. 36, 3 (Sep. 2021), 685--693.
    [102] Stewart, N. et al. 2006. Decision by sampling. Cognitive Psychology. 53, 1 (Aug. 2006), 1--26.
    [103] Sunstein, C.R. 2005. Moral heuristics. Behavioral and Brain Sciences. 28, 4 (Aug. 2005), 531--573.
    [104] Talat, Z. et al. 2021. A word on machine ethics: A response to Jiang et al. (2021). arXiv:2111.04158 [cs]. (Nov. 2021).
    [105] Tucker, J. et al. 2018. Social media, political polarization, and political disinformation: A review of the scientific literature. SSRN Electronic Journal. (2018).
    [106] United Nations General Assembly 1948. Universal Declaration of Human Rights. United Nations.
    [107] Valdesolo, P. and DeSteno, D. 2006. Manipulations of emotional context shape moral judgment. Psychological Science. 17, 6 (Jun. 2006), 476--477.
    [108] Vallor, S. 2015. Moral deskilling and upskilling in a new machine age: Reflections on the ambiguous future of character. Philosophy & Technology. 28, 1 (Mar. 2015), 107--124.
    [109] Whitby, B. 2008. Computing machinery and morality. AI & Society. 22, 4 (Apr. 2008), 551--563.
    [110] Whitby, B. 2011. On computable morality. Machine ethics. M. Anderson and S.L. Anderson, eds. Cambridge University Press. 138--150.
    [111] Wong, P.-H. 2020. Cultural differences as excuses? Human rights and cultural values in global ethics and governance of AI. Philosophy & Technology. 33, 4 (Dec. 2020), 705--715.
    [112] van Wynsberghe, A. and Robbins, S. 2019. Critiquing the reasons for making artificial moral agents. Science and Engineering Ethics. 25, 3 (Jun. 2019), 719--735.

    Cited By

    • (2024) Ethical AI: Conceivable Versions and Projected Issues. 2024 IEEE Ural-Siberian Conference on Biomedical Engineering, Radioelectronics and Information Technology (USBEREIT). 10.1109/USBEREIT61901.2024.10584050. 182--184. Online publication date: 13-May-2024.
    • (2024) Debunking Cognition. Why AI Moral Enhancement Should Focus on Identity. Neuro-ProsthEthics. 10.1007/978-3-662-68362-0_7. 103--128. Online publication date: 7-May-2024.


    Published In

    AIES '22: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society
    July 2022
    939 pages
    ISBN:9781450392471
    DOI:10.1145/3514094
    This work is licensed under a Creative Commons Attribution 4.0 International License.

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. ai ethics
    2. ai moral enhancement
    3. artificial moral advisor
    4. moral psychology
    5. normative ethics


    Conference

    AIES '22: AAAI/ACM Conference on AI, Ethics, and Society
    August 1 - 3, 2022
    Oxford, United Kingdom

    Acceptance Rates

    Overall acceptance rate: 61 of 162 submissions, 38%

    Article Metrics

    • Downloads (last 12 months): 432
    • Downloads (last 6 weeks): 54
    Reflects downloads up to 26 Jul 2024

