Artificial Morality: Top-down, Bottom-up, and Hybrid Approaches

Published: 01 September 2005

Abstract

A principal goal of the discipline of artificial morality is to design artificial agents to act as if they are moral agents. Intermediate goals of artificial morality are directed at building into AI systems sensitivity to the values, ethics, and legality of activities. The development of an effective foundation for the field of artificial morality involves exploring the technological and philosophical issues involved in making computers into explicit moral reasoners. The goal of this paper is to discuss strategies for implementing artificial morality and the differing criteria for success that are appropriate to different strategies.
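
The paper is conceptual and contains no code, but the top-down strategy named in the title can be illustrated in miniature. The following Python sketch is hypothetical and not from the paper (all names, predicates, and features are invented): explicit moral principles are encoded as predicates, and behavior is derived by filtering candidate actions through them.

```python
# Hypothetical top-down sketch (not from the paper): moral principles are
# encoded as explicit predicates, and an action is permitted only if every
# principle allows it.
from typing import Callable, Dict, List

Principle = Callable[[Dict], bool]  # maps an action description to "permissible?"

PRINCIPLES: List[Principle] = [
    lambda a: not a.get("harms_human", False),  # illustrative no-harm rule
    lambda a: not a.get("deceives", False),     # illustrative no-deception rule
]

def permissible(action: Dict) -> bool:
    """Derive behavior from principles: all rules must approve the action."""
    return all(rule(action) for rule in PRINCIPLES)

candidates = [
    {"name": "warn_user", "harms_human": False, "deceives": False},
    {"name": "fabricate_report", "harms_human": False, "deceives": True},
]
print([a["name"] for a in candidates if permissible(a)])  # ['warn_user']
```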



Reviews

John M. Artz

The premise of this paper is that as we increasingly rely on artificial software agents, it becomes increasingly important that we can rely on them as artificial moral agents. And this, in turn, requires that morality be designed into the software itself. The paper offers three approaches to achieve this. The top-down approach begins with moral principles and derives moral behavior from those principles. The bottom-up approach provides moral experiences from which the software can learn. The hybrid approach is a combination of the two. Unfortunately, the premises upon which this paper is based are absurd. First, software does not possess free will, and hence cannot make moral decisions. Second, software does not possess a moral sense, and hence cannot learn to be moral. Third, the approaches suggested in this paper, if applied to humans, would be of questionable value, since the questions of "What is moral behavior?" and "How do you get people to behave morally?" have been open questions in philosophy for as long as there has been philosophy. There is a quip among ethicists: "When in doubt, do the right thing." It is funny because it trades one difficult question, "What should I do?", for another equally difficult question, "What is the right thing?", without making any real progress. This paper attempts exactly the same sleight of hand. Question: "How do you teach software agents to behave morally?" Answer: "Top down, bottom up, or a hybrid of the two." Online Computing Reviews Service
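
For contrast with the top-down sketch above, a bottom-up agent induces its standard from labeled examples rather than from stated principles, and a hybrid agent layers the learned judgment under an explicit veto. This too is a hypothetical sketch; the perceptron, the features, and the labels are invented for illustration and do not come from the paper or the review.

```python
# Hypothetical bottom-up sketch (not from the paper): instead of stating
# principles, the agent fits a tiny classifier to examples a human teacher
# has labeled permissible (+1) or impermissible (-1).

def train_perceptron(examples, labels, epochs=20, lr=0.1):
    """Mistake-driven linear learner over 0/1 feature vectors."""
    w, b = [0.0] * len(examples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:  # update weights only on mistakes
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Invented features: [harms_human, deceives]; the labels play the role of
# accumulated moral experience.
X = [[0, 0], [1, 0], [0, 1], [1, 1]]
y = [1, -1, -1, -1]
w, b = train_perceptron(X, y)

def learned_permissible(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0

def hybrid_permissible(x):
    # Hybrid: an explicit top-down veto (never harm, feature 0) gates the
    # learned bottom-up score.
    return x[0] == 0 and learned_permissible(x)

print(learned_permissible([0, 0]), hybrid_permissible([1, 1]))  # True False
```

How the two layers should interact, and how much weight an explicit veto should carry against a learned judgment, is exactly the kind of design question the hybrid strategy raises.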



Published In

Ethics and Information Technology, Volume 7, Issue 3 (September 2005), 71 pages

Publisher

Kluwer Academic Publishers

United States


Author Tags

  1. artificial morality
  2. autonomous agents
  3. ethics
  4. machines
  5. robots
  6. values

Qualifiers

  • Research-article


Cited By

  • (2024) Designing ecosystems of intelligence from first principles. Collective Intelligence 3:1. https://doi.org/10.1177/26339137231222481
  • (2024) Macro Ethics Principles for Responsible AI Systems: Taxonomy and Directions. ACM Computing Surveys 56:11, 1-37. https://doi.org/10.1145/3672394
  • (2024) Moral sensitivity and the limits of artificial moral agents. Ethics and Information Technology 26:1. https://doi.org/10.1007/s10676-024-09755-9
  • (2024) When is it acceptable to break the rules? Knowledge representation of moral judgements based on empirical data. Autonomous Agents and Multi-Agent Systems 38:2. https://doi.org/10.1007/s10458-024-09667-4
  • (2024) Moral distance, AI, and the ethics of care. AI & Society 39:4, 1695-1706. https://doi.org/10.1007/s00146-023-01642-z
  • (2024) Ethical Alignment in Citizen-Centric AI. PRICAI 2024: Trends in Artificial Intelligence, 43-55. https://doi.org/10.1007/978-981-96-0128-8_4
  • (2023) Action Guidance and AI Alignment. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 387-395. https://doi.org/10.1145/3600211.3604714
  • (2023) Knowledge representation and acquisition for ethical AI: challenges and opportunities. Ethics and Information Technology 25:1. https://doi.org/10.1007/s10676-023-09692-z
  • (2023) Uncertain Machine Ethical Decisions Using Hypothetical Retrospection. Coordination, Organizations, Institutions, Norms, and Ethics for Governance of Multi-Agent Systems XVI, 161-181. https://doi.org/10.1007/978-3-031-49133-7_9
  • (2023) Resolving the Dilemma of Responsibility in Multi-agent Flow Networks. Advances in Practical Applications of Agents, Multi-Agent Systems, and Cognitive Mimetics. The PAAMS Collection, 76-87. https://doi.org/10.1007/978-3-031-37616-0_7
