Macro Ethics Principles for Responsible AI Systems: Taxonomy and Directions
Abstract
1 Introduction
1.1 Motivation for a Taxonomy of Ethical Principles
1.2 AI Principles and Ethical Principles
1.2.1 Ethical Principles.
1.2.2 AI Principles.
1.2.3 Distinction between Ethical Principles and AI Principles.
1.3 Gaps in Related Research
1.4 Novelty
1.5 Organisation
2 Methodology
2.1 Objective
2.2 Relevant Works
3 Taxonomy of Ethical Principles
3.1 Overview of Paper Categorisation
| Contribution Type | Evaluation Type | Deontology | Egalitarianism | Proportionalism | Kantian | Virtue |
|---|---|---|---|---|---|---|
| Descriptive | None | [1, 14, 16, 24, 62, 66, 79, 113, 125] | [16, 66, 106] | [106] | [1, 14, 26, 65] | [1, 14, 24, 62, 65, 66, 79, 113, 125] |
| Model Representation | Test | [6, 102] | – | – | – | – |
| | Proof | [15] | [55, 84] | [84] | [15] | [60] |
| | Informal | [3, 4, 107] | [9] | [32] | [4] | [3, 63, 107] |
| | None | [5, 18, 39, 40, 47, 56, 64, 99, 130, 134, 141] | [87, 98] | [47, 87] | [18, 39, 40, 48, 56, 83, 111, 130, 134] | [18, 40, 99, 104, 111, 130, 134, 135, 141] |
| Individual Decision Making | Test | [35, 70, 75, 112, 119] | – | – | [81, 112, 129] | [70, 75, 112, 119] |
| | Proof | [89, 90] | – | – | [89] | – |
| | Informal | [11, 30] | – | – | – | [30] |
| | None | [141] | – | – | [7] | [141] |
| Centralised Collective Decision Making | Test | [68] | [34] | – | – | [68] |
| | Proof | – | [21, 44] | – | – | – |
| | Informal | [85] | [85] | [85] | – | – |
| | None | – | – | [108] | – | – |
| Decentralised Collective Decision Making | Test | [61] | – | – | – | [61] |
| | Proof | – | – | – | – | – |
| | Informal | [110] | – | – | [110] | [110] |
| | None | [141] | – | – | – | [141] |
| Human-AI Interaction | Test | [6] | – | – | [129] | – |
| | Proof | – | – | – | – | – |
| | Informal | [107] | – | – | – | [63, 107] |
| | None | [46, 141] | – | – | [7, 46] | [131, 141] |
| Contribution Type | Evaluation Type | Consequentialism | Utilitarianism | Maximin | Envy-Freeness | Doctrine of Double Effect | Do No Harm |
|---|---|---|---|---|---|---|---|
| Descriptive | None | [1, 14, 65, 66, 113, 125] | [1, 24, 79, 113, 118, 125] | – | – | – | – |
| Model Representation | Test | – | – | – | – | [17] | – |
| | Proof | [15] | [15, 84] | – | – | [59] | – |
| | Informal | [3, 4, 107] | [3, 4, 134] | [9, 15] | – | [15] | – |
| | None | [18, 39, 40, 48, 130, 135, 141] | [5, 18, 39, 40, 47, 48, 56, 83, 98, 99, 111, 135, 141] | [87] | – | [40] | [39] |
| Individual Decision Making | Test | [112] | [2, 35, 70, 75, 81, 112, 119, 129] | [2] | – | – | [37] |
| | Proof | [89] | [8, 89, 90] | – | – | [90, 94] | [90] |
| | Informal | [30] | [11] | – | – | – | – |
| | None | [141] | [7, 141] | – | – | – | – |
| Centralised Collective Decision Making | Test | – | [13, 68, 88, 117] | [13, 38, 88] | – | – | – |
| | Proof | – | [27] | [27, 105, 127] | [127] | – | – |
| | Informal | [85] | [85] | [85] | – | – | – |
| | None | – | – | – | [19] | – | – |
| Decentralised Collective Decision Making | Test | [61] | [100] | – | – | – | – |
| | Proof | – | [58] | [58] | – | – | – |
| | Informal | – | [110] | – | – | – | – |
| | None | [141] | [141] | – | – | – | – |
| Human-AI Interaction | Test | – | [20, 129] | – | – | – | – |
| | Proof | – | – | – | – | – | – |
| | Informal | [107] | – | – | – | – | – |
| | None | – | [7] | – | – | – | – |
3.2 Deontology
3.2.1 Egalitarianism.
Principle | Description | Difficulties |
---|---|---|
Non-Maleficence | Imposes egalitarianism across harms but not benefits [85]. In optimisation, different actions could be assigned values based on a predetermined formula, identifying the harms caused by each action; the action with the most equal distribution of harm is then chosen (see the sketch following this table). | Allows for arbitrarily large inequalities in outcomes, and assumes a dubious distinction between ‘better-off’ and ‘worse-off’ [85]. It is thus difficult to define what counts as a harm and what counts as a benefit. |
Equality of Opportunity | Negative attributes due to an individual’s circumstances of birth or random chance should not be held against them. However, individuals should still be held accountable for their actions [44, 55]. Opportunities should therefore be equally distributed. Binns [16] proposes that one could examine whether each group is equally likely to be predicted a desirable outcome, given the base rates for that group. Lee et al. [87] suggest ensuring opportunities are equally open to all applicants based on a relevant definition of merit. | Fleurbaey [51] argues that this can be fully satisfied even if only a minority segment of the population has realistic prospects of accessing the opportunity. |
Luck | Inequalities that stem from unchosen aspects should be eliminated so no one is worse off due to bad luck. Instead, people should receive benefits as a result of their own choices [45, 87]. From an optimisation perspective, people could be given a weighting which mitigates the effects of luck; allocations are then distributed equally, accounting for this weighting. | Defining what is within an individual’s genuine control is often difficult [16]. The ideal solution would allow inequalities resulting from people’s free choices and informed risk-taking, disregarding those which are the result of brute luck. |
Autonomy | Levels of autonomy should be equally distributed, through a variety and quality of options, and decision-making competence [51]. The aim is to incorporate the full range of individual freedom [87]. Levels of autonomy could be provided as inputs when reasoning about potential actions, selecting the action with the most equal distribution of autonomy. | When there is a significant asymmetry of power and information, autonomy in rational decision makers fails as an ethical objective [51]. |
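To make the optimisation readings above concrete, the following minimal sketch selects the action whose harms are most equally distributed, as described for non-maleficence. It assumes the per-individual harms of each candidate action have already been computed by some predetermined formula; the function and action names are illustrative, not drawn from any surveyed system.

```python
from statistics import pstdev

def most_equal_harm_action(harms: dict[str, list[float]]) -> str:
    """Choose the action whose harms are most equally distributed.

    `harms` maps each candidate action to the harm it imposes on each
    individual, as produced by a predetermined harm formula.
    """
    # Inequality is measured here as the population standard deviation
    # of per-individual harms: a lower spread is a more equal distribution.
    return min(harms, key=lambda action: pstdev(harms[action]))

# The rule picks the evenly spread harm even at a higher total harm,
# since only the distribution of harm, not its magnitude or any
# benefits, is considered. This echoes the difficulty noted above.
harms = {
    "act_a": [0.9, 0.0, 0.0],  # total harm 0.9, concentrated on one person
    "act_b": [0.4, 0.4, 0.4],  # total harm 1.2, evenly spread
}
print(most_equal_harm_action(harms))  # prints: act_b
```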
3.2.2 Proportionalism.
Principle | Description | Difficulties |
---|---|---|
Libertarian | Libertarianism emphasises the importance of each person’s freedom [87]. Rights are distributed according to each person’s total contribution at the time of consent. Inequality within the range of this initial contribution is not considered unfair [85]. | Libertarianism does not target pre-existing inequalities which may be worth mitigating. For example, the contribution of some people may be inhibited due to factors outside of their control (e.g., generational wealth inequality or disability). Allowing factors which are beyond people’s control to determine what rights they have may seem unfair. |
Desert Based | Desert is defined in terms of individual effort or contribution, discounting the effects of luck. The amount an individual deserves is thus proportionate to how much they have contributed after luck has been discounted. The effects of luck are discounted because the prior prevalence of a trait in a population can be the result of unjust circumstances [85]. Dwork et al. [44] suggest that desert-based proportionalism could be implemented by assigning each individual a position in a metric space that evaluates desert, and evaluating fairness through the average distance between individuals in that space (see the sketch following this table). Persson and Hedlund [106] propose utilising desert to consider how responsibility for ethical AI development should be distributed, assigning responsibility according to the contribution of each individual. | A weakness of this principle is that luck is an abstract concept which is difficult to define and may vary between contexts, so evaluating which traits should be discounted is challenging. |
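One possible rendering of Dwork et al.’s [44] suggestion is sketched below: individuals are placed in a hypothetical desert metric space, and the average pairwise distance serves as a fairness signal. The coordinates, and the idea of scoring luck-discounted effort and contribution directly, are assumptions made for illustration; defining the luck discount itself remains the difficulty noted above.

```python
from itertools import combinations
from math import dist

def average_desert_distance(individuals: dict[str, tuple[float, float]]) -> float:
    """Average pairwise distance in a desert metric space.

    Each individual's position scores their luck-discounted effort and
    contribution; a larger average distance signals larger differences
    in desert, against which proportionate allocations can be checked.
    """
    pairs = list(combinations(individuals.values(), 2))
    return sum(dist(a, b) for a, b in pairs) / len(pairs)

# Positions are (effort, contribution) after discounting luck; how to
# compute that discount is precisely what the principle leaves open.
people = {"ann": (0.8, 0.6), "bo": (0.7, 0.5), "cal": (0.2, 0.9)}
print(round(average_desert_distance(people), 3))
```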
3.2.3 Kantian.
3.3 Virtue Ethics
3.4 Consequentialism
3.4.1 Utilitarianism.
Principle | Description | Difficulties |
---|---|---|
(Hedonic) Act Utilitarianism | The morality of an action lies in its consequences [130]. Hedonic act utilitarianism entails computing the action which yields the greatest net pleasure [23]. Berreby et al. [15] suggest that a machine utilising this could weight actions according to their consequences and then order them accordingly; an action is less desirable if there is another action whose weight is greater. Anderson et al. [7] propose that one could input, for each possible action, the number of people affected and the intensity of pleasure/displeasure for each person. The algorithm then computes the product of intensity, duration, and probability to obtain the net pleasure for each person, repeating this computation for each alternative action (see the sketch following this table). Nashed et al. [100] implement act utilitarianism by requiring policies which maximise the value of all relevant agents. | A criticism of hedonic act utilitarianism is that pleasure is difficult to define; what is pleasurable for one person may not be pleasurable for another. This ambiguity makes it difficult to identify the action with the greatest net pleasure. |
Rule Utilitarianism | Actions are morally assessed by first appraising moral rules based on the principle of utility: deciding whether a (set of) moral rule(s) will lead to the best overall consequences, assuming all or most agents follow it. Berreby et al. [15] illustrate that this could be implemented using one predicate to compound all effective weights of the actions belonging to a particular rule and another to sum those weights. Governatori et al. [58] provide an argumentation framework in which moral theories, including rule utilitarianism, are expressed as normative systems whose moral justification agents argue about. | Sometimes a rule may lead to unintuitive outcomes and therefore should be broken. This makes rule utilitarianism collapse towards act utilitarianism, where the right thing to do is evaluated through the consequences of each action. |
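The computation Anderson et al. [7] describe lends itself to a short sketch: for each candidate action, multiply intensity, duration, and probability per affected person, sum to obtain net pleasure, and select the maximising action. The `Effect` record and the example actions are hypothetical; deciding what counts as pleasure is still left to the modeller.

```python
from dataclasses import dataclass

@dataclass
class Effect:
    """One affected person's anticipated (dis)pleasure under an action;
    negative intensity encodes displeasure."""
    intensity: float
    duration: float
    probability: float

def net_pleasure(effects: list[Effect]) -> float:
    # Following Anderson et al. [7]: multiply intensity, duration, and
    # probability for each person, then sum over everyone affected.
    return sum(e.intensity * e.duration * e.probability for e in effects)

def best_action(actions: dict[str, list[Effect]]) -> str:
    """Hedonic act utilitarianism: select the action with the greatest
    net pleasure; an action is less desirable if another action's net
    pleasure is greater."""
    return max(actions, key=lambda a: net_pleasure(actions[a]))

actions = {
    "tell_truth": [Effect(-0.5, 1.0, 0.9), Effect(0.8, 2.0, 0.7)],
    "stay_silent": [Effect(0.2, 1.0, 1.0), Effect(-0.3, 2.0, 0.5)],
}
print(best_action(actions))  # prints: tell_truth (net 0.67 vs -0.10)
```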
3.4.2 Maximin.
3.4.3 Envy-Freeness.
3.4.4 Doctrine of Double Effect.
3.4.5 Do No (Instrumental) Harm.
3.5 Other Principles
3.5.1 Egoism.
3.5.2 Particularism.
3.5.3 The Ethic of Care.
3.5.4 Other Cultures.
4 Previous Operationalisation of Ethical Principles
4.1 Choosing Technical Implementation
| Implementation Type | Technique | Deontology | Egalitarianism | Proportionalism | Kantian | Virtue |
|---|---|---|---|---|---|---|
| Logical Reasoning | Deductive Logic | – | [84] | [84] | [110] | [110] |
| | Non-Monotonic Logic | – | – | – | [15, 89] | – |
| | Abductive Logic | – | – | – | – | – |
| | Deontic Logic | [90] | – | – | – | [60] |
| | Rule-Based Systems | [30, 35] | [11, 21] | [47] | [110] | [30, 110] |
| | Event Calculus | – | – | – | [15, 89] | [60] |
| | Knowledge Representation and Ontologies | [30, 35] | – | – | [110] | [30, 110] |
| | Inductive Logic | [6, 46] | – | – | [46] | – |
| Probabilistic Reasoning | Bayesian Approaches | – | – | – | – | – |
| | Markov Models | [112] | – | – | [112, 129] | [112] |
| | Statistical Inference | – | [44] | [44] | – | – |
| Learning | Decision Tree | – | [11] | – | – | – |
| | Reinforcement Learning | [112, 119] | – | – | [112] | [112, 119] |
| | Inverse Reinforcement Learning | [102] | – | – | – | – |
| | Neural Networks | [68, 70, 75] | – | – | – | [68, 70, 73, 75] |
| | Evolutionary Computing | – | – | – | – | [73] |
| Optimisation | | – | [7, 11, 44, 85] | [32, 44, 85] | – | [7] |
| Case-Based Reasoning | | [35, 92] | – | – | – | – |
| Implementation Type | Technique | Consequentialism | Utilitarianism | Maximin | Envy-Freeness | Doctrine of Double Effect | Do No Harm |
|---|---|---|---|---|---|---|---|
| Logical Reasoning | Deductive Logic | – | [84, 110] | – | – | – | – |
| | Non-Monotonic Logic | – | [15, 89] | [15] | – | [15] | – |
| | Abductive Logic | – | – | – | – | [94] | – |
| | Deontic Logic | – | [90] | – | – | [59, 90] | [90] |
| | Rule-Based Systems | [30] | [11, 35, 58, 110] | [2, 58] | – | – | [37] |
| | Event Calculus | – | [15, 89] | [15] | – | [15, 60] | – |
| | Knowledge Representation and Ontologies | [30] | [35, 110] | – | – | – | – |
| | Inductive Logic | – | – | – | – | – | – |
| Probabilistic Reasoning | Bayesian Approaches | – | [8] | – | – | – | – |
| | Markov Models | [112] | [100, 129] | – | – | [59] | – |
| | Statistical Inference | – | – | – | – | – | – |
| Learning | Decision Tree | – | [11] | – | – | – | – |
| | Reinforcement Learning | [112] | [119] | – | – | – | – |
| | Inverse Reinforcement Learning | – | – | – | – | – | – |
| | Neural Networks | – | [13, 68, 70, 75] | [13] | – | – | – |
| | Evolutionary Computing | – | – | – | – | – | – |
| Optimisation | | – | [7, 8, 11, 27, 85, 88, 117] | [27, 38, 85, 88, 105] | [127] | – | – |
| Case-Based Reasoning | | – | [35] | – | – | [17] | – |
4.2 Clarifying the Architecture
4.2.1 Bottom-Up Approaches.
4.2.2 Top-Down Approaches.
4.2.3 Hybrid Approaches.
| Ethical Principles | Bottom-Up | Top-Down | Hybrid |
|---|---|---|---|
| Deontology | Inductive Logic [6]; Inverse Reinforcement Learning [102] | Rule-Based Systems, Knowledge Representation and Ontologies, Case-Based Reasoning [35]; Deontic Logic [90]; Case-Based Reasoning [92] | Neural Networks [68, 70, 75]; Rule-Based Systems, Knowledge Representation and Ontologies [30]; Markov Models, Reinforcement Learning [112]; Reinforcement Learning [119] |
| Egalitarianism | – | Deductive Logic [84]; Statistical Inference, Optimisation [44]; Optimisation [7, 85]; Rule-Based Systems [21] | Rule-Based Systems, Decision Tree, Optimisation [11] |
| Proportionalism | – | Deductive Logic [84]; Statistical Inference, Optimisation [44]; Rule-Based Systems [47]; Optimisation [32, 85] | – |
| Kantian | – | Deductive Logic, Rule-Based Systems, Knowledge Representation and Ontologies [110]; Markov Models [129] | Non-Monotonic Reasoning and Event Calculus [15, 89]; Markov Models, Reinforcement Learning [112] |
| Virtue | Evolutionary Computing [73] | Deductive Logic, Rule-Based Systems, Knowledge Representation and Ontologies [110] | Neural Networks [68, 70, 75]; Deontic Logic, Event Calculus [60]; Rule-Based Systems, Knowledge Representation and Ontologies [30]; Markov Models, Reinforcement Learning [112]; Reinforcement Learning [119] |
| Consequentialism | – | – | Rule-Based Systems, Knowledge Representation and Ontologies [30]; Markov Models, Reinforcement Learning [112] |
| Utilitarianism | Rule-Based Systems, Knowledge Representation and Ontologies [35] | Deductive Logic [84]; Deductive Logic, Rule-Based Systems, Knowledge Representation and Ontologies [110]; Deontic Logic [90]; Optimisation [7, 27, 85, 88, 117]; Markov Models [100]; Rule-Based Systems [58] | Non-Monotonic Reasoning and Event Calculus [15, 89]; Neural Networks [13, 68, 70, 75]; Rule-Based Systems, Decision Tree, Optimisation [11]; Bayesian Approaches, Optimisation [8]; Reinforcement Learning [119] |
| Maximin | – | Optimisation [27, 38, 85, 88, 105]; Rule-Based Systems [2, 58] | Non-Monotonic Reasoning and Event Calculus [15]; Neural Networks [13] |
| Envy-Freeness | – | Optimisation [127] | – |
| Doctrine of Double Effect | – | Abductive Logic [94]; Deontic Logic [90]; Deontic Logic, Event Calculus, Markov Models [59]; Case-Based Reasoning [17] | Non-Monotonic Reasoning and Event Calculus [15] |
| Do No Harm | – | Deontic Logic [90]; Rule-Based Systems [37] | – |
4.3 Specifying the Ethical Principle
4.3.1 Implementing Pluralism.
4.4 Choosing Abstract Implementation
4.4.1 Applying Rules.
4.4.2 Developing Virtues.
4.4.3 Evaluating Consequences.
5 Gaps in Operationalising Ethical Principles
5.1 Expanding the Taxonomy
5.2 Resolving Ethical Dilemmas
5.3 Implementing Ethical Principles in STS
6 Conclusion
Acknowledgements
Appendix
A Methodology
A.1 Sources Selection and Strategy
A.1.1 Search String Definition.
A.1.2 Inclusion and Exclusion Criteria.
| Inclusion |
|---|
| Published works from ACM CSUR, AIES, FAccT, AAAI, IJCAI, (J)AAMAS, TAAS, TIST, JAIR, AIJ, Nature, Science |
| Responsible AI |
| Individual and/or group fairness |
| Normative ethics and multiple-user AI |
| Normative ethics and STS |
| Normative ethical principles and AI |
| Bias when related to ethical principles |

| Exclusion |
|---|
| Meta-ethics or applied ethics outside of AI and computer science |
| Specific ML fairness methodology |
| Multiple-user AI without reference to ethics |
| STS without reference to ethics |
| AI principles without reference to ethical principles |
| Bias without reference to ethical principles |
A.2 Method for Principle Identification
A.3 Threats to Validity and Mitigation
References