Abstract
Fairness is one of the most prominent values in the Ethics and Artificial Intelligence (AI) debate and, specifically, in the discussion on algorithmic decision-making (ADM). However, while the need for fairness in ADM is widely acknowledged, the very concept of fairness has not been sufficiently explored so far. Our paper aims to fill this gap and claims that an ethically informed re-definition of fairness is needed to adequately investigate fairness in ADM. To achieve our goal, after an introductory section aimed at clarifying the aim and structure of the paper, in section “Fairness in algorithmic decision-making” we provide an overview of the state of the art of the discussion on fairness in ADM and show its shortcomings; in section “Fairness as an ethical value”, we pursue an ethical inquiry into the concept of fairness, drawing insights from accounts of fairness developed in moral philosophy, and define fairness as an ethical value. In particular, we argue that fairness is articulated in a distributive and socio-relational dimension; it comprises three main components: fair equality of opportunity, equal right to justification, and fair equality of relationship; these components are grounded in the need to respect persons both as persons and as particular individuals. In section “Fairness in algorithmic decision-making revised”, we analyze the implications of our redefinition of fairness as an ethical value on the discussion of fairness in ADM and show that each component of fairness has profound effects on the criteria that ADM ought to meet. Finally, in section “Concluding remarks”, we sketch some broader implications and conclude.
Introduction
Fairness is one of the most prominent values in the Ethics and Artificial Intelligence (AI) debate and, specifically, in the discussion on algorithmic decision-making (ADM). Especially in recent years, more and more decisions are being delegated to autonomous or semi-autonomous algorithms, as machine learning (ML) algorithms and ADM systems have become crucial in a wide range of domains. At the same time, a growing number of flaws have been uncovered in ADM functioning, especially relating to biases involving discrimination against specific groups or categories, as well as to the opacity or non-explainability of ADM. Some scholars have defined algorithms, including ML-based ADM, as weapons of math destruction (O’Neil, 2016), as they are based on models that reinforce discrimination rather than lead to greater fairness.
As a result, fairness in ADM has become an urgent task, both in academia and industry. However, while the need for fairness in ADM is widely acknowledged, the very concept of fairness has not been sufficiently explored so far. Moreover, even though fairness is acknowledged as a moral value, an ethical inquiry into fairness, drawing insights from accounts of fairness developed in moral philosophy, is largely missing in the debate on fairness in ADM.
Our paper aims to fill this gap and claims that ethical inquiry allows us to highlight the moral value of fairness and that an ethically informed re-definition of fairness is needed to adequately investigate fairness in ADM. By unpacking the ethical value of fairness in ADM, we question the statement that ADM is a weapon of destruction, per se, and point to criteria and tools that could make ADM a “weapon” of moral construction, leading to greater fairness.
To achieve our goal, in section “Fairness in algorithmic decision-making”, we provide an overview of the state of the art of the discussion on fairness in ADM and show that the concept of fairness emerging from this debate can be defined as “negative fairness,” whereby fairness overlaps with non-discrimination, which is itself defined as the absence of biases. At the end of the section, we question whether this concept of “negative” fairness is enough for ADM or whether it is also necessary to highlight a “positive” sense of fairness that requires more than just non-discrimination and that points to features and criteria of a fair ADM that extend beyond the consideration of biases and datasets.
In section “Fairness as an ethical value”, we pursue an ethical inquiry into the concept of fairness, drawing insights from moral philosophy, and show that a “positive” sense of fairness can be elaborated, that is, fairness as an ethical value. In doing so, we base our arguments on a renewed reflection on the concept of respect, which goes beyond the idea of equal respect to include respect for particular individuals, too. In this view, fairness is articulated in a distributive and socio-relational dimension. It comprises three main components: fair equality of opportunity, equal right to justification, and fair equality of relationship. These components are grounded in the need to respect persons both as persons and as particular individuals.
In section “Fairness in algorithmic decision-making revised”, we analyze the implications of our redefinition of fairness as an ethical value on the discussion of fairness in ADM and claim that an ethically informed concept of fairness ensures that ADM respects persons both as persons and as particular individuals. More specifically, we show that each component of fairness has profound effects on the criteria that ADM ought to meet.
Finally, in section “Concluding remarks”, we sketch some broader implications and conclude.
Fairness in algorithmic decision-making
Fairness in ADM is a widely debated topic in the framework of the ethics of AI and algorithms (Jobin et al., 2019; Mittelstadt et al., 2016; Tsamados et al., 2021). The growing interest in this topic is especially due to the huge development and application of ADM in the form of an unprecedented delegation of more and more human tasks, daily choices, and high-stakes decisions to “autonomous” and “semi-autonomous” algorithms, such as ML algorithms. In recent years, initiatives focused on fairness in ADM have increased greatly,Footnote 1 and a growing body of literature has been developed to focus on the need to address and improve fairness in ADM (Abebe et al., 2020; Binns, 2018; Edwards & Veale, 2017; Kleinberg et al., 2017; Overdorf et al., 2018; Selbst et al., 2019; Wong, 2019), especially as a response to the controversial effects of ADM in a wide array of application domains, including internet search engines, news aggregators, social media communication and information management (Bozdag, 2013; Hinman, 2008; Laidlaw, 2008; Parsell, 2008; Mowshowitz & Kawaguchi, 2002; Shapiro, 2020; Sunstein, 2008), advertising and marketing (Coll, 2013; Hildebrandt, 2008; Tufekci, 2015), recruiting and employment (Kim, 2017), university admissions (Simonite, 2020), housing (Barocas & Selbst, 2016), credit lending (Deville, 2013; Lobosco, 2013; Seng Ah Lee & Floridi, 2020), criminal justice (Abebe et al., 2020; Berk et al., 2018), policing (Ferguson, 2017), and healthcare (Buhmann et al., 2019; Danks & London, 2017; Robbins, 2019), just to mention a few.
Moreover, the exponential progress and use of ML algorithms that involve deep learning architectures in highly complex ADM systems—whose outputs are not based on causal connections but on correlations induced from data—often lead to opaque, non-explainable ADM (i.e., ADM as a black box; Pasquale, 2015). In addition, a growing number of flaws have been uncovered in ADM functioning (Benjamin, 2019; Eubanks, 2018; Noble, 2018; O’Neil, 2016), ranging from the ProPublica investigative report on racial biases in COMPAS risk-assessment ADM for predicting recidivism in the U.S. justice system (Angwin et al., 2016) to Amazon’s gender-biased recruitment algorithm (Dastin, 2018) to the “Gender Shades Study” on gender and racial bias in ADM facial recognition software (Buolamwini & Gebru, 2018). This situation raises unprecedented concerns and makes fairness in ADM “an urgent task in academia and industry” (Shin and Park, 2019).
But while the need for fairness in ADM is widely acknowledged, there is less agreement and much more vagueness with regard to the definition of fairness in ADM (Gajane & Pechenizkiy, 2018; Lee, 2018; Saxena et al., 2019). Out of more than 21 definitions of fairness in ADM emerging in the available literature (Wong, 2019), four arise as the most commonly adopted so far (Corbett-Davies & Goel, 2018; Kleinberg et al., 2017; Tsamados et al., 2021):
1. Fairness as anti-classification, according to which fairness is achievable by avoiding the use in ADM of proxies that refer to protected categories (such as race, religion, and gender).

2. Fairness as classification parity, according to which an ADM is fair if the measures of its predictive performance are equal across protected groups.

3. Fairness as calibration, according to which the fairness of an ADM is given by the measure of how well-calibrated an algorithm is between protected groups.

4. Fairness as statistical parity, according to which the fairness of an ADM corresponds to an equal average probability estimate over all members of protected groups.
These definitions, however, are often mutually incompatible (Kleinberg et al., 2017). For example, while in some cases the removal of proxies for protected categories can be beneficial, there are other cases where the consideration of protected characteristics is crucial to making fair decisions (Corbett-Davies & Goel, 2018).Footnote 2
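To make these notions more concrete, the following minimal sketch (our illustration, not part of the original discussion; the toy data and function names are assumptions) computes three of the four criteria for a binary classifier with a binary protected attribute; anti-classification is a constraint on the model's inputs rather than a statistic of its outputs. Because the two groups have unequal base rates, the printed gaps cannot all be driven to zero at once, which illustrates the incompatibility results mentioned above (Kleinberg et al., 2017).

```python
# Illustrative sketch only: three of the four fairness notions above for a toy
# binary classifier and a binary protected attribute `a`.
import numpy as np

def group_rates(y_true, y_pred, a, group):
    """Selection rate and true-positive rate for one protected group."""
    mask = a == group
    selection_rate = y_pred[mask].mean()            # P(Yhat=1 | A=group)
    tpr = y_pred[mask & (y_true == 1)].mean()       # P(Yhat=1 | Y=1, A=group)
    return selection_rate, tpr

def calibration_gap(y_true, scores, a, bins=5):
    """Largest between-group difference in observed positive rates per score bin."""
    edges = np.quantile(scores, np.linspace(0, 1, bins + 1))
    gaps = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (scores >= lo) & (scores <= hi)
        rates = [y_true[in_bin & (a == g)].mean()
                 for g in np.unique(a) if (in_bin & (a == g)).any()]
        if len(rates) > 1:
            gaps.append(max(rates) - min(rates))
    return max(gaps)

# Toy data with unequal base rates across the two groups.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, 5000)
y_true = rng.binomial(1, 0.3 + 0.2 * a)
scores = np.clip(0.25 + 0.3 * y_true + 0.1 * a + rng.normal(0, 0.15, 5000), 0, 1)
y_pred = (scores > 0.5).astype(int)

sel0, tpr0 = group_rates(y_true, y_pred, a, 0)
sel1, tpr1 = group_rates(y_true, y_pred, a, 1)
print("statistical parity gap:", abs(sel0 - sel1))
print("classification parity gap (TPR):", abs(tpr0 - tpr1))
print("calibration gap:", calibration_gap(y_true, scores, a))
```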
Despite their differences, all the definitions revolve around the consideration or treatment of protected groups or categories. As a result, fairness in ADM seems to overlap with non-discrimination against protected groups or categories (Barocas & Selbst, 2016; Green & Chen, 2019; Grgić-Hlača et al., 2018). Discrimination in ADM is defined and measured through mathematical, statistical, and probabilistic tools. The most prominent methods for ensuring fairness in ADM consist mainly of “discrimination prevention analytics and strategies” (Romei & Ruggieri, 2014) and “fairness- and discrimination-aware data mining techniques” (Dwork et al., 2011; Kamishima et al., 2012) based on the development of anti-discrimination criteria and their integration into the ADM classifier algorithm, and on the control of distortion of data used to train the algorithms.
More specifically, discrimination in ADM systems has been largely traced back to biases (Barocas, 2014; Diakopoulos & Koliska, 2017; Shah, 2018), especially “automation bias” and “bias by proxy.” Automation bias is the large-scale spread through ADM processes of social and cultural biases deeply embedded in historical training data used to fuel the ADM (Abebe et al., 2020; Benjamin, 2019; Hu, 2017; Noble, 2018; Richardson et al., 2019; Turner Lee, 2018). Bias by proxy occurs when protected variables (gender, race, etc.) have been excluded from the historical data used to train the ADM, yet unanticipated proxies for those variables still allow biases to be reconstructed and inferred, making them highly difficult to detect and eliminate (Fuster et al., 2017; Gillis & Spiess, 2019).
As a consequence, the resulting widespread idea is that, since biases are the main cause of discrimination by ADM (Benjamin, 2019; Eubanks, 2018; Noble, 2018; O’Neil, 2016), detecting and eliminating biases would possibly mitigate or fix algorithmic discrimination and build fair ADM systems.
To sum up, the concept of fairness emerging so far in ADM can be defined as “negative fairness,”Footnote 3 whereby fairness is understood as the absence of discrimination. By that definition, a fair ADM is one whose functioning, outcomes, and effects do not produce discrimination in the consideration or treatment of individuals or groups (Corbett-Davies & Goel, 2018; Gillis & Spiess, 2019; Kleinberg et al., 2017; Newell & Marabelli, 2015). Non-discrimination, in turn, is defined as the absence of biases. Therefore, a fair ADM is a “bias-free ADM.” In other words, fairness in ADM seems to be secured by eliminating discrimination via techniques focused on detecting and eliminating biases both in historical training datasets and in proxies (see, for example, the approaches proposed by Veale & Binns, 2017, and Katell et al., 2020).
This concept of “negative fairness” seems to be confirmed in the existing frameworks defining the ethical principles for the design of AI and ML algorithms. In the analysis of 84 ethical documents on ethical ML and ADM design, conducted by Jobin et al. (2019), fairness emerges as one of the five core ethical principles, recurring in more than 68 academic and non-academic ethical frameworks, most of which prescribe the design of non-discriminating, non-biased ML algorithms. The definition of a fair ADM as a “bias-free ADM” also underpins initiatives on ethical and fair ML in the industry (Ochigame, 2019). In 2018, Microsoft published its ethical principles for ML, starting with the principle of fairness, meaning the use of bias-free ML. Amazon inaugurated “fairness in AI,” a program partnered with the U.S. National Science Foundation focused on the scrutiny of biases in ADM systems. Facebook announced its commitment to fairness in ML by launching the “Fairness Flow,” a specific tool designed to “search for bias.” Meanwhile, IBM launched “AI Fairness 360,” a tool designed to “check for unwanted bias in datasets and machine learning models.”
Despite the importance of biases in ADM, however, it is questionable whether focusing exclusively on biases can encompass the complexity of the concept of fairness, which is also informed by history and sociology (Binns, 2018; Selbst et al., 2019) and entails a deep ethical dimension. Moreover, it is ultimately unclear whether the absence of biases and non-discriminating datasets would ensure true fairness—that is, whether a “negative” concept of fairness alone is sound or whether something more is needed to ensure fairness in a “positive” sense. In sum, despite the growing attention on fairness in ADM, the very concept of fairness has not been sufficiently explored so far, nor have the main assumptions behind it been adequately accounted for (Overdorf et al., 2018). We aim to fill this gap and claim that ethical inquiry allows us to unpack the different components of fairness and adequately evaluate the role of fairness in ADM.
To achieve our goal, in the next section, we claim that only by acknowledging the ethical dimension of fairness and redefining the notion of fairness accordingly can we adequately understand what fairness in ADM is. As we show, this requires more than merely “negative fairness” and the exclusive focus on discrimination. It requires the development of a more complex concept of “positive fairness.” In section “Fairness in algorithmic decision-making revised”, drawing on the proposed redefinition of fairness, we analyze its implications for discussions on fairness in ADM.
Fairness as an ethical value
To pursue our ethical inquiry into the concept of fairness, we first clarify the relationship between fairness and discrimination, drawing on the main reflections offered by moral philosophy. We then zoom in on the ethical significance of fairness and define fairness as an ethical value. In doing so, we base our arguments on a renewed reflection on the concept of respect, which goes beyond the idea of equal respect for persons to include respect for particular individuals, too.
The relationship between fairness and discrimination has been widely acknowledged by philosophical scholarship, mainly in the framework of theories of justice.Footnote 4 Many scholars focus on discrimination as a form of unfair treatment, rooted in the misrecognition of the value of equality. The main argument is that every person has an equal moral worth and is, therefore, due equal concern and respect (Dworkin, 2000). This implies treating people as equals and refraining from wrongful discrimination,Footnote 5 as far as distributive justice is concerned. Other scholars focus on the social meaning of discrimination. In this perspective, discrimination is seen as a way of demeaning or degrading someone and implies treating them cruelly or humiliating them to undermine their capacity to develop and maintain an integral sense of self (Sangiovanni, 2017). In a similar vein, other scholars, drawing on Rawls’s justice as fairness (1971), focus on the socio-relational aspects of discrimination and highlight its negative effects on the achievement of a society of equals. In this view, discrimination is considered a major moral and social wrong, as it hinders attitudes and practices of mutual recognition among persons (Anderson, 1999; Scheffler, 2003).
In short, the philosophical reflection on discrimination and fairness in the context of theories of justice highlights the fact that discrimination hinders social justice. The major reason is that discrimination is rooted in the misrecognition of the value of equality of persons. In this case, equality is, above all, moral, as it is grounded in the (equal) moral worth of every person.
Despite the acknowledgment of the moral value of equality for discussions on discrimination and fairness, relatively little work has been done so far on the ethical significance of discrimination and fairness themselves.
As for discrimination, there are just a few recent—and very interesting—studies that focus on the moral wrong of discrimination and investigate the conditions under which discrimination is wrongful (Eidelson, 2015; Lippert-Rasmussen, 2013; Moreau, 2010). The general idea underpinning these works is that wrongful discrimination is connected to moral disrespect, that is, disrespect for the discriminatees as persons (Eidelson, 2015, p. 6). More precisely, an action is discriminatory if either the reasons underlying the action or the consequences brought about by the action do not respect an agent’s status as an equal. In other words, it is “the absence of appropriate responsiveness to someone’s standing as a person” (Eidelson, 2015, p. 7) that underpins moral disrespect and wrongful discrimination. The moral respect at stake here can be best captured by referring to the notion of recognition respect elaborated by Darwall (1977), that is, respect grounded in the recognition of the (equal) humanity of every person.Footnote 6
As for fairness, ethical inquiry has been mainly aimed at investigating the basis of moral equality that fairness ought to ensure (Carter, 2011; Sangiovanni, 2017; Waldron, 2017), rather than at discussing the ethical significance of fairness as such. However, by focusing on the ethical significance of fairness, we can identify its constitutive dimensions and components and understand why it is an important value that ought to be promoted effectively.
On one hand, fairness is strictly linked with (non-)discrimination, as the reflection on theories of justice sketched above clearly highlights. On the other hand, fairness extends far beyond non-discrimination to include both a distributive and a socio-relational dimension (Giovanola, 2018). Among its constitutive elements are fair equality of opportunity (Rawls, 1971) and equal right to justification (Forst, 2014). Fair equality of opportunity regulates the distribution of benefits and burdens of social cooperation and the arrangement of socio-economic inequalities in such a way that not only prevents discrimination but also creates conditions that enable personal agency and self-realization (Rawls, 1971, p. 73).Footnote 7 It shows a clearly distributive dimension of fairness, which is grounded in the need to respect persons as recipients of distribution and as subjects capable of making choices and taking actions.
The right to justification expresses the ethical demand that no “relations should exist that cannot be adequately justified toward those involved” (Forst, 2014, p. 6): it points to the importance of intersubjective relations and structures, and requires that they protect every person’s status and capability to make up their own minds on issues of concern. This demand rests on a principle of general and reciprocal justification, that is, on the claim that every person ought to be respected as a subject who offers and demands justification. Therefore, the question of justification is also a question of power, that is, the question of who decides what (Forst, 2014, p. 24). The right to justification shows a socio-relational component of fairness, which is intertwined with both the importance of mutual recognition and the need to mitigate asymmetries of power.Footnote 8
Both fair equality of opportunity and equal right to justification are constitutive components of fairness and highlight a distributive and a socio-relational dimension. Moreover, both are based on the need for equal respect for persons as persons, that is, equal respect for the equal moral worth of every person.
From this, it follows that the ethical significance of fairness is grounded in the recognition of the equal moral worth of each person, which, in turn, calls for equal respect as the appropriate ethical response to each individual’s standing as a person. In other words, the value of fairness lies in the commitment to ensure equal respect for persons as persons.
Equal respect here can mean recognition respect, a type of respect that requires treating persons as “opaque” and respecting them on the footing of moral equality, without engaging in any assessment of their merits, demerits, or character. In other words, equal respect entails respecting people’s moral value, which is to say, respecting their (abstract) capacity for agency.
However, people’s moral value is not only attached to their (abstract) capacity for agency “but also to their status as particular individuals” (Noggle, 1999, p. 457) who exercise their agency in different concrete ways. Acknowledging this implies going beyond an exclusive focus on equal respect for persons as persons to investigate respect for persons as particular individuals or particular agents, taking into account the different ways in which different persons exercise their agency. But what does this mean exactly?
A very first meaning of respect for persons as particular individuals can be captured by what Darwall defines as “appraisal respect,” that is, respect for a person’s character based on that person’s specific features that make them deserving of such positive appraisal (Darwall, 1977, pp. 38–39). Darwall’s definition, however, does not explain what exactly is involved in this kind of respect. What makes a particular person “deserving” of appraisal respect and what the grounds of moral appraisal are remain unclear.
A helpful clue for digging into this issue is provided by Noggle, who argues that “A person is much more than a mere instance of rational agency. She is a being with a particular life, a particular psychology, and a particular set of attachments, goals, and commitments. To be a person is not merely to be an instance of rational agency; it is also to be some particular individual. It seems that if we are really serious about respecting persons, we ought to respect them not only as instances of rational agency, but also as the particular individuals that they are” (Noggle, 1999, p. 454). A person’s particular identity—that is, her status as the particular individual she is—depends on many factors, including her ends, values, attachments, and commitments, the “ground projects” that give meaning and purpose to her life (Williams, 1981) and make her a concrete “me,” as opposed to a “disencumbered” and abstract self (Sandel, 1984). Therefore, respecting persons requires respecting their status as particular individuals, going beyond treating them as opaque only, and focusing also on the different ways in which they exercise their agency.Footnote 9 It should be pointed out, however, that respect for particular individuals does not justify unqualified obligations to respect any particular ends, values, attachments, or commitments. Only the genuine ones, which express our agency (Valentini, 2019, p. 7) and are morally permissible (Hill, 2000, pp. 79 ff.), call for this kind of respect.
The notion of respect for persons as particular individuals allows us to unpack the grounds of our moral appraisal of specific persons and to identify them in a particular individual’s ends, values, attachments, and commitments that are morally permissible and express her agency.
Focusing on respect for persons as particular individuals, we can uncover a third constitutive component of fairness, in addition to fair equality of opportunity and equal right to justification, that we call fair equality of relationship. Fair equality of relationship points to the importance of relationships in shaping particular individuals’ attachments, commitments, ends and values. Let us make the argument clearer. Commitments involve robust intentions that can be central to one’s life plans and sense of self (Calhoun, 2009). Many such commitments depend partially or fully on our relationships, including attachments and affiliations (Giovanola, 2021), which, in turn, trigger shared intentions and capabilities for joint action (Gilbert, 2006). Affiliations, attachments, joint commitments, and more broadly, relationships give rise to obligations among the parties involved and help them shape their ends and values. What does this imply for fairness?
To answer this question, let us focus on some examples, which are also useful for the discussion on fairness in ADM that will be carried out in the next section. Many relationships, shaping attachments, affiliations, joint commitments, and shared intentions, are becoming more and more triggered by technologies based on algorithms and ADM (Giovanola, 2021). However, these technologies often create filter bubbles (Pariser, 2011) or echo chambers (Sunstein, 2008) that aggregate news and select relevant information, thereby predetermining the conditions of our choices and restricting the range of available options. As a result, people are under the illusion that they have more freedom of choice, thanks to personalization processes and techniques provided by technology, but in actuality, they are delegating a great deal of the choice process to technology and very often do not even realize how or why they made a certain choice (Royakkers et al., 2018). Consequently, their autonomy is gradually challenged and eroded (Mittelstadt et al., 2016). Moreover, and more importantly for our argument here, these technologies often shape our interactions in ways that tend not to expand our relationships but to narrow them—that is, in ways that often produce polarization. Paradoxically, by vastly increasing the number of people it is possible to meet, these technologies might narrow the focus, as we can—and, in fact, do—choose to primarily make contact or socialize with people just like ourselves. “In short, online we can intentionally restrict our interactions to those of exactly the same opinion sets as our own”; however, this “narrowing of focus and community” tends to “make us more prejudiced and our attitudes more insular” and, ultimately, leads to increased social cleavage and division (Parsell, 2008, p. 43). In fact, polarization and social cascades are likely to occur more often when people only engage in relationships with those who are like them. In groups or communities of the like-minded, “people are likely to move toward a more extreme point, in the direction to which they were previously inclined” and thus are likely to “end up thinking the same thing that they thought before—but in more extreme forms” (Sunstein, 2008, p. 99). As people want to be perceived favorably by other group members, they often “adjust their position in the direction of the dominant position” (Sunstein, 2008, p. 101): the outcome of this adjustment is that both the group, as a collective, and its members, as individuals, are inclined to support positions that tend to become increasingly self-enclosed, thus preventing the possibility of espousing different views. Moreover, as recent research on cases of motivationally biased beliefs has shown, when group members feel they belong to self-enclosed groups that conflict with other groups, they might end up eroding their ability to think of themselves as part of a joint project, as capable of having shared purposes, and as determined to build, all together, a better society (Giovanola & Sala, 2021).
The examples above show that relationships can restrict, rather than expand, particular individuals’ freedom and capability to have shared intentions and joint commitments; they can also strongly influence particular individuals’ ends and values. Self-enclosure and polarization can result from relationships that are not genuine in the sense specified above, as they constrain and eventually counter particular individuals’ agency. Fair equality of relationship, as we conceive it, is intended precisely to ensure that relationships foster particular individuals’ agency, triggering genuine attachments, commitments, values, and ends. This is why we claim that a sound conception of fairness needs to comply with fair equality of relationship, so as to include respect for particular individuals’ genuine relationships and agency.
To sum up our arguments so far, through our ethical inquiry into the notion of fairness, we have clarified the relationship between fairness and discrimination and shown that fairness and non-discrimination do not overlap. Even though fairness involves non-discrimination, it requires much more than that. Fairness has both a distributive and a socio-relational dimension, and among its constitutive components are fair equality of opportunity and equal right to justification. Both fair equality of opportunity and equal right to justification are grounded in equal respect for persons as persons, that is, respect that recognizes the equal moral worth of each person. However, as we have shown, a person’s moral value is also attached to their status as a particular agent and triggers obligations of respect for particular individuals. Respect for particular individuals is respect that recognizes, among other things, the importance of particular individuals’ attachments, affiliations, and joint commitments, as they have a huge bearing on individuals’ ends and values, shared intentions, and capabilities for joint action, that is, on many of the forms in which social relations and interpersonal relationships are shaped and performed. Therefore, a third essential component of fairness, besides fair equality of opportunity and equal right to justification, is fair equality of relationship.
The concept of fairness emerging from our inquiry sheds light on the ethical value of fairness. Understanding fairness as an ethical value amounts to redefining fairness, going beyond an exclusive focus on negative fairness and developing a more complex concept of positive fairness, which, in turn, allows us to account for the distributive and socio-relational dimension of fairness, identify the three main components of fairness, and acknowledge the importance of respect—both for persons as persons and for particular individuals—at the basis of fairness.
Having unpacked and redefined the concept of fairness through our ethical inquiry, in the next section, we analyze its implications for the discussion on fairness in ADM.
Fairness in algorithmic decision-making revised
In section “Fairness in algorithmic decision-making”, we claimed that fairness in ADM is understood mainly as non-discrimination, which, in turn, is defined mostly in terms of biases and algorithm data. We also questioned whether this concept of “negative” fairness is enough for ADM or whether it is also necessary to highlight a “positive” sense of fairness that requires more than just non-discrimination and that points to features and criteria of a fair ADM that extend beyond the consideration of biases and datasets. In section “Fairness as an ethical value”, we showed that, thanks to an ethical inquiry into the meaning of fairness, a “positive” sense of fairness can be elaborated, specifically, fairness as an ethical value. In this view, fairness is articulated in a distributive and a socio-relational dimension. It comprises three main components: fair equality of opportunity, equal right to justification, and fair equality of relationship. These components are grounded in the need to respect persons both as persons and as particular individuals.
In this section, we analyze the implications of our redefinition of fairness as an ethical value on the discussion on fairness in ADM and claim that an ethically informed concept of fairness ensures that ADM respects persons both as persons and as particular individuals. More specifically, we show that each component of fairness has profound effects on the criteria that ADM ought to meet.
Fair equality of opportunity is a component that mostly expresses the distributive dimension of fairness. It is grounded in the need to respect persons as persons and requires that distributive shares—and more specifically, access to opportunities—not be improperly influenced by economic and social contingencies, that is, by a person’s place in the social system (Rawls, 1971, p. 63). Fair equality of opportunity clearly implies non-discrimination but also goes beyond it, in so far as it does not only require a formal equality of opportunity—ensured, for example, by the legal system—but also the promotion of real chances for every person.
The concept of fair equality of opportunity has been used in the discussion on fairness in ADM, but only in a narrow sense that largely overlaps with non-discrimination in one (or more) of the four main meanings discussed in section “Fairness in algorithmic decision-making” and that mainly traces back to statistical parity or to parity in measures of the ADM’s performance across protected groups. An example is that of Hardt et al. (2016), who introduced the principle of equality of opportunity in ML, the underlying idea of which is that individuals who qualify for a desirable outcome should have an equal chance of being correctly classified for that outcome. However, the operationalization of the principle they propose is still limited to not making predictions dependent on sensitive attributes like race or gender. As we discussed above, the removal of biases and proxies is not sufficient to make an ADM fair in a way that creates real chances for every person and that therefore complies with a “positive” concept of fairness.
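To make this operationalization concrete, the sketch below is our own illustration (the function names and the target rate are assumptions, not code from Hardt et al. or from the paper) of the usual post-processing reading of equality of opportunity: one decision threshold per protected group, chosen so that truly qualified individuals are accepted at roughly the same rate in every group.

```python
# Minimal sketch: equality of opportunity in the sense of Hardt et al. (2016),
# enforced post hoc with one threshold per protected group so that qualified
# individuals (y_true == 1) are accepted at (roughly) the same rate everywhere.
import numpy as np

def threshold_for_tpr(scores, y_true, target_tpr):
    """Threshold above which roughly `target_tpr` of the truly positive cases fall."""
    positive_scores = scores[y_true == 1]
    return np.quantile(positive_scores, 1.0 - target_tpr)

def equal_opportunity_thresholds(scores, y_true, a, target_tpr=0.8):
    """One threshold per protected group, each calibrated to the same target TPR."""
    return {g: threshold_for_tpr(scores[a == g], y_true[a == g], target_tpr)
            for g in np.unique(a)}

# Usage: for individual i in group a[i], decide y_pred[i] = scores[i] >= thresholds[a[i]].
```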
Ensuring fair equality of opportunity in ADM also requires the design of compensatory tools able to create real chances for every person, even when biases against and proxies for protected categories are removed. As an example, let us consider the inequalities in access to prestigious schools among high-income and low-income students. Even if the ADM that manages online information is corrected for biases and proxies and its advertisements recommending that users apply to prestigious schools do not exclude members of disadvantaged communities, the ADM does not contribute to creating real chances for the members of those communities to be considered equally in the ADM’s application evaluation or to pay high university tuition fees. Therefore, it does not help mitigate disparities caused by their belonging to specific groups or categories. Nevertheless, an ADM design previously informed of existing social inequalities can introduce compensatory tools to mitigate these phenomena. Continuing with our example, in the context of education, ADM systems that manage and recommend information on applying to prestigious schools can be designed to target low-income communities or frequently marginalized groups with content on scholarships, grants, free online courses on extracurricular activities, and free courses on specific subjects. At the same time, ADMs that regulate applications to universities should be designed to give these activities a weight or score similar to ones given to other, less affordable activities (such as study abroad programs or violin lessons).
To sum up, ensuring fair equality of opportunity in ADM cannot be limited to eliminating discriminatory biases in the training data. It also requires an ADM design that takes existing social inequalities into account and develops tools to compensate for them. For example, designers could introduce “compensatory correlations” in the supervised learning phase of ADM to counteract or at least mitigate social and economic inequalities that are deeply rooted in our societies.
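Purely as an illustration of what such a compensatory tool could look like (the paper does not prescribe a specific technique, and the function below is our assumption), one candidate is a preprocessing reweighting in the spirit of well-known reweighing schemes from the fairness literature: training examples are weighted so that group membership and the favorable historical outcome become statistically independent before the model is fit, which up-weights favorable cases from historically disadvantaged groups.

```python
# Illustrative only; one possible "compensatory" preprocessing step.
import numpy as np
from sklearn.linear_model import LogisticRegression

def compensatory_weights(a, y):
    """w(group, label) = P(group) * P(label) / P(group, label)."""
    n = len(y)
    w = np.ones(n, dtype=float)
    for g in np.unique(a):
        for label in np.unique(y):
            mask = (a == g) & (y == label)
            if mask.any():
                w[mask] = ((a == g).mean() * (y == label).mean()) / (mask.sum() / n)
    return w

# X: features (no explicit protected attributes); a: protected attribute used
# only to compute the weights; y: historical outcomes (e.g., past admissions).
# weights = compensatory_weights(a, y)
# model = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=weights)
```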
Equal right to justification is a component that mostly expresses the socio-relational dimension of fairness. More specifically, it expresses the ethical demand that no relations should exist “that cannot be adequately justified towards those involved” (Forst, 2014, p. 6). Like fair equality of opportunity, it is grounded in the need to respect persons as persons, but instead of focusing on distribution, it focuses on intersubjective relations and structures and requires that they protect every person’s status as an equal. This demand rests on a principle of general and reciprocal justification, that is, on the claim that every person ought to be respected as a subject who offers and demands justification.
Ensuring the right to justification in ADM requires ADM to protect every person’s status as an equal end-setter—in other words, ADM must respect every person as having an equal right to offer and demand justification. As we know, ADM scales a huge quantity of data and identifies patterns through correlations. On the basis of these patterns, the range of choice options available to every person is widely pre-defined in ways that affect the conditions of the person’s choices and agency—their status as an equal end-setter. Therefore, according to the right to justification, every person has the right to demand justification for ADM processes and outcomes, and ADM designers thus have a duty to take this demand into account in ways that are accessible to the subjects involved. This does not amount to a call for full transparency. Beyond the common difficulty of achieving complete transparency of ADM due to data design decisions and model obfuscation by ADM providers, full transparency is often unnecessary for a sufficient explanation to users, nor is it always beneficial to them. Full transparency may give users such a complete overview of an algorithm’s model of functioning, features, and limitations that they could game the ADM; at the same time, it overwhelms users with a huge amount of often incomprehensible technical information that makes the ADM even more opaque (Ananny & Crawford, 2018). What counts as an interpretable explanation varies based on the end-user, and as different end-users require different explanations (Edwards & Veale, 2017), different explanations require different levels of transparency. The specific level of transparency required by the right to justification concerns the inferences (Diakopoulos & Koliska, 2017) produced and used by ADM to process a certain outcome. The right to justification requires that these inferences (and patterns) be conceived as explainable or describable (see, for example, Gebru et al., 2020) so that they do not stand in the way of fairness. In other words, the right to justification is the right of a person involved in an ADM process to know the reasons (i.e., the correlations) behind a certain algorithmic output so that persons subjected to ADM can exercise their right and power to know and, therefore, control to a certain extent their consideration by ADM as a person and—when this is not adequately respected—to contest and change the parameters underlying the ADM’s outcome.
For example, let us think about the ADM used by health insurance companies to determine insurance rates for individuals presenting medical pathologies. The predictions on which such ADM mostly relies are based on correlations found between patients’ symptoms, vital signs, general past habits, and previous comparative cases presenting formally similar characteristics, but they generally ignore emotional factors (e.g., willingness to change habits like diet and lifestyle) that can have a crucial impact on the disease’s outcome and its medical treatments (Buhmann et al., 2019). The result is that a person with a disease predicted as severe or scored as potentially terminal will pay higher insurance rates—often without knowing it—due to this inaccurate prediction. This is a case where a patient involved in the ADM can exercise her right to justification and ask for the disclosure of the correlations that lead the ADM to a certain outcome.
Thus, the right to justification is a key criterion both for ADM designers and for institutional decision-makers interested in developing and deploying fair ADM. For ADM designers, considering the right to justification as a design criterion means designing ADM systems in a way that can, by design, secure the disclosure of correlations on which a certain outcome is based without fully disclosing the ADM model or infringing the company’s intellectual property rights. For institutional decision-makers, the right to justification ought to be a criterion of discernment in the initial stages—when they are called to adopt ADM in crucial social sectors (such as justice and healthcare) and to discern the best deployment option in order to comply with the ethical value of fairness—as well as later in the process, to exclude or prohibit an ADM when a person’s request for justification cannot be met and the corresponding duty of the ADM’s providers thus goes unfulfilled.
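By way of illustration only, a "justification record" of this kind could take the following shape for a simple linear scoring model (the feature names, weights, and interface below are hypothetical assumptions of ours, not a tool proposed in the paper): it discloses the few correlations that most influenced one individual outcome without releasing the full model or the training data.

```python
# Minimal sketch of a per-decision "justification record" for a linear scorer.
import numpy as np

def justification_record(weights, feature_names, x, threshold, top_k=3):
    """Return the decision plus the top_k feature contributions behind it."""
    contributions = weights * x                      # per-feature contribution to the score
    score = contributions.sum()
    order = np.argsort(-np.abs(contributions))[:top_k]
    return {
        "decision": "approved" if score >= threshold else "rejected",
        "score": float(score),
        "threshold": threshold,
        "main_factors": [(feature_names[i], float(contributions[i])) for i in order],
    }

# Hypothetical example: the applicant sees which factors drove the outcome and
# can contest or correct them.
record = justification_record(
    weights=np.array([0.8, -1.2, 0.3]),
    feature_names=["income", "past_claims", "age_band"],
    x=np.array([0.4, 1.0, 0.5]),
    threshold=0.0,
)
print(record)
```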
Finally, fair equality of relationship is a component that, like equal right to justification, refers to the socio-relational dimension of fairness, though it is different in that it is grounded in the need to respect persons as particular individuals rather than as persons. Fair equality of relationship requires every person to be given real chances to engage in relationships that express their agency and that favor genuine attachments, commitments, values, and ends.
As argued in section “Fairness as an ethical value”, many relationships are becoming more and more triggered by technologies based on algorithms and ADM, and the latter often create filter bubbles or echo chambers. The reason is that many such relationships depend on correlations induced from data that seem targeted to particular persons but that actually do not respect them as particular individuals. Indeed, the outputs of ML-based ADM are probabilistic. They are produced by scaling huge quantities of data and finding associations and correlations between variables that emerge in data as recurrent. The problem is that ML-based ADM can infer patterns where none actually exists (Boyd & Crawford, 2012). It might also see patterns that result from in-built properties of the system or properties that are inherent to the datasets chosen or to the model itself, or that arise from the standardization of macro-correlations in data, very often deduced by setting the ML’s task towards the detection of similar characteristics amongst users’ data (e.g., collaborative filtering), with the ultimate purpose of categorizing people by assigning them labels or profiles, placing them in groups targeted with different information, and thereby shaping their exposure to different relationships. In all these cases, patterns detected and used as causes by the ADM to produce a certain outcome, and specifically, a certain label (or profile) on which the ADM bases the shaping of users’ exposure to information and relationships, fail to respond to persons as particular individuals and to consider their real attachments, affiliations, and joint commitments, instead assigning a probabilistic bet to them. This ends up pre-shaping the conditions on which individuals form relationships, ends, and values and how they express their agency, thereby influencing their capacity and freedom to develop new ideas and ground projects,Footnote 10 on the basis of detached nodes and correlations that they cannot see and that hence do not respect them as particular individuals.
To make our argument clearer, let us consider the case of Bob, a White student from Wisconsin with a huge number of historical “friends” on Facebook who express strong right-wing opinions and are members of a far-right group. While Bob, his family, and his closest friends have a very different set of political opinions that they usually do not share online, Bob’s social media, which relies on ML-based ADM systems, recommends right-wing information, news, and social communities to him, based on the strong correlations between a large number of his friends and a right-wing political orientation. Bob moves to Iowa for school, and the same ML-based ADM shows him housing ads that exclude Black neighborhoods based on previous correlations scaled through the analysis of his social media. Although Bob may become aware of this distortion and so start to act specifically to game the ADM in order to change the correlations inferred, such awareness is very often difficult to develop. It is very likely that, as a result, Bob will gradually start to develop friendships, start activities, and form shared commitments with neighbors, inadvertently and gradually changing his values and beliefs in order to be recognized by the new community he entered on the basis of the ADM’s influences, and specifically on the basis of the exposure to information and relations that the ADM has pre-selected for him through the elaboration of a profile and a categorization that do not consider him as a particular individual. This is a clear example of ADM correlations that fail to consider Bob’s genuine attachments and values, instead preferring to focus on macro-correlations based on probabilistic calculations. Thus, the ADM fails to respect Bob as a particular individual, instead treating him as a detached aggregation of recurrent variables.
So far, the methods proposed to counteract this phenomenon have mostly relied on users’ capacity to sabotage ADM’s correlations, for example, by erasing web histories, deleting cookies, using the incognito option, entering fake queries, or liking everything on social media (Pariser, 2011). But not only is the increasing presence of ML-based ADM in our social environments making these techniques harder for users to perform; ADM functioning is also far from ensuring the respect of persons as particular individuals.
Ensuring respect for persons as particular individuals and fair equality of relationship in ADM requires that a person be respected as a particular individual as early as the design of ADM, which should not operate merely on the basis of nodes and correlations detected and chosen solely by ML-based ADM by comparing users’ observable data and macro-generalized patterns obtained by scaling massive amounts of similar available data (as in the case of collaborative filtering techniques ruling ADM). The respect for genuine attachments, goals, and joint commitments by ADM, that is, the respect for fair equality of relationship, must be incorporated into the very design of ADM systems. This is possible by combining the continuous learning characterizing ML with novel tools designed to favor specific interaction between ADM and users, so that the latter can be informed about how they are profiled and categorized and, in turn, be in a position to actively inform the ADM about their real attachments, commitments, values, and ends; in other words, to take part in the ADM’s functioning in shaping their exposure to information and relations that are meaningful for developing and expressing their agency.
Examples of these tools might include questionnaires or semi-structured interviews, which provide users with an overview of their algorithmic consideration (or profile) and suggest actions to audit and refine the ADM so that it better considers them as particular individuals (for example, modifying or removing specific trails, such as data nodes and pieces of information, that are critical for the development of profiles aligned with them as specific persons). This preserves the possibility that users will develop genuine relationships that truly allow them to express their agency.
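As a purely hypothetical sketch of such a tool (the class, field names, and behavior below are our assumptions, not an implementation discussed in the paper), a questionnaire interface could disclose the inferred profile and let declared attachments and commitments override or remove purely inferred ones.

```python
# Hypothetical sketch: a user-facing profile with inferred and declared parts.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    inferred: dict = field(default_factory=dict)    # built from behavioral correlations
    declared: dict = field(default_factory=dict)    # supplied by the user via a questionnaire

    def show(self):
        """Disclose the current algorithmic consideration of the user."""
        return {**self.inferred, **self.declared}

    def answer_questionnaire(self, answers: dict):
        """Let the user confirm, correct, or remove inferred interests."""
        for key, value in answers.items():
            if value is None:
                self.inferred.pop(key, None)        # user asks to drop an inferred trail
            else:
                self.declared[key] = value          # declared values take precedence

profile = UserProfile(inferred={"politics": "far-right", "housing": "exclude_area_X"})
profile.answer_questionnaire({"politics": "undeclared", "housing": None})
print(profile.show())   # {'politics': 'undeclared'} -> recommendations now follow declared values
```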
Concluding remarks
In our paper, we tackled one of the most urgent risks of AI systems, the risk of leading to unfair outcomes or of becoming “weapons of math destruction.” In pursuing our analysis, we focused on ADM and discussed the concept of fairness emerging in the debate. We highlighted that fairness in ADM seems to overlap with non-discrimination, which, in turn, is defined in terms of biases and datasets. Drawing insights from moral philosophy, we showed that such a concept of fairness is vague and partial, and we pursued an ethical inquiry into the concept of fairness. Our inquiry led us to redefine fairness as an ethical value and argue that it extends beyond the consideration of biases and datasets. Specifically, we argued that fairness is articulated in a distributive and socio-relational dimension and comprises three main components: fair equality of opportunity, equal right to justification, and fair equality of relationship. We also claimed that fairness is grounded in the need to respect persons both as persons and as particular individuals. Finally, we analyzed the implications of our redefinition of fairness as an ethical value on the discussion of fairness in ADM and showed that each component of fairness has profound effects on the criteria that ADM ought to meet. We pointed to some features and criteria of fair ADM and suggested some practical tools for implementing them.
We are aware that much more needs to be done in this direction. However, we are confident that our inquiry will help others better understand fairness as an ethical value and shed light on its major dimensions and components in a way that might also be useful for further implementation of fair ADM systems. In this way, we hope ADM can once more become a force for good and a weapon of moral construction aimed at contributing to greater fairness and a better society.
Data availability
Not applicable.
Code availability
Not applicable.
Change history
17 January 2024
A Correction to this paper has been published: https://doi.org/10.1007/s10676-023-09743-5
Notes
There are cases, for example, in housing advertisements, where the removal from ADM of a proxy like users’ postal codes can be beneficial, as they can be intentionally used by real estate agencies or private sellers to infer the race of potential applicants in order to exclude them. There are also other cases where the removal of data on (and proxies for) protected categories can be highly detrimental. For instance, Corbett-Davies and Goel (2018) point out how the consideration in ADM of the protected category of gender is highly crucial in recidivism algorithms used in the criminal justice system. Considering the lower rates of female re-offense, excluding data or proxies for gender as an input in ADM would result in disproportionately high-risk scores for women and lead to unfair decisions by ADM or by people on the basis of ADM ratings.
Here, we are paraphrasing the well-known distinction between negative and positive liberty introduced by Berlin (1969).
Foundational questions about discrimination are familiar to legal scholars, too, and in recent years, in particular, there has been a renewed interest in philosophical questions about anti-discrimination law (Hellman & Moreau, 2013; Khaitan, 2015) aimed mainly at defining under what conditions discrimination ought to be prohibited. The focus of these inquiries, however, is on discrimination rather than on the relationship between discrimination and fairness.
However, drawing on Dworkin (2000), Waldron (2017, p. 14) acknowledges that not every discrimination is wrongful; in fact, there might also be forms of unequal treatment or “surface-level” discrimination that do not imply any moral wrongdoing but rather are justifiable by an appeal to the whole range of human interests, as in the case, discussed by Waldron, of firefighters being selected for their physical fitness.
Darwall (1977) introduces the well-known distinction between recognition respect and appraisal respect, whereby the latter depends on the appraisal of a person’s character. Darwall’s account of recognition respect has been further elaborated on by Carter (2011), who developed the notion of opacity-respect, that is, recognition respect expressed through the idea that we have to treat every person as “opaque,” respecting them on the footing of moral equality, without engaging in an assessment of their personal merits or demerits (Carter, 2011).
Following Rawls (1971), fair equality of opportunity is to be complemented with a difference principle, which requires that – once fair equality of opportunity is guaranteed – the overall scheme of cooperation and distribution does not discriminate against the (expectations of the) worst-off. Even though we do not dig into this principle here, we would like to stress that it is consistent with our discussion on discrimination and fairness.
On the importance of asymmetries of power, as a central issue of fairness in ML, framed through a relational ethics perspective, see Birhane (2021).
Following Noggle and expanding on the importance of concrete agency, some scholars have proposed calling the respect for persons as particular agents “agency respect” (Valentini, 2019).
As highlighted by Richards (2008), the freedom to express new ideas and ground projects requires the preservation of a private sphere where persons can make up their minds. He calls this intellectual privacy.
References
Abebe, R., Barocas, S., Kleinberg, J., Levy, K., Raghavan, M., & Robinson, D. G. (2020). Roles for computing in social change. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* '20). Association for Computing Machinery, New York, NY, USA, pp. 252–260. https://doi.org/10.1145/3351095.3372871.
Ananny, M., & Crawford, K. (2018). Seeing without Knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645
Anderson, E. (1999). What is the point of equality? Ethics, 109(2), 289–337. https://doi.org/10.1086/233897
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias. ProPublica. Retrieved March 10, 2021, from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
Barocas, S. (2014). Data mining and the discourse on discrimination. Proceedings of the Data Ethics Workshop, Conference on Knowledge Discovery and Data Mining (KDD). Retrieved March 10, 2021, from https://dataethics.github.io/proceedings/DataMiningandtheDiscourseOnDiscrimination.pdf.
Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2477899
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new jim code. Polity.
Berk, R., Heidari, H., Jabbari, S., Kearns, M., & Roth, A. (2018). Fairness in criminal justice risk assessments: The state of the art. Sociological Methods & Research. https://doi.org/10.1177/0049124118782533
Berlin, I. (1969). Two concepts of liberty. Oxford University Press.
Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. Retrieved 11 March, 2021, from http://arxiv.org/abs/1712.03586
Birhane, A. (2021). Algorithmic injustice: A relational ethics approach. Patterns, 2(2), 100205. https://doi.org/10.1016/j.patter.2021.100205
Boyd, D., & Crawford, K. (2012). Critical questions for big data. Information, Communication & Society, 15(5), 662–679. https://doi.org/10.1080/1369118X.2012.678878.
Bozdag, E. (2013). Bias in algorithmic filtering and personalization. Ethics and Information Technology, 15, 209–227. https://doi.org/10.1007/s10676-013-9321-6
Buhmann, A., Paßmann, J., & Fieseler, C. (2019). Managing algorithmic accountability: Balancing reputational concerns, engagement strategies, and the potential of rational discourse. Journal of Business Ethics. https://doi.org/10.1007/s10551-019-04226-4
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR, 81, 77–91. Retrieved March 11, 2021, from http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf
Calhoun, C. (2009). What good is commitment? Ethics, 119(4), 613–641. https://doi.org/10.1086/605564
Carter, I. (2011). Respect and the basis of equality. Ethics, 121(3), 538–571. https://doi.org/10.1086/658897
Coll, S. (2013). Consumption as biopower: Governing bodies with loyalty cards. Journal of Consumer Culture, 13(3), 201–220.
Corbett-Davies, S., & Goel, S. (2018). The measure and mismeasure of fairness: A critical review of fair machine learning. Retrieved March 11, 2021, from http://arxiv.org/abs/1808.00023
Danks, D., & London, A. J. (2017). Algorithmic bias in autonomous systems. Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, 4691–4697. https://doi.org/10.24963/ijcai.2017/654
Darwall, S. (1977). Two kinds of respect. Ethics, 88, 36–49. https://doi.org/10.1086/292054
Dastin, J. (2018, October 11). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. Retrieved March 7, 2021 from https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G.
Deville, J. (2013, May 20). Leaky data: How Wonga makes lending decisions. Charisma: Consumer Market Studies. Retrieved March 11, 2021, from http://www.charisma-network.net/finance/leaky-data-how-wonga-makes-lending-decisions.
Diakopoulos, N., & Koliska, M. (2017). Algorithmic transparency in the news media. Digital Journalism, 5(7), 809–828. https://doi.org/10.1080/21670811.2016.1208053
Dwork, C., Hardt, M., Pitassi, T., Reingold, O., & Zemel, R. (2011). Fairness through awareness. Retrieved March 11, 2021, from http://arxiv.org/abs/1104.3913.
Dworkin, R. (2000). Sovereign virtue: The theory and practice of equality. Harvard University Press.
Edwards, L., & Veale, M. (2017). Slave to the algorithm? Why a right to explanation is probably not the remedy you are looking for. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2972855
Eidelson, B. (2015). Discrimination and disrespect. Oxford University Press.
Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin's Press.
Ferguson, A. G. (2017). The rise of big data policing: Surveillance, race, and the future of law enforcement. New York University Press.
Forst, R. (2014). Two pictures of justice. In Justice, democracy and the right to justification: Rainer Forst in dialogue (pp. 3–26). Bloomsbury.
Fuster, A., Goldsmith-Pinkham, P., Ramadorai, T., & Walther, A. (2017). Predictably unequal? The effects of machine learning on credit markets. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3072038
Gajane, P., & Pechenizkiy, M. (2018). On formalizing fairness in prediction with machine learning. Retrieved March 11, 2021, from http://arxiv.org/abs/1710.03184.
Gebru, T., Morgenstern, J., Vecchione, B., Wortman Vaughan, J., Wallach, H., Daumé III, H., & Crawford, K. (2020). Datasheets for datasets. Retrieved March 11, 2021, from http://arxiv.org/abs/1803.09010.
Gilbert, M. (2006). A theory of political obligation: Membership, commitment, and the bonds of society. Oxford University Press.
Gillis, T. B., & Spiess, J. (2019). Big data and discrimination. University of Chicago Law Review, 459. Retrieved March 11, 2021, from https://lawreview.uchicago.edu/sites/lawreview.uchicago.edu/files/09%20Gillis%20%26%20Spiess_SYMP_Post-SA%20%28BE%29.pdf.
Giovanola, B. (2018). Giustizia sociale. Eguaglianza e rispetto nelle società diseguali. Il Mulino.
Giovanola, B. (2021). Justice, emotions, socially disruptive technologies. Critical Review of International Social and Political Philosophy. https://doi.org/10.1080/13698230.2021.1893255
Giovanola, B., & Sala, R. (2021). The reasons of the unreasonable: Is political liberalism still an option? Philosophy and Social Criticism. https://doi.org/10.1177/01914537211040568
Green, B., & Chen, Y. (2019). Disparate interactions: An algorithm-in-the-loop analysis of fairness in risk assessments. Proceedings of the Conference on Fairness, Accountability, and Transparency - FAT* ’19, 90–99. Atlanta, GA, USA: ACM Press. https://doi.org/10.1145/3287560.3287563
Grgić-Hlača, N., Redmiles, M. E., Gummadi, K. P., & Weller, A. (2018). Human perceptions of fairness in algorithmic decision making: A case study of criminal risk prediction. Retrieved March 11, 2021, from http://arxiv.org/abs/1802.09548.
Hardt, M., Price, E. & Srebro, N. (2016). Equality of opportunity in supervised learning. Retrieved March 12, 2021, from https://arxiv.org/abs/1610.02413.
Hellman, D., & Moreau, S. (2013). Philosophical foundations of discrimination law. Oxford University Press.
Hildebrandt, M. (2008). Defining profiling: A new type of knowledge? In M. Hildebrandt & S. Gutwirth (Eds.), Profiling the European citizen. Dordrecht: Springer. https://doi.org/10.1007/978-1-4020-6914-7_2
Hill, T. E., Jr. (2000). Respect, pluralism, and justice. Kantian perspectives. Oxford University Press.
Hinman, L. M. (2008). Searching ethics: The role of search engines in the construction and distribution of knowledge. In A. Spink & M. Zimmer (Eds.), Web search. Information science and knowledge management. Springer.
Hoffmann, A. L., Roberts, S. T., Wolf, C. T., & Wood, S. (2018). Beyond fairness, accountability, and transparency in the ethics of algorithms: Contributions and perspectives from LIS. Proceedings of the Association for Information Science and Technology, 55(1), 694–696. https://doi.org/10.1002/pra2.2018.14505501084
Hu, M. (2017). Algorithmic Jim Crow. Fordham Law Review. Retrieved March 10, 2021, from https://ir.lawnet.fordham.edu/flr/vol86/iss2/13/
Jobin, A., Ienca, M., & Vayena, E. (2019). Artificial intelligence: the global landscape of ethics guidelines. Nature Machine Intelligence, 1, 389–399. https://doi.org/10.1038/s42256-019-0088-2
Kamishima, T., Akaho, S., Asoh, H., & Sakuma, J. (2012). Considerations on fairness-aware data mining. In: IEEE 12th International Conference on Data Mining Workshops, Brussels, Belgium, pp. 378–385. Retrieved March 10, 2021, from: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6406465
Katell, M., Young, M., Dailey, D., Herman, B., Guetler, V., Tam, A., Binz, C., Raz, D., & Krafft, P. M. (2020). Toward situated interventions for algorithmic equity: Lessons from the field. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 45–55. Barcelona Spain: ACM. https://doi.org/10.1145/3351095.3372874.
Khaitan, T. (2015). A theory of discrimination law. Oxford University Press.
Kim, P. T. (2017). Data-driven discrimination at work. William & Mary Law Review, 58(3), 857. Retrieved March 11, 2021, from https://scholarship.law.wm.edu/wmlr/vol58/iss3/4.
Kleinberg, J., Mullainathan, S., & Raghavan, M. (2017). Inherent Trade-Offs in the Fair Determination of Risk Scores. Leibniz International Proceedings in Informatics (LIPIcs), 67. https://doi.org/10.4230/LIPIcs.ITCS.2017.43.
Laidlaw, E. B. (2008). Private power, public interest: An Examination of search engine accountability. International Journal of Law and Information Technology, 17(1), 113–145. https://doi.org/10.1093/ijlit/ean018
Lee, M. K. (2018). Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society, 5(1), 205395171875668. https://doi.org/10.1177/2053951718756684
Lippert-Rasmussen, K. (2013). Born free and equal? A philosophical inquiry into the nature of discrimination. Oxford University Press.
Lobosco, K. (2013, August 27). Facebook friends could change your credit score. CNN Business. Retrieved March 11, 2021, from https://money.cnn.com/2013/08/26/technology/social/facebook-credit-score/index.html.
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society. https://doi.org/10.1177/2053951716679679
Moreau, S. (2010). What is discrimination? Philosophy and Public Affairs, 38(2), 143–179. https://doi.org/10.1111/j.1088-4963.2010.01181.x
Mowshowitz, A., & Kawaguchi, A. (2002). Bias on the web. Communications of the ACM, 45(9), 56–60.
Newell, S., & Marabelli, M. (2015). Strategic opportunities (and challenges) of algorithmic decision-making: A call for action on the long-term societal effects of 'datafication.' The Journal of Strategic Information Systems, 24(1), 3–14. https://doi.org/10.1016/j.jsis.2015.02.001
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
Noggle, R. (1999). Kantian respect and particular persons. Canadian Journal of Philosophy, 29, 449–477. https://doi.org/10.1080/00455091.1999.10717521
Ochigame, R. (2019, December 20). The invention of "Ethical AI". The Intercept. Retrieved March 10, 2021, from https://theintercept.com/2019/12/20/mit-ethical-ai-artificial-intelligence/.
O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
Overdorf, R., Kulynych, B., Balsa, E., Troncoso, C., & Gürses, S. (2018). Questioning the assumptions behind fairness solutions. Retrieved March 11, 2021, from http://arxiv.org/abs/1811.11293
Pariser, E. (2011). The filter bubble. Penguin.
Parsell, M. (2008). Pernicious virtual communities: Identity, polarisation and the web 2.0. Ethics and Information Technology, 10(1), 41–56.
Pasquale, F. (2015). The Black Box Society: the secret algorithms that control money and information. Harvard University Press.
Rawls, J. (1971). A theory of justice. Harvard University Press.
Richards, N.M. (2008). Intellectual privacy. Texas Law Review, Vol. 87, Washington U. School of Law Working Paper No. 08-08-03.
Richardson, R., Schultz, J., & Crawford, K. (2019). Dirty data, bad predictions: How civil rights violations impact police data, predictive policing systems, and justice. New York University Law Review, 94, 192. Retrieved March 10, 2021, from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3333423
Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds and Machines, 29(4), 495–514. https://doi.org/10.1007/s11023-019-09509-3
Romei, A., & Ruggieri, S. (2014). A multidisciplinary survey on discrimination analysis. The Knowledge Engineering Review, 29(5), 582–638. https://doi.org/10.1017/S0269888913000039
Royakkers, L., Timmer, J., Kool, L., & van Est, R. (2018). Societal and ethical issues of digitization. Ethics and Information Technology, 20(2), 127–142. https://doi.org/10.1007/s10676-018-9452-x
Sandel, M. (1984). The procedural republic and the unencumbered self. Political Theory, 12, 81–96. Retrieved March 11, 2021, from http://www.jstor.org/stable/191382
Sangiovanni, A. (2017). Humanity without dignity. Moral equality, respect, and human rights. Harvard University Press.
Saxena, N., Huang, K., DeFilippis, E., Radanovic, G., Parkes, D., & Liu, Y. (2019). How do fairness definitions fare? Examining public attitudes towards algorithmic definitions of fairness. Retrieved March 11, 2021, from http://arxiv.org/abs/1811.03654.
Scheffler, S. (2003). What is egalitarianism? Philosophy and Public Affairs, 31(1), 5–39. Retrieved March 11, 2021, from http://www.jstor.org/stable/3558033.
Selbst, A. D., Boyd, D., Friedler, A. S., Venkatasubramanian, S., & Vertesi, J. (2019). Fairness and abstraction in sociotechnical systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency—FAT* ’19, 59–68. Atlanta, GA, USA: ACM Press. https://doi.org/10.1145/3287560.3287598.
Seng Ah Lee, M., & Floridi, L. (2020). Algorithmic fairness in mortgage lending: From absolute conditions to relational trade-offs. Minds & Machines. https://doi.org/10.1007/s11023-020-09529-4
Shah, H. (2018). Algorithmic accountability. Philosophical Transactions of the Royal Society: Mathematical, Physical and Engineering Sciences, 376(2128), 20170362. https://doi.org/10.1098/rsta.2017.0362
Shapiro, S. (2020). Algorithmic television in the age of large-scale customization. Television & New Media, 21(6), 658–663. https://doi.org/10.1177/1527476420919691
Shin, D., & Park, Y. J. (2019). Role of fairness, accountability, and transparency in algorithmic affordance. Computers in Human Behavior, 98, 277–284. https://doi.org/10.1016/j.chb.2019.04.019
Simonite, T. (2020, October 7). Meet the secret algorithm that's keeping students out of college. Wired. Retrieved March 11, 2021, from https://www.wired.com/story/algorithm-set-students-grades-altered-futures/
Sunstein, C. (2008). Democracy and the internet. In J. van den Hoven & J. Weckert (Eds.), Information technology and moral philosophy (pp. 93–110). Cambridge University Press.
Tsamados, A., Aggarwal, N., Cowls, J., Morley, J., Roberts, H., Taddeo, M., & Floridi, L. (2021). The ethics of algorithms: Key problems and solutions. AI & Society. https://doi.org/10.1007/s00146-021-01154-8
Tufekci, Z. (2015). Algorithmic harms beyond Facebook and Google: Emergent challenges of computational agency. Journal on Telecommunications and High Technology Law, 13(203). Retrieved March 11, 2021 from https://ctlj.colorado.edu/wp-content/uploads/2015/08/Tufekci-final.pdf
Turner Lee, N. (2018). Detecting racial bias in algorithms and machine learning. Journal of Information, Communication and Ethics in Society, 16(3), 252–260. https://doi.org/10.1108/JICES-06-2018-0056
Valentini, L. (2019). Respect for persons and the moral force of socially constructed norms. Noûs, 2019, 1–24. https://doi.org/10.1111/nous.12319
Veale, M., & Binns, R. (2017). Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. Big Data & Society. https://doi.org/10.1177/2053951717743530
Waldron, J. (2017). One another's equals: The basis of human equality. Harvard University Press.
Williams, B. (1981). Persons, character and morality. Moral luck: Philosophical papers 1973–1980 (pp. 1–19). Cambridge University Press.
Wong, P. (2019). Democratizing algorithmic fairness. Philosophy & Technology. https://doi.org/10.1007/s13347-019-00355-w
Funding
Funding was provided by Università degli Studi di Macerata.
Ethics declarations
Conflict of interest
The authors have no conflicts of interest to declare.
Ethical challenges and potential impact
The ethical challenges addressed in the paper concern the risks of unfair outcomes of AI systems and the current limits of the Ethics and AI debate in conceptualizing and operationalizing the ethical value of fairness in the specific context of ML-based ADM. The work is expected to have an impact both on the theoretical side of research in the Ethics of AI, by enriching the debate on the moral value of fairness in ADM with insights from accounts of fairness developed in moral philosophy, and on the applied side, as the ethical inquiry into fairness could inform the design of novel techniques that go beyond bias detection and reduction in order to operationalize and implement fairness in ADM.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Giovanola, B., Tiribelli, S. Weapons of moral construction? On the value of fairness in algorithmic decision-making. Ethics Inf Technol 24, 3 (2022). https://doi.org/10.1007/s10676-022-09622-5