The Ethics of Algorithms – Mittelstadt et al.
Abstract
In information societies, operations, decisions and choices previously left to humans are increasingly delegated to
algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken
as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions,
and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and
operation of algorithms and our understanding of their ethical implications can have severe consequences affecting
individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance
of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of
ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to
develop the ethics of algorithms.
Keywords
Algorithms, automation, Big Data, data analytics, data mining, ethics, machine learning
2015; Birrer, 2005), as seen in delivery of online advertisements according to perceived ethnicity (Sweeney, 2013).

Determining the potential and actual ethical impact of an algorithm is difficult for many reasons. Identifying the influence of human subjectivity in algorithm design and configuration often requires investigation of long-term, multi-user development processes. Even with sufficient resources, problems and underlying values will often not be apparent until a problematic use case arises. Learning algorithms, often quoted as the 'future' of algorithms and analytics (Tutt, 2016), introduce uncertainty over how and why decisions are made due to their capacity to tweak operational parameters and decision-making rules 'in the wild' (Burrell, 2016). Determining whether a particular problematic decision is merely a one-off 'bug' or evidence of a systemic failure or bias may be impossible (or at least highly difficult) with poorly interpretable and predictable learning algorithms. Such challenges are set to grow, as algorithms increase in complexity and interact with each other's outputs to take decisions (Tutt, 2016). The resulting gap between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals, groups and whole segments of a society.

In this paper, we map the ethical problems prompted by algorithmic decision-making. The paper answers two questions: what kinds of ethical issues are raised by algorithms? And how do these issues apply to algorithms themselves, as opposed to technologies built upon algorithms? We first propose a conceptual map based on six kinds of concerns that are jointly sufficient for a principled organisation of the field. We argue that the map allows for a more rigorous diagnosis of ethical challenges related to the use of algorithms. We then review the scientific literature discussing ethical aspects of algorithms to assess the utility and accuracy of the proposed map. Seven themes emerged from the literature that demonstrate how the concerns defined in the proposed map arise in practice. Together, the map and review provide a common structure for future discussion of the ethics of algorithms. In the final section of the paper we assess the fit between the proposed map and themes currently raised in the reviewed literature to identify areas of the 'ethics of algorithms' requiring further research. The conceptual framework, review and critical analysis offered in this paper aim to inform future ethical inquiry, development, and governance of algorithms.

Background

To map the ethics of algorithms, we must first define some key terms. 'Algorithm' has an array of meanings across computer science, mathematics and public discourse. As Hill explains, "we see evidence that any procedure or decision process, however ill-defined, can be called an 'algorithm' in the press and in public discourse. We hear, in the news, of 'algorithms' that suggest potential mates for single people and algorithms that detect trends of financial benefit to marketers, with the implication that these algorithms may be right or wrong..." (Hill, 2015: 36). Many scholarly critiques also fail to specify technical categories or a formal definition of 'algorithm' (Burrell, 2016; Kitchin, 2016). In both cases the term is used not in reference to the algorithm as a mathematical construct, but rather the implementation and interaction of one or more algorithms in a particular program, software or information system. Any attempt to map an 'ethics of algorithms' must address this conflation between formal definitions and popular usage of 'algorithm'.

Here, we follow Hill's (2015: 47) formal definition of an algorithm as a mathematical construct with "a finite, abstract, effective, compound control structure, imperatively given, accomplishing a given purpose under given provisions." However, our investigation will not be limited to algorithms as mathematical constructs. As suggested by the inclusion of 'purpose' and 'provisions' in Hill's definition, algorithms must be implemented and executed to take action and have effects. The popular usage of the term becomes relevant here. References to algorithms in public discourse do not normally address algorithms as mathematical constructs, but rather particular implementations. Lay usage of 'algorithm' also includes implementation of the mathematical construct into a technology, and an application of the technology configured for a particular task.2 A fully configured algorithm will incorporate the abstract mathematical structure that has been implemented into a system for analysis of tasks in a particular analytic domain. Given this clarification, the configuration of an algorithm to a specific task or dataset does not change its underlying mathematical representation or system implementation; it is rather a further tweaking of the algorithm's operation in relation to a specific case or problem.

Accordingly, it makes little sense to consider the ethics of algorithms independent of how they are implemented and executed in computer programs, software and information systems. Our aim here is to map the ethics of algorithms, with 'algorithm' interpreted along public discourse lines. Our map will include ethical issues arising from algorithms as mathematical constructs, implementations (technologies, programs) and configurations (applications).3
Where discussion focuses on implementations or configurations (i.e. an artefact with an embedded algorithm), we limit our focus to issues relating to the algorithm's work, rather than all issues related to the artefact.

However, as noted by Hill above, a problem with the popular usage of 'algorithm' is that it can describe "any procedure or decision process," resulting in a prohibitively large range of artefacts to account for in a mapping exercise. Public discourse is currently dominated by concerns with a particular class of algorithms that make decisions, e.g. the best action to take in a given situation, the best interpretation of data, and so on. Such algorithms augment or replace analysis and decision-making by humans, often due to the scope or scale of data and rules involved. Without offering a precise definition of the class, the algorithms we are interested in here are those that make generally reliable (but subjective and not necessarily correct) decisions based upon complex rules that challenge or confound human capacities for action and comprehension.4 In other words, we are interested in algorithms whose actions are difficult for humans to predict or whose decision-making logic is difficult to explain after the fact. Algorithms that automate mundane tasks, for instance in manufacturing, are not our concern.

Decision-making algorithms are used across a variety of domains, from simplistic decision-making models (Levenson and Pettrey, 1994) to complex profiling algorithms (Hildebrandt, 2008). Notable contemporary examples include online software agents used by online service providers to carry out operations on behalf of users (Kim et al., 2014); online dispute resolution algorithms that replace human decision-makers in dispute mediation (Raymond, 2014; Shackelford and Raymond, 2014); recommendation and filtering systems that compare and group users to provide personalised content (Barnet, 2009); clinical decision support systems (CDSS) that recommend diagnoses and treatments to physicians (Diamond et al., 1987; Mazoué, 1990); and predictive policing systems that predict criminal activity hotspots.

The discipline of data analytics is a standout example, defined here as the practice of using algorithms to make sense of streams of data. Analytics informs immediate responses to the needs and preferences of the users of a system, as well as longer term strategic planning and development by a platform or service provider (Grindrod, 2014). Analytics identifies relationships and small patterns across vast and distributed datasets (Floridi, 2012). New types of enquiry are enabled, including behavioural research on 'scraped' data (e.g. Lomborg and Bechmann, 2014: 256); tracking of fine-grained behaviours and preferences (e.g. sexual orientation or political opinions; Mahajan et al., 2012); and prediction of future behaviour (as used in predictive policing or credit, insurance and employment screening; Zarsky, 2016). Actionable insights (more on this later) are sought rather than causal relationships (Grindrod, 2014; Hildebrandt, 2011; Johnson, 2013).

Analytics demonstrates how algorithms can challenge human decision-making and comprehension even for tasks previously performed by humans. In making a decision (for instance, which risk class a purchaser of insurance belongs to), analytics algorithms work with high-dimension data to determine which features are relevant to a given decision. The number of features considered in any such classification task can run into the tens of thousands. This type of task is thus a replication of work previously undertaken by human workers (i.e. risk stratification), but involving a qualitatively different decision-making logic applied to greater inputs.

Algorithms are, however, ethically challenging not only because of the scale of analysis and complexity of decision-making. The uncertainty and opacity of the work being done by algorithms and its impact are also increasingly problematic. Algorithms have traditionally required decision-making rules and weights to be individually defined and programmed 'by hand'. While this is still true in many cases (Google's PageRank algorithm is a standout example), algorithms increasingly rely on learning capacities (Tutt, 2016).

Machine learning is "any methodology and set of techniques that can employ data to come up with novel patterns and knowledge, and generate models that can be used for effective predictions about the data" (Van Otterlo, 2013). Machine learning is defined by the capacity to define or modify decision-making rules autonomously. A machine learning algorithm applied to classification tasks, for example, typically consists of two components: a learner, which produces a classifier, with the intention to develop classes that can generalise beyond the training data (Domingos, 2012). The algorithm's work involves placing new inputs into a model or classification structure. Image recognition technologies, for example, can decide what types of objects appear in a picture. The algorithm 'learns' by defining rules to determine how new inputs will be classified. The model can be taught to the algorithm via hand-labelled inputs (supervised learning); in other cases the algorithm itself defines best-fit models to make sense of a set of inputs (unsupervised learning)5 (Schermer, 2011; Van Otterlo, 2013). In both cases, the algorithm defines decision-making rules to handle new inputs. Critically, the human operator does not need to understand the rationale of the decision-making rules produced by the algorithm (Matthias, 2004: 179).
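To make the distinction concrete, the following minimal sketch (an illustration added here, not drawn from the reviewed works) shows a supervised learner producing a classifier from hand-labelled inputs and an unsupervised learner defining its own best-fit grouping; it assumes NumPy and scikit-learn are available, and the data and model choices are invented placeholders:

# Illustrative sketch only: supervised vs. unsupervised learning.
# Synthetic placeholder data; real deployments would use domain data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # 200 cases described by 5 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # hand-labelled classes (supervised signal)

# Supervised learning: the 'learner' fits a 'classifier' intended to generalise
# beyond the training data; its decision rules are induced, not hand-written.
classifier = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
new_case = rng.normal(size=(1, 5))
print("predicted class:", classifier.predict(new_case))

# Unsupervised learning: no labels are given; the algorithm defines its own
# best-fit grouping of the inputs.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster assignments for first 10 cases:", clusters[:10])

In neither case does the operator need to inspect or understand the induced rules before acting on the outputs, which is precisely the concern raised above.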
As this explanation suggests, learning capacities grant algorithms some degree of autonomy. The impact of this autonomy must remain uncertain to some degree. As a result, tasks performed by machine learning are difficult to predict beforehand (how a new input will be handled) or explain afterwards (how a particular decision was made). Uncertainty can thus inhibit the identification and redress of ethical challenges in the design and operation of algorithms.

Map of the ethics of algorithms

Using the key terms defined in the previous section, we propose a conceptual map (Figure 1) based on six types of concerns that are jointly sufficient for a principled organisation of the field, and conjecture that it allows for a more rigorous diagnosis of ethical challenges related to the use of algorithms. The map is not proposed from a particular theoretical or methodological approach to ethics, but rather is intended as a prescriptive framework of types of issues arising from algorithms owing to three aspects of how algorithms operate. The map takes into account that the algorithms this paper is concerned with are used to (1) turn data into evidence for a given outcome (henceforth conclusion), and that this outcome is then used to (2) trigger and motivate an action that (on its own, or when combined with other actions) may not be ethically neutral. This work is performed in ways that are complex and (semi-)autonomous, which (3) complicates apportionment of responsibility for effects of actions driven by algorithms. The map is thus not intended as a tool to help solve ethical dilemmas arising from problematic actions driven by algorithms, but rather is posed as an organising structure, based on how algorithms operate, that can structure future discussion of ethical issues. This leads us to posit three epistemic and two normative kinds of ethical concerns arising from the use of algorithms, based on how algorithms process data to produce evidence and motivate actions. These concerns are associated with potential failures that may involve multiple actors, and therefore complicate the question of who should be held responsible and/or accountable for such failures. Such difficulties motivate the addition of traceability as a final, overarching concern.

Figure 1. Six types of ethical concerns raised by algorithms.

Inconclusive evidence

When algorithms draw conclusions from the data they process using inferential statistics and/or machine learning techniques, they produce probable6 yet inevitably uncertain knowledge. Statistical learning theory (James et al., 2013) and computational learning theory (Valiant, 1984) are both concerned with the characterisation and quantification of this uncertainty. In addition, as often indicated, statistical methods can help identify significant correlations, but these are rarely considered sufficient to posit the existence of a causal connection (Illari and Russo, 2014: Chapter 8), and thus may be insufficient to motivate action on the basis of knowledge of such a connection. The term actionable insight we mentioned earlier can be seen as an explicit recognition of these epistemic limitations.

Algorithms are typically deployed in contexts where more reliable techniques are either not available or too costly to implement, and are thus rarely meant to be infallible. Recognising this limitation is important, but it should be complemented with an assessment of how the risk of being wrong affects one's epistemic responsibilities (Miller and Record, 2013): for instance, by weakening the justification one has for a conclusion beyond what would be deemed acceptable to justify action in the context at hand.

Inscrutable evidence

When data are used as (or processed to produce) evidence for a conclusion, it is reasonable to expect that the connection between the data and the conclusion should be accessible (i.e. intelligible as well as open to scrutiny and perhaps even critique).7 When the connection is not obvious, this expectation can be satisfied by better access as well as by additional explanations. Given how algorithms operate, these requirements are not automatically satisfied. A lack of knowledge regarding the data being used (e.g. relating to their scope, provenance and quality), and more importantly the inherent difficulty of interpreting how each of the many data-points used by a machine-learning algorithm contributes to the conclusion it generates, cause practical as well as principled limitations (Miller and Record, 2013).
While Shannon’s mathematical theory of communica- both the cause and responsibility for the harm to be
tion (Shannon and Weaver, 1998), and especially some traced.
of his information-inequalities, give a formally precise Thanks to this map (Figure 1), we are now able to
account of this fact, the informal ‘garbage in, garbage distinguish epistemological, strictly ethical and traceabil-
out’ principle clearly illustrates what is at stake here, ity types in descriptions of ethical problems with algo-
namely that conclusions can only be as reliable (but rithms. The map is thus intended as a tool to organise a
also as neutral) as the data they are based on. widely dispersed academic discourse addressing a diver-
Evaluations of the neutrality of the process, and by sity of technologies united by their reliance on algo-
connection whether the evidence produced is mis- rithms. To assess the utility of the map, and to observe
guided, are of course observer-dependent. how each of these kinds of concerns manifests in ethical
problems already observed in algorithms, a systematic
review of academic literature was carried out.9 The fol-
Unfair outcomes
lowing sections (4 to 10) describe how ethical issues and
The three epistemic concerns detailed thus far address concepts are treated in the literature explicitly discussing
the quality of evidence produced by an algorithm that the ethical aspects of algorithms.
motivates a particular action. However, ethical evalu-
ation of algorithms can also focus solely on the action Inconclusive evidence leading to
itself. Actions driven by algorithms can be assessed
according to numerous ethical criteria and principles,
unjustified actions
which we generically refer to here as the observer- Much algorithmic decision-making and data mining
dependent ‘fairness’ of the action and its effects. An relies on inductive knowledge and correlations identi-
action can be found discriminatory, for example, fied within a dataset. Causality is not established prior
solely from its effect on a protected class of people, to acting upon the evidence produced by the algorithm.
even if made on the basis of conclusive, scrutable and The search for causal links is difficult, as correlations
well-founded evidence. established in large, proprietary datasets are frequently
not reproducible or falsifiable (cf. Ioannidis, 2005;
Lazer et al., 2014). Despite this, correlations based on
Transformative effects
a sufficient volume of data are increasingly seen as suf-
The ethical challenges posed by the spreading use ficiently credible to direct action without first establish-
of algorithms cannot always be retraced to clear cases ing causality (Hildebrandt, 2011; Hildebrandt and
of epistemic or ethical failures, for some of the effects of Koops, 2010; Mayer-Schönberger and Cukier, 2013;
the reliance on algorithmic data-processing and (semi-) Zarsky, 2016). In this sense data mining and profiling
autonomous decision-making can be questionable and algorithms often need only establish a sufficiently reli-
yet appear ethically neutral because they do not seem to able evidence base to drive action, referred to here as
cause any obvious harm. This is because algorithms can actionable insights.
affect how we conceptualise the world, and modify its Acting on correlations can be doubly problematic.10
social and political organisation (cf. Floridi, 2014). Spurious correlations may be discovered rather than
Algorithmic activities, like profiling, reontologise the genuine causal knowledge. In predictive analytics cor-
world by understanding and conceptualising it in new, relations are doubly uncertain (Ananny, 2016). Even if
unexpected ways, and triggering and motivating actions strong correlations or causal knowledge are found, this
based on the insights it generates. knowledge may only concern populations while actions
are directed towards individuals (Illari and Russo,
2014). As Ananny (2016: 103) explains, ‘‘algorithmic
Traceability
categories . . . signal certainty, discourage alternative
Algorithms are software-artefacts used in data-proces- explorations, and create coherence among disparate
sing, and as such inherit the ethical challenges asso- objects,’’ all of which contribute to individuals being
ciated with the design and availability of new described (possibly inaccurately) via simplified models
technologies and those associated with the manipula- or classes (Barocas, 2014). Finally, even if both actions
tion of large volumes of personal and other data. This and knowledge are at the population-level, our actions
implies that harm caused by algorithmic activity is hard may spill over into the individual level. For example,
to debug (i.e. to detect the harm and find its cause), but this happens when an insurance premium is set for a
also that it is rarely straightforward to identify who sub-population, and hence has to be paid by each
should be held responsible for the harm caused.8 member. Actions taken on the basis of inductive cor-
When a problem is identified addressing any or all of relations have real impact on human interests inde-
the five preceding kinds, ethical assessment requires pendent of their validity.
Emergent bias is linked with advances in knowledge or changes to the system's (intended) users and stakeholders (Friedman and Nissenbaum, 1996). For example, CDSS are unavoidably biased towards treatments included in their decision architecture. Although emergent bias is linked to the user, it can emerge unexpectedly from decisional rules developed by the algorithm, rather than any 'hand-written' decision-making structure (Hajian and Domingo-Ferrer, 2013; Kamiran and Calders, 2010). Human monitoring may prevent some biases from entering algorithmic decision-making in these cases (Raymond, 2014).

The outputs of algorithms also require interpretation (i.e. what one should do based on what the algorithm indicates); for behavioural data, 'objective' correlations can come to reflect the interpreter's "unconscious motivations, particular emotions, deliberate choices, socio-economic determinations, geographic or demographic influences" (Hildebrandt, 2011: 376). Explaining the correlation in any of these terms requires additional justification – meaning is not self-evident in statistical models. Different metrics "make visible aspects of individuals and groups that are not otherwise perceptible" (Lupton, 2014: 859). It thus cannot be assumed that an observer's interpretation will correctly reflect the perception of the actor rather than the biases of the interpreter.

Unfair outcomes leading to discrimination

Much of the reviewed literature also addresses how discrimination results from biased evidence and decision-making.15 Profiling by algorithms, broadly defined "as the construction or inference of patterns by means of data mining and ... the application of the ensuing profiles to people whose data match with them" (Hildebrandt and Koops, 2010: 431), is frequently cited as a source of discrimination. Profiling algorithms identify correlations and make predictions about behaviour at a group level, albeit with groups (or profiles) that are constantly changing and re-defined by the algorithm (Zarsky, 2013). Whether dynamic or static, the individual is comprehended based on connections with others identified by the algorithm, rather than actual behaviour (Newell and Marabelli, 2015: 5). Individuals' choices are structured according to information about the group (Danna and Gandy, 2002: 382). Profiling can inadvertently create an evidence-base that leads to discrimination (Vries, 2010).

For the affected parties, data-driven discriminatory treatment is unlikely to be more palatable than discrimination fuelled by prejudices or anecdotal evidence. This much is implicit in Schermer's (2011) argument that discriminatory treatment is not ethically problematic in itself; rather, it is the effects of the treatment that determine its ethical acceptability. However, Schermer muddles bias and discrimination into a single concept. What he terms discrimination can be described instead as mere bias, or the consistent and repeated expression of a particular preference, belief or value in decision-making (Friedman and Nissenbaum, 1996). In contrast, what he describes as problematic effects of discriminatory treatment can be defined as discrimination tout court. So bias is a dimension of the decision-making itself, whereas discrimination describes the effects of a decision, in terms of adverse disproportionate impact resulting from algorithmic decision-making. Barocas and Selbst (2015) show that precisely this definition guides 'disparate impact detection', an enforcement mechanism for American anti-discrimination law in areas such as social housing and employment. They suggest that disparate impact detection provides a model for the detection of bias and discrimination in algorithmic decision-making which is sensitive to differential privacy.

It may be possible to direct algorithms not to consider sensitive attributes that contribute to discrimination (Barocas and Selbst, 2015), such as gender or ethnicity (Calders et al., 2009; Kamiran and Calders, 2010; Schermer, 2011), based upon the emergence of discrimination in a particular context. However, proxies for protected attributes are not easy to predict or detect (Romei and Ruggieri, 2014; Zarsky, 2016), particularly when algorithms access linked datasets (Barocas and Selbst, 2015). Profiles constructed from neutral characteristics such as postal code may inadvertently overlap with other profiles related to ethnicity, gender, sexual preference, and so on (Macnish, 2012; Schermer, 2011).

Efforts are underway to avoid such 'redlining' by sensitive attributes and proxies. Romei and Ruggieri (2014) observe four overlapping strategies for discrimination prevention in analytics: (1) controlled distortion of training data; (2) integration of anti-discrimination criteria into the classifier algorithm; (3) post-processing of classification models; (4) modification of predictions and decisions to maintain a fair proportion of effects between protected and unprotected groups. These strategies are seen in the development of privacy-preserving, fairness- and discrimination-aware data mining (Dwork et al., 2011; Kamishima et al., 2012). Fairness-aware data mining takes the broadest aim, as it gives attention not only to discrimination but fairness, neutrality, and independence as well (Kamishima et al., 2012). Various metrics of fairness are possible based on statistical parity, differential privacy and other relations between data subjects in classification tasks (Dwork et al., 2011; Romei and Ruggieri, 2014).
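One widely used metric of this kind is statistical parity, often operationalised as a 'disparate impact' ratio of positive-outcome rates between groups. The sketch below is an illustrative addition (not drawn from the cited works); the data are simulated, and the 0.8 threshold echoes the 'four-fifths' rule used in US disparate impact guidance:

# Illustrative sketch of a disparate impact check on recorded decisions.
# 'decisions' and 'group' are hypothetical placeholder data.
import numpy as np

rng = np.random.default_rng(7)
group = rng.choice(["protected", "unprotected"], size=1000, p=[0.3, 0.7])
# Simulated decisions with a built-in disparity against the protected group.
p_positive = np.where(group == "protected", 0.45, 0.60)
decisions = rng.random(1000) < p_positive

def disparate_impact_ratio(decisions, group, protected="protected"):
    """Ratio of positive-outcome rates: protected group vs. the rest."""
    rate_prot = decisions[group == protected].mean()
    rate_rest = decisions[group != protected].mean()
    return rate_prot / rate_rest

ratio = disparate_impact_ratio(decisions, group)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:   # conventional 'four-fifths' threshold
    print("possible adverse disproportionate impact -- review the model")

Such a check captures only one observer-dependent notion of fairness; as noted above, other metrics based on differential privacy or individual similarity encode different, sometimes incompatible, value choices.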
The related practice of personalisation is also frequently discussed. Personalisation can segment a population so that only some segments are worthy of receiving some opportunities or information, re-enforcing existing social (dis)advantages. Questions of the fairness and equitability of such practices are often raised (e.g. Cohen et al., 2014; Danna and Gandy, 2002; Rubel and Jones, 2014). Personalised pricing, for example, can be "an invitation to leave quietly" issued to data subjects deemed to lack value or the capacity to pay.16

Reasons to consider discriminatory effects as adverse and thus ethically problematic are diverse. Discriminatory analytics can contribute to self-fulfilling prophecies and stigmatisation in targeted groups, undermining their autonomy and participation in society (Barocas, 2014; Leese, 2014; Macnish, 2012). Personalisation through non-distributive profiling, seen for example in personalised pricing in insurance premiums (Hildebrandt and Koops, 2010; Van Wel and Royakkers, 2004), can be discriminatory by violating both ethical and legal principles of equal or fair treatment of individuals (Newell and Marabelli, 2015). Further, as described above, the capacity of individuals to investigate the personal relevance of factors used in decision-making is inhibited by opacity and automation (Zarsky, 2016).

Transformative effects leading to challenges for autonomy

Value-laden decisions made by algorithms can also pose a threat to the autonomy of data subjects. The reviewed literature in particular connects personalisation algorithms to these threats. Personalisation can be defined as the construction of choice architectures which are not the same across a sample (Tene and Polonetsky, 2013a). Similar to explicitly persuasive technologies, algorithms can nudge the behaviour of data subjects and human decision-makers by filtering information (Ananny, 2016). Different content, information, prices, etc. are offered to groups or classes of people within a population according to a particular attribute, e.g. the ability to pay.

Personalisation algorithms tread a fine line between supporting and controlling decisions by filtering which information is presented to the user based upon in-depth understanding of preferences, behaviours, and perhaps vulnerabilities to influence (Bozdag, 2013; Goldman, 2006; Newell and Marabelli, 2015; Zarsky, 2016). Classifications and streams of behavioural data are used to match information to the interests and attributes of data subjects. The subject's autonomy in decision-making is disrespected when the desired choice reflects third party interests above the individual's (Applin and Fischer, 2015; Stark and Fins, 2013).

This situation is somewhat paradoxical. In principle, personalisation should improve decision-making by providing the subject with only relevant information when confronted with a potential information overload; however, deciding which information is relevant is inherently subjective. The subject can be pushed to make the "institutionally preferred action rather than their own preference" (Johnson, 2013); online consumers, for example, can be nudged to fit market needs by filtering how products are displayed (Coll, 2013). Lewis and Westlund (2015: 14) suggest that personalisation algorithms need to be taught to 'act ethically' to strike a balance between coercing and supporting users' decisional autonomy.

Personalisation algorithms reduce the diversity of information users encounter by excluding content deemed irrelevant or contradictory to the user's beliefs (Barnet, 2009; Pariser, 2011). Information diversity can thus be considered an enabling condition for autonomy (van den Hoven and Rooksby, 2008). Filtering algorithms that create 'echo chambers' devoid of contradictory information may impede decisional autonomy (Newell and Marabelli, 2015). Algorithms may be unable to replicate the "spontaneous discovery of new things, ideas and options" which appear as anomalies against a subject's profiled interests (Raymond, 2014). With near ubiquitous access to information now feasible in the internet age, issues of access concern whether the 'right' information can be accessed, rather than any information at all. Control over personalisation and filtering mechanisms can enhance user autonomy, but potentially at the cost of information diversity (Bozdag, 2013). Personalisation algorithms, and the underlying practice of analytics, can thus both enhance and undermine the agency of data subjects.
Transformative effects leading to challenges for informational privacy

Algorithms are also driving a transformation of notions of privacy. Responses to discrimination, de-individualisation and the threats of opaque decision-making for data subjects' agency often appeal to informational privacy (Schermer, 2011), or the right of data subjects to "shield personal data from third parties." Informational privacy concerns the capacity of an individual to control information about herself (Van Wel and Royakkers, 2004), and the effort required by third parties to obtain this information.

A right to identity derived from informational privacy interests suggests that opaque or secretive profiling is problematic.17 Opaque decision-making by algorithms (see 'Inconclusive evidence leading to unjustified actions' section) inhibits oversight and informed decision-making concerning data sharing (Kim et al., 2014). Data subjects cannot define privacy norms to govern all types of data generically because their value or insightfulness is only established through processing (Hildebrandt, 2011; Van Wel and Royakkers, 2004).

Beyond opacity, privacy protections based upon identifiability are poorly suited to limit external management of identity via analytics. Identity is increasingly influenced by knowledge produced through analytics that makes sense of growing streams of behavioural data. The 'identifiable individual' is not necessarily a part of these processes. Schermer (2011) argues that informational privacy is an inadequate conceptual framework because profiling makes the identifiability of data subjects irrelevant. Profiling seeks to assemble individuals into meaningful groups, for which identity is irrelevant (Floridi, 2012; Hildebrandt, 2011; Leese, 2014). Van Wel and Royakkers (2004: 133) argue that external identity construction by algorithms is a type of de-individualisation, or a "tendency of judging and treating people on the basis of group characteristics instead of on their own individual characteristics and merit." Individuals need never be identified when the profile is assembled to be affected by the knowledge and actions derived from it (Louch et al., 2010: 4). The individual's informational identity (Floridi, 2011) is breached by meaning generated by algorithms that link the subject to others within a dataset (Vries, 2010).

Current regulatory protections similarly struggle to address the informational privacy risks of analytics. 'Personal data' is defined in European data protection law as data describing an identifiable person; anonymised and aggregated data are not considered personal data (European Commission, 2012). Privacy-preserving data mining techniques which do not require access to individual and identifiable records may mitigate these risks (Agrawal and Srikant, 2000; Fule and Roddick, 2004). Others suggest a mechanism to 'opt out' of profiling for a particular purpose or context would help protect data subjects' privacy interests (Hildebrandt, 2011; Rubel and Jones, 2014). A lack of recourse mechanisms for data subjects to question the validity of algorithmic decisions further exacerbates the challenges of controlling identity and data about oneself (Schermer, 2011). In response, Hildebrandt and Koops (2010) call for 'smart transparency' by designing the socio-technical infrastructures responsible for profiling in a way that allows individuals to anticipate and respond to how they are profiled.

Traceability leading to moral responsibility

When a technology fails, blame and sanctions must be apportioned. One or more of the technology's designer (or developer), manufacturer or user are typically held accountable. Designers and users of algorithms are typically blamed when problems arise (Kraemer et al., 2011: 251). Blame can only be justifiably attributed when the actor has some degree of control (Matthias, 2004) and intentionality in carrying out the action.

Traditionally, computer programmers have had "control of the behaviour of the machine in every detail" insofar as they can explain its design and function to a third party (Matthias, 2004). This traditional conception of responsibility in software design assumes the programmer can reflect on the technology's likely effects and potential for malfunctioning (Floridi et al., 2014), and make design choices to achieve the most desirable outcomes according to the functional specification (Matthias, 2004). With that said, programmers may only retain control in principle due to the complexity and volume of code (Sandvig et al., 2014), and the use of external libraries often treated by the programmer as 'black boxes' (cf. Note 7).

Superficially, the traditional, linear conception of responsibility is suitable to non-learning algorithms. When decision-making rules are 'hand-written', their authors retain responsibility (Bozdag, 2013). Decision-making rules determine the relative weight given to the variables or dimensions of the data considered by the algorithm. A popular example is Facebook's EdgeRank personalisation algorithm, which prioritises content based on date of publication, frequency of interaction between author and reader, media type, and other dimensions. Altering the relative importance of each factor changes the relationships users are encouraged to maintain. The party that sets confidence intervals for an algorithm's decision-making structure shares responsibility for the effects of the resultant false positives, false negatives and spurious correlations (Birrer, 2005; Johnson, 2013; Kraemer et al., 2011). Fule and Roddick (2004: 159) suggest operators also have a responsibility to monitor for ethical impacts of decision-making by algorithms because "the sensitivity of a rule may not be apparent to the miner ... the ability to harm or to cause offense can often be inadvertent." Schermer (2011) similarly suggests that data processors should actively search for errors and bias in their algorithms and models.
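A minimal sketch of such a hand-written decision rule is given below (an illustration added here, with invented weights and feature names rather than Facebook's actual EdgeRank formula); the point is that every weight is an explicit, author-chosen value for which responsibility can be clearly assigned:

# Illustrative sketch of a hand-written ranking rule (weights and features are
# invented; this is not the actual EdgeRank algorithm).
from dataclasses import dataclass

@dataclass
class Post:
    recency: float       # e.g. 1 / hours since publication
    affinity: float      # interaction frequency between author and reader
    media_weight: float  # designer-assigned value of the media type

# The relative importance of each factor is set by the designer 'by hand';
# changing these numbers changes which relationships users are encouraged to maintain.
WEIGHTS = {"recency": 0.5, "affinity": 0.3, "media_weight": 0.2}

def score(post: Post) -> float:
    return (WEIGHTS["recency"] * post.recency
            + WEIGHTS["affinity"] * post.affinity
            + WEIGHTS["media_weight"] * post.media_weight)

feed = [Post(0.9, 0.1, 0.5), Post(0.4, 0.8, 0.7), Post(0.2, 0.3, 1.0)]
for post in sorted(feed, key=score, reverse=True):
    print(round(score(post), 3), post)

Because every rule and weight here is legible and authored, the traditional attribution of responsibility remains workable; the difficulty discussed next arises when such rules are instead induced by the algorithm itself.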
Human oversight of complex systems as an accountability mechanism may, however, be impossible due to the challenges for transparency already mentioned (see 'Inscrutable evidence leading to opacity' section). Furthermore, humans kept 'in the loop' of automated decision-making may be poorly equipped to identify problems and take corrective actions (Elish, 2016).

Particular challenges arise for algorithms with learning capacities, which defy the traditional conception of designer responsibility. The model requires the system to be well-defined, comprehensible and predictable; complex and fluid systems (i.e. ones with countless decision-making rules and lines of code) inhibit holistic oversight of decision-making pathways and dependencies. Machine learning algorithms are particularly challenging in this respect (Burrell, 2016; Matthias, 2004; Zarsky, 2016), seen for instance in genetic algorithms that program themselves. The traditional model of responsibility fails because "nobody has enough control over the machine's actions to be able to assume the responsibility for them" (Matthias, 2004: 177).

Allen et al. (2006: 14) concur in discussing the need for 'machine ethics': "the modular design of systems can mean that no single person or group can fully grasp the manner in which the system will interact or respond to a complex flow of new inputs." From traditional, linear programming through to autonomous algorithms, behavioural control is gradually transferred from the programmer to the algorithm and its operating environment (Matthias, 2004: 182). The gap between the designer's control and the algorithm's behaviour creates an accountability gap (Cardona, 2008) wherein blame can potentially be assigned to several moral agents simultaneously.

Related segments of the literature address the 'ethics of automation', or the acceptability of replacing or augmenting human decision-making with algorithms (Naik and Bhide, 2014). Morek (2006) finds it problematic to assume that algorithms can replace skilled professionals in a like-for-like manner. Professionals have implicit knowledge and subtle skills (cf. Coeckelbergh, 2013; MacIntyre, 2007) that are difficult to make explicit and perhaps impossible to make computable (Morek, 2006). When algorithmic and human decision-makers work in tandem, norms are required to prescribe when and how human intervention is required, particularly in cases like high-frequency trading where real-time intervention is impossible before harms occur (Davis et al., 2013; Raymond, 2014).

Algorithms that make decisions can be considered blameworthy agents (Floridi and Sanders, 2004a; Wiltshire, 2015). The moral standing and capacity for ethical decision-making of algorithms remains a standout question in machine ethics (e.g. Allen et al., 2006; Anderson, 2008; Floridi and Sanders, 2004a). Ethical decisions require agents to evaluate the desirability of different courses of action which present conflicts between the interests of involved parties (Allen et al., 2006; Wiltshire, 2015).

For some, learning algorithms should be considered moral agents with some degree of moral responsibility. Requirements for moral agency may differ between humans and algorithms; Floridi and Sanders (2004b) and Sullins (2006) argue, for instance, that 'machine agency' requires significant autonomy, interactive behaviour, and a role with causal accountability, to be distinguished from moral responsibility, which requires intentionality. As suggested above, moral agency and accountability are linked. Assigning moral agency to artificial agents can allow human stakeholders to shift blame to algorithms (Crnkovic and Çürüklü, 2011). Denying agency to artificial agents makes designers responsible for the unethical behaviour of their semi-autonomous creations; bad consequences reflect bad design (Anderson and Anderson, 2014; Kraemer et al., 2011; Turilli, 2007). Neither extreme is entirely satisfactory due to the complexity of oversight and the volatility of decision-making structures.

Beyond the nature of moral agency in machines, work in machine ethics also investigates how best to design moral reasoning and behaviours into autonomous algorithms as artificial moral and ethical agents18 (Anderson and Anderson, 2007; Crnkovic and Çürüklü, 2011; Sullins, 2006; Wiegel and Berg, 2009). Research into this question remains highly relevant because algorithms can be required to make real-time decisions involving "difficult trade-offs ... which may include difficult ethical considerations" without an operator (Wiegel and Berg, 2009: 234).

Automation of decision-making creates problems of ethical consistency between humans and algorithms. Turilli (2007) argues algorithms should be constrained "by the same set of ethical principles" as the former human worker to ensure consistency within an organisation's ethical standards. However, ethical principles as used by human decision-makers may prove difficult to define and render computable. Virtue ethics is also thought to provide rule sets for algorithmic decision-structures which are easily computable. An ideal model for artificial moral agents based on heroic virtues is suggested by Wiltshire (2015), wherein algorithms are trained to be heroic and thus moral.19

Other approaches do not require ethical principles to serve as pillars of algorithmic decision-making frameworks. Bello and Bringsjord (2012) insist that moral reasoning in algorithms should not be structured around classic ethical principles because this does not reflect how humans actually engage in moral decision-making.
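To give a crude sense of what rendering such principles computable might involve, the sketch below (an illustration added here, not a proposal from the cited authors) wraps an automated decision in explicit, hand-written constraint checks and defers to a human operator when any constraint is violated; the constraint names and decision function are invented placeholders:

# Illustrative sketch only: encoding organisational 'ethical principles' as
# explicit constraints around an automated decision.
from typing import Callable, Dict, List

Constraint = Callable[[Dict], bool]

def no_protected_attributes(case: Dict) -> bool:
    return not any(k in case for k in ("ethnicity", "gender"))

def within_price_band(case: Dict) -> bool:
    return case.get("quoted_price", 0) <= 1.2 * case.get("reference_price", 0)

CONSTRAINTS: List[Constraint] = [no_protected_attributes, within_price_band]

def automated_decision(case: Dict) -> str:
    # Placeholder for a learned or hand-written decision rule.
    return "approve" if case.get("risk_score", 1.0) < 0.5 else "refer"

def constrained_decision(case: Dict) -> str:
    violated = [c.__name__ for c in CONSTRAINTS if not c(case)]
    if violated:
        # Defer to human intervention rather than acting on a constraint breach.
        return "escalate to human operator (violated: " + ", ".join(violated) + ")"
    return automated_decision(case)

print(constrained_decision({"risk_score": 0.3, "quoted_price": 100, "reference_price": 95}))
print(constrained_decision({"risk_score": 0.3, "gender": "F", "quoted_price": 100, "reference_price": 95}))

Such hand-written guards illustrate only the simplest case; as the literature discussed above makes clear, translating rich ethical principles into computable constraints, and deciding who answers for their gaps, remains an open problem.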
from human decision-makers. Similar effects can be observed in mixed networks of human and information systems as already studied in bureaucracies, characterised by reduced feelings of personal responsibility and the execution of otherwise unjustifiable actions (Arendt, 1971). Algorithms involving stakeholders from multiple disciplines can, for instance, lead to each party assuming the others will shoulder ethical responsibility for the algorithm's actions (Davis et al., 2013). Machine learning adds an additional layer of complexity between designers and actions driven by the algorithm, which may justifiably weaken blame placed upon the former. Additional research is needed to understand the prevalence of these effects in algorithm-driven decision-making systems, and to discern how to minimise the inadvertent justification of harmful acts.

A related problem concerns malfunctioning and resilience. The need to apportion responsibility is acutely felt when algorithms malfunction. Unethical algorithms can be thought of as malfunctioning software-artefacts that do not operate as intended. Useful distinctions exist between errors of design (types) and errors of operation (tokens), and between the failure to operate as intended (dysfunction) and the presence of unintended side-effects (misfunction) (Floridi et al., 2014). Misfunctioning is distinguished from mere negative side effects by avoidability, or the extent to which comparable extant algorithm types accomplish the intended function without the effects in question. These distinctions clarify ethical aspects of algorithms that are strictly related to their functioning, either in the abstract (for instance when we look at raw performance) or as part of a larger decision-making system, and reveal the multi-faceted interaction between intended and actual behaviour.

Both types of malfunctioning imply distinct responsibilities for algorithm and software developers, users and artefacts. Additional work is required to describe fair apportionment of responsibility for dysfunctioning and misfunctioning across large development teams and complex contexts of use. Further work is also required to specify requirements for resilience to malfunctioning as an ethical ideal in algorithm design. Machine learning in particular raises unique challenges, because achieving the intended or "correct" behaviour does not imply the absence of errors20 (cf. Burrell, 2016) or harmful actions and feedback loops. Algorithms, particularly those embedded in robotics, can for instance be made safely interruptible insofar as harmful actions can be discouraged without the algorithm being encouraged to deceive human users to avoid further interruptions (Orseau and Armstrong, 2016).

Finally, while a degree of transparency is broadly recognised as a requirement for traceability, how to operationalise transparency remains an open question, particularly for machine learning. Merely rendering the code of an algorithm transparent is insufficient to ensure ethical behaviour. Regulatory or methodological requirements for algorithms to be explainable or interpretable demonstrate the challenge data controllers now face (Tutt, 2016). One possible path to explainability is algorithmic auditing carried out by data processors (Zarsky, 2016), external regulators (Pasquale, 2015; Tutt, 2016; Zarsky, 2016), or empirical researchers (Kitchin, 2016; Neyland, 2016), using ex post audit studies (Adler et al., 2016; Diakopoulos, 2015; Kitchin, 2016; Romei and Ruggieri, 2014; Sandvig et al., 2014), reflexive ethnographic studies in development and testing (Neyland, 2016), or reporting mechanisms designed into the algorithm itself (Vellido et al., 2012). For all types of algorithms, auditing is a necessary precondition to verify correct functioning. For analytics algorithms with foreseeable human impact, auditing can create an ex post procedural record of complex algorithmic decision-making to unpack problematic or inaccurate decisions, or to detect discrimination or similar harms. Further work is required to design broadly applicable, low-impact auditing mechanisms for algorithms (cf. Adler et al., 2016; Sandvig et al., 2014) that build upon current work in transparency and interpretability of machine learning (e.g. Kim et al., 2015; Lou et al., 2013).
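A toy version of such an ex post audit is sketched below (an added illustration, not drawn from the cited audit studies): given a log of cases and decisions, it compares outcome rates across groups and re-runs the audited model with a proxy attribute flipped, flagging candidate discrimination for human review. The model, attributes and data are invented stand-ins:

# Illustrative sketch of an ex post audit over a hypothetical decision log.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
postcode_group = rng.choice(["A", "B"], size=n)   # proxy attribute
income = rng.normal(50, 10, size=n)

def model(income, postcode_group):
    # Stand-in decision rule that quietly penalises postcode group "B".
    penalty = np.where(postcode_group == "B", 5.0, 0.0)
    return (income - penalty) > 48

decisions = model(income, postcode_group)

# 1) Outcome-rate comparison across groups from the audit log.
for g in ("A", "B"):
    print(f"approval rate, group {g}: {decisions[postcode_group == g].mean():.2f}")

# 2) Counterfactual probe: flip the proxy attribute and count changed decisions.
flipped = np.where(postcode_group == "A", "B", "A")
changed = (model(income, flipped) != decisions).mean()
print(f"decisions that change when the proxy attribute is flipped: {changed:.1%}")

Real audits face the access, scale and interpretability constraints discussed above, but even this simple procedural record shows how disparate outcomes and sensitivity to proxies can be documented after the fact.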
freedoms and legitimate interests’’ when automated aspects of algorithms; (2) a prescriptive map to organise
decision-making is applied. This obligation is rather discussion; and (3) a critical assessment of the literature
vague and opaque. to identify areas requiring further work to develop the
On the rights of data subjects, the GDPR generally ethics of algorithms.
takes a self-determination approach. Data subjects are The review undertaken here was primarily limited to
granted a right to object to profiling methods (Art. 21) literature explicitly discussing algorithms. As a result,
and a right not to be subject to solely automated pro- much relevant work performed in related fields is only
cessed individual decision-making23 (Art. 22). In these briefly touched upon, in areas such as ethics of artificial
and similar cases the person concerned either has the intelligence, surveillance studies, computer ethics and
right to object that such methods are used or should at machine ethics.27 While it would be ideal to summarise
least have the right to ‘‘obtain human intervention’’ in work in all the fields represented in the reviewed litera-
order to express their views and to ‘‘contest the deci- ture, and thus in any domain where algorithms are in
sion" (Art. 22(3)).

At first glance these provisions defer control to the data subjects and enable them to decide how their data are used. Notwithstanding that the GDPR bears great potential to improve data protection, a number of exemptions limit the rights of data subjects.24 The GDPR can be a toothless or a powerful mechanism to protect data subjects, dependent upon its eventual legal interpretation: the wording of the regulation allows either to be true. Supervisory authorities and their future judgments will determine the effectiveness of the new framework.25 However, additional work is required in parallel to provide normative guidelines and practical mechanisms for putting the new rights and responsibilities into practice.

These are not mundane regulatory tasks. For example, the provisions highlighted above can be interpreted to mean that automated decisions must be explainable to data subjects. Given the connectivity and dependencies of algorithms and datasets in complex information systems, and the tendency of errors and biases in data and models to be hidden over time (see 'Misguided evidence leading to bias' section), 'explainability'26 may prove particularly disruptive for data-intensive industries. Practical requirements that strike an appropriate balance between data subjects' rights to be informed about the logic and consequences of profiling and the burden imposed on data controllers still need to be unpacked. Alternatively, it may be necessary to limit automation or particular analytic methods in particular contexts to meet the transparency requirements specified in the GDPR (Tutt, 2016; Zarsky, 2016). Comparable restrictions already exist in the US Credit Reporting Act, which effectively prohibits machine learning in credit scoring because reasons for the denial of credit must be made available to consumers on demand (Burrell, 2016).
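To make the practical stakes of 'explainability' more concrete, the following minimal sketch (ours, not drawn from the reviewed literature or from the GDPR itself) shows one way a controller might translate the 'logic involved' in an automated credit decision into reasons a lay data subject can follow: a simple linear scoring model whose per-feature contributions are ranked and verbalised. The feature names, data and threshold are invented for illustration, the attribution ignores the model intercept and feature interactions, and non-linear models would require considerably more sophisticated techniques (cf. Adler et al., 2016; Datta et al., 2016).

    # Minimal, illustrative sketch: a lay-readable explanation of one automated credit decision.
    # All feature names, training data and thresholds are hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    features = ["income", "existing_debt", "years_at_address", "missed_payments"]

    # Synthetic records standing in for a controller's historical data.
    X = rng.normal(size=(500, 4))
    y = (X[:, 0] - X[:, 1] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)
    model = LogisticRegression().fit(X, y)

    def explain_decision(applicant, threshold=0.5):
        """Return a decision plus the features that pushed the score up or down."""
        prob = model.predict_proba(applicant.reshape(1, -1))[0, 1]
        contributions = model.coef_[0] * applicant  # per-feature contribution to the linear score
        ranked = sorted(zip(features, contributions), key=lambda fc: abs(fc[1]), reverse=True)
        decision = "approved" if prob >= threshold else "refused"
        lines = [f"Application {decision} (score {prob:.2f}, threshold {threshold})."]
        for name, value in ranked:
            direction = "raised" if value > 0 else "lowered"
            lines.append(f"- {name} {direction} the score by {abs(value):.2f}")
        return "\n".join(lines)

    print(explain_decision(X[0]))

Even this toy case illustrates the balance discussed above: such an explanation is cheap to produce for a single linear model, but the burden grows quickly once models, data flows and re-used third-party components become entangled.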
Conclusion

Algorithms increasingly mediate digital life and decision-making. The work described here has made three contributions to clarify the ethical importance of this mediation: (1) a review of existing discussion of ethical aspects of algorithms; (2) a prescriptive map to organise the debate; and (3) an assessment of the available literature to identify areas requiring further work to develop the ethics of algorithms. An exhaustive review was not attempted: given the variety of algorithms and contexts of use, the scope of such an exercise is prohibitive. We must therefore accept that there may be gaps in coverage for topics discussed only in relation to specific types of algorithms, and not for algorithms themselves. Despite this limitation, the prescriptive map is purposefully broad and iterative to organise discussion around the ethics of algorithms, both past and future.

Discussion of a concept as complex as 'algorithm' inevitably encounters problems of abstraction, or 'talking past each other', due to a failure to specify a level of abstraction (LoA) for discussion and thus to limit the relevant set of observables (Floridi, 2008). A mature 'ethics of algorithms' does not yet exist, in part because 'algorithm' as a concept describes a prohibitively broad range of software and information systems. Despite this limitation, several themes emerged from the literature that indicate how ethics can coherently be discussed when focusing on algorithms, independently of domain-specific work.

Mapping these themes onto the prescriptive framework proposed here has proven helpful to distinguish between the kinds of ethical concerns generated by algorithms, which are often muddled in the literature. Distinct epistemic and normative concerns are often treated as a cluster. This is understandable, as the different concerns are part of a web of interdependencies. Some of these interdependencies are present in the literature we reviewed, like the connection between bias and discrimination (see 'Misguided evidence leading to bias' and 'Unfair outcomes leading to discrimination' sections) or the impact of opacity on the attribution of responsibility (see 'Inscrutable evidence leading to opacity' and 'Traceability leading to moral responsibility' sections). The proposed map brings further dependencies into focus, like the multi-faceted effect of the presence and absence of epistemic deficiencies on the ethical ramifications of algorithms. Further, the map demonstrates that solving problems at one level does not address all types of concerns; a perfectly auditable algorithmic decision, or one that is based on conclusive, scrutable and well-founded evidence, can nevertheless cause unfair and transformative effects, without obvious ways to trace blame among the network of contributing
actors. Better methods to produce evidence for some actions need not rule out all forms of discrimination, for example, and can even be used to discriminate more efficiently. Indeed, one may even conceive of situations where less discerning algorithms may have fewer objectionable effects.

More importantly, as already repeatedly stressed in the above overview, we cannot in principle avoid epistemic and ethical residues. Increasingly better algorithmic tools can normally be expected to rule out many obvious epistemic deficiencies, and even help us to detect well-understood ethical problems (e.g. discrimination). However, the full conceptual space of ethical challenges posed by the use of algorithms cannot be reduced to problems related to easily identified epistemic and ethical shortcomings. Aided by the map drawn here, future work should strive to make explicit the many implicit connections to algorithms in ethics and beyond.

Declaration of conflicting interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This study was funded by the University of Oxford's John Fell Fund (Brent Mittelstadt), by the PETRAS IoT Hub – an EPSRC project (Brent Mittelstadt, Luciano Floridi, Mariarosaria Taddeo), and the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 657017 (Patrick Allo).

Supplementary material

The supplementary files are available at http://bds.sagepub.com/content/3/2.

Notes

1. We would like to acknowledge the valuable comments and feedback from the reviewers at Big Data & Society.
2. Compare with Turner (2016) on the ontology of programs.
3. For the sake of simplicity, for the remainder of the paper we will refer generically to 'algorithms' rather than constructs, implementations and configurations.
4. Tufekci seems to have a similar class of algorithms in mind in her exploration of detecting harms. She describes 'gatekeeping algorithms' as "algorithms that do not result in simple, 'correct' answers – instead, I focus on those that are utilized as subjective decision makers" (Tufekci, 2015: 206).
5. The distinction between supervised and unsupervised learning can be mapped onto analytics to reveal different ways humans are 'made sense of' through data. Descriptive analytics based on unsupervised learning seeks to identify unforeseen correlations between cases to learn something about the entity or phenomenon. Here, analysis is exploratory, meaning it lacks a specific target or hypothesis. In this way, new models and classifications can be defined. In contrast, predictive analytics based on supervised learning seeks to match cases to pre-existing classes to infer knowledge about the case. Knowledge about the assigned classes is used to make predictions about the case (Van Otterlo, 2013). (A minimal illustrative sketch of this distinction appears after these notes.)
6. The term 'probable knowledge' is used here in the sense of Hacking (2006), where it is associated with the emergence of probability and the rise of statistical thinking (for instance in the context of insurance) that started in the 17th Century.
7. In mainstream analytic epistemology this issue is connected to the nature of justification, and the importance of having access to one's own justifications for a specific belief (Kornblith, 2001). In the present context, however, we are concerned with a more interactive kind of justification: human agents need to be able to understand how a conclusion reached by an algorithm is justified in view of the data.
8. The often-blamed opacity of algorithms can only partially explain why this is the case. Another aspect is more closely related to the role of re-use in the development of algorithms and software artefacts, from the customary use of existing libraries to the repurposing of existing tools and methods for different purposes (e.g. the use of seismological models of aftershocks in predictive policing (Mohler et al., 2011), and the tailoring of general tools for specific methods). Apart from the inevitable distribution of responsibilities, this highlights the complex relation between good design (the re-use philosophy promoted in Structured Programming) and the absence of malfunction, and reveals that even the designers of software artefacts regularly treat part of their work as black boxes (Sametinger, 1997).
9. See Appendix 1 for information on the methodology, search terms and query results of the review.
10. A distinction must be made, however, between the ethical justifiability of acting upon mere correlation and a broader ethics of inductive reasoning which overlaps with extant critiques of statistical and quantitative methods in research. The former concerns the thresholds of evidence required to justify actions with ethical impact. The latter concerns a lack of reproducibility in analytics that distinguishes it in practice from science (cf. Feynman, 1974; Ioannidis, 2005; Vasilevsky et al., 2013) and is better understood as an issue of epistemology.
11. Introna and Nissenbaum's article (2000) is among the first publications on this topic. The article compares search engines to publishers and suggests that, like publishers, search engines filter information according to market conditions, i.e. according to consumers' tastes and preferences, and favour powerful actors. Two corrective mechanisms are suggested: embedding the "value of fairness as well as [a] suite of values represented by the ideology of the Web as a public good" (Introna and Nissenbaum, 2000: 182) in the design of indexing and ranking algorithms, and transparency of the algorithms used by search engines. More recently, Zarsky (2013) has
provided a framework and in-depth legal examination of transparency in predictive analytics.
12. This is a contentious claim. Bozdag (2013) suggests that human comprehension has not increased in parallel to the exponential growth of social data in recent years due to biological limitations on information processing capacities. However, this would appear to discount advances in data visualization and sorting techniques to help humans comprehend large datasets and information flows (cf. Turilli and Floridi, 2009). Biological capacities may not have increased, but the same cannot be said for tool-assisted comprehension. One's position on this turns on whether technology-assisted and human comprehension are categorically different.
13. The context of autonomous weapon systems is particularly relevant here; see Swiatek (2012).
14. The argument that technology design is unavoidably value-laden is not universally accepted. Kraemer et al. (2011) provide a counterargument from the reviewed literature. For them, algorithms are value-laden only "if one cannot rationally choose between them without explicitly or implicitly taking ethical concerns into account." In other words, designers make value-judgments that express views "on how things ought to be or not to be, or what is good or bad, or desirable or undesirable" (Kraemer et al., 2011: 252). For Kraemer et al. (2011), algorithms that produce hypothetical value-judgments or recommended courses of action, such as clinical decision support systems, can be value-neutral because the judgments produced are hypothetical. This approach would suggest that autonomous algorithms are value-laden by definition, but only because the judgments produced are put into action by the algorithm. This conception of value neutrality appears to suggest that algorithms are designed in value-neutral spaces, with the designer disconnected from a social and moral context and history that inevitably influences her perceptions and decisions. It is difficult to see how this could be the case (cf. Friedman and Nissenbaum, 1996).
15. Clear sources of discrimination are not consistently identified in the reviewed literature. Barocas (2014) helpfully clarifies five possible sources of discrimination related to biased analytics: (1) inferring membership in a protected class; (2) statistical bias; (3) faulty inference; (4) overly precise inferences; and (5) shifting the sample frame.
16. Danna and Gandy (2002) provide a demonstrative example in the Royal Bank of Canada, which 'nudged' customers on fee-for-service to flat-fee service packages after discovering (through mining in-house data) that customers on the latter offered greater lifetime value to the bank. Customers unwilling to move to flat-fee services faced disincentives including higher prices. Through price discrimination customers were pushed towards options reflecting the bank's interests. Customers unwilling to move were placed into a weak bargaining position in which they were 'invited to leave': losing some customers in the process of shifting the majority to more profitable flat-fee packages meant the bank lacked incentive to accommodate minority interests despite the risk of losing minority fee-for-service customers to competitors.
17. Data subjects can be considered to have a right to identity. Such a right can take many forms, but the existence of some right to identity is difficult to dispute. Floridi (2011) conceives of personal identity as constituted by information. Taken as such, any right to informational privacy translates to a right to identity by default, understood as the right to manage information about the self that constitutes one's identity. Hildebrandt and Koops (2010) similarly recognise a right to form identity without unreasonable external influence. Both approaches can be connected to the right to personality derived from the European Convention on Human Rights.
18. A further distinction can be made between artificial moral agents and artificial ethical agents. Artificial moral agents lack true 'artificial intelligence' or the capacity for reflection required to decide and justify an ethical course of action. Artificial ethical agents can "calculate the best action in ethical dilemmas using ethical principles" (Moor, 2006) or frameworks derived thereof. In contrast, artificial morality requires only that machines act 'as if' they are moral agents, and thus make ethically justified decisions according to pre-defined criteria (Moor, 2006). The construction of artificial morality is seen as the immediate and imminently achievable challenge for machine ethics, as it does not first require artificial intelligence (Allen et al., 2006). With that said, the question of whether "it is possible to create artificial full ethical agents" continues to occupy machine ethicists (Tonkens, 2012: 139).
19. Tonkens (2012) however argues that agents embedded with virtue-based frameworks would find their creation ethically impermissible due to the impoverished sense of virtues a machine could actually develop. In short, the character development of humans and machines is too dissimilar to compare. He predicts that unless autonomous agents are treated as full moral agents comparable to humans, existing social injustices will be exacerbated as autonomous machines are denied the freedom to express their autonomy by being forced into service of the needs of the designer. This concern points to a broader issue in machine ethics concerning whether algorithms and machines with decision-making autonomy will continue to be treated as passive tools as opposed to active (moral) agents (Wiegel and Berg, 2009).
20. Except for trivial cases, the presence of false positives and false negatives in the work of algorithms, particularly machine learning, is unavoidable.
21. It is important to note that this regulation even applies to data controllers or processors that are not established within the EU, if the monitoring (including predicting and profiling) of behaviour is focused on data subjects that are located in the EU (Art. 3(2)(b) and Recital 24).
22. In cases where informed consent is required, Art. 7(2) stipulates that non-compliance with Art. 12(1) renders given consent not legally binding.
23. Recital 71 explains that solely automated individual decision-making has to be understood as a method "which produces legal effects concerning him or her or similarly significantly affects him or her, such as automatic refusal of an online credit application or e-recruiting practices
without any human intervention" and includes profiling that allows to "predict aspects concerning the data subject's performance at work, economic situation, health, personal preferences or interests, reliability or behaviour, location or movements."
24. Art. 21(1) explains that the right to object to profiling methods can be restricted "if the controller demonstrates compelling legitimate grounds for the processing which override the interests, rights and freedoms of the data subject or for the establishment, exercise or defence of legal claims." In addition, Art. 23(1) stipulates that the rights enshrined in Art. 12 to 22 – including the right to object to automated decision-making – can be restricted in cases such as "national security, defence; public security; (...); other important objectives of general public interest of the Union or of a Member State, in particular an important economic or financial interest of the Union or of a Member State, including monetary, budgetary and taxation matters, public health and social security; (...); the prevention, investigation, detection and prosecution of breaches of ethics for regulated professions; (...)". As a result, these exemptions also apply to the right to access (Art. 15 – the right to obtain information if personal data are being processed) as well as the right to be forgotten (Art. 17).
25. Art. 83(5)(b) invests supervisory authorities with the power to impose fines up to 4% of the total worldwide annual turnover in cases where rights of the data subjects (Art. 12 to 22) have been infringed. This lever can be used to enforce compliance and to enhance data protection.
26. 'Explainability' is preferred here to 'interpretability' to highlight that the explanation of a decision must be comprehensible not only to data scientists or controllers, but to the lay data subjects (or some proxy) affected by the decision.
27. The various domains of research and development described here share a common characteristic: all make use of computing algorithms. This is not, however, to suggest that complex fields such as machine ethics and surveillance studies are subsumed by the 'ethics of algorithms' label. Rather, each domain has issues which do not originate in the design and functionality of the algorithms being used. These issues would thus not be considered part of an 'ethics of algorithms', despite the inclusion of the parent field. 'Ethics of algorithms' is thus not meant to replace existing fields of enquiry, but rather to identify issues shared across a diverse number of domains stemming from the computing algorithms they use.
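The sketch below (ours, purely illustrative; the 'customers', features and class labels are synthetic) contrasts the two modes of analysis described in Note 5: descriptive analytics based on unsupervised learning, which proposes new groupings without a predefined target, and predictive analytics based on supervised learning, which assigns new cases to pre-existing classes.

    # Illustrative only: the 'customers' and their attributes are synthetic.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    # Synthetic behavioural data: [weekly_spend, site_visits] for 300 'customers'.
    X = np.vstack([rng.normal([20, 5], 3, (150, 2)), rng.normal([60, 15], 5, (150, 2))])

    # Descriptive analytics (unsupervised): no target variable; the algorithm proposes
    # groupings, and any 'meaning' of the clusters is assigned afterwards by the analyst.
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    # Predictive analytics (supervised): pre-existing labels (e.g. 'churned' or not)
    # are used to assign a new case to a known class.
    churned = (X[:, 0] < 30).astype(int)  # invented historical labels
    model = LogisticRegression().fit(X, churned)
    new_case = np.array([[25.0, 6.0]])

    print("cluster sizes:", np.bincount(clusters))
    print("predicted churn probability for new case:", model.predict_proba(new_case)[0, 1])

The epistemic contrast drawn in Note 5 is visible here: in the unsupervised case any meaning of the resulting clusters has to be assigned after the fact, whereas in the supervised case knowledge about previously assigned classes is projected onto the new case.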
References

Adler P, Falk C, Friedler SA, et al. (2016) Auditing black-box models by obscuring features. arXiv:1602.07043 [cs, stat]. Available at: http://arxiv.org/abs/1602.07043 (accessed 5 March 2016).
Agrawal R and Srikant R (2000) Privacy-preserving data mining. ACM Sigmod Record. ACM, pp. 439–450. Available at: http://dl.acm.org/citation.cfm?id=335438 (accessed 20 August 2015).
Allen C, Wallach W and Smit I (2006) Why machine ethics? Intelligent Systems, IEEE 21(4). Available at: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1667947 (accessed 1 January 2006).
Ananny M (2016) Toward an ethics of algorithms: Convening, observation, probability, and timeliness. Science, Technology & Human Values 41(1): 93–117.
Anderson M and Anderson SL (2007) Machine ethics: Creating an ethical intelligent agent. AI Magazine 28(4): 15.
Anderson M and Anderson SL (2014) Toward ethical intelligent autonomous healthcare agents: A case-supported principle-based behavior paradigm. Available at: http://doc.gold.ac.uk/aisb50/AISB50-S17/AISB50-S17-Anderson-Paper.pdf (accessed 24 August 2015).
Anderson SL (2008) Asimov's 'Three Laws of Robotics' and machine metaethics. AI and Society 22(4): 477–493.
Applin SA and Fischer MD (2015) New technologies and mixed-use convergence: How humans and algorithms are adapting to each other. In: 2015 IEEE international symposium on technology and society (ISTAS), Dublin, Ireland: IEEE, pp. 1–6.
Arendt H (1971) Eichmann in Jerusalem: A Report on the Banality of Evil. New York: Viking Press.
Barnet BA (2009) Idiomedia: The rise of personalized, aggregated content. Continuum 23(1): 93–99.
Barocas S (2014) Data mining and the discourse on discrimination. Available at: https://dataethics.github.io/proceedings/DataMiningandtheDiscourseOnDiscrimination.pdf (accessed 20 December 2015).
Barocas S and Selbst AD (2015) Big data's disparate impact. SSRN Scholarly Paper, Rochester, NY: Social Science Research Network. Available at: http://papers.ssrn.com/abstract=2477899 (accessed 16 October 2015).
Bello P and Bringsjord S (2012) On how to build a moral machine. Topoi 32(2): 251–266.
Birrer FAJ (2005) Data mining to combat terrorism and the roots of privacy concerns. Ethics and Information Technology 7(4): 211–220.
Bozdag E (2013) Bias in algorithmic filtering and personalization. Ethics and Information Technology 15(3): 209–227.
Brey P and Soraker JH (2009) Philosophy of Computing and Information Technology. Elsevier.
Burrell J (2016) How the machine 'thinks': Understanding opacity in machine learning algorithms. Big Data & Society 3(1): 1–12.
Calders T and Verwer S (2010) Three naive Bayes approaches for discrimination-free classification. Data Mining and Knowledge Discovery 21(2): 277–292.
Calders T, Kamiran F and Pechenizkiy M (2009) Building classifiers with independency constraints. In: Data mining workshops, 2009 (ICDMW'09), IEEE international conference on, Miami, USA: IEEE, pp. 13–18.
Cardona B (2008) 'Healthy ageing' policies and anti-ageing ideologies and practices: On the exercise of responsibility. Medicine, Health Care and Philosophy 11(4): 475–483.
Coeckelbergh M (2013) E-care as craftsmanship: Virtuous work, skilled engagement, and information technology in health care. Medicine, Health Care and Philosophy 16(4): 807–816.
Cohen IG, Amarasingham R, Shah A, et al. (2014) The legal and ethical concerns that arise from using complex predictive analytics in health care. Health Affairs 33(7): 1139–1147.
Coll S (2013) Consumption as biopower: Governing bodies with loyalty cards. Journal of Consumer Culture 13(3): 201–220.
Crawford K (2016) Can an algorithm be agonistic? Ten scenes from life in calculated publics. Science, Technology & Human Values 41(1): 77–92.
Crnkovic GD and Çürüklü B (2011) Robots: Ethical by design. Ethics and Information Technology 14(1): 61–71.
Danna A and Gandy OH Jr (2002) All that glitters is not gold: Digging beneath the surface of data mining. Journal of Business Ethics 40(4): 373–386.
Datta A, Sen S and Zick Y (2016) Algorithmic transparency via quantitative input influence. In: Proceedings of the 37th IEEE symposium on security and privacy, San Jose, USA. Available at: http://www.ieee-security.org/TC/SP2016/papers/0824a598.pdf (accessed 30 June 2016).
Davis M, Kumiega A and Van Vliet B (2013) Ethics, finance, and automation: A preliminary survey of problems in high frequency trading. Science and Engineering Ethics 19(3): 851–874.
de Vries K (2010) Identity, profiling algorithms and a world of ambient intelligence. Ethics and Information Technology 12(1): 71–85.
Diakopoulos N (2015) Algorithmic accountability: Journalistic investigation of computational power structures. Digital Journalism 3(3): 398–415.
Diamond GA, Pollock BH and Work JW (1987) Clinician decisions and computers. Journal of the American College of Cardiology 9(6): 1385–1396.
Domingos P (2012) A few useful things to know about machine learning. Communications of the ACM 55(10): 78–87.
Dwork C, Hardt M, Pitassi T, et al. (2011) Fairness through awareness. arXiv:1104.3913 [cs]. Available at: http://arxiv.org/abs/1104.3913 (accessed 15 February 2016).
Elish MC (2016) Moral crumple zones: Cautionary tales in human–robot interaction (WeRobot 2016). SSRN. Available at: http://papers.ssrn.com/sol3/Papers.cfm?abstract_id=2757236 (accessed 30 June 2016).
European Commission (2012) Regulation of the European Parliament and of the Council on the Protection of Individuals with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation). Brussels: European Commission. Available at: http://ec.europa.eu/justice/data-protection/document/review2012/com_2012_11_en.pdf (accessed 2 April 2013).
Feynman R (1974) 'Cargo cult science' – by Richard Feynman. Available at: http://neurotheory.columbia.edu/ken/cargo_cult.html (accessed 3 September 2015).
Floridi L (2008) The method of levels of abstraction. Minds and Machines 18(3): 303–329.
Floridi L (2011) The informational nature of personal identity. Minds and Machines 21(4): 549–566.
Floridi L (2012) Big data and their epistemological challenge. Philosophy & Technology 25(4): 435–437.
Floridi L (2014) The Fourth Revolution: How the Infosphere is Reshaping Human Reality. Oxford: OUP.
Floridi L and Sanders JW (2004a) On the morality of artificial agents. Minds and Machines 14(3). Available at: http://dl.acm.org/citation.cfm?id=1011949.1011964 (accessed 1 August 2004).
Floridi L and Sanders JW (2004b) On the morality of artificial agents. Minds and Machines 14(3). Available at: http://dl.acm.org/citation.cfm?id=1011949.1011964 (accessed 1 August 2004).
Floridi L, Fresco N and Primiero G (2014) On malfunctioning software. Synthese 192(4): 1199–1220.
Friedman B and Nissenbaum H (1996) Bias in computer systems. ACM Transactions on Information Systems (TOIS) 14(3): 330–347.
Fule P and Roddick JF (2004) Detecting privacy and ethical sensitivity in data mining results. In: Proceedings of the 27th Australasian conference on computer science – Volume 26, Dunedin, New Zealand: Australian Computer Society, Inc., pp. 159–166. Available at: http://dl.acm.org/citation.cfm?id=979942 (accessed 24 August 2015).
Gadamer HG (2004) Truth and Method. London: Continuum International Publishing Group.
Glenn T and Monteith S (2014) New measures of mental state and behavior based on data collected from sensors, smartphones, and the internet. Current Psychiatry Reports 16(12): 1–10.
Goldman E (2006) Search engine bias and the demise of search engine utopianism. Yale Journal of Law & Technology 8: 188–200.
Granka LA (2010) The politics of search: A decade retrospective. The Information Society 26(5): 364–374.
Grindrod P (2014) Mathematical Underpinnings of Analytics: Theory and Applications. Oxford: OUP.
Grodzinsky FS, Miller KW and Wolf MJ (2010) Developing artificial agents worthy of trust: 'Would you buy a used car from this artificial agent?' Ethics and Information Technology 13(1): 17–27.
Hacking I (2006) The Emergence of Probability: A Philosophical Study of Early Ideas about Probability, Induction and Statistical Inference. Cambridge: Cambridge University Press.
Hajian S and Domingo-Ferrer J (2013) A methodology for direct and indirect discrimination prevention in data mining. IEEE Transactions on Knowledge and Data Engineering 25(7): 1445–1459.
Hajian S, Monreale A, Pedreschi D, et al. (2012) Injecting discrimination and privacy awareness into pattern discovery. In: Data mining workshops (ICDMW), 2012 IEEE 12th international conference on, Brussels, Belgium: IEEE, pp. 360–369. Available at: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6406463 (accessed 3 November 2015).
Hildebrandt M (2008) Defining profiling: A new type of knowledge? In: Hildebrandt M and Gutwirth S (eds) Profiling the European Citizen. The Netherlands: Springer, pp. 17–45. Available at: http://link.springer.com/chapter/10.1007/978-1-4020-6914-7_2 (accessed 14 May 2015).
Hildebrandt M (2011) Who needs stories if you can get the data? ISPs in the era of big number crunching. Philosophy & Technology 24(4): 371–390.
Hildebrandt M and Koops B-J (2010) The challenges of ambient law and legal protection in the profiling era. The Modern Law Review 73(3): 428–460.
Hill RK (2015) What an algorithm is. Philosophy & Technology 29(1): 35–59.
Illari PM and Russo F (2014) Causality: Philosophical Theory Meets Scientific Practice. Oxford: Oxford University Press.
Introna LD and Nissenbaum H (2000) Shaping the Web: Why the politics of search engines matters. The Information Society 16(3): 169–185.
Ioannidis JPA (2005) Why most published research findings are false. PLoS Medicine 2(8): e124.
James G, Witten D, Hastie T, et al. (2013) An Introduction to Statistical Learning. Vol. 6. New York: Springer.
Johnson JA (2006) Technology and pragmatism: From value neutrality to value criticality. SSRN Scholarly Paper, Rochester, NY: Social Science Research Network. Available at: http://papers.ssrn.com/abstract=2154654 (accessed 24 August 2015).
Johnson JA (2013) Ethics of data mining and predictive analytics in higher education. SSRN Scholarly Paper, Rochester, NY: Social Science Research Network. Available at: http://papers.ssrn.com/abstract=2156058 (accessed 22 July 2015).
Kamiran F and Calders T (2010) Classification with no discrimination by preferential sampling. In: Proceedings of the 19th machine learning conference of Belgium and the Netherlands, Leuven, Belgium. Available at: http://wwwis.win.tue.nl/tcalders/pubs/benelearn2010 (accessed 24 August 2015).
Kamishima T, Akaho S, Asoh H, et al. (2012) Considerations on fairness-aware data mining. In: IEEE 12th International Conference on Data Mining Workshops, Brussels, Belgium, pp. 378–385. Available at: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6406465 (accessed 3 November 2015).
Kim B, Patel K, Rostamizadeh A, et al. (2015) Scalable and interpretable data representation for high-dimensional, complex data. In: AAAI, pp. 1763–1769.
Kim H, Giacomin J and Macredie R (2014) A qualitative study of stakeholders' perspectives on the social network service environment. International Journal of Human–Computer Interaction 30(12): 965–976.
Kitchin R (2016) Thinking critically about and researching algorithms. Information, Communication & Society 20(1): 14–29.
Kornblith H (2001) Epistemology: Internalism and Externalism. Oxford: Blackwell.
Kraemer F, van Overveld K and Peterson M (2011) Is there an ethics of algorithms? Ethics and Information Technology 13(3): 251–260.
Lazer D, Kennedy R, King G, et al. (2014) The parable of Google flu: Traps in big data analysis. Science 343(6176): 1203–1205.
Leese M (2014) The new profiling: Algorithms, black boxes, and the failure of anti-discriminatory safeguards in the European Union. Security Dialogue 45(5): 494–511.
Levenson JL and Pettrey L (1994) Controversial decisions regarding treatment and DNR: An algorithmic Guide for the Uncertain in Decision-Making Ethics (GUIDE). American Journal of Critical Care: An Official Publication, American Association of Critical-Care Nurses 3(2): 87–91.
Lewis SC and Westlund O (2015) Big data and journalism. Digital Journalism 3(3): 447–466.
Lomborg S and Bechmann A (2014) Using APIs for data collection on social media. Information Society 30(4): 256–265.
Lou Y, Caruana R, Gehrke J, et al. (2013) Accurate intelligible models with pairwise interactions. In: Proceedings of the 19th ACM SIGKDD international conference on knowledge discovery and data mining, Chicago, USA: ACM, pp. 623–631.
Louch MO, Mainier MJ and Frketich DD (2010) An analysis of the ethics of data warehousing in the context of social networking applications and adolescents. In: 2010 ISECON Proceedings, Vol. 27 no. 1392, Nashville, USA.
Lupton D (2014) The commodification of patient opinion: The digital patient experience economy in the age of big data. Sociology of Health & Illness 36(6): 856–869.
MacIntyre A (2007) After Virtue: A Study in Moral Theory, 3rd revised ed. London: Gerald Duckworth & Co Ltd.
Macnish K (2012) Unblinking eyes: The ethics of automating surveillance. Ethics and Information Technology 14(2): 151–167.
Mahajan RL, Reed J, Ramakrishnan N, et al. (2012) Cultivating emerging and black swan technologies. In: ASME 2012 International Mechanical Engineering Congress and Exposition, Houston, USA, pp. 549–557.
Markowetz A, Blaszkiewicz K, Montag C, et al. (2014) Psycho-informatics: Big data shaping modern psychometrics. Medical Hypotheses 82(4): 405–411.
Matthias A (2004) The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology 6(3): 175–183.
Mayer-Schönberger V and Cukier K (2013) Big Data: A Revolution that will Transform How We Live, Work and Think. London: John Murray.
Mazoué JG (1990) Diagnosis without doctors. Journal of Medicine and Philosophy 15(6): 559–579.
Miller B and Record I (2013) Justified belief in a digital age: On the epistemic implications of secret Internet technologies. Episteme 10(2): 117–134.
Mittelstadt BD and Floridi L (2016) The ethics of big data: Current and foreseeable issues in biomedical contexts. Science and Engineering Ethics 22(2): 303–341.
Mohler GO, Short MB, Brantingham PJ, et al. (2011) Self-exciting point process modeling of crime. Journal of the American Statistical Association 106(493): 100–108.
Moor JH (2006) The nature, importance, and difficulty of machine ethics. Intelligent Systems, IEEE 21(4). Available at: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1667948 (accessed 1 January 2006).
Morek R (2006) Regulatory framework for online dispute resolution: A critical view. The University of Toledo Law Review 38: 163.
Naik G and Bhide SS (2014) Will the future of knowledge work automation transform personalized medicine? Applied & Translational Genomics, Inaugural Issue, 3(3): 50–53.
Nakamura L (2013) Cybertypes: Race, Ethnicity, and Identity on the Internet. New York: Routledge.
Newell S and Marabelli M (2015) Strategic opportunities (and challenges) of algorithmic decision-making: A call for action on the long-term societal effects of 'datification'. The Journal of Strategic Information Systems 24(1): 3–14.
Neyland D (2016) Bearing accountable witness to the ethical algorithmic system. Science, Technology & Human Values 41(1): 50–76.
Orseau L and Armstrong S (2016) Safely interruptible agents. Available at: http://intelligence.org/files/Interruptibility.pdf (accessed 12 September 2016).
Pariser E (2011) The Filter Bubble: What the Internet is Hiding from You. London: Viking.
Pasquale F (2015) The Black Box Society: The Secret Algorithms that Control Money and Information. Cambridge: Harvard University Press.
Patterson ME and Williams DR (2002) Collecting and Analyzing Qualitative Data: Hermeneutic Principles, Methods and Case Examples. Advances in Tourism Application Series. Champaign, IL: Sagamore Publishing, Inc. Available at: http://www.treesearch.fs.fed.us/pubs/29421 (accessed 7 November 2012).
Portmess L and Tower S (2014) Data barns, ambient intelligence and cloud computing: The tacit epistemology and linguistic representation of Big Data. Ethics and Information Technology 17(1): 1–9.
Raymond A (2014) The dilemma of private justice systems: Big Data sources, the cloud and predictive analytics. Northwestern Journal of International Law & Business, Forthcoming. Available at: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2469291 (accessed 22 July 2015).
Romei A and Ruggieri S (2014) A multidisciplinary survey on discrimination analysis. The Knowledge Engineering Review 29(5): 582–638.
Rubel A and Jones KML (2014) Student privacy in learning analytics: An information ethics perspective. SSRN Scholarly Paper, Rochester, NY: Social Science Research Network. Available at: http://papers.ssrn.com/abstract=2533704 (accessed 22 July 2015).
Sametinger J (1997) Software Engineering with Reusable Components. Berlin: Springer Science & Business Media.
Sandvig C, Hamilton K, Karahalios K, et al. (2014) Auditing algorithms: Research methods for detecting discrimination on internet platforms. In: Data and Discrimination: Converting Critical Concerns into Productive Inquiry. Available at: http://social.cs.uiuc.edu/papers/pdfs/ICA2014-Sandvig.pdf (accessed 13 February 2016).
Schermer BW (2011) The limits of privacy in automated profiling and data mining. Computer Law & Security Review 27(1): 45–52.
Shackelford SJ and Raymond AH (2014) Building the virtual courthouse: Ethical considerations for design, implementation, and regulation in the world of ODR. Wisconsin Law Review (3): 615–657.
Shannon CE and Weaver W (1998) The Mathematical Theory of Communication. Urbana: University of Illinois Press.
Simon J (2010) The entanglement of trust and knowledge on the web. Ethics and Information Technology 12(4): 343–355.
Simon J (2015) Distributed epistemic responsibility in a hyperconnected era. In: Floridi L (ed.) The Onlife Manifesto. Springer International Publishing, pp. 145–159. Available at: http://link.springer.com/chapter/10.1007/978-3-319-04093-6_17 (accessed 17 June 2016).
Stark M and Fins JJ (2013) Engineering medical decisions. Cambridge Quarterly of Healthcare Ethics 22(4): 373–381.
Sullins JP (2006) When is a robot a moral agent? Available at: http://scholarworks.calstate.edu/xmlui/bitstream/handle/10211.1/427/Sullins%20Robots-Moral%20Agents.pdf?sequence=1 (accessed 20 August 2015).
Sweeney L (2013) Discrimination in online ad delivery. Queue 11(3): 10:10–10:29.
Swiatek MS (2012) Intending to err: The ethical challenge of lethal, autonomous systems. Ethics and Information Technology 14(4). Available at: https://www.scopus.com/inward/record.url?eid=2-s2.0-84870680328&partnerID=40&md5=018033cfd83c46292370e160d4938ffa (accessed 1 January 2012).
Taddeo M (2010) Modelling trust in artificial agents, a first step toward the analysis of e-trust. Minds and Machines 20(2): 243–257.
Taddeo M and Floridi L (2015) The debate on the moral responsibilities of online service providers. Science and Engineering Ethics 1–29.
Taylor L, Floridi L and van der Sloot B (eds) (2017) Group Privacy: New Challenges of Data Technologies, 1st ed. New York, NY: Springer.
Tene O and Polonetsky J (2013a) Big data for all: Privacy and user control in the age of analytics. Available at: http://heinonlinebackup.com/hol-cgi-bin/get_pdf.cgi?handle=hein.journals/nwteintp11&section=20 (accessed 2 October 2014).
Tene O and Polonetsky J (2013b) Big Data for all: Privacy and user control in the age of analytics. Available at: http://heinonlinebackup.com/hol-cgi-bin/get_pdf.cgi?handle=hein.journals/nwteintp11&section=20 (accessed 2 October 2014).
Tonkens R (2012) Out of character: On the creation of virtuous machines. Ethics and Information Technology 14(2): 137–149.
Tufekci Z (2015) Algorithmic harms beyond Facebook and Google: Emergent challenges of computational agency. Journal on Telecommunications and High Technology Law 13: 203.
Turilli M (2007) Ethical protocols design. Ethics and Information Technology 9(1): 49–62.
Turilli M and Floridi L (2009) The ethics of information transparency. Ethics and Information Technology 11(2): 105–112.
Turner R (2016) The philosophy of computer science. In: Zalta EN (ed.) The Stanford Encyclopedia of Philosophy, Spring 2016 edition. Available at: http://plato.stanford.edu/archives/spr2016/entries/computer-science/ (accessed 21 June 2016).
Tutt A (2016) An FDA for algorithms. SSRN Scholarly Paper, Rochester, NY: Social Science Research Network. Available at: http://papers.ssrn.com/abstract=2747994 (accessed 13 April 2016).
Valiant LG (1984) A theory of the learnable. Communications of the ACM 27: 1134–1142.
van den Hoven J and Rooksby E (2008) Distributive justice and the value of information: A (broadly) Rawlsian approach. In: van den Hoven J and Weckert J (eds) Information Technology and Moral Philosophy. Cambridge: Cambridge University Press, pp. 376–396.
Van Otterlo M (2013) A machine learning view on profiling. In: Hildebrandt M and de Vries K (eds) Privacy, Due Process and the Computational Turn – Philosophers of Law Meet Philosophers of Technology. Abingdon: Routledge, pp. 41–64.
Van Wel L and Royakkers L (2004) Ethical issues in web data mining. Ethics and Information Technology 6(2): 129–140.
Vasilevsky NA, Brush MH, Paddock H, et al. (2013) On the reproducibility of science: Unique identification of research resources in the biomedical literature. PeerJ 1: e148.
Vellido A, Martín-Guerrero JD and Lisboa PJ (2012) Making machine learning models interpretable. In: ESANN 2012 proceedings, Bruges, Belgium, pp. 163–172.
Wiegel V and van den Berg J (2009) Combining moral theory, modal logic and MAS to create well-behaving artificial agents. International Journal of Social Robotics 1(3): 233–242.
Wiener N (1988) The Human Use of Human Beings: Cybernetics and Society. Da Capo Press.
Wiltshire TJ (2015) A prospective framework for the design of ideal artificial moral agents: Insights from the science of heroism in humans. Minds and Machines 25(1): 57–71.
Zarsky T (2013) Transparent predictions. University of Illinois Law Review 2013(4). Available at: http://papers.ssrn.com/sol3/Papers.cfm?abstract_id=2324240 (accessed 17 June 2016).
Zarsky T (2016) The trouble with algorithmic decisions: An analytic road map to examine efficiency and fairness in automated and opaque decision making. Science, Technology & Human Values 41(1): 118–132.