5 August 2021
a. Researcher and FWO Scholar, KU Leuven Faculty of Law, Department of International and European Law. Visiting Researcher, University of Birmingham School of Law, LEADS Lab.
b. Doctoral Researcher, University of Birmingham School of Law & School of Computer Science, LEADS Lab.
c. Postdoctoral Researcher, University of Birmingham School of Law, LEADS Lab.
d. Postdoctoral Researcher, University of Birmingham School of Law, LEADS Lab.
e. Doctoral Researcher, University of Birmingham School of Law, LEADS Lab.
f. Postdoctoral Researcher, University of Birmingham School of Law, LEADS Lab.
g. Professor of Law, Ethics and Informatics, University of Birmingham School of Law & School of Computer Science, Head of the LEADS Lab.
Based on the three pillars of Legally Trustworthy AI and the abovementioned concerns, we
make detailed recommendations for revisions to the Proposal, which are listed in the final
Chapter of this document.
1. INTRODUCTION ........................................................................................................................................................... 1
2. WELCOME ASPECTS OF THE PROPOSED REGULATION ............................................................................... 2
3. THE THREE ESSENTIAL FUNCTIONS OF LEGALLY TRUSTWORTHY AI .................................................. 4
3.1 ACHIEVING ‘LEGALLY TRUSTWORTHY AI’ AS THE PROPOSAL’S CORE MISSION ...................................4
3.2 WHAT DOES LEGALLY TRUSTWORTHY AI REQUIRE? ............................................................................6
a) Allocate and distribute responsibility for wrongs and harms appropriately, and in a manner that adequately
protects fundamental rights.......................................................................................................................................... 6
b) Establish and maintain an effective framework to enforce legal rights and responsibilities, and secure the
clarity and coherence of the law itself (rule of law) .................................................................................................... 7
c) Ensure meaningful transparency, accountability and rights of public participation (democracy) .................... 8
4. HOW THE PROPOSAL FALLS SHORT OF THESE THREE FUNCTIONAL REQUIREMENTS .................. 9
4.1 THE PROPOSAL DOES NOT ALLOCATE AND DISTRIBUTE RESPONSIBILITY FOR WRONGS AND HARMS
APPROPRIATELY, AND IN A MANNER THAT ADEQUATELY PROTECTS FUNDAMENTAL RIGHTS ..............................9
4.1.1 The Proposal operationalises fundamental rights protection in an excessively technocratic
manner ..........................................................................................................................................................9
a) The distinct nature of ‘fundamental rights’ is overlooked ................................................................................ 9
b) The current technocratic approach fails to give expression to the spirit and purpose of fundamental rights . 11
c) The risk-categorisation of AI systems remains unduly blunt and simplistic .................................................. 12
4.1.2 The Proposal’s scope is ambiguous and requires clarification .....................................................13
a) The Proposal’s current definition of AI lacks clarity and may lack policy congruence ................................. 14
b) Clarification is needed on the position of academic research ......................................................................... 15
c) The potential gap in legal protection relating to military AI should be addressed ......................................... 17
d) Clarity is needed on the applicability of the Proposal to national security and intelligence agencies ............ 19
4.1.3 The range of prohibited systems and the scope and content of the prohibitions need to be
strengthened, and their scope rendered amendable .....................................................................................20
a) The scope of prohibited AI practices should be open to future review and revision ...................................... 20
b) Stronger protection is needed against AI-enabled manipulation .................................................................... 21
c) The provisions on AI-enabled social scoring need to be clarified and potentially extended to private actors ............. 23
4.1.4 The adverse impact of biometric systems needs to be better addressed ........................................24
a) Different types of biometric systems under the Proposal: an overview .......................................................... 25
b) The ‘prohibition’ on biometrics is not a real ‘prohibition’ ............................................................................. 25
c) The Proposal does not take a principled approach to the risks of various biometric systems ........................ 26
d) The distinction between private and public uses of remote biometric systems requires justification ............ 27
4.1.5 The requirements for high-risk AI systems need to be strengthened and should not be entirely left
to self-assessment .........................................................................................................................................28
a) Outsourcing the ‘acceptability’ of ‘residual risks’ to high-risk AI providers is hardly acceptable ................ 29
b) It can be questioned why the listed high-risk AI systems are considered acceptable at all ............................ 30
c) The adaptability of the Scope of Title III is too limited .................................................................................. 31
d) The list of high-risk AI systems for law enforcement should be broadened................................................... 32
e) The requirements that high-risk AI systems must comply with need to be strengthened and clarified.......... 33
4.2 THE PROPOSAL DOES NOT ENSURE AN EFFECTIVE FRAMEWORK FOR THE ENFORCEMENT OF LEGAL
RIGHTS AND RESPONSIBILITIES (RULE OF LAW) ..................................................................................................36
4.2.1 The Proposal unduly relies on (self-) conformity assessments ......................................................37
a) The Proposal leaves an unduly broad margin of discretion for AI providers and lacks efficient control
mechanisms ................................................................................................................................................................ 37
b) The conformity assessment regime should be strengthened with more ex ante independent control ............ 39
4.2.2 There is currently insufficient attention to the coherency and consistency of the scope and
content of the rights, duties and obligations that the framework seeks to establish ....................................40
a) The consistency of the Proposal with EU (fundamental rights) law should be ensured ................................. 40
b) The Proposal’s relationship with the General Data Protection Regulation should be strengthened ............... 41
c) The Proposal’s relationship with the Law Enforcement Directive should be clarified .................................. 42
d) Concerns around the Proposal’s implicit harmonisation with MiFID II should be addressed ....................... 44
4.2.3 The lack of individual rights of enforcement in the Proposal undermines fundamental rights
protection......................................................................................................................................................44
a) The Proposal does not provide any rights of redress for individuals .............................................................. 45
b) The Proposal does not provide a complaints mechanism ............................................................................... 45
1 European Commission, “Proposal for a Regulation of the European Parliament and of the Council Laying
Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union
Legislative Acts, COM/2021/206 final,” EUR-Lex, April 21, 2021, https://eur-lex.europa.eu/legal-
content/EN/TXT/?uri=CELEX%3A52021PC0206. Hereafter referred to as “the Proposal,” “the proposed
Regulation,” or the “draft Regulation.”
2 High-Level Expert Group on AI, “Ethics Guidelines for Trustworthy AI,” 8 April 2019, https://digital-
strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.
3 Article 2 Treaty on European Union sets out the values on which the EU is founded, these being “common to
the Member States in a society in which pluralism, non-discrimination, tolerance, justice, solidarity and
equality between women and men prevail.” On the importance of the triad of human rights, democracy and
the rule of law in the context of AI and the EU, see also e.g., Paul Nemitz, “Constitutional Democracy and
Technology in the Age of Artificial Intelligence,” Philosophical Transactions of the Royal Society A:
Mathematical, Physical and Engineering Sciences 376, no. 2133 (2018). See also HLEG, “Ethics
Guidelines,” 4: “It is through Trustworthy AI that we, as European citizens, will seek to reap its benefits in a
way that is aligned with our foundational values of respect for human rights, democracy and the rule of law.”
4 HLEG, “Ethics Guidelines.”
5 High-Level Expert Group on AI, “Policy and Investment Recommendations for Trustworthy AI,” 26 June
2019, https://digital-strategy.ec.europa.eu/en/library/policy-and-investment-recommendations-trustworthy-
artificial-intelligence.
6 European Commission, “Building Trust in Human-Centric Artificial Intelligence,” 8 April 2019,
https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=58496.
7 HLEG, “Ethics Guidelines,” 4. As the Guidelines specify at page 9, Trustworthy AI is explicitly grounded in
a deep commitment to three fundamental principles: “We believe in an approach to AI ethics that is based
upon the fundamental rights enshrined in the EU Treaties, the Charter of Fundamental Rights of the EU (EU
Charter) and international human rights law. Respect for fundamental rights, within a framework of
democracy and rule of law, provides the most promising foundations for identifying abstract ethical principles
and values which can be operationalised in the context of AI.”
8 HLEG, “Ethics Guidelines,” 2.
9 The initial call for experts for the HLEG, stating that the HLEG would propose to the Commission ‘AI Ethics
Guidelines,’ is available at: https://digital-strategy.ec.europa.eu/en/news/call-high-level-expert-group-
artificial-intelligence.
We understand the primary aim of the proposed Regulation as addressing this third component
of Trustworthy AI, by establishing a set of legally binding norms and institutional mechanisms
necessary to ensure Lawful AI.
While the HLEG did not explain the concept of Lawful AI, it is best illuminated by considering
the key distinction between legal standards on the one hand, and voluntary ethical and technical
standards on the other hand. The most powerful features of legal standards concern the fact
that – unlike ethical or technical standards – they are mandatory and legally binding:
promulgated, monitored and enforced in the context of a system of institutions, norms and a
professional community which work together to ensure that laws are properly interpreted,
effectively complied with and duly enforced. 11 Mature legal systems consist of institutions,
procedures and officeholders which are legally empowered to gather information to monitor
compliance with legal duties. Where a possible violation of legal requirements is identified,
those with standing may initiate enforcement action before an independent tribunal seeking
binding legal remedies, which include both orders to bring any legal violations to an end and,
if applicable, the imposition of sanctions and orders for compensation. This institutional
structure differentiates law from both ethics and technical best practices. The ‘Lawful’
component of Trustworthy AI gives the concept of Trustworthiness ‘teeth.’
However, the mere existence of some ‘teeth’ does not by definition contribute to or ensure
Legal Trustworthiness. As is evidenced by the Commission’s efforts to address the requirement
of Lawful AI with a whole new regulation, the concept of ‘Lawful AI’ cannot mean that
Trustworthy AI simply complies with whichever legal rules happen to exist. For legality to
contribute to ‘Trustworthiness,’ it is crucial that the legal framework itself adequately deals
with the risks associated with AI. This desideratum goes far beyond simple legal compliance
checks – it requires the existence of a regulatory framework which addresses the foundational
values of fundamental rights, the rule of law, and democracy. As the Policy Recommendations
state:
Ensuring Trustworthy AI necessitates an appropriate governance and regulatory framework. By
appropriate, we mean a framework that promotes socially valuable AI development and deployment,
ensures and respects fundamental rights, the rule of law, and democracy while safeguarding individuals
and society from unacceptable harm.12
Legally Trustworthy AI (as opposed to simple legally compliant AI) operates at two levels.
The first level concerns the extent to which the regulatory framework around AI promotes the
values of fundamental rights, democracy and the rule of law. The second level concerns the
way in which AI systems themselves affect those three values. For an AI system to be Legally
Trustworthy, it must therefore (1) be regulated by a governance framework which promotes
fundamental rights, democracy and the rule of law, and (2) not itself be detrimental to
fundamental rights, democracy and the rule of law. The second level is conditional on the first
level: if the regulatory framework adequately protects fundamental rights, the rule of law and
10 HLEG, “Policy and Investment Recommendations,” 37.
11 See generally Peter Cane, Responsibility in Law and Morality (Oxford: Hart Publishing, 2002).
12 HLEG, “Policy and Investment Recommendations,” 37.
a) Allocate and distribute responsibility for wrongs and harms appropriately, and in a manner
that adequately protects fundamental rights
Firstly, Legal Trustworthiness requires the appropriate allocation of responsibility for harms
and wrongs. A core function of modern legal systems is to provide a binding framework to
enable peaceful social cooperation between strangers. The legal system achieves this inter alia
by attributing legal responsibility to those whose activities produce ‘other-regarding’ harms or
wrongs, whether intentional or otherwise, resulting in the imposition of either (or both) civil or
criminal liability as appropriate. In this way, the law seeks to reduce and prevent harm to others,
and to ensure appropriate redress where such adverse events occur. The law establishes and
publicly promulgates legally binding rules which identify the scope and content of the rights
and responsibilities of legal and other persons, thereby providing guidance to legal subjects so
that they can alter their behaviour accordingly so as not to fall foul of the law’s demands. This
legal guidance function plays an important role in protecting the legal rights, interests and
expectations of all members of the community against unlawful interference by others.
To this end, fundamental rights can often be understood as providing the justification for
concrete legal standards, instruments and mechanisms, thereby providing legal subjects with
b) Establish and maintain an effective framework to enforce legal rights and responsibilities,
and secure the clarity and coherence of the law itself (rule of law)
Secondly, Legal Trustworthiness must ensure adherence to the essential elements of the rule
of law including, inter alia, effective enforcement, judicial protection, and the coherence of the
law.
13 See HLEG, “Ethics Guidelines,” 10: “Understood as legally enforceable rights, fundamental rights therefore
fall under the first component of Trustworthy AI (lawful AI), which safeguards compliance with the law.
Understood as the rights of everyone, rooted in the inherent moral status of human beings, they also underpin
the second component of Trustworthy AI (ethical AI), dealing with ethical norms that are not necessarily
legally binding yet crucial to ensure trustworthiness.” Accordingly, although the Ethics Guidelines recognise
that there is often overlap between the requirements of ethics and of law, the two are not coterminous.
14 On the need for the concretisation of human rights in the context of AI, see e.g., Nathalie A. Smuha, “Beyond
a Human Rights-Based Approach to AI Governance: Promise, Pitfalls, Plea,” Philosophy & Technology 24
(2020), https://doi.org/10.1007/s13347-020-00403-w.
15 Kashmir Hill, “The Secretive Company That Might End Privacy as We Know It,” The New York Times,
January 18, 2020, https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-
recognition.html.
16 There is a risk that, without knowledge or consent of the persons affected, these databases may, over the
longer term, be used to track ever more aspects of people’s lives, through what Evgeny Morozov calls
the 'invisible barbed wire.’ See Evgeny Morozov, “The Real Privacy Problem” MIT Technology Review,
October 22, 2013, https://www.technologyreview.com/2013/10/22/112778/the-real-privacy-problem/. In Bill
Davidow’s words, such opaque databases may serve as a key component in the construction of an ‘algorithmic
prison’ where the images collected may be used to profile “persons of interest, to use for facial recognition
and characterisation, or even to predict a person’s propensity for committing a crime” (Bill Davidow,
“Welcome to Algorithmic Prison,” The Atlantic, February 20, 2014,
https://www.theatlantic.com/technology/archive/2014/02/welcome-to-algorithmic-prison/283985/).
17 See respectively Point 4(b) and Point 7(a) of the Proposal’s Annex III, to give but a few examples.
Thirdly, Legal Trustworthiness requires that the regulatory framework is rooted in democratic
deliberation and continuously promotes opportunities for public participation and
transparency.
As discussed earlier, one of the main differences between legal and ethical standards is the
extent to which institutions exist to promulgate, interpret, and enforce those standards.
Moreover, in well-functioning democratic communities, significant legislative measures are
preceded by open and active public debate and deliberation to establish the community’s views
about those measures. In democracies, the legal system endows affected stakeholders and the
community at large with meaningful rights to participation in determining the legal norms
which affect their collective life (both ex ante and ex post). In short, democracy requires that
the regulatory framework around AI derives its legitimacy from consultation with citizens and
grants them a prominent role in the design and enforcement of the Regulation.
Moreover, democracy requires that the AI systems which are allowed under the Proposal do
not undermine the ideals of transparency and accountability, which are both required for
meaningful public participation and democratic accountability. For example, individuals and
the community at large are entitled to transparency so that they may be informed of, adequately
understand, and contest the way in which AI systems make or are directly implicated in making
decisions which significantly affect them.
18 See generally Lon L. Fuller, The Morality of Law (New Haven: Yale University Press, 1969), 67-68; Ronald
Dworkin, Law’s Empire (Cambridge: Harvard University Press, 1986).
4.1 The Proposal does not allocate and distribute responsibility for wrongs and harms
appropriately, and in a manner that adequately protects fundamental rights
The following sections address the ways in which the Proposal does not adequately allocate
responsibility for the wrongs and harms associated with AI – the first pillar of Legally
Trustworthy AI. The way in which the Proposal operationalises fundamental rights protection
appears to rest on a flawed understanding of the nature of fundamental rights. We highlight
inadequacies in the way in which the Proposal understands and operationalises fundamental
rights protection (4.1.1); the scope of the protection offered by the Proposal (4.1.2); the content
and scope of the prohibited AI practices (4.1.3); the scope and strength of protection offered
against the risks arising from the use of AI-enabled biometric systems (4.1.4); and the
fundamental rights protection provided against the risks posed by ‘high-risk’ systems (4.1.5).
The Proposal fails to properly understand the distinctive nature of fundamental rights, which
requires a particular form of protection well-established in EU fundamental rights law.
19 Consider in this regard also Article 52 of the EU Charter, which provides that “Any limitation on the exercise
of the rights and freedoms recognised by this Charter must be provided for by law and respect the essence of
those rights and freedoms. Subject to the principle of proportionality, limitations may be made only if they
are necessary and genuinely meet objectives of general interest recognised by the Union or the need to protect
the rights and freedoms of others” (European Union, “Charter of Fundamental Rights of the European Union,”
Official Journal of the European Union C83 53, (2010)).
20 European Union, “EU Charter,” Article 51(1).
21 European Union, “EU Charter,” Article 54.
b) The current technocratic approach fails to give expression to the spirit and purpose of
fundamental rights
Instead of taking the approach well-established in fundamental rights law, the proposed
Regulation translates the protection of fundamental rights into a set of requirements, primarily
Yet when considered through the lens of fundamental rights, this approach falls well short of
effective rights protection – offering a weaker form of market-focused regulation as opposed
to specific and clear protections against fundamental rights interference which might be
generated by particular AI systems. We argue that these burdens should be reversed. Innovation
is a legitimate – and perhaps necessary – aim, given the remit, goals and obligations of the
Commission and the legal basis upon which this Proposal has been developed. However, as
argued in the previous section, internal market considerations cannot take priority over
fundamental rights. Doing so significantly threatens the fundamental rights of individuals
affected by the development, deployment and use of AI systems, particularly when those
systems are used to inform and even to automate the exercise of decision-making power by
public authorities and private companies.
A serious weakness of the regime is its failure to grapple with the highly controversial nature
of AI applications, treating AI systems as analogous to other consumer products. However,
while the deployment of many existing AI applications already implicates a wide range of
fundamental rights, there is no systematic regulatory oversight to enable independent
evaluation of whether any resulting rights interferences can be justified for the achievement of
permissible purposes which are necessary and proportionate in a democratic society. The
Proposal must avoid treating AI systems solely as technical consumer products which may
yield claimed economic and social benefits. Instead, it needs to acknowledge that they are
socio-technical systems which mediate social structures (and in doing so, must grapple with
administrative and institutional requirements and cultures), and which can produce significant
consequences for individuals, groups, and society more generally.
Not only does the Proposal fail to take seriously the distinct nature and strength of fundamental
rights, the risk-based approach taken by the Proposal also remains unduly blunt and simplistic.
Compared to the binary approach put forward in the White Paper on AI, the Proposal’s more
nuanced stratification of risk amongst AI systems is an important improvement. A distinction
is currently made between (1) prohibited practices (with certain exceptions) in Title II, (2) high-
risk AI systems in Title III, (3) systems requiring increased transparency measures due to a risk
of deception in Title IV (which could also fall under the high-risk systems category) and (4)
all other systems. However, for several reasons, this approach may still not be sufficiently
stratified in practice to adequately protect fundamental rights.
First, the Proposal does not prohibit the deployment of AI systems which violate fundamental
rights other than systems which engage in the prohibited practices. Instead, AI systems
22 European Commission, “The Proposal,” Explanatory Memorandum, 3.
a) The Proposal’s current definition of AI lacks clarity and may lack policy congruence
The primary objective of the Proposal is to regulate ‘AI systems.’ How ‘AI’ is defined is
therefore essential for the determination of who bears legal duties under the Proposal and the
scope of its protection. We argue that the definition of AI could lead to confusion and
uncertainty, and although we do not propose a concrete solution in this document, we believe
it merits significant attention.
Article 3(1) of the proposed Regulation states that “‘artificial intelligence system’ (AI system)
means software that is developed with one or more of the techniques and approaches listed in
Annex I and can, for a given set of human-defined objectives, generate outputs such as content,
predictions, recommendations, or decisions influencing the environments they interact with.”
The techniques and approaches listed in Annex I include “(a) Machine learning approaches,
including supervised, unsupervised and reinforcement learning, using a wide variety of
methods including deep learning; (b) Logic- and knowledge-based approaches, including
knowledge representation, inductive (logic) programming, knowledge bases, inference and
deductive engines, (symbolic) reasoning and expert systems; (c) Statistical approaches,
Bayesian estimation, search and optimization methods.”
The description of software which provides outputs for human-defined objectives is incredibly
broad, as it encompasses virtually all algorithms.23 Annex I is meant to specify which particular
techniques fall under ‘artificial intelligence’ techniques. However, the techniques mentioned
seem to include virtually all computational techniques, as machine learning, inductive,
deductive, and statistical approaches are all included. As the title of Annex I refers to “Artificial
Intelligence Techniques and Approaches,” it could be assumed that the definition of, for
example, ‘logic-based approaches’ is limited to logic-based approaches to artificial
intelligence. However, as ‘artificial intelligence’ is defined as any algorithm which uses the
techniques listed in Annex I, this specification has become circular and is therefore not a
specification at all.
For computer scientists working with any computational techniques which draw on logic or
statistical insights, but which are not conventionally seen as falling within the domain of
artificial intelligence, it is hard to determine whether this Regulation applies to them. This
applies especially to instances of ‘simple automation,’ in which logic-based reasoning and
probabilistic approaches are used in algorithms which simply execute pre-programmed rules
and do not engage in optimisation and learning (think, for example, of virtual dice which
display the number six with a certain probability). In safety-critical and fundamental rights-
critical contexts, it is of little relevance whether a system relies on simple automation rather
than on optimisation and learning – although the specific risks and dangers associated with the
respective categories are distinct, and risk assessment procedures for the different categories
of systems might need to be designed with the distinct properties of the category in mind.
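To make the definitional difficulty concrete, the following minimal Python sketch (our own illustration; the function names and the example are hypothetical and do not appear in the Proposal or Annex I) contrasts a ‘simple automation’ system in the sense used above – a virtual die that merely executes a pre-programmed probabilistic rule – with a trivial routine that ‘learns’ a parameter from observed data. Read literally, both arguably rely on the ‘statistical approaches’ listed in Annex I(c), even though only the second involves anything resembling learning or optimisation.

import random

# Purely illustrative sketch (our example, not drawn from the Proposal): a
# 'simple automation' system in the sense discussed above. It executes a
# pre-programmed probabilistic rule and does not optimise or learn, yet it
# arguably relies on a 'statistical approach' within the meaning of Annex I(c).
def virtual_die_roll(probability_of_six=1 / 6):
    """Return 6 with the given probability, otherwise a uniform roll of 1 to 5."""
    if random.random() < probability_of_six:
        return 6
    return random.randint(1, 5)

# By contrast, a (trivially) 'learning' routine adjusts an internal estimate
# on the basis of observed data: here, the frequency of sixes in past rolls.
def learn_six_frequency(observed_rolls):
    """Estimate the probability of rolling a six from past observations."""
    if not observed_rolls:
        return 0.0
    return sum(1 for roll in observed_rolls if roll == 6) / len(observed_rolls)

if __name__ == "__main__":
    rolls = [virtual_die_roll() for _ in range(1000)]
    print("Estimated probability of a six:", learn_six_frequency(rolls))

The point is not the code itself, but that the literal wording of Annex I does not distinguish between these two very different kinds of system.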
23 If we take the definition of an algorithm to be “A finite series of well-defined, computer-implementable
instructions to solve a specific set of computable problems,” from “The Definitive Glossary of Higher
Mathematical Jargon: Algorithm,” Math Vault, accessed 21 June 2021, https://mathvault.ca/math-
glossary/#algo.
The second issue relating to the scope of the Proposal and the allocation of responsibility is the
position of academic researchers working within universities (‘academic research’). It is
currently unclear to what extent the proposed Regulation applies to academic research. The
overall tone of the Proposal, which focuses on the promotion of a “well-functioning internal
24 See European Commission, “The Proposal,” 1.
25 The Proposal says 3(1) but it comes after 3(1) and before 3(3), so it is reasonable to assume that the document
meant to say 3(2).
The third issue concerning the scope of the Proposal is the potential gap in legal protection
against fundamental rights interferences through military AI.
Recital 12 explains that “AI systems exclusively developed or used for military purposes
should be excluded from the scope of this proposed Regulation where that use falls under the
exclusive remit of the Common Foreign and Security Policy regulated under Title V of the
Treaty on European Union (TEU).” Additionally, Article 2(3) states that “[t]his Regulation
shall not apply to AI systems developed or used exclusively for military purposes.”
The condition “where that use falls under the exclusive remit of the Common Foreign and
Security Policy regulated under Title V” of the TEU is not repeated in Article 2(3). It is
therefore unclear whether this condition actually applies, or whether all AI systems developed
or used exclusively for military purposes are excluded from the scope of the proposed
Regulation, regardless of whether they fall within the exclusive remit of the Common Foreign
and Security Policy.
This difference is crucial, considering the existence of the European Defence Fund (EDF),
which invests heavily in military AI.26 The legal basis of the EDF is not Title V TEU
(mentioned in recital 12 of the Proposal), but “Article 173(3), Article 182(4), Article 183 and
the second paragraph of Article 188, of the Treaty on the Functioning of the European
Union.”27 Article 173 TFEU falls under Title XVII, “Industry,” and Articles 182, 183 and 188
fall under Title XIX, “Research and Technological Development and Space.” These legal bases
differ significantly from Title V TEU “General Provisions on the Union’s External Action and
Specific Provisions on the Common Foreign and Security Policy.” This choice of legal basis
for the EDF is also reflected in the fact that the European Commission plays a dominant role
in the allocation of funds under the EDF, while the European Defence Agency (established
under the Common Foreign and Security Policy (CFSP), Title V TEU) merely has an observer
role, assisting the Commission.28 AI projects developed in the context of the EDF therefore do
not fall within the exclusive remit of the CFSP and would therefore not be excluded from the
scope of the proposed Regulation if one follows the text in recital 12. However, the fact that
the condition of the exclusive remit of the CFSP is not repeated in Article 2(3) of the Proposal
suggests that the AI systems developed in the context of the EDF would in fact be excluded
from the scope of the Regulation.
This discrepancy between recital 12 and Article 2(3) must be clarified, as the text in recital 12
seems to create an apparently arbitrary distinction between military AI systems developed in
26 See Christoph Marischka, “Artificial Intelligence in European Defence: Autonomous Armament?” The Left
in the European Parliament, January 14, 2021, https://left.eu/issues/publications/artificial-intelligence-in-
european-defence-autonomous-armament/.
27 European Parliament, “European Parliament legislative resolution of 18 April 2019 on the proposal for a
regulation of the European Parliament and of the Council establishing the European Defence Fund
(COM(2018)0476 – C8-0268/2018 – 2018/0254(COD)),” 18 April 2019,
https://www.europarl.europa.eu/doceo/document/TA-8-2019-0430_EN.html.
28 European Parliament, “Legislative resolution of 18 April 2019,” Article 28: “1. The Commission shall be
assisted by a committee within the meaning of Regulation (EU) No 182/2011. The European Defence Agency
shall be invited as an observer to provide its views and expertise. (…)”
29 HLEG, “Ethics Guidelines,” 33.
30 HLEG, “Ethics Guidelines,” 33-34.
31 Marischka, “Artificial Intelligence in European Defence,” 11.
32 See Regulation (EU) 2021/697 of the European Parliament and of the Council of 29 April 2021 establishing
the European Defence Fund and repealing Regulation (EU) 2018/1092, OJ L 170, 12.5.2021, 149–177, Article
7(2).
The final concern about the scope of the Proposal and how it leads to a particular distribution
of legal responsibility concerns national security and intelligence agencies.
The scope of the proposed Regulation explicitly excludes AI systems exclusively developed or
used for military purposes (Article 2(3)), and it explicitly includes AI systems used by law
enforcement (Annex III(6)), administrative agencies dealing with migration (Annex III(7)),
and judicial authorities (Annex III(8)). However, while both the excluded and the included
domains can be informed by the work of intelligence agencies, the Proposal and the Annexes
do not explicitly mention national security or intelligence agencies.
Article 3(40) defines “law enforcement authority” as “any public authority competent for the
prevention, investigation, detection or prosecution of criminal offences or the execution of
criminal penalties, including the safeguarding against and the prevention of threats to public
security.” This definition could include intelligence agencies, which could be seen as a public
authority competent for the safeguarding against threats to public security. This, however,
depends on how ‘safeguarding’ is interpreted, as intelligence agencies often do not have
executive powers and mainly function as a source of information for other authorities such as
defence ministries, public prosecutors, immigration agencies, etc. (although this may depend
on the Member State legal system). Does the collection of information amount to
‘safeguarding’? Moreover, as intelligence agencies often lack executive powers, they are
commonly thought of as separate from law enforcement agencies, with a “jurisdictional
firewall”33 separating them. If intelligence agencies fall under “law enforcement authorities”
as defined in Article 3(40), they should be explicitly mentioned.
If intelligence agencies do not fall under ‘law enforcement,’ it is unclear whether AI systems
developed and used for national intelligence purposes fall within the scope of the proposed
Regulation, as national intelligence activities could be classified as related to military action,
law enforcement, border control, and the administration of justice – depending on the specific
context.
Moreover, national security is a fundamental rights-critical domain even if it does not
immediately inform the high-risk domains listed in Annex III. Suppose an intelligence agency
deploys a natural language processing (NLP) model which automatically screens social media
posts for potential extremist content. This model flags extremist content posted by an
individual. This individual is then put on a watchlist, which is shared with law enforcement
agencies. The individual is now put under extra surveillance by the intelligence agency and
might be apprehended by the police for having posted particular content. This NLP system
would not fall under the individual risk assessment, polygraphs, deep fake detection, evaluation
of the reliability of evidence, predicting the occurrence of a crime, profiling, or crime analytics
listed in Annex III under ‘law enforcement.’ Yet, such a system affects the fundamental rights
of the person subjected to it. Moreover, if intelligence agencies are not covered by Article
3(40), this might also prevent the application of the prohibition on real-time remote biometric
identification (Article 5(2)) to the activities of intelligence agencies. This would cause an
unacceptable gap in fundamental rights protection, as one of the goals of Article 5(2) is to
33 Jonathan M. Fredman, “Intelligence Agencies, Law Enforcement, and the Prosecution Team,” Yale Law &
Policy Review 16, no. 2 (1998), 331.
4.1.3 The range of prohibited systems and the scope and content of the prohibitions need to
be strengthened, and their scope rendered amendable
So far, we have argued that the Proposal reflects an inadequate understanding of fundamental
rights and requires substantial clarifications regarding its scope. In the following sections, we
address the list of prohibited practices, which is an important element of the Proposal’s
protection of rights and its allocation of responsibilities.
We welcome the inclusion of a set of ‘prohibited practices’ in the proposed AI Regulation. It
provides much-needed legal protection against a set of AI applications and practices which are
so rights-intrusive that they cannot be justified and therefore should be legally prohibited. This
approach is in line with the HLEG’s Policy Recommendations which supported the
introduction of “‘precautionary measures’ when scientific evidence about an environmental,
human health hazard or other serious societal threat (such as threats to the democratic process),
and the stakes are high.”34 By offering concrete legal protection to individuals and communities
against the wrongs and harms which arise from such practices, the prohibitions have the
potential to significantly strengthen the legal protection of the fundamental rights they
implicate.
However, we argue that the way in which these prohibitions are currently drafted is deficient.
In particular, for various reasons set out below, we argue that the scope of the list of prohibited
practices should be revised to (a) strengthen the prohibition on manipulative practices, and (b)
clarify the provisions on AI-enabled social scoring. Moreover, the Proposal remains too
tolerant of rights-intrusive biometric applications which also require stronger prohibitions. The
subject of biometrics is however dealt with later (under section 4.1.4), since these systems fall
partly under prohibited practices and partly under high-risk applications.
a) The scope of prohibited AI practices should be open to future review and revision
The Proposal currently lists prohibited AI practices in Article 5. In contrast to the list of stand-
alone high-risk systems in Annex III, the list of prohibited systems in Article 5 of the Proposal
cannot be amended by the European Commission. The current list of prohibited practices seems
heavily inspired by recent controversies, especially the prohibitions on social scoring and
remote biometric identification by law enforcement authorities. The problematic nature of
certain uses of AI can sometimes only be grasped when those uses are actually put in practice
(e.g., much of the controversy around social scoring systems emerged in the context of recent,
concrete developments in China). However, the fact that some practices have received recent
media attention does not mean that they are the only AI practices which are deeply problematic.
Future uses of AI systems can be hard to predict, and it seems premature to permanently fix
the list of prohibited AI practices.
34 HLEG, “Policy and Investment Recommendations,” 38, section 26(2).
35 See also Victoria Canning and Steve Tombs, From Social Harm to Zemiology (London: Routledge, 2021),
Chapter 3.
36 See, for instance, Siva Vaidhyanathan, Anti-Social Media: How Facebook Disconnects Us and Undermines
Democracy (Oxford: OUP, 2018); Nathaniel Persily and Joshua Tucker (Eds.), Social Media and Democracy:
The State of the Field, Prospects for Reform (Cambridge: CUP, 2021), various chapters.
37 Mary Still and Jeremiah Still, “Subliminal Techniques: Considerations and Recommendations for Analyzing
Feasibility,” International Journal of Human–Computer Interaction 35, no. 5 (2018), 457-466. See also Sid
Kouider and Stanislas Dehaene “Levels of Processing during Non-Conscious Perception: A Critical Review
of Visual Masking,” Philosophical Transactions: Biological Sciences 362, no. 1481 (2007), 857-875.
38 European Commission, “The Proposal,” Explanatory Memorandum, 12.
39 Caroline Jack, “Lexicon of Lies: Terms for Problematic Information,” Data & Society Research Institute,
August 9, 2017, https://datasociety.net/library/lexicon-of-lies/.
c) The provisions on AI-enabled social scoring need to be clarified and potentially extended
to private actors
In addition to the problems regarding the fixed nature of the list of prohibitions and the
seemingly arbitrary particulars of the prohibition on manipulation, the prohibition on social
scoring also needs to be clarified and extended.
Article 5(1)(c) of the Proposal prohibits AI systems from being used by “public authorities or on their
behalf for the evaluation or classification of the trustworthiness of natural persons over a certain
period of time based on their social behaviour or known or predicted personal or personality
characteristics, with the social score leading to either or both (…) detrimental or unfavourable
treatment of certain natural persons or whole groups (…)” and/or “treatment
(…) that is unjustified or disproportionate to their social behaviour or its gravity.” In short, it
prohibits AI-enabled social scoring by public authorities.
However, social scoring is often conducted by private actors, who potentially have access to
enormous amounts of personal data, covering many important areas of life, like job
applications, hiring policies, and loan applications. Social scoring models in these domains
could have devastating effects. Furthermore, private organisations are increasingly moving into
areas of social policy which would have previously been occupied by the state, and employ
social scoring models to identify areas of need, like the identification of children most at risk
of being mistreated with a view to taking them into care against the wishes of their parents, or
the children themselves. In this context, it is crucial to clarify what ‘on behalf of public
authorities’ means, as some social domains occupied by the state in one Member State could
be in private hands in another. This could potentially undermine the Proposal’s goal of a
‘coordinated European approach’ to the risks associated with AI.
Secondly, Article 5(1)(c) focuses on “social behaviour or known or predicted personal or
personality characteristics,” while it is well documented that proxies may be employed
where such personal data is protected. These proxies could be drawn from factors not
mentioned in the Article, such as geographical location (postcodes, etc.). Scoring on these
measures can be as discriminatory and devastating for individuals as those drawn from the
characteristics included in the Proposal. This should be reflected in the language of the Article.
40 Henk Aarts, Ruud Custers, and Martijn Veltkamp, “Motivating Consumer Behaviour by Subliminal
Conditioning in the Absence of Basic Needs: Striking Even While the Iron is Cold,” Journal of Consumer
Psychology 21, no. 1 (2011), 49-56.
41 Samuel Stolton, “LEAK: Commission Considers Facial Recognition Ban in AI ‘White Paper.’”
EURACTIV.Com, January 17, 2020, https://www.euractiv.com/section/digital/news/leak-commission-
considers-facial-recognition-ban-in-ai-white-paper/.
42 The Guardian Editorial, “The Guardian View on Facial Recognition: A Danger to Democracy,” The
Guardian, June 9, 2019, https://www.theguardian.com/commentisfree/2019/jun/09/the-guardian-view-on-
facial-recognition-a-danger-to-democracy.
43 Lucas Introna and David Wood, “Picturing Algorithmic Surveillance: The Politics of Facial Recognition
Systems,” Surveillance and Society 2, (2004), 177, 181-2.
Before we critique how the Proposal regulates biometric technologies, it is useful to outline the
different categories of biometric systems envisioned in the Proposal. First, the adjective
‘remote’ is used to indicate only those biometrics that collect data in a passive, remote manner
while excluding traditional ones requiring physical contact, e.g., fingerprints and DNA
samples. Second, biometrics used for identification or verification (known as ‘remote biometric
identification systems,’ or RBIS) are differentiated from systems which classify or categorise
people (known as ‘biometric categorisation systems,’ or BCS). Additionally, emotion
recognition systems (ERS) are separately defined and regulated despite their reliance on
biometric data. Such a differentiation subjects RBIS to a higher regulatory threshold,
whereas BCS and ERS are only subject to transparency obligations and are not necessarily
regarded as high-risk.
A distinction is also made between ‘post’ (retrospective) and ‘real-time’ (live) use of biometric
systems, defined in Article 3. Both are classified as high-risk and subjected to requirements
such as logging capabilities and human oversight.44 Further, the unique nature of their high risks
means they are subjected to third-party conformity assessment rather than to self-assessment, as
is the case for other high-risk AI systems. 45 The only normative difference is that real-time
biometrics are considered ‘particularly intrusive’46 and prohibited if used in publicly accessible
spaces for the purpose of law enforcement, subject to several exceptions.47
The use of RBIS in public spaces for the purpose of law enforcement is a prohibited practice
under Article 5(1)(d). However, the prohibition is so narrow and allows for such broad
exceptions that it barely deserves to be called a ‘prohibition.’
Firstly, the Proposal does not mention the use of live biometrics in public spaces by public
actors for non-law enforcement purposes, such as intelligence gathering for the purposes of
national security or resource allocation, or for migration management purposes. The
prohibition therefore only covers a limited range of uses of biometric systems, and by a very
limited range of actors (i.e., law enforcement). Regardless of whether it is used by law
enforcement or by other public (or private) actors, the use of this technology implicates
fundamental rights in a way that is almost inevitably disproportionate, as it requires the
processing of a great amount of sensitive data of many people to enable the identification of
few individuals. The prohibition should therefore be extended beyond the use of RBIS by law
44 European Commission, “The Proposal,” Recital 33.
45 European Commission, “The Proposal,” 14.
46 European Commission, “The Proposal,” Recital 18.
47 European Commission, “The Proposal,” Recital 19 and Article 5(1)(d).
c) The Proposal does not take a principled approach to the risks of various biometric systems
The prohibition on the use of RBIS by law enforcement in public spaces, just discussed, is
justifiable from a fundamental rights perspective. Yet, it is unclear why the prohibition in
Article 5(1)(d) does not include BCS or ERS.
Firstly, it is very difficult to imagine legitimate uses of remote live BCS for law enforcement
purposes in public places. Unlike RBIS, this technology does not aim to search for a
specifically identified – potentially endangered or dangerous – individual, but for a group of
people. Indeed, the premise behind BCS is that people can be categorised based on their
physical characteristics, such as visible gender, race, age, and the way they move their bodies.
As these physical characteristics relate closely to the protected characteristics outlined in
Article 21 of the Charter of Fundamental Rights of the EU, any use of such technology for
law enforcement purposes is suspect, and
requires justification based on its necessity and proportionality in a democratic society,
provided for by law. Moreover, these technologies present serious risks to minorities, as one
d) The distinction between private and public uses of remote biometric systems requires
justification
Not only does the Proposal not seem to adopt a principled approach to the regulation of the
different types of biometric systems, it also does not justify the distinctions it makes between
different users of biometric systems.
Currently, individuals are offered an asymmetrical level of protection against the use of AI-
enabled biometric recognition systems. While the use of RBIS in public spaces by law
enforcement authorities is prohibited, its use by private organisations is merely categorised as
‘high-risk.’ The Proposal thus creates a ‘two-tier’ system of regulation by which state
48 Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial
Gender Classification,” Proceedings of Machine Learning Research 81 (2018), 1–15. David Leslie,
“Understanding Bias in Facial Recognition Technologies: An Explainer,” Alan Turing Institute, October 26,
2020, https://doi.org/10.2139/ssrn.3705658.
49 Evan Selinger, “A.I. Can’t Detect Our Emotions,” OneZero, April 6, 2021, https://onezero.medium.com/a-i-
cant-detect-our-emotions-3c1f6fce2539. Luke Stark and Jesse Hoey, “The Ethics of Emotion in AI Systems,”
October 9, 2020, https://osf.io/preprints/9ad4u/. Douglas Heaven, “Why Faces Don’t Always Tell the Truth
about Feelings” Nature, February 26, 2020, https://www.nature.com/articles/d41586-020-00507-5.
However, these same problems may exist with private use of biometric technologies. It is not
immediately clear why this approach differs from that of the General Data Protection
Regulation, wherein no distinction is made between public and private ‘data controllers,’ in
recognition of the fact that AI systems expand and extend the powers of private companies,
exposing fundamental rights to new forms of abuse.50
This is not to say that the Proposal could not legitimately distinguish between the use of
biometrics by public authorities and by private actors. Nonetheless, we would expect at least a
justification as to why such a distinction is necessary, and why government use of biometric
systems poses a greater risk to fundamental rights than use by private organisations. Otherwise,
these restrictions seem to operate on the assumption that the use of biometric systems by
private organisations is somehow less intrusive, despite their potential for widespread use in
‘semi-public’ spaces such as airports, shopping centres, and sports stadiums. Personal data
collected by private actors could be shared with law enforcement bodies, opening up a
worrying loophole in the protections afforded by the proposed Regulation.
The Proposal’s approach seems to overlook the fact that private biometric systems are likely
to be widespread and may also cause a ‘feeling of constant surveillance;’ and that private actors
may share information with law enforcement authorities or collaborate with them. It is true that
the requirement to justify interferences with fundamental rights has traditionally applied to public
actors. However, considering the enormous power of private actors in the space of AI and their
ability to directly contribute to chilling effects on the exercise of fundamental rights, the
Commission should consider tightening the rules for private use of biometric systems.
In sum, to ensure that the development and use of AI-enabled biometric technologies in the EU
are Legally Trustworthy, the Proposal needs to take seriously the duty to justify any
interference with fundamental rights according to the principles of fundamental rights law. To
enable such a discourse of justification, the Proposal requires a more principled approach
towards biometric technologies, which includes strengthening the current prohibition of RBIS,
prohibiting live remote BCS in public places as well as the use of ERS by law enforcement and
other public authorities with coercive powers, and reconsidering the existence of different
regimes for public and private actors. Moreover, uses of ERS and BCS which are not included
in the suggested prohibition should be added to the list of high-risk systems of Annex III, and
subjected to strong safeguards. Only then can the Proposal move towards more adequate
prevention of the harms associated with biometrics, and the appropriate allocation of
responsibility for such potential harms.
4.1.5 The requirements for high-risk AI systems need to be strengthened and should not be
entirely left to self-assessment
After having considered the proposed regulatory frameworks for prohibited systems and
biometric systems, we now turn to high-risk AI systems as defined by the Proposal.
50 Linnet Taylor, “Public Actors Without Public Values: Legitimacy, Domination and the Regulation of the
Technology Sector,” Philosophy and Technology (2021), https://doi.org/10.1007/s13347-020-00441-4.
As argued in section 4.1.1, the Proposal takes a rather technocratic approach to fundamental
rights, imposing a list of obligations on the providers of high-risk AI systems, rather than
making them engage with the justificatory discourse customary in human rights law. Not only
does this choice poorly reflect the spirit of fundamental rights, it also confers undue discretion
on the AI provider.
For high-risk AI systems, Article 9 of the Proposal mandates the establishment, implementation
and documentation of a risk management system to be maintained as an iterative process
throughout the AI system’s lifecycle. The risk management system should inter alia include
the identification and analysis of known and foreseeable risks, an estimation and evaluation of
risks which may emerge when the system is used in accordance with its intended purpose and
under conditions of ‘reasonably foreseeable misuse,’ and the adoption of ‘suitable’ risk
management measures. These measures, according to Article 9(3), “shall give due
consideration to the effects and possible interactions resulting from the combined application
of the requirements” and “take into account the generally acknowledged state of the art,
including as reflected in relevant harmonised standards or common specifications.” In short,
the effectiveness of the mandatory requirements imposed on high-risk AI systems thus hinges
on the quality of this risk management system and the protection which it is intended to provide.
Importantly, the Proposal goes beyond a mere requirement of risk management documentation,
and also requires that systems are tested so as to identify the necessary risk management
measures to ensure compliance with the various requirements (Article 9(5)). In addition, while
the risk management process primarily falls upon the provider of the AI system, the provider
51 European Commission, “The Proposal,” Explanatory Memorandum, 13.
b) It can be questioned why the listed high-risk AI systems are considered acceptable at all
Besides the fact that providers of high-risk AI systems have too much discretion to decide on
the acceptability of those systems, one could wonder why these systems – with clear and
potentially severe adverse implications for fundamental rights – are considered to be acceptable
at all. Their categorisation as posing a ‘high’ yet nevertheless ‘acceptable’ risk (since if
unacceptable, they would figure under the prohibited AI practices of Article 5) is especially
striking given that no evidence is provided that their adverse impact on fundamental rights is
necessary and proportionate in a democratic society, as required by human rights law (see
section 4.1.1).
As noted above, the scope of Title III – i.e., high-risk AI systems – is limited to those systems
which are covered by Annex II of the Proposal (intended to be used as a safety component of
a product, or a product covered by the specifically listed Union harmonisation legislation) or
by the list of stand-alone high-risk systems laid down in Annex III. This Annex provides a list
of eight areas in which, according to the European Commission, high-risk AI applications are
used. Within each of these areas, the Annex lists a limited number of AI applications which
are considered as high-risk and hence subjected to the mandatory requirements laid down in
Title III.
Pursuant to Article 7(1) of the Proposal, the European Commission can amend the list in Annex
III and include additional systems if it can demonstrate that those systems pose a risk of harm
to the health, safety or fundamental rights of individuals which is “in respect of its severity and
probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact
posed by the high-risk AI systems already referred to in Annex III.” Furthermore, Article 84(1)
mentions that the Commission will review this list on a yearly basis, in order to keep it up to date.
52 European Data Protection Board, “EDPB-EDPS Joint Opinion 5/2021 on the proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act),” 18 June 2021, https://edpb.europa.eu/system/files/2021-06/edpb-edps_joint_opinion_ai_regulation_en.pdf.
53 European Data Protection Board, “Joint Opinion,” 12.
54 Andrea Renda et al., “Study to Support an Impact Assessment of Regulatory Requirements for Artificial Intelligence in Europe,” April 21, 2021, Publications Office of the European Union, https://op.europa.eu/en/publication-detail/-/publication/55538b70-a638-11eb-9585-01aa75ed71a1/language-en/format-PDF/source-204305195.
55 Renda, “Study,” 39.
Even when leaving aside concerns about the future-proofness of the list-based approach to
high-risk AI systems and about the potential miscategorisation of such systems and lack of
policy-congruence,57 questions already arise regarding the comprehensiveness of Annex III,
specifically in the domain of law enforcement.
Annex III(6) lists the high-risk systems in the domain of law enforcement. The list is mostly
focused on systems which have natural persons as their subjects (“individual risk assessments
for natural persons;” “detect the emotional state of a natural person;” “profiling of natural
persons”). By focusing on natural persons, the Annex fails to identify as high-risk those optimisation systems which use geospatial data to determine law enforcement resource deployment and/or prioritisation (otherwise known as ‘predictive policing’ or ‘crime hotspot analytics’ systems). Generally, such technologies are used to produce statistical predictions
regarding where future crime may take place, in order to enable law enforcement authorities to
‘optimise’ where and how they deploy their resources for maximum benefit. Resource
optimisation systems do not solely rely on personal data of natural persons, but draw from a
wide range of data relating to a geographical location, including the occurrence and rate of
occurrence of crimes in a specific geographical location.
56 See generally Maureen O’Hara, “High-Frequency Trading and Its Impact on Markets,” Financial Analysts Journal 70, no. 3 (2018).
57 In this regard, see also Karen Yeung’s submission to the public consultation on the European Commission’s White Paper on Artificial Intelligence of February 2020, which elaborates on the problems relating to a list-based approach to high-risk AI systems. The submission can be accessed here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3626915.
e) The requirements that high-risk AI systems must comply with need to be strengthened and
clarified
Our final remarks on the proposed regulatory framework for high-risk AI systems concern the strength and clarity of the requirements for such systems. While vague language – which might
cause legal uncertainty and a weak protection against AI’s adverse effects – can be found under
several requirements for high-risk systems, we highlight a few questions and concerns
regarding three requirements in particular: those pertaining to data governance, transparency
and human oversight.
Data governance obligations
The first requirement which raises questions is ‘data governance.’ Firstly, the broad discretion afforded to providers of high-risk AI systems, mentioned earlier, is also reflected in the requirements for
data governance. Article 10 of the Proposal, dealing with requirements of data quality and
governance, also lets the term ‘appropriate’ do some heavy lifting. Indeed, it requires that
training, validation and testing data sets shall be subject to ‘appropriate’ data governance and
management practices, that the data sets shall have ‘appropriate’ statistical properties, and that
the processing of special categories of data to avoid the risk of bias is carried out subject to
‘appropriate’ safeguards for fundamental rights. While the Article can be commended for
specifying the minimal considerations that should be taken into account for the data
management process to be considered ‘appropriate,’ it leaves open what constitutes an
58 The Law Society of England and Wales, “Algorithms in the Criminal Justice System,” June 4, 2019, https://www.lawsociety.org.uk/en/topics/research/algorithm-use-in-the-criminal-justice-system-report, 35.
59 Philip Alston, “Brief by the United Nations Special Rapporteur on extreme poverty and human rights as Amicus Curiae in the case of NJCM c.s./De Staat der Nederlanden (SyRI) before the District Court of the Hague (case number: C/9/550982/HA ZA 18/388),” OHCHR, September 26, 2019, https://www.ohchr.org/Documents/Issues/Poverty/Amicusfinalversionsigned.pdf, 29.
60 European Commission, “The Proposal,” Explanatory Memorandum, 4. See also section 4.2.2 further below, concerning the consistency of the proposed Regulation with data protection legislation.
61 European Commission, “The Proposal,” Recital 32.
62 European Commission, “The Proposal,” Recital 38.
4.2 The Proposal does not ensure an effective framework for the enforcement of legal
rights and responsibilities (rule of law)
As explained in Chapter 3, one of the functions of law is to allocate and distribute responsibility
for harms and wrongs in society. The previous sections explained that the Proposal does not
allocate responsibilities in ways which adequately protect against fundamental rights
infringements. Additionally, Chapter 3 argued that the distinctive character of legal (as
opposed to ethical) rules lies in an effective and legitimate framework through which legal
rights and duties are enforced. The following sections therefore comment on the enforcement framework set out in the proposed Regulation.
63 AlgorithmWatch, Automating Society Report 2020, AlgorithmWatch, 2020, https://automatingsociety.algorithmwatch.org/, 227.
a) The Proposal leaves an unduly broad margin of discretion for AI providers and lacks
efficient control mechanisms
Section 4.1.1 argued that the Proposal understands fundamental rights protection in an overly
technocratic manner and grants too much discretion to AI providers regarding the balancing
exercise involved in fundamental rights protection. These concerns relate to the way in which
the Proposal allocates responsibility, but they also relate to the way in which the Proposal is
enforced.
As argued in section 4.1.5.a, the Proposal gives AI providers and users very broad discretion to
determine what they consider to be respect for fundamental rights, given that there is no
requirement for (ex ante) independent verification and certification that the system is in fact
fundamental rights compliant. Although independent authorities are empowered to review the
training, validation and test data and associated technical documentation (per Article 64), they
only have residual power to inspect and test the technical and organisational systems if the
documentation is insufficient to ascertain whether a breach of fundamental rights has occurred.
It is difficult to understand the basis upon which an authority could establish whether the data
offered for inspection and associated documentation is in fact the basis upon which the AI
64 Per A.V. Dicey: “No man is punishable or can be lawfully made to suffer in body or goods, except for a distinct breach of law established in the ordinary legal manner before the ordinary courts of the land,” in A.V. Dicey, Introduction to the Study of the Law of the Constitution (New York: Liberty, 1982), 23.
b) The conformity assessment regime should be strengthened with more ex ante independent
control
65 European Commission, “The Proposal,” Article 29(5).
66 Sylvia Kierkegaard and Patrick Kierkegaard, “Danger to public health: medical devices, toxicity, virus and fraud,” Computer Law and Security Review 29, no. 1 (2013), 13-27.
4.2.2 There is currently insufficient attention to the coherency and consistency of the scope
and content of the rights, duties and obligations that the framework seeks to establish
The delegation of significant parts of the enforcement of the proposed Regulation to AI
providers and users is not the only concern which affects the rule of law – the second pillar of
Legal Trustworthiness. The lack of effective and meaningful protection accorded to fundamental rights, resulting from the Proposal’s current formulation of a ‘risk-based’ approach, also undermines the internal consistency and coherence of EU law, because the level of protection offered falls below the standards set out in the EU Charter of Fundamental Rights.
Although we warmly welcome the attempts made in the proposed Regulation to ensure that
there is clarity about how AI systems which already fall within existing EU laws will be treated,
there are several legal instruments and legal doctrines which are implicated by the development
and deployment of AI systems, the relationship of which to the proposed AI Regulation is not
always clearly identified.
After outlining concerns regarding the Proposal’s internal coherency (a), we briefly discuss the
Proposal’s relationship with the GDPR (b), the Law Enforcement Directive (c) and the MiFID
II Regulation (d).
a) The consistency of the Proposal with EU (fundamental rights) law should be ensured
To ensure that the legal rights and obligations arising under the proposed Regulation are
coherent and consistent with existing laws, the locus of responsibility for the protection of
fundamental rights must be clarified. The Proposal states that one of its primary aims is to
ensure a high level of protection for fundamental rights across the EU. However, the Charter
obligations are primarily binding on Member States and EU institutions while the proposed
Regulation imposes various legal requirements on all those deploying or putting AI systems
into service. There is a lack of clarity as to whether the obligation to ensure that those deploying ‘high-risk’ systems have quality management systems in place to ensure respect for fundamental rights extends to mechanisms that guard against rights interferences arising from activities and actions other than those of Member States and EU authorities. In this respect, we recommend that the Proposal be amended along the lines of the GDPR, which explicitly imposes legal obligations pertaining to the collection and processing of personal data on all ‘data controllers,’ irrespective of whether the data controller is a public or private person (see also section 4.1.4.d).
Unless the legal obligations to demonstrate respect for fundamental rights mean ensuring that
all actors, whether state or non-state, are required to demonstrate respect for fundamental rights
by all persons involved in deploying, using and putting on the market AI systems, the draft
b) The Proposal’s relationship with the General Data Protection Regulation should be
strengthened
The Proposal is designed to enable “full consistency with existing Union legislation applicable
to sectors where high-risk AI systems are already used or likely to be used in the future.” 68 In
addition to fundamental rights law, this includes both the General Data Protection Regulation and the Law Enforcement Directive, both of which are to be complemented by a new “set of harmonised rules applicable to the design, development and use of certain high-risk AI systems and restrictions on certain uses of remote biometric identification systems.”69 However, from
a fundamental rights perspective, it remains to be seen whether these new rules complement
existing standards of protection, or risk undermining them.
For instance, the GDPR contains stringent protections for ‘special category data’ for which
collection and processing is conditional on significant thresholds of protection, while the
Proposal does not offer analogous protection. Under the proposed Regulation, Article 5(1)(b) prohibits the following:
the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm.
As already mentioned in section 4.1.3.a, there is an open question about why this provision
does not include all categories of special category data, recognised in the GDPR as requiring a
higher standard of protection.
Furthermore, questions can be raised about the way in which the Proposal deals with systems
relying on biometric data – which are also covered by the GDPR – as discussed under section
4.1.4 above.
At the same time, the European Commission did consider the relevance of the GDPR to this
Proposal elsewhere. For instance, Article 10(5) which deals with data governance requirements
67 This point was also rightfully raised in Michael Veale and Frederik Zuiderveen Borgesius, “Demystifying the Draft EU Artificial Intelligence Act,” SocArXiv, July 6, 2021, doi:10.31235/osf.io/38p5f.
68 European Commission, “The Proposal,” 4.
69 European Commission, “The Proposal,” 4.
c) The Proposal’s relationship with the Law Enforcement Directive should be clarified
In addition to uncertainties stemming from the Proposal’s relationship to the GDPR, there are questions regarding its relationship to the Law Enforcement Directive (LED),72 including (a)
the duties and obligations of private organisations which develop and deploy AI systems for
law enforcement use; (b) the relationship between data protection impact assessments under
the LED, and conformity assessments under the Proposal; and (c) the role of safeguards
provided for by the LED, which have no equivalent in the text of the Proposal.
Firstly, the duties of private AI providers in the context of law enforcement must be clarified.
Private actors acting on behalf of law enforcement authorities (for the purposes of prevention,
investigation, detection, or prosecution of criminal offences, or the execution of criminal
penalties, including the safeguarding against and the prevention of threats to public security)
are explicitly included as ‘competent authorities’ under Article 3(7)(b) of the LED. This is of
crucial significance considering that many law enforcement authorities may procure and use
AI systems developed by private organisations. We must ask, then, whether ‘providers’ (as defined in the Proposal) are captured by these same requirements, considering that the design of a given AI system significantly influences both (a) the procedure of law enforcement decision-making (e.g., providers of machine-learning recidivism risk assessment tools make important choices regarding the relevance of specific data sets, the weighting of specific features, and the form
and content of decision outputs provided to law enforcement, in addition to the level of
technical transparency about the logic of the system itself) and (b) the kinds of decision that
can be made (prospective, future-oriented predictions based on large data sets that would not
be possible by human judgement alone). Similarly, considering the obligations of providers to
ensure post-market monitoring – which again, may significantly affect the form and content of
decisions made – is there a threshold at which this would be considered as the ‘exercise’ of
70 European Commission, “The Proposal,” Article 29(6).
71 European Commission, “The Proposal,” Recital 72.
72 European Parliament and Council, “Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA, OJ L 119, 4.5.2016, p. 89–131” (“LED”), 2016.
73 European Parliament and Council, “LED,” Article 27.
Finally, there is no direct indication of the Proposal’s harmonisation with the Markets in Financial Instruments Regulation (MiFID II).74 This risks constituting a deficiency in the Proposal, given the growing use of algorithms across multiple financial activities and services (e.g., algorithmic trading and robo-advisory) and given the fact that practices like algorithmic trading are not listed as a high-risk domain under Annex III.
The Proposal merely provides that financial authorities should be designated as competent
authorities for the purpose of supervising the implementation of this Regulation and that the
conformity assessment procedure should be integrated into existing procedures under the Directive on prudential supervision – namely, Directive 2013/36/EU (with some limited derogations).75 These indications do not seem sufficient to ensure legal certainty and consistency. Indeed, MiFID II contains several provisions that seem to overlap with the content of this Proposal. Article 17, for example, provides a notion of algorithmic trading and delegates the preparation of particular risk management practices (not based on conformity assessment and CE marking) to the European Securities and Markets Authority (ESMA) to protect public
savings and market stability. In requesting that the conformity assessment takes place within
the framework of the Directive on prudential supervision (focusing on the role of the European
Banking Authority), the Proposal might create a conflict between different risk assessment
procedures and an overlap of functions between different European agencies. It would hence
be important for the Commission to take this risk of inconsistency into consideration.
4.2.3 The lack of individual rights of enforcement in the Proposal undermines fundamental
rights protection
In the sections above, we argued that the enforcement of the Proposal overly relies on
conformity (self-) assessments and that its coherence with different parts of EU law must be
ensured. In this section, we draw attention to the fact that a significant part of the enforcement
mechanism is completely missing from the Proposal, namely individual rights of redress and
an accompanying complaints mechanism.
One of the Proposal’s aims is protecting the fundamental rights of individuals, yet these
individuals do not feature in the Proposal at all. Its provisions instead focus on the obligations
of the AI ‘provider’ and ‘user,’ who are often already in an asymmetrical power relation to
those individuals whom they subject to their systems. This asymmetrical representation of the
different stakeholders in the Proposal creates an enforcement architecture which potentially
threatens the rule of law. We therefore argue that the Proposal must guarantee procedural rights
to redress for individuals subjected to AI systems (a), and a complaints mechanism dealing
with potential violations of the Regulation or infringements of fundamental rights (b). Later,
in section 4.3.2, we also argue for more substantive rights for individuals in order to address
the blatant absence of ‘ordinary people’ from the Proposal.
74 European Parliament and Council, “Regulation (EU) No 600/2014 of the European Parliament and of the Council of 15 May 2014 on markets in financial instruments and amending Regulation (EU) No 648/2012, OJ L 173, 12.6.2014,” 2014, 84–148.
75 See European Parliament and Council, “Directive 2013/36/EU of the European Parliament and of the Council of 26 June 2013 on access to the activity of credit institutions and the prudential supervision of credit institutions and investment firms, amending Directive 2002/87/EC and repealing Directives 2006/48/EC and 2006/49/EC, OJ L 176, 27.6.2013,” 2013, 338–436.
Those whose rights have been interfered with by the operation of AI systems (whether high-
risk or otherwise) are not granted legal standing under this Proposal to initiate enforcement
action for violation of its provisions, nor any other enforceable legal rights for seeking
mandatory orders to bring violations to an end, or to seek any other form of remedy or redress.
The Proposal instead focuses on the imposition of financial penalties and market sanctions
where an AI system is non-compliant. For example, Member States are empowered to
implement rules on sanctions, and market surveillance authorities can require that a non-
compliant AI system be withdrawn or recalled from the market. 76 Yet, to reach the Proposal’s
goal of robustly protecting fundamental rights, individuals who encounter AI systems in the
EU must be able to rely on a clear system of remedies protecting them from harms and
fundamental rights interference caused by public or private adoption of AI systems. As outlined
in section 4.1.1, an independent body must then have competence to assess whether any
fundamental rights interferences are necessary and proportionate in a democratic society.
Individual rights of redress would address the current gap in enforcement which is especially
blatant in cases where individuals cannot reasonably opt out of certain outcomes produced by
AI systems (as recognised by the Proposal). Recital 38 calls for ‘effective redress’ in relation to procedural fundamental rights violations (and draws attention to the right to an effective remedy), but classifying something as high-risk is itself not a remedy.
Although the risk classification process for AI systems under Article 7(2)(h)(I) includes
considering whether processes for remedies are necessary for a particular AI system, there is
no guidance on what an effective remedy looks like and no provisions through which
individuals can access such a remedy. Additionally, the penalties imposed for breaches of duties and obligations arising from the Proposal are not individual remedies sufficient for the ongoing robust protection of fundamental rights – rather, they offer only a ‘deterrence-based’ approach for which there is no guarantee of success, despite clear risks to the fundamental rights of individuals.
It is our understanding that the European Commission will soon publish a draft liability
framework for AI systems which could potentially strengthen the procedural rights of
individuals who are adversely impacted by AI systems. It is, however, regrettable that this
framework was not published simultaneously with the Proposal, since this renders it difficult
to holistically assess the protection offered to individuals. We hope that the concerns raised
above are either reflected in the revised AI Regulation, or in its accompanying liability rules –
or preferably both. The inclusion of individual rights to redress would contribute substantially
to an enforcement framework which empowers all relevant stakeholders, and therefore to
Legally Trustworthy AI.
In addition to a lack of rights which grant individuals standing, the Proposal currently also does
not provide the possibility for individuals to file a complaint with the national competent
authority – even though the Proposal makes this authority the sole actor responsible for ensuring compliance with it. This absence of a complaints mechanism stands in sharp contrast
to the mechanism provided under the GDPR, whereby each supervisory authority has the task
to “handle complaints lodged by a data subject, or by a body, organisation or association in
accordance with Article 80, and investigate, to the extent appropriate, the subject matter of the
76 European Commission, “The Proposal,” Article 65(5).
The enforcement of the Proposal risks being undermined by the practical aspects of Member
State competencies. Article 59 of the Proposal enables Member States to designate or establish
national competent authorities “for the purpose of ensuring the application and implementation
of this regulation.”78 Unless otherwise provided for by the Member State in question, these
authorities will be required to act as both ‘notifying authority’ and ‘market surveillance
authority’ as part of a combined ‘national supervisory authority.’79 Notified bodies are
responsible for verifying the conformity of high-risk systems, and market surveillance
authorities are responsible for the evaluation of high-risk AI systems “in respect of its
compliance with all the requirements and obligations” of the Proposal.80 If a given system is
not compliant, the market surveillance authority shall take “all appropriate provisional
measures to prohibit or restrict the AI system's being made available on its national market, to
withdraw the product from that market or to recall it.”81 There are three practical concerns
about this setup.
77 See European Parliament and Council, “Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJ L 119, 4.5.2016,” 2016, 1–88 (“GDPR”), Article 57.
78 European Commission, “The Proposal,” Article 59(1).
79 European Commission, “The Proposal,” Article 59(2).
80 European Commission, “The Proposal,” Article 65(2).
81 European Commission, “The Proposal,” Article 65(5).
There are also a number of outstanding issues in relation to specific powers of enforcement
conferred upon supervisory authorities.
Firstly, the powers of the supervisory authorities provided for by the Proposal focus on ex post controls, yet it remains to be seen on what basis and at what scale these controls will be
carried out. The EU database of stand-alone high-risk AI systems, in which those systems need
to be publicly registered, could provide a helpful tool to enable supervisory authorities to
prioritise their inevitably limited resources. Nevertheless, a risk of under-enforcement remains,
particularly given the lack of a complaints mechanism that allows individuals to flag potentially
problematic AI practices, as mentioned above.
Secondly, Member States can deviate from the protection afforded by the Proposal if they find
there is ground to do so: “for exceptional reasons, including public security, the protection of
life and health of persons, environmental protection and the protection of key industrial and
infrastructural assets.”85 This exception is broad in scope and should be clarified further to
82 European Commission, “The Proposal,” Article 71(1).
83 This Board, established by Article 56 of the Proposal, shall be chaired by the European Commission and composed of the national supervisory authorities (represented by the head or equivalent high-level official of that authority) and the European Data Protection Supervisor.
84 Estelle Massé, “Two years under the EU GDPR: An implementation progress report - state of play, analysis and recommendations,” Access Now, May 2020, https://www.accessnow.org/cms/assets/uploads/2020/05/Two-Years-Under-GDPR.pdf.
85 European Commission, “The Proposal,” Article 47.
4.3 The Proposal fails to ensure meaningful transparency, accountability and rights of
public participation (democracy)
As the final pillar of Legal Trustworthiness, democracy requires that members of communities
are entitled to participate actively in determining the norms and standards that apply to the life
of that community. This pertains particularly to activities which have a direct impact on the
rights, interests and legitimate expectations of its members, and the distribution of benefits,
burdens and opportunities associated with new technological capacities. The inevitability of
normative trade-offs in the design and deployment of many existing and anticipated AI
applications makes meaningful participation rights especially important. Put simply:
democratic participation in the formation of the legal framework around AI contributes to its
trustworthiness. Closely related to this idea of Legal Trustworthiness through democracy is the
right for individuals to be informed about decisions that affect them in a substantive manner,
and of the way in which the commonly agreed upon standards are accounted for. Indeed,
meaningful transparency and accountability are essential requirements of a well-functioning
democratic society.
We argue that the current Proposal does not do enough to acknowledge the importance of
democracy for Trustworthy AI, nor does it provide the institutional framework to robustly
incorporate the value of democracy in AI governance. Public consultation and participation
rights are entirely absent from the Regulation (4.3.1). Moreover, as highlighted earlier, the
Proposal does not provide individuals with any substantive rights, and only formulates
86 See European Union, “EU Charter,” Articles 16 and 17.
4.3.1 The Proposal does not provide consultation and participation rights
The opportunity for meaningful public deliberation – on the basis of which legislation can be
enacted – is especially important for politically controversial matters such as the AI
prohibitions or a ‘high-risk list.’ Yet the Proposal, which lacks consultation and participation rights, seems to fall short of this requirement. First, it is questionable whether the Proposal’s content
reflects the concerns that citizens have about prohibited and high-risk AI systems (a). Second,
the Proposal does not foresee any participation and consultation rights for stakeholders in the
context of future revisions (b).
a) The scope of the public consultation prior to the Proposal’s drafting would have benefitted
from more targeted questions regarding prohibited and high-risk applications
From the outset, the European Commission aimed at ensuring that the voices of citizens and
relevant stakeholders could be heard in order to give shape to the proposed Regulation. Thus,
a public consultation was organised on the draft Ethics Guidelines of the HLEG (published in
December 2018), which served as an inspiration for the Commission’s further work on AI, as
well as on its subsequently refined Assessment List in the form of a broad piloting phase (until
December 2019). Furthermore, public consultations were also launched after the
Commission’s publication of the White Paper on Artificial Intelligence (in February 2020),
and on the Inception Impact Assessment of the proposed AI Regulation (until September 2020).
Moreover, we warmly welcome the opportunity provided by the European Commission to
provide our feedback on the published Proposal for an AI Regulation in the context of the
current public consultation, running until August 2021.
While these efforts of the Commission should be commended, the scope of the last
consultations on the White Paper and Inception Impact Assessment would have benefitted from more targeted questions regarding which practices should be prohibited and which should be considered high-risk AI systems. The structure of the current
Proposal accords great importance to an AI system’s status as either ‘no-risk,’ ‘prohibited,’ or
‘high-risk.’ It is unclear whether the current lists under Title II and Title III duly reflect popular concerns about AI systems – particularly in light of the impact they can have on fundamental rights. We doubt whether the absence of emotion recognition systems and biometric categorisation systems from the list of high-risk AI systems (or even from the prohibited practices) truly echoes the
opinion that European citizens have towards such practices. In any case, it is not clear that
democratic deliberation is at the foundation of these distinctions.
b) Insufficient opportunities for consultation and participation enshrined in the Proposal itself
The risk that popular opinion on the harms associated with AI is not reflected in the content of
the Proposal is exacerbated by the fact that the Proposal itself does not provide consultation
rights for the future revision of its content – despite the European Commission’s competence
to update the Proposal’s various Annexes. The provision of such rights is especially important
given that many current and anticipated AI applications, particularly those which entail the
collection and analysis of biometric data, claim to offer myriad ‘benefits’ without any robust
Accordingly, when the Commission revises the list of high-risk AI systems, it is crucial that
participation of the public at large is ensured within this revision exercise.
Additionally, the absence of any form of public consultation rights is problematic in relation
to the determination of what constitutes an ‘acceptable’ residual risk in the context of high-risk
AI systems, which is treated in the Proposal as a matter which developers and deployers are
free to determine themselves. As noted in section 4.1.5.a, any ‘residual risk’ is currently dealt
with by requiring that the provider communicate these risks to the user, and that the user comply
with the stated instructions. Given that the ‘user’ is the organisation deploying the AI system,
this gives so little protection to individuals and communities that it cannot be properly
described as inviting democratic deliberation or as respectful of fundamental rights. More
generally, as stressed in section 4.1.5.a, the protection of individuals and the general public
against the threats posed by high-risk AI systems offered in the current Proposal relies heavily
on the effectiveness and legitimacy of the AI provider’s compliance with the proposed
mandatory requirements, and falls far short of living up to the aspiration of respecting
fundamental rights and safety, which requires, at minimum, open public discussion and
consultation in relation to what constitutes an ‘acceptable residual risk.’
a) The Proposal does not provide any substantive rights for individuals
The lack of substantive individual rights in the Proposal reduces individuals to entirely passive
entities, unacknowledged and unaddressed in the regulatory framework. The Proposal’s silence
on individuals is especially striking considering that one of the primary reasons why AI is being
regulated at all is to protect those very individuals from the risks generated by AI systems.
The absence of procedural rights of redress and a right to file a complaint with a national
supervisory authority was already set out in section 4.2.3. In addition, the Proposal does not provide individuals with any new substantive rights either. It consists only of obligations imposed on AI providers – and, to a lesser extent, on users and other actors. Indeed, as remarked
earlier, the Proposal offers no equivalent to the ‘data subject rights’ which can be found under
the GDPR. Consider Chapter III of the GDPR, which offers citizens whose personal data has
been processed a set of rights which they can exercise against data controllers. These include
a right to be informed about personal data collected, a right of access to information regarding
why and how their data is being processed, a right to ‘rectification’ of e.g., incomplete or
incorrect data, a right to be ‘forgotten,’ and rights relating to automated decision-making,
including profiling. In addition, data controllers must facilitate the exercise of these rights,
without undue delay, unless they can demonstrate that they cannot identify the data subject in
question.
No similar rights are offered in the Proposal, wherein responsibility is left mainly to providers
and users to monitor the design, implementation and use of AI systems. One can wonder why the Commission did not see the need for any substantive rights in the context of AI, and whether it considers data subject rights sufficient to tackle the dangers posed by AI to fundamental rights more generally. If the Proposal assumes that data protection law provides sufficient individual rights to protect against the risks that AI systems pose to individuals, it is not clear why, and the assumed equivalence between AI systems and general ‘data processing’ must be justified.
Concretely, at the very least, the Proposal should grant individuals a right not to be subjected
to prohibited AI practices as listed in Title II, and a right not to be subjected to high-risk AI
systems that do not meet the conformity requirements of Title III. This would transform the obligations of AI providers and users from mere obligations towards market regulators into obligations towards specific individuals. Making the individual a central figure in
the Regulation would reintroduce the ‘ordinary person’ both as an explicit beneficiary of the
Regulation, and as an empowered legal actor in the regulatory framework around AI.
b) The Proposal does not provide meaningful information rights for individuals
For a democracy to flourish, the public must be adequately informed to be able to participate
in the political life of their community and to plan their own individual lives. In this context, it
is essential that individuals are provided with sufficient information about technological
developments which directly or indirectly affect their health, safety, and fundamental rights.
Additionally, democracy requires that the public can challenge those in power to keep a check
on their actions, which can also only be done if sufficient information is available to the public.
Both these democratic desiderata require a governance framework which affords an
87 Inspiration can inter alia be found when considering the transparency obligations under Article 13 of the Proposal, which are currently solely directed to the commercial user of the AI system.
88 Veale & Zuiderveen Borgesius, “Demystifying the Draft.”
• Align the binding content of the Proposal with existing fundamental rights legislation
and practice which establishes substantive and procedural requirements for potential