Regulating Artificial Intelligence

Thomas Wischmeyer • Timo Rademacher
Editors

Thomas Wischmeyer
Faculty of Law, University of Bielefeld, Bielefeld, Germany

Timo Rademacher
Faculty of Law, University of Hannover, Hannover, Germany
This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface: Good Artificial Intelligence
1 Policy and business decisions with broad social impact are increasingly based on machine learning-based technology, today commonly referred to as artificial intelligence (AI). At the same time, AI technology is becoming more and more complex and difficult to understand, making it harder to control whether it is used in accordance with existing laws. Given these circumstances, even tech enthusiasts call for stricter regulation of AI. Regulators, too, are stepping in and have begun to pass corresponding laws, including the right not to be subject to a decision based solely on automated processing in Article 22 of the EU's General Data Protection Regulation (GDPR), Section 140(d)(6) and (s) of the 2018 California Consumer Privacy Act on safeguards and restrictions concerning commercial and non-commercial 'research with personal information', or the 2017 amendments to the German Cartels Act and the German Administrative Procedure Act.
2 While the belief that something needs to be done about AI is widely shared, there is far less clarity about what exactly can or should be done and what effective regulation might look like. Moreover, the discussion on AI regulation sometimes focuses only on worst-case scenarios based on specific instances of technical malfunction or human misuse of AI-based systems. Regulations premised on well-thought-out strategies and striving to balance the opportunities and risks of AI technologies (cf. Hoffmann-Riem) are still largely missing.
3 Against this backdrop, this book analyses the factual and legal challenges the deployment of AI poses for individuals and society. The contributions develop regulatory recommendations that do not curb the technology's potential while preserving the accountability, legitimacy, and transparency of its use. In order to achieve this aim, the authors all follow an approach that might be described as 'threefold contextualization': Firstly, the analyses and propositions are norm-based, i.e. they consider and build on the statutory and constitutional regimes shaping or restricting the design and use of AI. Secondly, it is important to bear in mind that AI
Artificial Intelligence
5 From a technological point of view, there is no such thing as 'the' AI. While most attention is currently paid to techniques extracting information from data through 'machine learning', AI research actually encompasses many different sub-fields and methodologies.1 Similarly, machine learning is not a monolithic concept, but comprises a variety of techniques.2 These range from traditional linear regression through support vector machines and decision tree algorithms to various types of neural networks.3 Moreover, most machine learning-based systems are a 'constellation' of processes and technologies rather than one well-defined entity, which makes it even more difficult to determine the scope of the meaning of AI.4 Historically, AI is a field of research originating in the mid-1950s—the Dartmouth Summer Research Project is often mentioned in this context. Today, its scholars mostly focus on a set of technologies with the ability to 'process potentially very large and heterogeneous data sets using complex methods modelled on human intelligence to arrive at a result which may be used in automated applications'.5

1 Russell and Norvig (2010); Kaplan (2016); Wischmeyer (2018), pp. 9 et seq.
2 The terms machine learning and AI are used synonymously in this volume.
3 Cf. EU High Level Expert Group (2018), pp. 4–5.
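The breadth of this spectrum can be pictured with a minimal sketch in Python (the synthetic data and default model settings below are illustrative assumptions, not drawn from any contribution in this volume): four of the model families just mentioned—with logistic regression standing in for the linear family—are fitted to the same task, each 'learning' by a very different method.

```python
# Minimal sketch: 'machine learning' names a family of techniques, not one.
# Four model families are fitted to the same synthetic classification task.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "linear model": LogisticRegression(max_iter=1000),
    "support vector machine": SVC(),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "neural network": MLPClassifier(max_iter=2000, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)                # same data for every model
    print(name, model.score(X_test, y_test))   # very different internal logic
```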
6 Against this backdrop, some scholars advocate abandoning the concept of AI and propose instead either to address all 'algorithmically controlled, automated decision-making or decision support systems'6 or to focus specifically on machine learning-based systems.7 Now, for some regulatory challenges, such as the societal impact of a decision-making or decision support system or its discriminatory potential, the specific design of the system or the algorithms it uses is indeed only of secondary interest. In such cases, the contributions in this volume consider the regulatory implications of 'traditional' programs as well (cf., e.g. Krönke, paras 1 et seq.; Wischmeyer, para 11). Nevertheless, the most interesting challenges arise where advanced machine learning-based algorithms are deployed which, at least from the perspective of the external observer, share important characteristics with human decision-making processes. This raises important issues with regard to the potential liability and culpability of the systems (see Schirmer). At the same time, from the perspective of those affected by such decision-making or decision support systems, the increased opacity, the new capacities, or, simply, the level of uncertainty injected into society through the use of such systems lead to various new challenges for law and regulation.
7 This handbook starts with an extensive introduction to the topic by Wolfgang Hoffmann-Riem. The introduction takes on a point that seems trivial at first glance but turns out to be most relevant and challenging: you can only regulate what you can regulate! In this spirit, Hoffmann-Riem offers an in-depth analysis of why establishing a legitimate and effective governance of AI challenges the regulatory capacities of the law and its institutional architecture as we know it. He goes on to offer a taxonomy of innovative regulatory approaches to meet these challenges.

4 Kaye (2018), para 3. Cf. also Ananny and Crawford (2018), p. 983: 'An algorithmic system is not just code and data but an assemblage of human and non-human actors—of "institutionally situated code, practices, and norms with the power to create, sustain, and signify relationships among people and data through minimally observable, semiautonomous action" [...]. This requires going beyond "algorithms as fetishized objects" to take better account of the human scenes where algorithms, code, and platforms intersect [...].'
5 Datenethikkommission (2018).
6 Algorithm Watch and Bertelsmann Stiftung (2018), p. 9.
7 Wischmeyer (2018), p. 3.
8 The following chapters address the Foundations of AI Regulation and focus on features most AI systems have in common. They ask how these features relate to the legal frameworks for data-driven technologies which already exist in national and supranational law. Among the features shared by most, if not all, AI technologies, our contributors address the following:
9 Firstly, the dependency on processing vast amounts of personal data 'activates' EU data protection laws and consequently reduces the operational leeway of public and private developers and users of AI technologies significantly. In his contribution, Nikolaus Marsch identifies options for an interpretation of the European fundamental right to data protection that would offer national and EU legislators alike more leeway to balance the opportunities and risks associated with AI systems. The threats AI systems pose to human autonomy and the corresponding right to individual self-determination are then described by Christian Ernst, using the examples of health insurance, creditworthiness scores, and the Chinese Social Credit System. Thomas Wischmeyer critically examines the often-cited lack of transparency of AI-based decisions and predictions (the 'black box' phenomenon), which seems to frustrate our expectation to anticipate, review, and understand decision-making procedures. He advises policy makers to redirect their focus, at least to some extent, from the individuals affected by the specific use of an AI-based system towards creating institutional bodies and frameworks which can provide effective control of the system. Alexander Tischbirek analyses most AI technologies' heavy reliance on statistical methods, which reveal correlations, patterns, and probabilities instead of causation and reason and are thus prone to perpetuating discriminatory practices. He—perhaps counterintuitively—highlights that, in order to reveal such biases and distortions, it might be necessary to gather and store more personal data, rather than less. Finally, Jan-Erik Schirmer asks the 'million-dollar question', i.e. whether AI systems should be treated as legal persons or as mere objects, answering that question with a straightforward 'a little bit of each'.
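Tischbirek's point can be made concrete with a minimal sketch (the records, group labels, and the 80% threshold—an echo of the 'four-fifths rule' known from US employment law—are illustrative assumptions): detecting a discriminatory pattern presupposes that the protected attribute was recorded in the first place.

```python
# Minimal sketch: auditing decisions for disparate impact requires that the
# protected attribute be stored at all -- more personal data, not less.
decisions = [  # (belongs_to_protected_group, application_approved)
    (True, False), (True, False), (True, True), (True, False),
    (False, True), (False, True), (False, False), (False, True),
]

def approval_rate(records, in_group):
    outcomes = [ok for member, ok in records if member == in_group]
    return sum(outcomes) / len(outcomes)

ratio = approval_rate(decisions, True) / approval_rate(decisions, False)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # illustrative threshold echoing the 'four-fifths rule'
    print("approval rates diverge markedly between groups -- review advised")
```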
10 While the design and use of AI-based systems thus raise a number of general
questions, the success of AI regulation is highly dependent on the specific field of
application. Therefore, regulatory proposals for the Governance of AI must consider
the concrete factual and legal setting in which the technology is to be deployed. To
this end, the chapters in Part II examine in detail several industry and practice sectors
in which AI is, in our view, shaping decision-making processes to an ever-growing
extent: social media (by Christoph Krönke), legal tech (by Gabriele Buchholtz),
financial markets (by Jakob Schemmel), health care (by Sarah Jabri and Fruzsina
Molnár-Gabor), and competition law (by Moritz Hennemann). The analyses reveal
that in most of these settings AI presents itself not only as the object of regulation but, often simultaneously, as a potential instrument to apply and/or enforce regulation. Therefore, most chapters in Part II also include considerations regarding Governance through AI. Other articles, namely the chapters on administrative decision-making under uncertainty (by Yoan Hermstrüwer), law enforcement (by Timo Rademacher), public administration (by Christian Djeffal), and AI and taxation (by Nadja Braun Binder), actually focus on AI technologies that are supposed to apply and/or support the application of the law.
Artificial Intelligence as a Challenge for Law and Regulation

Wolfgang Hoffmann-Riem

Contents
1 Fields of Application for Artificial Intelligence
2 Levels of Impact
3 Legal Aspects
4 Modes of Governance
5 Exercising the State's Enabling Responsibility Through Measures for Good Digital Governance
5.1 Displacements in the Responsibility of Public and Private Actors
5.2 Innovative Case-law as an Example
5.3 System Protection
5.4 Systemic Protection
5.5 Regulatory Guidelines
5.6 Regarding Regulatory Possibilities
6 Obstacles to the Effective Application of Law
6.1 Openness to Development
6.2 Insignificance of Borders
6.3 Lack of Transparency
6.4 Concentration of Power
6.5 Escaping Legal Constraints
7 Types of Rules and Regulations
7.1 Self-structuring
7.2 Self-imposed Rules
7.3 Company Self-regulation
7.4 Regulated Self-regulation
7.5 Hybrid Regulation
7.6 Regulation by Public Authorities
7.7 Techno-regulation
8 Replacement or Supplementation of Legal Measures with Extra-Legal, Particularly Ethical Standards
9 On the Necessity of Transnational Law
References
W. Hoffmann-Riem (*)
Bucerius Law School, Hamburg, Germany
1 The literature on digital transformation is highly diverse. See, inter alia, Cole (2017), Pfliegl and Seibt (2017), Rolf (2018), Kolany-Raiser et al. (2018) and Precht (2018). For an illustration of the diversity of the issues this raises, see the articles in No. 10 (2017) of Elektrotechnik & Informationstechnik, pp. 323–88.
2 For an introduction to AI: Russell and Norvig (2012), Kaplan (2016), Lenzen (2018) and Misselhorn (2018).
and for controlling behaviour (see Rademacher),3 but also new kinds of criminal
activity.4
3 AI is—currently—dominated by techniques of machine learning. The term refers to computer programs that are able to learn from records of past conduct.5 Machine learning is used for such purposes as identifying patterns, evaluating and classifying images, translating texts, and automatically generating rough audio and video cuts (e.g. 'robot journalism'). Even more advanced applications of AI, sometimes referred to as 'Deep Learning', are also possible.6 These are IT systems that, by using neural networks, are capable of learning on their own how to enhance digital programs created by humans and thus of evolving independently of human programming.
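What 'learning from records of past conduct' means can be shown in a deliberately simplified sketch (the records and the single learned parameter are illustrative assumptions, far removed from any deployed system): the rule the program ends up applying is extracted from the data rather than written down by a programmer.

```python
# Minimal sketch of learning from records of past conduct: the decision
# rule is inferred from data by gradient descent, not coded by hand.
past_conduct = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (x, y) records

w = 0.0                          # learned parameter, arbitrary at the start
for _ in range(200):             # repeatedly reduce the prediction error
    grad = sum(2 * (w * x - y) * x for x, y in past_conduct) / len(past_conduct)
    w -= 0.01 * grad

print(f"learned rule: y is roughly {w:.2f} * x")  # close to 2, read off the data
print(f"prediction for a new case x=5: {w * 5:.1f}")
```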
The expansion of AI’s capabilities and the tasks for which it can be used is 4
associated with both risks and opportunities. The following will look at the chal-
lenges that AI poses for law and regulation.7
2 Levels of Impact
3 Hoffmann-Riem (2017).
4 On the latter: Bishop (2008) and Müller and Guido (2017).
5 Surden (2014), p. 89.
6 See e.g. Goodfellow et al. (2016).
7 For insight into the variety of challenges and proposed approaches for solving them, see Jakobs (2016), Pieper (2018), Bundesnetzagentur (2017), Eidenmüller (2017), Castilla and Elman (2017), BITKOM (2017), Schneider (2018), Djeffal (2018), Bafin (2018) and Bundesregierung (2018).
8 See, inter alia, Roßnagel (2003, 2017) and Simitis et al. (2019).
9 See Sattler (2017).
10 See e.g. Latzer et al. (2016), p. 395.
all three levels of effect mentioned (output, outcome, and impact). This is the direction in which trends are currently headed in China. Commercially oriented companies—primarily, but not exclusively, market-dominant IT companies such as the Alibaba Group (inter alia, various trading platforms and the widespread online payment system Alipay) or Tencent Holdings (inter alia, social networks, news services, online games)—are working closely with state institutions and the Communist Party in order to extensively collect data and link them for a variety of analyses. The aim is to optimise market processes, align the social behaviour of people with specific values (such as honesty, reliability, integrity, cleanliness, obeying the law, responsibility in the family, etc.), and ensure state and social stability. China is in the process of developing a comprehensive social scoring system/social credit system (currently being pilot tested, but soon to be applied in wide areas of China).11 It would be short-sighted to analyse the development of this system primarily through the lens of surveillance and suppression. Its objectives are much more extensive.12
11 The aim of this Introduction cannot and should not be to examine and evaluate the Chinese social credit system. I mention it only to illustrate the potentials that can be unleashed with the new opportunities for using information technology. This Chapter is limited to discussing the challenges facing law and regulation under the conditions currently prevailing in Germany and the EU.
3 Legal Aspects
12 Digital technologies, including AI, can have desired or undesired effects from an ethical, social, or economic perspective. Depending on the result of such an assessment, one important issue is whether the creation and/or use of AI requires a legal framework and, in particular, regulatory boundaries in order to promote individual and public interests and to protect against negative effects.
13 It goes without saying that when digital technologies are used, all relevant norms in the areas affected are generally applicable, such as those of national law—in Germany, civil, criminal and public law, as well as their related areas—and of transnational and international law, including EU law. Such laws remain applicable without requiring any express nexus to digitalisation. But it needs to be asked whether and to what extent these laws, which largely relate to the conditions of the 'analog world', meet the requirements associated with digitalisation and, in particular, with AI, or whether they instead need to be modified and supplemented.
14 Another question is where such new or newly created laws that relate or are relatable to digitalisation should be located within the overall context of the legal system. It is well known that individual legal norms are linked in systematic terms with other parts of the legal system. In addition, in many cases they are embedded in complex regulatory structures—in Germany often called 'Regelungsstrukturen'.13 This term comprises the relevant norms and the specialised personnel used for problem-solving, as well as the formal and informal procedures of interaction that can be utilised for specifying the law and applying it. Also relevant are the resources (such as time, money, and expertise) and forms of action available in organisations, as well as, where necessary, the opportunities and measures for collaboration and networking between various actors, both public and private. Such regulatory structures can exhibit significant complexity in multilevel systems, such as those in the EU.

11 See Chen and Cheung (2017), Creemers (2018) and Dai (2018).
12 Creemers (2018) and Dai (2018).
4 Modes of Governance
15 The creation of law and, in particular, of measures for regulation by public author-
ities must be tailored to the modes selected to solve the problem in a given case (the
‘modes of governance’: market, competition, negotiation, networking, contract, and
digital control).14 How do these modes and their specific configuration help to
achieve socially desirable goals and avoid undesired effects? Appropriate standards
are needed in order to determine what is desired. They include, in particular,
fundamental constitutional values (including democracy, rule of law, and social
justice; see Article 20 of the German Basic Law (Grundgesetz, GG)), protection of
the freedom to develop economically, culturally, politically, and the like, prevention
of manipulation and discrimination, and much more. Also particularly important are
the principles, objectives, and values enshrined in the Treaty on European Union and
in the Charter of Fundamental Rights of the European Union, as well as in other EU
legal acts.
16 One challenge consists of ensuring good governance in connection with the
development of algorithmic systems—‘governance of algorithms’15—and their
application—‘governance by algorithms’.16 Luciano Floridi describes ‘governance
of the digital’ as follows:
Digital Governance is the practice of establishing and implementing policies, procedures and
standards for the proper development, use and management of the infosphere. It is also a
matter of convention and good coordination, sometimes neither moral nor immoral, neither
legal nor illegal. For example, through digital governance, a government agency or a
company may (i) determine and control processes and methods used by data stewards and
data custodians in order to improve the data quality, reliability, access, security and
availability of its services; and (ii) devise effective procedures for decision-making and for
the identification of accountabilities with respect to data-related processes.17
13 See Hoffmann-Riem (2016), pp. 9–12.
14 On governance generally, see Benz and Dose (2010) and Schuppert (2011).
15 Saurwein (2015).
16 Just and Latzer (2016) and Latzer et al. (2016).
17 See Floridi (2018).
17 The area of good governance also includes observing ethical requirements and ensuring compliance.18

18 One of several examples of important standards for configuring AI is a list created by the European Group on Ethics in Science and New Technologies, an entity organised by the European Commission, in its Statement on Ethics of Artificial Intelligence, Robotics and 'Autonomous' Systems: (a) human dignity; (b) autonomy; (c) responsibility; (d) justice, equity, and solidarity; (e) democracy; (f) rule of law and accountability; (g) security, safety, bodily and mental integrity; (h) data protection and privacy; and (i) sustainability.19 Even though the group placed such standards within the field of ethics, this does not alter the fact that they also have substantial legal relevance. This highlights the interaction frequently seen between law and ethics. Law also has ethical foundations, and ethical principles are shaped in part through law (see Sect. 8).
19 There are many ways to ensure good governance. In this regard, the measures do not need to take the form of written rules. Also important are, for example, technical approaches, such as the chosen technical design (see Sect. 5.4).
20 However, good governance does not come about on its own. Where the focus is primarily on law, as in this Chapter, the key aspects are the legal and, moreover, extra-legal (e.g. ethical or moral) requirements, as well as the response by those to whom the requirements are addressed, such as their willingness to comply with them. The tasks of the state include creating or modifying law that facilitates and stimulates good digital governance.
18 Floridi (2018), pp. 4 et seq.
19 European Group on Ethics in Science and New Technologies (2018), pp. 16 et seq.
according to self-created guidelines that are set and enforced unilaterally in most
cases, including where third parties are concerned, such as the users of their
services.20
22 The heavy weight of private self-structuring and self-regulation (see Sect. 7) does not, however, alter the fact that state authorities are responsible for protecting individual and public interests. Nonetheless, the displacements have changed the basic conditions under which state authorities exercise influence and the instruments available to them, as well as the prospects for success.
23 Although, because of the protection they enjoy through fundamental rights, private entities are generally unconstrained in the specification and pursuit of their interests, they are not fully relieved of the obligation to pay regard to the interests of others and to matters of public interest. It is the state's role to concentrate on stimulating and encouraging private individuals to pursue the common good, thereby enabling them to provide public services that previously were managed by the state—without the latter completely abdicating its responsibility for overseeing the process.21 Public-private partnerships have been introduced for standard setting and oversight in several areas, all of which demand sophisticated and previously untried legal frameworks.22 As the role of public authorities has changed, many authors in Germany have taken to referring to the state as the 'Gewährleistungsstaat', which Gunnar Folke Schuppert (2003) calls in English the 'ensuring state' and others the 'enabling state'. They stress the state's 'Gewährleistungsverantwortung', i.e. its responsibility to ensure sufficient legal and non-legal guarantees to protect the common good.23 In the following, I will refer to 'Gewährleistungsverantwortung' as the state's 'enabling responsibility'.
24 Where necessary, the state is under a positive obligation to create a framework for safeguarding, above all, the exercise of political, economic, social, cultural, and other fundamental rights. Positive obligations of the state are recognised not only in the German legal system but also, increasingly, with regard to the Charter of Fundamental Rights of the European Union and to the European Convention on Human Rights, as well as in a number of international agreements.24 Norm-based requirements for meeting positive obligations can be found not only in fundamental rights but also in the provisions concerning constitutional principles (e.g. Article 20 GG) and fundamental values (e.g. Article 2 of the Treaty on European Union).
25 To the extent that digital transformation is shaped by private entities, the enabling
state is tasked with protecting individual and public interests, including through law.
The state has the ability, as well as the obligation, to create suitable structures,
provide normative orientation for conduct, and, if necessary, set boundaries.
20 Nemitz (2018), pp. 2 et seq.
21 See Voßkuhle and Wischmeyer (2017), p. 90.
22 See Voßkuhle and Wischmeyer (2017), p. 90.
23 See Ruge (2004); Franzius (2009); Schulze-Fielitz (2012), pp. 896 et seq.
24 Fischer-Lescano (2014); Schliesky et al. (2014); Marauhn (2015); Harris et al. (2018), pp. 24–27; Marsch (2018), chapter 4.
27 Examples of how the law is capable of responding to non-legal changes can also be found in case-law, such as several IT-related innovations by the German Federal Constitutional Court in the area of fundamental rights. As early as 1983, the Court elaborated a 'fundamental right to informational self-determination' in response to the risks to the protection of the right of privacy that were associated with emerging digitalisation.25 In 2008 the Court took the innovative step of extending the reach of fundamental rights protection to the 'fundamental right to the guarantee of the confidentiality and integrity of information technology systems'.26 Although, owing to the subject matter of the dispute, this decision was directed at online searches of an individual's personal computer, the Court later held in 2016 that the protection afforded to information technology systems covers more than simply the computers used by individuals and also includes the networking of those computers with other computers, such as in connection with storage of data in the cloud.27 At the same time, it emphasised that data stored on external servers with a legitimate expectation of confidentiality also deserve protection. Protection is also granted where a user's movements on the internet are tracked. As a result, the use of AI associated with such networks may also fall within the protective scope of this fundamental right.
28 As is the case with the fundamental right to informational self-determination, the Court has understood this right—often referred to in the literature as the 'IT fundamental right'—as giving greater specificity to the constitutional guarantee of human dignity and the protection of the free development of personality (Articles 1(1) and 2(1) GG). Norms like Articles 1 and 2 of the German Basic Law, but also other fundamental rights, do more than simply obligate the state to refrain from placing restrictions on the rights of individuals; they also require the state to guarantee that individuals are protected against acts by others and to take positive action to safeguard human rights. This latter mandate is referred to as the 'third-party' or 'horizontal' effect of
25 Bundesverfassungsgericht 1 BvR 209, 269, 362, 420, 440, 484/83 'Volkszählung' (15 October 1983), BVerfGE 89, p. 1; see also, inter alia, Britz (2007).
26 Bundesverfassungsgericht 1 BvR 370, 595/07 'Online-Durchsuchungen' (27 February 2008), BVerfGE 124, p. 313; Luch (2011); Wehage (2013); Hauser (2015).
27 Bundesverfassungsgericht 1 BvR 966, 1140/09 'BKA-Gesetz' (20 April 2016), BVerfGE 141, pp. 264–265, 268 et seq., 303 et seq.
28 Dolderer (2000); Calliess (2006), margin nos. 5 et seq.; Schliesky et al. (2014); Knebel (2018).
29 Wischmeyer (2017) and Leisterer (2018).
31 System protection should not be confused with systemic protection. The latter uses the relevant technology to build into the technical system itself measures that independently safeguard protected interests, especially those of third parties.30 The objective here, in particular, is protection through technology design,31 including through default settings intended to increase protection.32 Such systemic protection has long been utilised as a means of data protection. A current example can be found in Article 25 of the EU General Data Protection Regulation (GDPR): data protection by design and by default. Article 32 GDPR contains additional requirements. However, the field to which data protection through design can be applied is considerably broader and also covers the use of AI. Discussions are also focusing on the extent to which the effectiveness not only of basic legal principles but also, to supplement them, of basic ethical principles can be ensured (or at least facilitated) through technology design.33
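At the level of code, 'data protection by default' can be pictured with a minimal sketch (the settings structure below is a hypothetical illustration, not a reading of Article 25 GDPR): the protective value is what a user gets without acting, and every data-hungry option requires a deliberate opt-in.

```python
# Minimal sketch of protection by default: a new account starts with the
# most protective settings; weakening them is an explicit, deliberate choice.
from dataclasses import dataclass

@dataclass
class AccountSettings:
    profile_public: bool = False       # default: profile not visible
    location_tracking: bool = False    # default: no tracking
    share_with_partners: bool = False  # default: no third-party transfer
    retention_days: int = 30           # default: shortest retention period

settings = AccountSettings()           # a new user is maximally protected
print(settings)

settings.share_with_partners = True    # opting in must be an explicit act
```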
30 Spiecker gen. Döhmann (2016), pp. 698 et seq.
31 Yeung (2008, 2017).
32 Hildebrandt (2017) and Baumgartner and Gausling (2017).
33 Winfield and Jirotka (2018); European Group on Ethics in Science and New Technologies (2018); Data Ethics Commission (2019).
34 Wischmeyer (2018), Part III 1-6, IV.
33 In view of the diversity of fields in which AI is used, this Chapter obviously cannot describe all possible instruments for achieving a legal impact on the development and use of AI. Instead, I will give several examples along with general remarks. Area-specific suggestions for legal approaches are also presented in the subsequent Chapters in this volume (particularly in Part II).
34 The proposition has been put forward above (Sect. 2) that in many cases it is not
enough to develop rules for AI that are detached from the contextual conditions of
the areas in which they are applied and, above all, from the specific ways in which
they are applied.35 Also conceivable are rules that are applicable across a number of
areas. In this regard, a starting point is offered by the types of norms that are applied
in the law of data protection (see Marsch). These norms are applicable to AI where
personal data are processed.36 Moreover, they can to some extent be used as a
template for rules designed to safeguard legally protected interests other than
privacy.
35 One instrument that can be used in nearly every field in which AI is deployed is the prospective impact assessment (see Article 35 GDPR). Certification by publicly accredited or government bodies, such as for particularly high-risk developments and/or possible uses of AI, can also be provided for (see Articles 42 and 43 GDPR). To the extent that, as is customarily the case, certification is voluntary (see Article 42(3) GDPR), it makes sense to create incentives for its uptake, such as by exempting from or limiting liability, e.g. for robotics under product liability law. In sensitive areas, however, certification can also be made obligatory by law.
36 Because further developments and the pace of software changes are often unpredictable, particularly in learning algorithmic systems, continual monitoring is also called for, as are retrospective impact assessments carried out through self-monitoring and/or outside monitoring. Such monitoring can be facilitated by duties to document software and software changes and, in the case of learning systems, the training programs (see Wischmeyer, paras 15, 48). It may also make sense to impose duties to label the utilised data and to keep logs about the application and use of training programmes, as well as reporting and information duties.37

35 See also Pagallo (2016), pp. 209 et seq.; Scherer (2016); Tutt (2017); Martini and Nink (2017). See also Veale et al. (2018).
36 One area of growing importance is the use of facial-recognition technologies, which employ AI extensively.
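One way to picture the documentation and logging duties sketched in para 36 is an audit record written after every training run, which a supervisory body could later inspect; the fields below are illustrative assumptions, not requirements drawn from any statute.

```python
# Minimal sketch: a training-run audit record supporting retrospective
# monitoring -- which data, which software version, which test result.
import hashlib
import json
from datetime import datetime, timezone

def training_audit_record(dataset: bytes, software_version: str,
                          test_accuracy: float) -> str:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "dataset_sha256": hashlib.sha256(dataset).hexdigest(),  # labels the data
        "software_version": software_version,                   # documents changes
        "test_accuracy": test_accuracy,                         # records the result
    }
    return json.dumps(record)

# One line appended to a log after each training run:
print(training_audit_record(b"...training data...", "model-v1.3", 0.91))
```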
37 Particularly in intelligent information technology systems, it is especially difficult to create measures that ensure appropriate transparency, accountability, responsibility, and, where appropriate, the ability to make revisions (see Sect. 6.3).38 Steps also have to be taken to ensure the continuous development of standards for evaluating trends, such as adapting ethical requirements in the face of newly emerging fields of application and risks, particularly with respect to the limits associated with recognising and controlling consequences.
38 Moreover, imperative (command and control) law may be indispensable, such as for preventing discrimination (see Tischbirek) and for safeguarding cybersecurity, which is particularly important for the future.39 Also conceivable are prohibitions or restrictions for special applications. German law already provides for these in some cases, such as for automated decisions by public authorities.40 However, it can be expected that the fields of application will expand considerably, particularly as e-government becomes more commonplace, and, above all, that new experiences will engender restrictions on applications.
39 In view of the risks and opportunities associated with AI, which extend far beyond the processing of personal data, it needs to be clarified whether it makes sense to assign monitoring responsibilities to the current data protection authorities. If so, their powers would have to be expanded, and they would need larger staffs with corresponding expertise. Ideally, however, an institution should be created at the German federal level or at the EU level that specialises particularly (but not only) in the monitoring of AI, such as a digital agency. For the U.S., Andrew Tutt (2017) has proposed the establishment of an authority with powers similar to those of the Food and Drug Administration. In addition to monitoring,41 such an institution should also be entrusted with developing standards (performance
37 On these possibilities, see e.g. (although specifically in relation to the protection of privacy) Leopoldina et al. (2018).
38 For more about these problem areas, please see the multifaceted considerations by Wischmeyer (2018) in his article on 'Regulation of Intelligent Systems'. Requiring further exploration are, in particular, his suggestions relating to the establishment of a 'collaborative architecture for reason-giving and control': Wischmeyer (2018), pp. 32 et seq. See also Wischmeyer, in this volume.
39 Wischmeyer (2017) and Beucher and Utzerath (2013).
40 Cf. sections 3a; 24 (1); 35a; 37 (2), (3), and (4); and 41 (2) sentence 2 of the German Administrative Procedure Act (Verwaltungsverfahrensgesetz, VwVfG); section 32a (2), No. 2 of the German Fiscal Code (Abgabenordnung, AO); section 31a of Book X of the German Social Code (Sozialgesetzbuch [SGB] X).
41 One way to do this is by certifying AI systems. Corresponding information is necessary for this purpose. See e.g. Scherer (2016), p. 397: 'Companies seeking certification of an AI system would have to disclose all technical information regarding the product, including (1) the complete source code; (2) a description of all hardware/software environments in which the AI has been tested; (3) how the AI performed in the testing environments; and (4) any other information pertinent to the safety of the AI.'
42 See Hoffmann-Riem (2016), pp. 430 et seq., with further references.
43 On dealing with uncertainty and lack of knowledge generally, see Hoffmann-Riem (2018), Part 5, with further references.
significant risks, particularly where, without being noticed, learning processes set
out in directions that cause undesired or even irreversible consequences.
44 It is presumably no coincidence that warnings are increasingly being raised today about the risks associated with the use of AI and that regulatory protection is being called for, including by individuals who have devoted their careers to the development of artificial intelligence and have used it extensively in their businesses, such as PayPal co-founder and Tesla owner Elon Musk, Microsoft co-founder Bill Gates, and Apple co-founder Steve Wozniak.44 The brilliant researcher Stephen Hawking shared this concern. While acknowledging the tremendous potential of AI, he called for increased focus on the issue of AI security.45 Very often, warnings about the consequences of AI relate not just to specific applications but also to the fundamental risk that AI could defy human control and develop destructive potentials for humanity as a whole.
45 One of many examples of risks involves how we should deal with brain transplants and other neural devices that use AI.46 The discussion of future risks focuses not just on the primarily ethical consequences for human development and for the concept of human intelligence and the way it functions. There are also fears about new forms of cybercrime, such as the hacking of pacemakers and other AI-controlled implants. For instance, Gasson and Koops state:

The consequences of attacks on human implants can be far greater to human life and health than is the case with classic cybercrime. Moreover, as implant technology develops further, it becomes difficult to draw an exact border between body and technology, and attacks not only affect the confidentiality, integrity and availability of computers and computer data, but also affect the integrity of the human body itself. The combination of network technologies with human bodies may well constitute a new step change in the evolution of cybercrime, thus making attacks on humans through implants a new generation of cybercrime.47
44 For references, see Scherer (2016), p. 355. On risks (also in relation to other consequences of IT), see the—sometimes alarmist—works by Bostrom (2014) and Tegmark (2017). See also Precht (2018) and Kulwin (2018). It is telling that Brad Smith (2017), President and Chief Legal Officer of Microsoft, has called for the creation of an international 'Digital Geneva Convention', which focuses mainly on cyberattacks and thus cybersecurity, but makes no mention of the issue of AI. See https://blogs.microsoft.com/on-the-issues/2017/02/14/need-digital-geneva-convention/.
45 Hawking (2018), pp. 209 et seq., 213 et seq.
46 See e.g. Wu and Goodman (2013).
47 See Gasson and Koops (2013), p. 276.
48 Cornils (2017), pp. 391 et seq.; Vesting (2017); Hoffmann-Riem (2018), pp. 36–37.
the infrastructures, and the business models utilised are not confined by any borders
or, in exceptional cases, have only limited regional borders, such as national borders.
They are often available on a transnational and, in particular, global basis. The same
applies to the services provided with digitalised technology, as well as their effects.
To the same extent, the use of AI also takes place mostly in unbordered spaces.
Large, globally positioned IT companies in particular are interested in operating as
far as possible with uniform structures that have a global or transnational reach. For
these companies, regulations that are set down in various national legal systems and
accordingly differ from one another constitute an impediment to the use of their
business models. As a result, they look for and exploit opportunities to thwart or
avoid such regulations.49
47 However, this does not mean that it is impossible to subject transnationally
operating companies to legal regulations with territorially limited applicability,
insofar as they operate on that territory. A recent example is Article 3(1) GDPR,
which specifies that the GDPR ‘applies to the processing of personal data in the
context of the activities of an establishment of a controller or a processor in the
Union, regardless of whether the processing takes place in the Union or not.’
Complementary regulations are contained in section 1 of the German Federal Data
Protection Act (Bundesdatenschutzgesetz, BDSG) (new version).
48 The relative insignificance of borders also relates to other dimensions discussed here. For instance, in the IT sector, the borders between hardware and software are blurring, and certain problems can be solved both in the hardware area and with software. Also, private and public communication are becoming blended to an ever-greater extent.50 Offline and online communication are increasingly interwoven—as on the Internet of Things—meaning that a new type of world, which some call the 'onlife world', is becoming the norm.51
49 The impact that the insignificance of borders has on legal regulation is also
evident in the fact that digitalisation affects nearly all areas of life, meaning that
requirements can be, or even may have to be, imposed on the use of AI both across
the board and, if necessary, for each area specifically.
50 To the extent that use is made of self-learning algorithmic systems that enhance
their software independently, tackle new problems, and develop solutions to them,
these systems transcend the limits of the field of application or the abilities for
problem-solving that were placed on them by their initial programming.
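The point can be illustrated with a minimal sketch of incremental learning (the data are synthetic, and the shift in the data is contrived for the purpose): a model updated in service can come to answer the same question differently than it did when first deployed.

```python
# Minimal sketch: a model updated after deployment drifts away from the
# behaviour fixed by its initial training (synthetic, illustrative data).
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)

X0 = rng.normal(size=(200, 2))
y0 = (X0[:, 0] > 0).astype(int)             # initial rule hinges on feature 0
model.partial_fit(X0, y0, classes=[0, 1])   # behaviour at deployment time

probe = np.array([[1.0, -1.0]])
print("at deployment:", model.predict(probe))

for _ in range(50):                          # later data follow a new pattern
    X1 = rng.normal(size=(200, 2))
    y1 = (X1[:, 1] > 0).astype(int)          # the rule silently shifts to feature 1
    model.partial_fit(X1, y1)

print("after updates:", model.predict(probe))  # same input, different answer
```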
49 Nemitz (2018).
50 See, inter alia, Schmidt (2017), focusing on social media.
51 Floridi (2015); Hildebrandt (2015), pp. 41 et seq., pp. 77 et seq.
51 Although digital transformation has created new spaces for generating, capturing, and exploiting information that previously were essentially inaccessible, technology design and other measures of secrecy block access to the approaches employed and to the results.52 Lack of transparency can also result from the collaboration involved in developing various program components and creating hardware. This is all the more the case where there is insufficient knowledge about the 'building blocks' originating from the other actors involved and about how these components function. Where learning algorithms are used, not even the programmers involved in the creation of the algorithms know the programs that are being modified by automatic learning. Even though it is possible to overcome the black-box character of information technology systems,53 such as through reverse engineering, this normally presupposes a high level of expertise and the use of complex procedures. The barriers are significant (see Wischmeyer).
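One family of techniques behind such expertise is black-box probing, shown here as a minimal sketch (the scoring function merely stands in for a system whose internals are inaccessible; nothing here refers to any concrete tool): each input is varied in turn and the output observed, a laborious and only partial reconstruction.

```python
# Minimal sketch of probing a black box from outside: vary one input at a
# time and watch the output; the internals of the system remain unknown.
def opaque_system(income: float, age: float, region: float) -> float:
    # stands in for a system whose code the observer cannot inspect
    return 0.7 * income - 0.1 * age + 0.0 * region

baseline = {"income": 50.0, "age": 40.0, "region": 3.0}

for feature in baseline:
    nudged = dict(baseline)
    nudged[feature] *= 1.10                   # perturb this feature by 10%
    delta = opaque_system(**nudged) - opaque_system(**baseline)
    print(f"{feature}: output shifts by {delta:+.2f}")
# Large shifts hint at what drives the decision -- consistent with the
# 'significant barriers' noted above, the picture stays incomplete.
```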
52 Examples of legal obstacles to transparency include cases where algorithms are recognised as business secrets54 or as official secrets (section 32a (2), No. 2 of the Fiscal Code—Abgabenordnung) (see Braun Binder).
Not only for users but also for supervisory authorities and the general public, it is 53
important that the treatment of digital technologies, including the use of AI, be
generally comprehensible and controllable. In this regard, sufficient transparency is a
prerequisite for creating not only trust but also accountability and, in some cases,
liability.55
54 Furthermore, the application of law and its outcome are made considerably more difficult by the concentration of power in the IT field.56 In this regard, the development of AI is becoming increasingly dominated by large IT companies and specialised firms associated with them. Powerful IT companies have managed to keep the development of software and internet-based services, as well as their output, largely unregulated. Although antitrust law, which is commonly used to limit economic power, is applicable to companies with IT business areas,57 national and EU antitrust law is limited both with respect to its territorial applicability and in
52 See, inter alia, Kroll (2018) and Wischmeyer (2018).
53 See Leopoldina et al. (2018), p. 50.
54 See e.g. BGHZ 200, 38, for credit scoring by the German private credit bureau Schufa.
55 Wischmeyer (2018) and Hoeren and Niehoff (2018).
56 Welfens et al. (2010); Rolf (2018); Nemitz (2018), pp. 2 et seq.
57 And has already been so applied—see Körber (2017). For references concerning legal actions, particularly against Google, see Schneider (2018), pp. 156–159.