
Thomas Wischmeyer • Timo Rademacher
Editors

Regulating Artificial Intelligence

Editors

Thomas Wischmeyer
Faculty of Law
University of Bielefeld
Bielefeld, Germany

Timo Rademacher
Faculty of Law
University of Hannover
Hannover, Germany

ISBN 978-3-030-32360-8 ISBN 978-3-030-32361-5 (eBook)


https://doi.org/10.1007/978-3-030-32361-5

© Springer Nature Switzerland AG 2020


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the
material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology
now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface: Good Artificial Intelligence

Mission and Methodology

Policy and business decisions with broad social impact are increasingly based on
machine learning-based technology, today commonly referred to as artificial
intelligence (AI). At the same time, AI technology is becoming more and more
complex and difficult to understand, making it harder to control whether it is used
in accordance with existing laws. Given these circumstances, even tech enthusiasts
call for stricter regulation of AI. Regulators, too, are stepping in and have begun to
pass corresponding laws, including the right not to be subject to a decision based
solely on automated processing in Article 22 of the EU's General Data Protection
Regulation (GDPR), Section 140(d)(6) and (s) of the 2018 California Consumer
Privacy Act on safeguards and restrictions concerning commercial and
non-commercial 'research with personal information', or the 2017 amendments to
the German Cartels Act and the German Administrative Procedure Act.
While the belief that something needs to be done about AI is widely shared, there
is far less clarity about what exactly can or should be done and what effective
regulation might look like. Moreover, the discussion on AI regulation sometimes
focuses only on worst-case scenarios based on specific instances of technical
malfunction or human misuse of AI-based systems. Regulations premised on
well-thought-out strategies and striving to balance the opportunities and risks of AI
technologies (cf. Hoffmann-Riem) are still largely missing.
Against this backdrop, this book analyses the factual and legal challenges that the
deployment of AI poses for individuals and society. The contributions develop
regulatory recommendations that do not curb the technology's potential while
preserving the accountability, legitimacy, and transparency of its use. To achieve
this aim, the authors all follow an approach that might be described as 'threefold
contextualization': Firstly, the analyses and propositions are norm-based, i.e. they
consider and build on the statutory and constitutional regimes shaping or restricting
the design and use of AI. Secondly, it is important to bear in mind that AI
technologies, to put it briefly, provide instruments for addressing real-life problems
which until now humans—officers, doctors, drivers, etc.—had to solve. Hence, it
does not suffice to ask whether or not AI 'works well' or 'is dangerous'. Instead, it
is necessary to compare the characteristics of the new technology with the
corresponding human actions it replaces or complements on a case-by-case basis.
The question asked in the following chapters is therefore whether AI actually
'works better' or is 'more dangerous' than its human counterpart from the point
of view of the respective legal framework. Thirdly, this book and the selection of its
authors reflect the necessity of not only having an interdisciplinary understanding
but also taking a decidedly intradisciplinary approach in developing a legal
perspective on AI regulation. This presupposes that private and public law scholars,
legal theoreticians, and scholars of economic law approach the subject in the spirit
of mutual cooperation.
The debate on AI regulation is global. The influence of EU law has been growing
since the coming into force of the GDPR in 2018. Nonetheless, reflecting the
technological leadership of the United States in this field, the scholarly discourse
mostly focuses on that country. Intentionally or not, scholars frequently adopt a
normative and conceptual framework tailored to the US constitutional and
administrative tradition, e.g. by focusing on AI 'accountability'. This book's authors
all have German or Swiss legal backgrounds and are thus well acquainted with EU
law, bringing a different legal tradition to the global discourse. This tradition is
shaped by many decades of comparatively strict and comprehensive data protection
legislation and is correspondingly diverse and 'rich'. Accordingly, the following
chapters will, when it comes to describing existing AI laws and regulations,
predominantly refer to German and EU law. Nevertheless, these references merely
serve to illustrate which challenges AI poses for regulators across the globe.

Artificial Intelligence

From a technological point of view, there is no such thing as 'the' AI. While most
attention is currently paid to techniques extracting information from data through
'machine learning', AI research actually encompasses many different sub-fields and
methodologies.1 Similarly, machine learning is not a monolithic concept, but
comprises a variety of techniques.2 These range from traditional linear regression
over support vector machines and decision tree algorithms to various types of neural
networks.3 Moreover, most machine learning-based systems are a 'constellation' of
processes and technologies rather than one well-defined entity, which makes it even
more difficult to determine the scope of the meaning of AI.4 As a field of research,
AI originated in the mid-1950s—the Dartmouth Summer Research Project is often
mentioned in this context. Today, its scholars mostly focus on a set of technologies
with the ability to 'process potentially very large and heterogeneous data sets using
complex methods modelled on human intelligence to arrive at a result which may be
used in automated applications'.5

1 Russell and Norvig (2010); Kaplan (2016); Wischmeyer (2018), pp. 9 et seq.
2 The terms machine learning and AI are used synonymously in this volume.
3 Cf. EU High Level Expert Group (2018), pp. 4–5.
4 Kaye (2018), para 3. Cf. also Ananny and Crawford (2018), p. 983: 'An algorithmic system is not just code and data but an assemblage of human and non-human actors—of "institutionally situated code, practices, and norms with the power to create, sustain, and signify relationships among people and data through minimally observable, semiautonomous action" [. . .]. This requires going beyond "algorithms as fetishized objects" to take better account of the human scenes where algorithms, code, and platforms intersect [. . .].'
5 Datenethikkommission (2018).
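To illustrate how varied these techniques are in practice, consider a minimal sketch—assuming Python with the open-source scikit-learn library and a synthetic data set standing in for real records—that fits four of the model families named above to the same classification task (a logistic regression serves as the linear model here):

    # Illustrative sketch: the same task solved with several machine learning
    # techniques. The data are synthetic; only the model family changes.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    models = {
        "linear model": LogisticRegression(max_iter=1000),
        "support vector machine": SVC(),
        "decision tree": DecisionTreeClassifier(),
        "neural network": MLPClassifier(max_iter=2000),
    }
    for name, model in models.items():
        model.fit(X_train, y_train)   # 'learning' = fitting parameters to data
        print(f"{name}: accuracy {model.score(X_test, y_test):.2f}")

Each model family arrives at its result in a technically different way, which underscores why 'machine learning' is best treated as an umbrella term rather than a single method.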
Against this backdrop, some scholars advocate abandoning the concept of AI and
propose instead to address either all 'algorithmically controlled, automated
decision-making or decision support systems'6 or, more narrowly, machine
learning-based systems.7 For some regulatory challenges, such as the societal
impact of a decision-making or decision support system or its discriminatory
potential, the specific design of the system or of the algorithms it uses is indeed
only of secondary interest. In such cases, the contributions in this volume consider
the regulatory implications of 'traditional' programs as well (cf., e.g. Krönke, paras
1 et seq.; Wischmeyer, para 11). Nevertheless, the most interesting challenges arise
where advanced machine learning-based algorithms are deployed which, at least
from the perspective of the external observer, share important characteristics with
human decision-making processes. This raises important issues with regard to the
potential liability and culpability of such systems (see Schirmer). At the same time,
from the perspective of those affected by such decision-making or decision support
systems, the increased opacity, the new capacities, or, simply, the level of
uncertainty injected into society through the use of such systems leads to various
new challenges for law and regulation.

6 Algorithm Watch and Bertelsmann Stiftung (2018), p. 9.
7 Wischmeyer (2018), p. 3.

Structure and Content

This handbook starts with an extensive introduction to the topic by Wolfgang
Hoffmann-Riem. The introduction takes on a point that seems trivial at first glance
but turns out to be most relevant and challenging: you can only regulate what you
can regulate! In this spirit, Hoffmann-Riem offers an in-depth analysis of why
establishing a legitimate and effective governance of AI challenges the regulatory
capacities of the law and its institutional architecture as we know it. He goes on to
offer a taxonomy of innovative regulatory approaches to meet these challenges.
The following chapters address the Foundations of AI Regulation and focus on
features most AI systems have in common. They ask how these features relate to the
legal frameworks for data-driven technologies which already exist in national and
supra-national law. Among the features shared by most, if not all, AI technologies,
our contributors identify the following:
Firstly, the dependency on processing vast amounts of personal data 'activates'
EU data protection laws and consequently reduces the operational leeway of public
and private developers and users of AI technologies significantly. In his
contribution, Nikolaus Marsch identifies options for an interpretation of the
European fundamental right to data protection that would offer national and EU
legislators alike more leeway to balance out the chances and risks associated with
AI systems. The threats AI systems pose to human autonomy and the corresponding
right to individual self-determination are then described by Christian Ernst, using
the examples of health insurance, creditworthiness scores, and the Chinese Social
Credit System. Thomas Wischmeyer critically examines the often-cited lack of
transparency of AI-based decisions and predictions (the 'black box' phenomenon),
which seems to frustrate our expectation that decision-making procedures can be
anticipated, reviewed, and understood. He advises policy makers to redirect their
focus, at least to some extent, from the individuals affected by the specific use of an
AI-based system towards creating institutional bodies and frameworks that can
provide effective control of the system. Alexander Tischbirek analyses most AI
technologies' heavy reliance on statistical methods, which reveal correlations,
patterns, and probabilities instead of causation and reason and are thus prone to
perpetuating discriminatory practices. He—perhaps counterintuitively—highlights
that, in order to reveal such biases and distortions, it might be necessary to gather
and store more personal data, rather than less. Finally, Jan-Erik Schirmer asks the
'million-dollar question', i.e. whether AI systems should be treated as legal persons
or as mere objects, answering that question with a straightforward 'a little bit of
each'.
While the design and use of AI-based systems thus raise a number of general
questions, the success of AI regulation is highly dependent on the specific field of
application. Therefore, regulatory proposals for the Governance of AI must consider
the concrete factual and legal setting in which the technology is to be deployed. To
this end, the chapters in Part II examine in detail several industry and practice
sectors in which AI is, in our view, shaping decision-making processes to an
ever-growing extent: social media (by Christoph Krönke), legal tech (by Gabriele
Buchholtz), financial markets (by Jakob Schemmel), health care (by Sarah Jabri and
Fruzsina Molnár-Gábor), and competition law (by Moritz Hennemann). The
analyses reveal that in most of these settings AI does not only present itself as the
object of regulation but often simultaneously as a potential instrument to apply
and/or enforce regulation. Therefore, most chapters in Part II also include
considerations regarding Governance through AI. Other chapters, namely those on
administrative decision-making under uncertainty (by Yoan Hermstrüwer), law
enforcement (by Timo Rademacher), public administration (by Christian Djeffal),
and AI and taxation (by Nadja Braun Binder), actually focus on AI technologies that
are supposed to apply and/or support the application of the law.

Timo Rademacher
University of Hannover, Hannover, Germany

Thomas Wischmeyer
University of Bielefeld, Bielefeld, Germany

References

Algorithm Watch and Bertelsmann Stiftung (2018) Automating society – taking stock of automated decision-making in the EU. www.bertelsmann-stiftung.de/de/publikationen/publikation/did/automating-society. Accessed 6 Mar 2019
Ananny M, Crawford K (2018) Seeing without knowing: limitations of the trans-
parency ideal and its application to algorithmic accountability. New Media Soc
20(3):973–989
Datenethikkommission (2018) Empfehlungen der Datenethikkommission für die Strategie Künstliche Intelligenz der Bundesregierung. www.bmi.bund.de/SharedDocs/downloads/DE/veroeffentlichungen/2018/empfehlungen-datenethikkommission.pdf?__blob=publicationFile&v=1. Accessed 6 Mar 2019
EU High Level Expert Group on Artificial Intelligence (2018) A definition of AI:
main capabilities and scientific disciplines. EU Commission. https://ec.europa.eu/
digital-single-market/en/news/definition-artificial-intelligence-main-capabilities-
and-scientific-disciplines. Accessed 6 Mar 2019
Kaplan J (2016) Artificial intelligence. Oxford University Press, Oxford
Kaye D (2018) Report of the Special Rapporteur on the promotion and protection of
the right to freedom of opinion and expression. 29 August 2018. United Nations
A/73/348
Russell S, Norvig P (2010) Artificial intelligence, 3rd edn. Addison Wesley, Boston
Wischmeyer T (2018) Regulierung intelligenter Systeme. Archiv des öffentlichen Rechts 143:1–66
Contents

Artificial Intelligence as a Challenge for Law and Regulation . . . . . . . . . 1
Wolfgang Hoffmann-Riem

Part I Foundations of Artificial Intelligence Regulation


Artificial Intelligence and the Fundamental Right to Data Protection:
Opening the Door for Technological Innovation and Innovative
Protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Nikolaus Marsch
Artificial Intelligence and Autonomy: Self-Determination in the Age of
Automated Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Christian Ernst
Artificial Intelligence and Transparency: Opening the Black Box . . . . . . 75
Thomas Wischmeyer
Artificial Intelligence and Discrimination: Discriminating Against
Discriminatory Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Alexander Tischbirek
Artificial Intelligence and Legal Personality: Introducing
“Teilrechtsfähigkeit”: A Partial Legal Status Made in Germany . . . . . . 123
Jan-Erik Schirmer

Part II Governance of and Through Artificial Intelligence


Artificial Intelligence and Social Media . . . . . . . . . . . . . . . . . . . . . . . . . 145
Christoph Krönke
Artificial Intelligence and Legal Tech: Challenges to the Rule
of Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Gabriele Buchholtz


Artificial Intelligence and Administrative Decisions Under Uncertainty . . . . . . . . . . . . . . . . . . . . . . 199
Yoan Hermstrüwer
Artificial Intelligence and Law Enforcement . . . . . . . . . . . . . . . . . . . . . . 225
Timo Rademacher
Artificial Intelligence and the Financial Markets: Business as Usual? . . . 255
Jakob Schemmel
Artificial Intelligence and Public Governance: Normative Guidelines
for Artificial Intelligence in Government and Public Administration . . . 277
Christian Djeffal
Artificial Intelligence and Taxation: Risk Management in Fully
Automated Taxation Procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
Nadja Braun Binder
Artificial Intelligence and Healthcare: Products and Procedures . . . . . . 307
Sarah Jabri
Artificial Intelligence in Healthcare: Doctors, Patients and Liabilities . . . 337
Fruzsina Molnár-Gábor
Artificial Intelligence and Competition Law . . . . . . . . . . . . . . . . . . . . . . 361
Moritz Hennemann
Contributors

Nadja Braun Binder Faculty of Law, University of Basel, Basel, Switzerland


Gabriele Buchholtz Bucerius Law School, Hamburg, Germany
Christian Djeffal Munich Center for Technology in Society, Technical University
of Munich, Munich, Germany
Christian Ernst Helmut-Schmidt-Universität/Universität der Bundeswehr Hamburg,
Hamburg, Germany
Moritz Hennemann Institute for Media and Information Law, Department I:
Private Law, Albert-Ludwigs-Universität Freiburg, Freiburg, Germany
Yoan Hermstrüwer Max Planck Institute for Research on Collective Goods, Bonn,
Germany
Transatlantic Technology Law Forum, Stanford Law School, Stanford, CA, USA
Wolfgang Hoffmann-Riem Bucerius Law School, Hamburg, Germany
Sarah Jabri Department of Law, University of Constance, Constance, Germany
Christoph Krönke Institute for Public Policy and Law, Ludwig Maximilian
University of Munich, Munich, Germany
Nikolaus Marsch Rechtswissenschaftliche Fakultät, Saarland University,
Saarbrücken, Germany
Fruzsina Molnár-Gábor Heidelberg Academy of Sciences and Humanities,
BioQuant Centre, Heidelberg, Germany
Timo Rademacher Faculty of Law, University of Hannover, Hannover, Germany


Jakob Schemmel Institute for Staatswissenschaft and Philosophy of Law, Albert-Ludwigs-University Freiburg, Freiburg, Germany
Jan-Erik Schirmer Faculty of Law, Humboldt-Universität zu Berlin, Berlin,
Germany
Alexander Tischbirek Faculty of Law, Humboldt-Universität zu Berlin, Berlin,
Germany
Thomas Wischmeyer Faculty of Law, University of Bielefeld, Bielefeld, Germany
Artificial Intelligence as a Challenge
for Law and Regulation

Wolfgang Hoffmann-Riem

Contents
1 Fields of Application for Artificial Intelligence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
2 Levels of Impact . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
3 Legal Aspects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
4 Modes of Governance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
5 Exercising the State’s Enabling Responsibility Through Measures for Good Digital
Governance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
5.1 Displacements in the Responsibility of Public and Private Actors . . . . . . . . . . . . . . . . . . 7
5.2 Innovative Case-law as an Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
5.3 System Protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
5.4 Systemic Protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
5.5 Regulatory Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
5.6 Regarding Regulatory Possibilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
6 Obstacles to the Effective Application of Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
6.1 Openness to Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
6.2 Insignificance of Borders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
6.3 Lack of Transparency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
6.4 Concentration of Power . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
6.5 Escaping Legal Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
7 Types of Rules and Regulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
7.1 Self-structuring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
7.2 Self-imposed Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
7.3 Company Self-regulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
7.4 Regulated Self-regulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
7.5 Hybrid Regulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
7.6 Regulation by Public Authorities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
7.7 Techno-regulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
8 Replacement or Supplementation of Legal Measures with Extra-Legal, Particularly Ethical
Standards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
9 On the Necessity of Transnational Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

W. Hoffmann-Riem (*)
Bucerius Law School, Hamburg, Germany

© Springer Nature Switzerland AG 2020


T. Wischmeyer, T. Rademacher (eds.), Regulating Artificial Intelligence,
https://doi.org/10.1007/978-3-030-32361-5_1

Abstract The Introduction begins by providing examples of the fields in which AI
is used, along with the varying impact that this has on society. It focuses on the
challenges that AI poses when it comes to setting and applying law, particularly in
relation to legal rules that seek to preserve the opportunities associated with AI while
avoiding or at least minimising the potential risks. The law must aim to ensure good
digital governance, both with respect to the development of algorithmic systems
generally and with respect to the use of AI specifically. Particularly formidable
are the challenges associated with regulating the use of learning algorithms, such as
in the case of machine learning. A great difficulty in this regard is ensuring
transparency, accountability, responsibility, and the ability to make revisions, as
well as preventing hidden discrimination. The Chapter explores the types of rules
and regulations that are available. At the same time, it emphasises that it is not
enough to trust that companies that use AI will adhere to ethical principles. Rather,
supplementary legal rules are indispensable, including in the areas examined in the
Chapter, which are mainly characterised by company self-regulation. The
Chapter concludes by stressing the need for transnational agreements and
institutions.

1 Fields of Application for Artificial Intelligence

The ongoing process of digital transformation1 is being accomplished in part with
the use of artificial intelligence (AI), an interdisciplinary technology that aims to use
large data sets (Big Data), suitable computing power, and specific analytical and
decision-making procedures in order to enable computers to accomplish tasks that
approximate human abilities and even exceed them in certain respects.2
AI is used, for example, in search engines, in communication platforms and
robots (see Krönke), in intelligent traffic-control systems, for automated
administrative or judicial decision-making (see Buchholtz, Djeffal, Hermstrüwer),
in automated vehicle-assistance systems, in medical diagnostics and therapy (see
Jabri, Molnár-Gábor), in smart homes, and in cyberphysical production systems
(Industry 4.0), as well as in the military field. The expansion of algorithm-based
analytical and decision-making systems that work with AI also facilitates not only
new forms of surveillance and of controlling behaviour (see Rademacher),3 but also
new kinds of criminal activity.4

1 The literature on digital transformation is highly diverse. See, inter alia, Cole (2017), Pfliegl and Seibt (2017), Rolf (2018), Kolany-Raiser et al. (2018) and Precht (2018). For an illustration of the diversity of the issues this raises, see the articles in No. 10 (2017) of Elektrotechnik & Informationstechnik, pp. 323–88.
2 For an introduction to AI: Russell and Norvig (2012), Kaplan (2016), Lenzen (2018) and Misselhorn (2018).
AI is—currently—dominated by techniques of machine learning. The term refers
to computer programs that are able to learn from records of past conduct.5 Machine
learning is used for such purposes as identifying patterns, evaluating and classifying
images, translating language in texts, and automatically generating rough audio and
video cuts (e.g. 'robot journalism'). Even more advanced applications of AI,
sometimes referred to as 'Deep Learning',6 are also possible. These are IT systems
that, by using neural networks, are capable of learning on their own how to enhance
digital programs created by humans and thus of evolving independently of human
programming.
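What 'learning from records of past conduct' means can be made concrete with a minimal sketch, assuming Python with the scikit-learn library and entirely synthetic records; the features, labels, and threshold are illustrative assumptions:

    # Illustrative sketch: a small neural network is fitted to synthetic
    # historical records and then applied to a new case.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    past_conduct = rng.normal(size=(200, 4))   # 200 past cases, 4 recorded features
    labels = (past_conduct[:, 0] + past_conduct[:, 1] > 0).astype(int)  # past outcomes

    model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(past_conduct, labels)            # behaviour is induced from the records,
                                               # not hand-coded by a programmer
    new_case = rng.normal(size=(1, 4))
    print(model.predict(new_case))             # automated classification of a new case

The decisive point is that the program's behaviour is induced from the records rather than explicitly coded, which is what distinguishes learning systems from conventionally programmed ones.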
The expansion of AI's capabilities and the tasks for which it can be used is
associated with both risks and opportunities. The following will look at the
challenges that AI poses for law and regulation.7

2 Levels of Impact

Given the pervasiveness of digitalisation in many areas of society, it would be too
narrow to limit remarks about the role of law and about regulatory options to
individual aspects, such as only to the direct treatment of AI in and of itself. AI is
one of many elements associated with the use of intelligent IT systems. Its
significance can vary with regard to the type of processing and the impact on
actions. Accordingly, each area of the legal system faces different challenges, which
call not only for overarching rules but also in many cases for area-specific responses.
The legal treatment of AI must overcome the still dominant view that associates
AI primarily with surveillance. The same applies to the law of data protection, which
has long focused on the regulation of digital interactions. It has prioritised the
treatment of personal information and, in particular, the protection of privacy.8
Admittedly, data protection remains significant also with respect to the use of AI,
insofar as personal information is processed with it. However, a variety of other data
are also exploited, including information that is stripped of its personal connection
through anonymisation and data that lack any past or present personal connection,
such as machine-generated data in the area of Industry 4.0.9 In addition to data
protection law, many other legal areas may be relevant in dealing with AI and the
manifold ways in which it can be used, such as telecommunications law, competition
law (see Hennemann), the law of the protection of intellectual property, and liability
law (see Schirmer), particularly product liability law. Also important is law that is
specific to the relevant field of application, such as medical law (see Jabri), financial
markets law (see Schemmel), and road traffic law.

3 Hoffmann-Riem (2017).
4 On the latter: Bishop (2008) and Müller and Guido (2017).
5 Surden (2014), p. 89.
6 See e.g. Goodfellow et al. (2016).
7 For insight into the variety of challenges and proposed approaches for solving them, see Jakobs (2016), Pieper (2018), Bundesnetzagentur (2017), Eidenmüller (2017), Castilla and Elman (2017), BITKOM (2017), Schneider (2018), Djeffal (2018), Bafin (2018) and Bundesregierung (2018).
8 See, inter alia, Roßnagel (2003, 2017) and Simitis et al. (2019).
Particularly important for the realisation of individual and public interests are the
effects associated with the abilities to use complex IT systems in various areas of
society. Therefore, the risks and opportunities associated with AI and its
applications require a broad-based analysis. In the process, the view should not be
confined to services provided directly with digital technologies using AI (i.e. the
output). Also important is the impact that the use of complex IT systems has on
those to whom decisions are addressed or on affected third parties (i.e. impact as
micro-effects). Moreover, it may be appropriate to identify the farther-reaching,
longer-term impact on the relevant areas of society or on society as a whole and to
determine the extent to which such impact is significant for law and regulation
(i.e. outcome as macro-effects).
This point is illustrated by the fact that a number of new digitally based services
have an impact not only on those to whom such services are addressed but also on
third parties, as well as on the viability of social sub-systems. For instance,
digitalisation and the use of AI provide substantial abilities to influence lifestyles,
experiences, cultural orientations, attentions, and civic values, with potential effects
on private life, the educational system, the development of public opinion, and
political decision-making processes (see Krönke).10

In addition, specific (remote) effects can be observed in various sub-areas of
society. For example, robotics, which is being used to increase efficiency and save
costs in production processes, can massively alter the labour market, including
working conditions. The same can be expected from the growing use of Legal
Technology in the field of legal services (see Buchholtz). New sales channels for
goods, which can be purchased on platforms like Amazon, also alter markets, such as
retail markets, and this may have an impact on the urban availability of businesses
and service providers and thus on the nature of social interaction (see Hennemann).
The brokering of lodging through Airbnb has an impact on the availability of long-
term residential housing, as well as on the hotel industry. The algorithmic control of
dealings on financial markets can lead to unpredictable developments, such as
collapsing or skyrocketing prices, etc. (see Schemmel).
When information technology systems that employ state-of-the-art infrastructures
and the latest technologies, including highly developed AI, are used for purposes of
wide-ranging social engineering or to control the economic and state order and
individual and social behaviour, this has particularly far-reaching significance for
all three levels of effect mentioned (output, outcome, and impact). This is the
direction in which trends are currently headed in China. Commercially oriented
companies—primarily, but not limited to, such market-dominant IT companies as
the Alibaba Group (inter alia, various trading platforms and the widespread online
payment system Alipay) or Tencent Holdings (inter alia, social networks, news
services, online games)—are working closely with state institutions and the
Communist Party in order to extensively collect data and link them for a variety of
analyses. The aim is to optimise market processes, align the social behaviour of
people with specific values (such as honesty, reliability, integrity, cleanliness,
obeying the law, responsibility in the family, etc.), and ensure state and social
stability. China is in the process of developing a comprehensive social scoring
system/social credit system (currently being pilot tested, but soon to be applied in
wide areas of China).11 It would be short-sighted to analyse the development of this
system primarily from the aspect of the surveillance and suppression of people. Its
objectives are much more extensive.12

9 See Sattler (2017).
10 See e.g. Latzer et al. (2016), p. 395.
The aim of this Introduction cannot and should not be to examine and evaluate the
Chinese social credit system. I mention it only to illustrate the potentials that can be
unleashed with the new opportunities for using information technology. This
Chapter is limited to discussing the challenges facing law and regulation under the
conditions currently prevailing in Germany and the EU.

3 Legal Aspects

Digital technologies, including AI, can have desired or undesired effects from an
ethical, social, or economic perspective. Depending on the result of such an
assessment, one important issue is whether the creation and/or use of AI requires a
legal framework and, in particular, regulatory boundaries in order to promote
individual and public interests and to protect against negative effects.

It goes without saying that when digital technologies are used, all relevant norms
in the areas affected are generally applicable, such as those of national law—in
Germany, civil, criminal and public law, as well as their related areas—and of
transnational and international law, including EU law. Such laws remain applicable
without requiring any express nexus to digitalisation. But it needs to be asked
whether and to what extent these laws, which largely relate to the conditions of
the 'analog world', meet the requirements associated with digitalisation and, in
particular, with AI or instead need to be modified and supplemented.
Another question is where such new or newly created laws that relate or are
relatable to digitalisation should be located within the overall context of the legal
system. It is well known that individual legal norms are linked in systematic terms
with other parts of the legal system. In addition, in many cases they are embedded in
complex regulatory structures—in Germany often called 'Regelungsstrukturen'.13
This term comprises the relevant norms and specialised personnel used for problem-
solving and the formal and informal procedures of interaction that can be utilised for
specifying the law and applying it. Also relevant are the resources (such as time,
money, and expertise) and forms of action available in organisations, as well as,
where necessary, the opportunities and measures for collaboration and networking
between various actors, both public and private. Such regulatory structures can
exhibit significant complexity in multilevel systems, such as those in the EU.

11 See Chen and Cheung (2017), Creemers (2018) and Dai (2018).
12 Creemers (2018) and Dai (2018).

4 Modes of Governance

The creation of law and, in particular, of measures for regulation by public
authorities must be tailored to the modes selected to solve the problem in a given
case (the 'modes of governance': market, competition, negotiation, networking,
contract, and digital control).14 How do these modes and their specific configuration
help to achieve socially desirable goals and avoid undesired effects? Appropriate
standards are needed in order to determine what is desired. They include, in
particular, fundamental constitutional values (including democracy, rule of law, and
social justice; see Article 20 of the German Basic Law (Grundgesetz, GG)),
protection of the freedom to develop economically, culturally, politically, and the
like, prevention of manipulation and discrimination, and much more. Also
particularly important are the principles, objectives, and values enshrined in the
Treaty on European Union and in the Charter of Fundamental Rights of the
European Union, as well as in other EU legal acts.
One challenge consists of ensuring good governance in connection with the
development of algorithmic systems—'governance of algorithms'15—and their
application—'governance by algorithms'.16 Luciano Floridi describes 'governance
of the digital' as follows:

Digital Governance is the practice of establishing and implementing policies, procedures and
standards for the proper development, use and management of the infosphere. It is also a
matter of convention and good coordination, sometimes neither moral nor immoral, neither
legal nor illegal. For example, through digital governance, a government agency or a
company may (i) determine and control processes and methods used by data stewards and
data custodians in order to improve the data quality, reliability, access, security and
availability of its services; and (ii) devise effective procedures for decision-making and for
the identification of accountabilities with respect to data-related processes.17

13 See Hoffmann-Riem (2016), pp. 9–12.
14 On governance generally, see Benz and Dose (2010) and Schuppert (2011).
15 Saurwein (2015).
16 Just and Latzer (2016) and Latzer et al. (2016).
17 See Floridi (2018).

The area of good governance also includes observing ethical requirements and
ensuring compliance.18

One of several examples of important standards for configuring AI is a list created
by the European Group on Ethics in Science and New Technologies, an entity
organised by the European Commission, in its Statement on Ethics of Artificial
Intelligence, Robotics and 'Autonomous' Systems: (a) human dignity;
(b) autonomy; (c) responsibility; (d) justice, equity, and solidarity; (e) democracy;
(f) rule of law and accountability; (g) security, safety, bodily and mental integrity;
(h) data protection and privacy; and (i) sustainability.19 Even though the group
placed such standards within the field of ethics, this does not alter the fact that
they also have substantial legal relevance. This highlights the interaction frequently
seen between law and ethics. Law also has ethical foundations, and ethical principles
are shaped in part through law (see Sect. 8).

There are many ways to ensure good governance. In this regard, the measures do
not need to take the form of written rules. Also important are, for example, technical
approaches, such as the chosen technical design (see Sect. 5.4).

5 Exercising the State's Enabling Responsibility Through Measures for Good Digital Governance

However, good governance does not come about on its own. Where the focus is
primarily on law, as in this Chapter, the key aspects are the legal and, moreover,
extra-legal (e.g. ethical or moral) requirements, as well as the response by those to
whom the requirements are addressed, such as their willingness to comply with
them. The tasks of the state include creating or modifying law that facilitates and
stimulates good digital governance.

5.1 Displacements in the Responsibility of Public and Private Actors

In this regard, the digital transformation is confronted by a realignment of the
relationship between private and public law that began much earlier, namely as a
result of, in particular, measures of deregulation and privatisation. In recent decades,
there has been a significant shift in responsibility to private actors, especially in the
fields of telecommunications and information technology and related services. This
includes the business fields of the large IT firms, particularly the global Big Five:
Alphabet/Google, Facebook, Amazon, Microsoft, and Apple. They mainly act
according to self-created guidelines that are set and enforced unilaterally in most
cases, including where third parties are concerned, such as the users of their
services.20

18 Floridi (2018), pp. 4 et seq.
19 European Group on Ethics in Science and New Technologies (2018), pp. 16 et seq.
The heavy weight of private self-structuring and self-regulation (see Sect. 7) does
not, however, alter the fact that state authorities are responsible for protecting
individual and public interests. Nonetheless, the displacements have changed the
basic conditions under which state authorities exercise influence and the instruments
available to them, as well as the prospects for success.

Although, because of the protection they enjoy through fundamental rights,
private entities are generally unconstrained in the specification and pursuit of their
interests, they are not fully relieved of the obligation to pay regard to the interests of
others and to matters of public interest. It is the state's role to concentrate on
stimulating and encouraging private individuals to pursue the common good,
thereby enabling them to provide public services that previously were managed by
the state—without the latter completely abdicating its responsibility for overseeing
the process.21 Public-private partnerships have been introduced for standard setting
and oversight in several areas, all of which demand sophisticated and previously
untried legal frameworks.22 As the role of public authorities has changed, many
authors in Germany have taken to referring to the state as the 'Gewährleistungsstaat',
which Gunnar Folke Schuppert (2003) calls in English the 'ensuring state' and
others the 'enabling state'. They stress the state's 'Gewährleistungsverantwortung',
i.e. its responsibility to ensure sufficient legal and non-legal guarantees to protect the
common good.23 In the following, I will refer to 'Gewährleistungsverantwortung' as
the state's 'enabling responsibility'.
Where necessary, the state is under a positive obligation to create a framework for
safeguarding, above all, the exercise of political, economic, social, cultural, and
other fundamental rights. Positive obligations of the state are recognised not only in
the German legal system but also, increasingly, with regard to the Charter of
Fundamental Rights of the European Union and to the European Convention on
Human Rights, as well as in a number of international agreements.24 Norm-based
requirements for meeting positive obligations can be found not only in fundamental
rights but also in the provisions concerning constitutional principles (e.g. Article
20 GG) and fundamental values (e.g. Article 2 of the Treaty on European Union).
To the extent that digital transformation is shaped by private entities, the enabling
state is tasked with protecting individual and public interests, including through law.
The state has the ability, as well as the obligation, to create suitable structures,
provide normative orientation for conduct, and, if necessary, set boundaries.
Because change is happening so quickly, developments must furthermore be
continuously monitored, and action must be taken when they go awry.

20 Nemitz (2018), pp. 2 et seq.
21 See Voßkuhle and Wischmeyer (2017), p. 90.
22 See Voßkuhle and Wischmeyer (2017), p. 90.
23 See Ruge (2004); Franzius (2009); Schulze-Fielitz (2012), pp. 896 et seq.
24 Fischer-Lescano (2014); Schliesky et al. (2014); Marauhn (2015); Harris et al. (2018), pp. 24–27; Marsch (2018), chapter 4.
The specification of these requirements through interpretation or amendment of
existing law or through creation of new law also consists of responding to change, in
our case, to technological and social changes associated with digitalisation.

5.2 Innovative Case-law as an Example

Examples of how the law is capable of responding to non-legal changes can also be
found in case-law, such as several IT-related innovations by the German Federal
Constitutional Court in the area of fundamental rights. As early as 1983, the Court
elaborated a 'fundamental right to informational self-determination' in response to
the risks to the protection of the right of privacy that were associated with emerging
digitalisation.25 In 2008 the Court took the innovative step of extending the reach of
fundamental rights protection to the 'fundamental right to the guarantee of the
confidentiality and integrity of information technology systems'.26 Although, owing
to the subject matter of the dispute, this decision was directed at online searches of
an individual's personal computer, the Court later held in 2016 that the protection
afforded to information technology systems covers more than simply the computers
used by individuals but also includes the networking of those computers with other
computers, such as in connection with storage of data in the cloud.27 At the same
time, it emphasised that data that are stored on external servers with a legitimate
expectation of confidentiality are also deserving of protection. Protection is also
granted where a user's movements on the internet are tracked. As a result, the use of
AI associated with such networks also may fall within the protective scope of this
fundamental right.
As is the case with the fundamental right to informational self-determination, the
Court has understood this right—often referred to in the literature as the 'IT
fundamental right'—as giving greater specificity to the constitutional guarantee of
human dignity and the protection of the free development of personality (Articles
1(1) and 2(1) GG). Norms like Articles 1 and 2 of the German Basic Law, but also
other fundamental rights, do more than simply obligate the state to refrain from
placing restrictions on the rights of individuals; they also require the state to
guarantee that individuals are protected against acts by others and to take positive
action to safeguard human rights. This latter mandate is referred to as the
'third-party' or 'horizontal' effect of fundamental rights.28 However, in order to be
able to implement this guarantee, targeted legal norms are necessary. Examples
include the law of data protection and the law of IT security. Legal measures
concerning the treatment of AI and the ensuring of the innovation responsibility
associated with it should likewise be bound by this concept.

25 Bundesverfassungsgericht 1 BvR 209, 269, 362, 420, 440, 484/83 'Volkszählung' (15 December 1983), BVerfGE 65, p. 1; see also, inter alia, Britz (2007).
26 Bundesverfassungsgericht 1 BvR 370, 595/07 'Online-Durchsuchungen' (27 February 2008), BVerfGE 120, p. 313; Luch (2011); Wehage (2013); Hauser (2015).
27 Bundesverfassungsgericht 1 BvR 966, 1140/09 'BKA-Gesetz' (20 April 2016), BVerfGE 141, pp. 264–265, 268 et seq., 303 et seq.

5.3 System Protection

The above-mentioned IT fundamental right relates to the protection of information
technology systems, i.e. to system protection. The right mandates that public
authorities must protect the integrity of IT systems not only against unjustified
interference by the state but also against that by third parties. Constitutional
requirements that measures be taken to protect the viability of information
technology systems, or at least that there be possibilities for doing so, can also be
derived from other fundamental rights (such as, inter alia, Articles 3, 5, 6, 10, 13, and
14 GG), as well as from provisions concerning constitutional principles and
fundamental values (see Sect. 5.1). The state's responsibility for guaranteeing the
realisation of such objectives becomes all the more pressing as individual and
public interests become increasingly affected by digital technologies, as well as by
the business models, modes of conduct, and infrastructures tailored to those
technologies. This responsibility ultimately means ensuring a viable democracy,
enforcing compliance with the requirements of the rule of law, implementing
protection founded on social justice, taking measures to prevent risks that are
foreseeable and not yet foreseen (here, such as the further development and use of
AI), as well as safeguarding the capabilities of important institutions, such as the
market. Of particular significance is ensuring the quality of technical IT systems,
including taking measures to protect cybersecurity.29

System protection is particularly important in the IT sector because individuals
as users are virtually unable to influence how the system is configured and are no
longer capable of recognising threats, i.e. they have no ability whatsoever to protect
themselves individually. Moreover, the protection of important social areas is
certain to fail if the employment of protective measures depends exclusively on the
initiation and outcome of individual and thus gradual actions. The task here is an
important one for society as a whole, which also must manage it as a whole with the
aid of law. System protection is an important starting point for this.

28 Dolderer (2000); Calliess (2006), margin nos. 5 et seq.; Schliesky et al. (2014); Knebel (2018).
29 Wischmeyer (2017) and Leisterer (2018).

5.4 Systemic Protection

System protection should not be confused with systemic protection. The latter uses
the relevant technology for building into the technical system itself measures that
independently safeguard protected interests, especially those of third parties.30 The
objective here, in particular, is protection through technology design,31 including
through default settings intended to increase protection.32 Such systemic protection
has long been utilised as a means of data protection. A current example can be found
in Article 25 of the EU General Data Protection Regulation (GDPR): data protection
by design and by default. Article 32 GDPR contains additional requirements.
However, the field to which data protection through design can be applied is
considerably broader and also covers the use of AI. Discussions are also focusing
on the extent to which the effectiveness not only of basic legal principles but also, to
supplement them, of basic ethical principles can be ensured (or at least facilitated)
through technology design.33
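In software terms, 'data protection by default' within the meaning of Article 25(2) GDPR can be sketched roughly as follows (assuming Python 3.9+); the class and field names are invented for illustration and do not reproduce any prescribed implementation:

    # Hypothetical sketch: the most protective settings are the defaults, and
    # any relaxation requires an explicit, recorded user action.
    from dataclasses import dataclass, field

    @dataclass
    class ProfileSettings:
        share_with_third_parties: bool = False   # off unless the user opts in
        retain_history_days: int = 30            # minimal retention by default
        personalised_ads: bool = False
        changes: list[str] = field(default_factory=list)

        def opt_in(self, setting: str) -> None:
            """Relax a default only on an explicit user action, and log it."""
            setattr(self, setting, True)
            self.changes.append(f"user enabled {setting}")

    settings = ProfileSettings()            # protective out of the box
    settings.opt_in("personalised_ads")     # deviation requires an explicit step
    print(settings)

The design choice embodied here is that the most protective configuration requires no action from the user, while any relaxation leaves an explicit, auditable trace.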

5.5 Regulatory Guidelines

Exercise of the state's enabling responsibility (particularly through the legislature,
the government, and the public administration), ideally supported by private actors
involved in the development and utilisation of intelligent IT systems, requires
clarification not only of objectives but also of strategies and concepts for
implementing them. This can be accomplished by formulating guidelines directed
at them. In this regard, I will borrow from Thomas Wischmeyer's list of such
guidelines34 but also augment it:
– Exposure of the regulatory effect of intelligent systems;
– Appropriate quality level for intelligent systems;
– No discrimination by intelligent systems;
– Data protection and information security in connection with the use of intelligent systems;
– Use of intelligent systems that is commensurate with the problem;
– Guarantee of transparency in connection with the use of intelligent systems;
– Clarity about liability and responsibility in connection with the use of intelligent systems;
– Enabling of democratic and constitutional control of intelligent systems;
– Protection against sustained impairment of the living standards of future generations by intelligent systems;
– Sensitivity to errors and openness to revision of intelligent systems.

Although this list could be expanded, it serves here to illustrate the multifaceted
aspects of the regulatory tasks.

30 Spiecker gen. Döhmann (2016), pp. 698 et seq.
31 Yeung (2008, 2017).
32 Hildebrandt (2017) and Baumgartner and Gausling (2017).
33 Winfield and Jirotka (2018); European Group on Ethics in Science and New Technologies (2018); Data Ethics Commission (2019).
34 Wischmeyer (2018), Part III 1-6, IV.

5.6 Regarding Regulatory Possibilities

In view of the diversity of fields in which AI is used, this Chapter obviously
cannot describe all possible instruments for achieving a legal impact on the
development and use of AI. Instead, I will give several examples along with general
remarks. Area-specific suggestions for legal approaches are also presented in the
subsequent Chapters in this volume (particularly in Part II).

The proposition has been put forward above (Sect. 2) that in many cases it is not
enough to develop rules for AI that are detached from the contextual conditions of
the areas in which they are applied and, above all, from the specific ways in which
they are applied.35 Also conceivable are rules that are applicable across a number of
areas. In this regard, a starting point is offered by the types of norms that are applied
in the law of data protection (see Marsch). These norms are applicable to AI where
personal data are processed.36 Moreover, they can to some extent be used as a
template for rules designed to safeguard legally protected interests other than
privacy.
One instrument that can be used in nearly every field in which AI is deployed is
the prospective impact assessment (see Article 35 GDPR). Certification by publicly
accredited or government bodies, such as for particularly high-risk developments
and/or possible uses of AI, can also be provided for (see Articles 42 and 43 GDPR).
To the extent that, as is customarily the case, certification is voluntary (see Article
42(3) GDPR), it makes sense to create incentives for its uptake, such as by
exempting from or limiting liability, e.g. for robotics under product liability law. In
sensitive areas, however, certification can also be made obligatory by law.
Because further developments and the pace of software changes are often
unpredictable, particularly in learning algorithmic systems, continual monitoring is
also called for, as are retrospective impact assessments carried out through self-
monitoring and/or outside monitoring. Such monitoring can be facilitated by duties
to document software and software changes and, in the case of learning systems, the
training programmes (see Wischmeyer, paras 15, 48). It may also make sense to
impose duties to label the utilised data and to keep logs about the application and
use of training programmes, as well as reporting and information duties.37

35 See also Pagallo (2016), pp. 209 et seq.; Scherer (2016); Tutt (2017); Martini and Nink (2017). See also Veale et al. (2018).
36 One area of growing importance is the use of facial-recognition technologies, which employ AI extensively.
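What such documentation and logging duties might amount to in engineering practice can be sketched as follows; the record fields are illustrative assumptions, not a legally prescribed format:

    # Hypothetical sketch: each training run appends an auditable entry
    # recording which data, which code version, and which parameters were used.
    import hashlib, json, datetime, pathlib

    def log_training_run(data_file: str, code_version: str, params: dict,
                         logbook: str = "training_log.jsonl") -> None:
        """Append one auditable entry per training run."""
        data_bytes = pathlib.Path(data_file).read_bytes()
        entry = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "data_sha256": hashlib.sha256(data_bytes).hexdigest(),  # which data were used
            "code_version": code_version,                           # which software version
            "hyperparameters": params,                              # how the system was trained
        }
        with open(logbook, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

    # Example: document a run alongside fitting the model (file name is illustrative)
    # log_training_run("cases_2018.csv", "model-v1.4.2", {"learning_rate": 0.01})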
37 Particularly in the case of intelligent information technology systems, it is difficult to create measures that ensure appropriate transparency, accountability, responsibility and, where appropriate, the ability to make revisions (see Sect. 6.3).38 Steps also have to be taken to ensure the continuous development of standards for evaluating trends, such as adapting ethical requirements in the face of newly emerging fields of application and risks, particularly with respect to the limits associated with recognising and controlling consequences.
38 Moreover, imperative (command and control) law may be indispensable, such as for preventing discrimination (see Tischbirek) and for safeguarding cybersecurity, which is particularly important for the future.39 Also conceivable are prohibitions or restrictions for special applications. German law already provides for these in some cases, such as for automated decisions by public authorities.40 However, it can be expected that the fields of application will expand considerably, particularly as e-government becomes more commonplace, and, above all, that new experience will prompt further restrictions on applications.
39 In view of the risks and opportunities associated with AI, which extend far beyond the processing of personal data, it needs to be clarified whether it makes sense to assign monitoring responsibilities to the current data protection authorities. If so, their powers would have to be expanded, and they would need larger staffs with corresponding expertise. Ideally, however, an institution specialising particularly (but not only) in the monitoring of AI, such as a digital agency, should be created at the German federal level or at the EU level. For the U.S., Andrew Tutt (2017) has proposed the establishment of an authority with powers similar to those of the Food and Drug Administration. In addition to monitoring,41 such an institution should also be entrusted with developing standards (performance standards, design standards, liability standards) or at least be involved in their development.

37 On these possibilities, see e.g. (although specifically in relation to the protection of privacy) Leopoldina et al. (2018).
38 For more about these problem areas, please see the multifaceted considerations by Wischmeyer (2018) in his article on 'Regulation of Intelligent Systems'. Requiring further exploration are, in particular, his suggestions relating to the establishment of a 'collaborative architecture for reason-giving and control': Wischmeyer (2018), pp. 32 et seq. See also Wischmeyer, in this volume.
39 Wischmeyer (2017) and Beucher and Utzerath (2013).
40 Cf. sections 3a; 24 (1); 35a; 37 (2), (3), and (4); and 41 (2) sentence 2 of the German Administrative Procedure Act (Verwaltungsverfahrensgesetz, VwVfG); section 32a (2), No. 2 of the German Fiscal Code (Abgabenordnung, AO); section 31a of Book X of the German Social Code (Sozialgesetzbuch [SGB] X).
41 One way to do this is by certifying AI systems. Corresponding information is necessary for this purpose. See e.g. Scherer (2016), p. 397: 'Companies seeking certification of an AI system would have to disclose all technical information regarding the product, including (1) the complete source code; (2) a description of all hardware/software environments in which the AI has been tested; (3) how the AI performed in the testing environments; and (4) any other information pertinent to the safety of the AI.'
40 The possibilities for exercising the state's enabling responsibility discussed here primarily describe approaches, not the standards that can be used to test whether the use of AI leads to unacceptable risks in a certain field of application. These standards must be in line with constitutional as well as transnational legal requirements, but they should also satisfy ethical principles (see Sects. 4 and 8).
41 The technical difficulties associated with implementing legal instruments must be left aside here. However, insofar as there is a need for new instruments that require further innovations, consideration should also be given to legally driven 'innovation forcing'.42 This means using law to establish targets or standards that cannot yet be met at the current state of development but whose fulfilment is plausible. Such law then specifies an implementation period. If that period expires without implementation, and if it is not extended, the development and use of the relevant type of AI must be stopped.
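As a rough illustration of the deadline logic behind such an innovation-forcing rule (not of any substantive standard), consider the following sketch. It is a schematic model: the class, the field names, and the example target and dates are all invented for the purpose of illustration.

# Schematic sketch of the deadline logic behind legally driven 'innovation
# forcing'. All names and dates are invented; no statute defines this structure.
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class InnovationForcingRule:
    standard: str                          # target the technology must eventually meet
    deadline: date                         # end of the implementation period
    extended_until: Optional[date] = None  # optional extension of the period

    def may_continue(self, standard_met: bool, today: date) -> bool:
        # Use may continue if the standard is met, or if the (possibly
        # extended) implementation period is still running.
        effective_deadline = self.extended_until or self.deadline
        return standard_met or today <= effective_deadline


# Hypothetical example: an explainability target with a period ending in 2025.
rule = InnovationForcingRule(
    standard="decision logic must be explainable to supervisory authorities",
    deadline=date(2025, 12, 31),
)
print(rule.may_continue(standard_met=False, today=date(2024, 6, 1)))  # True: period still running
print(rule.may_continue(standard_met=False, today=date(2026, 1, 1)))  # False: use must be stopped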

6 Obstacles to the Effective Application of Law

42 A number of aspects specific to the field of information technology, as well as to the business models used to exploit it, can be expected to pose difficulties for effective regulation. Only a few of them will be discussed here.

6.1 Openness to Development

43 In view of the relentless speed of technological change, the development of new fields of activity and business models, and the changes to society associated with each of them, the application of law will in many cases have to take place under a significant amount of uncertainty.43 Because of lack of knowledge and, often, unpredictability, there is a risk that legal measures may be ineffective or have dysfunctional consequences. On the one hand, a legal rule must be amenable to innovation in order to avoid obstructing the opportunities associated with digitalisation, but not be so flexible as to be ill-suited to averting or minimising risks. On the other, the legal system must make it possible to shift direction where the legal objective is not met and/or unforeseen negative consequences occur (measures for reversibility). In this regard, self-learning information technology systems pose significant risks, particularly where, without being noticed, learning processes set out in directions that cause undesired or even irreversible consequences.

42 See Hoffmann-Riem (2016), pp. 430 et seq., with further references.
43 On dealing with uncertainty and lack of knowledge generally, see Hoffmann-Riem (2018), Part 5, with further references.
44 It is presumably no coincidence that warnings are increasingly being raised today about the risks associated with the use of AI and that regulatory protection is being called for, including by individuals who have devoted their careers to the development of artificial intelligence and have used it extensively in their businesses, such as PayPal co-founder and Tesla CEO Elon Musk, Microsoft co-founder Bill Gates, and Apple co-founder Steve Wozniak.44 The brilliant researcher Stephen Hawking shared this concern. While acknowledging the tremendous potential of AI, he called for an increased focus on the issue of AI security.45 Very often, warnings about the consequences of AI relate not just to specific applications but also to the fundamental risk that AI could defy human control and develop destructive potential for humanity as a whole.
45 One of many examples of risks involves how we should deal with brain implants and other neural devices that use AI.46 The discussion of future risks focuses not just on the primarily ethical consequences for human development and for the concept of human intelligence and the way it functions. There are also fears about new forms of cybercrime, such as the hacking of pacemakers and other AI-controlled implants. For instance, Gasson and Koops state:
The consequences of attacks on human implants can be far greater to human life and health
than is the case with classic cybercrime. Moreover, as implant technology develops further,
it becomes difficult to draw an exact border between body and technology, and attacks not
only affect the confidentiality, integrity and availability of computers and computer data, but
also affect the integrity of the human body itself. The combination of network technologies
with human bodies may well constitute a new step change in the evolution of cybercrime,
thus making attacks on humans through implants a new generation of cybercrime.47

6.2 Insignificance of Borders

46 Difficulties involving the legal treatment of complex information technology systems also result from the fact that in many respects, technologies dissolve borders or demand action in borderless areas.48 For instance, the digital technologies employed, the infrastructures, and the business models utilised are not confined by any borders or are, in exceptional cases, confined only by regional borders, such as national borders. They are often available on a transnational and, in particular, global basis. The same applies to the services provided with digitalised technology, as well as to their effects. To the same extent, the use of AI also takes place mostly in unbordered spaces. Large, globally positioned IT companies in particular are interested in operating as far as possible with uniform structures that have a global or transnational reach. For these companies, regulations that are set down in various national legal systems and accordingly differ from one another constitute an impediment to the use of their business models. As a result, they look for and exploit opportunities to thwart or avoid such regulations.49

44 For references, see Scherer (2016), p. 355. On risks (also in relation to other consequences of IT), see the (sometimes alarmist) works by Bostrom (2014) and Tegmark (2017). See also Precht (2018) and Kulwin (2018). It is telling that Brad Smith (2017), President and Chief Legal Officer of Microsoft, has called for the creation of an international 'Digital Geneva Convention', which focuses mainly on cyberattacks and thus cybersecurity, but makes no mention of the issue of AI. See https://blogs.microsoft.com/on-the-issues/2017/02/14/need-digital-geneva-convention/.
45 Hawking (2018), pp. 209 et seq., 213 et seq.
46 See e.g. Wu and Goodman (2013).
47 See Gasson and Koops (2013), p. 276.
48 Cornils (2017), pp. 391 et seq.; Vesting (2017); Hoffmann-Riem (2018), pp. 36–37.
47 However, this does not mean that it is impossible to subject transnationally
operating companies to legal regulations with territorially limited applicability,
insofar as they operate on that territory. A recent example is Article 3(1) GDPR,
which specifies that the GDPR ‘applies to the processing of personal data in the
context of the activities of an establishment of a controller or a processor in the
Union, regardless of whether the processing takes place in the Union or not.’
Complementary regulations are contained in section 1 of the German Federal Data
Protection Act (Bundesdatenschutzgesetz, BDSG) (new version).
48 The relative insignificance of borders also relates to other dimensions discussed here. For instance, in the IT sector, the borders between hardware and software are blurring, and certain problems can be solved both in the hardware area and with software. Also, private and public communication are becoming blended to an ever-greater extent.50 Offline and online communication are increasingly interwoven (as on the Internet of Things), meaning that a new type of world, which some call the 'onlife' world, is becoming the norm.51
49 The impact that the insignificance of borders has on legal regulation is also
evident in the fact that digitalisation affects nearly all areas of life, meaning that
requirements can be, or even may have to be, imposed on the use of AI both across
the board and, if necessary, for each area specifically.
50 To the extent that use is made of self-learning algorithmic systems that enhance their software independently, tackle new problems, and develop solutions to them, these systems transcend the limits on their field of application or problem-solving abilities that were set by their initial programming.

49 Nemitz (2018).
50 See, inter alia, Schmidt (2017), focusing on social media.
51 Floridi (2015); Hildebrandt (2015), pp. 41 et seq., pp. 77 et seq.

6.3 Lack of Transparency

51 Although digital transformation has created new spaces for generating, capturing, and exploiting information that previously were essentially inaccessible, technology design and other measures of secrecy block access to the approaches employed and to the results.52 Lack of transparency can also result from the collaboration involved in developing various program components and creating hardware. This is all the more the case where there is insufficient knowledge about the 'building blocks' originating from the other actors involved and about how these components function. Where learning algorithms are used, not even the programmers involved in the creation of the algorithms know the programs that are being modified by automatic learning. Even though it is possible to overcome the black-box character of information technology systems,53 such as through reverse engineering, this normally presupposes a high level of expertise and the use of complex procedures. The barriers are significant (see Wischmeyer).
52 Legal obstacles to transparency arise, for example, where algorithms are recognised as business secrets54 or as official secrets (section 32a (2), No. 2 of the Fiscal Code (Abgabenordnung)) (see Braun Binder).
53 Not only for users but also for supervisory authorities and the general public, it is important that the treatment of digital technologies, including the use of AI, be generally comprehensible and controllable. In this regard, sufficient transparency is a prerequisite for creating not only trust but also accountability and, in some cases, liability.55

6.4 Concentration of Power

54 Furthermore, the application of law and its outcome are made considerably more difficult by the concentration of power in the IT field.56 In this regard, the development of AI is becoming increasingly dominated by large IT companies and specialised firms associated with them. Powerful IT companies have managed to keep the development of software and internet-based services, as well as their output, largely unregulated. Although antitrust law, which is commonly used to limit economic power, is applicable to companies with IT business areas,57 national and EU antitrust law is limited both with respect to its territorial applicability and in

52 See, inter alia, Kroll (2018) and Wischmeyer (2018).
53 See Leopoldina et al. (2018), p. 50.
54 See e.g. BGHZ 200, 38 for credit scoring by the German private credit bureau Schufa.
55 Wischmeyer (2018) and Hoeren and Niehoff (2018).
56 Welfens et al. (2010); Rolf (2018); Nemitz (2018), pp. 2 et seq.
57 And it has already been so applied; see Körber (2017). For references concerning legal actions, particularly against Google, see Schneider (2018), pp. 156–159.
