
Judicial Review

ISSN: 1085-4681 (Print) 1757-8434 (Online) Journal homepage: https://www.tandfonline.com/loi/rjdr20

Automated Decision-making and Judicial Review

Christopher Knight

To cite this article: Christopher Knight (2020): Automated Decision-making and Judicial Review,
Judicial Review, DOI: 10.1080/10854681.2020.1732740

To link to this article: https://doi.org/10.1080/10854681.2020.1732740

Published online: 17 Mar 2020.


Automated Decision-making and Judicial Review*


Christopher Knight
Barrister, 11 KBW

Introduction and the issues


1. The use of automated tools and technology in public authority decision-making is not an area over which there is an extensive degree of transparency. We know that such automated decision-making is happening; for example, in checks undertaken by the Department for Work and Pensions (DWP) and HMRC within the settled status application process, or in risk-based verification processes used by local authorities in relation to benefit applications, or in harm assessment risk tools used in policing to predict the risks of future offending.
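
To make the flavour of such "risk-based verification" concrete, the sketch below shows the kind of scoring-and-routing logic such a tool might contain. It is purely illustrative: the factors, weights, thresholds and tier names are invented for this article and do not describe any real DWP, HMRC or local authority system.

```python
# Purely illustrative sketch of a "risk-based verification" rule of the kind
# described above. Every factor, weight, threshold and tier name here is
# invented for illustration and does not describe any real system.

def risk_score(application: dict) -> float:
    """Combine a few hypothetical application attributes into a single score."""
    score = 0.0
    if application.get("self_employed"):                   # hypothetical factor
        score += 2.0
    if application.get("recent_address_changes", 0) > 2:   # hypothetical factor
        score += 1.5
    if application.get("declared_income", 0) == 0:         # hypothetical factor
        score += 1.0
    return score

def verification_tier(application: dict) -> str:
    """Route the claim to a verification tier based on the score."""
    score = risk_score(application)
    if score >= 3.0:
        return "enhanced checks"   # e.g. documentary evidence required
    if score >= 1.5:
        return "standard checks"
    return "light-touch checks"

if __name__ == "__main__":
    sample = {"self_employed": True, "recent_address_changes": 3, "declared_income": 0}
    print(verification_tier(sample))  # -> "enhanced checks"
```

The public law questions discussed in this article arise precisely because neither the factors nor the thresholds of real systems of this kind are ordinarily visible to the people whose claims they route.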

2. But there is a lack of public access to the details and methods used in such automated decision-making, and it is very likely that such techniques are also being used in areas not currently known about at all. To some extent, that lack of public awareness is inevitable: algorithmic code will never realistically be accessible to the general public, because of the degree of technical knowledge required to assess what is done. There will also be understandable intellectual property concerns on the part of private contractors whose expertise has created the models used.

3. Nonetheless, wherever automated decision-making is being used by public authorities in the exercise of their legal functions, important issues of public law can arise. In particular, issues arise from the relative newness, and untested nature, of the techniques and models used. There is a great deal of research going on into all aspects of artificial intelligence, machine learning and automated decision-making, but it is inevitably in its infancy and constantly struggling to keep up with technological developments.

4. It will doubtless always be the assumption on the part of a public authority that automated decision-making will be quicker, more cost-effective and will (or at least may) reduce the risks of human error. Those aims are understandable and legitimate. But public authorities buying the technology, and the public who are the guinea pigs, are entitled to ask to what extent those aims will in fact be met, and over what time period.

*This article is a slightly amended version of a paper delivered at the Public Law Project’s “Judicial Review Trends and Forecasts
2019” conference on 30 October 2019.
© 2020 Informa UK Limited, trading as Taylor & Francis Group

5. Similarly, a constant theme of the reports and wider research into artificial intelligence (AI) and automated decision-making is the risk it poses to fair and legitimate decisions, and the extent to which actual or potential adverse impacts might arise. There is more than enough research to indicate the issue; less that provides answers one way or the other, and answers are always likely to be very context-specific anyway.1

6. One established risk is that of accidental bias on the part of the algorithms. Such biases may have been built into the system because it has been designed and tested by people of a particular gender, ethnicity and class.2 Another risk is of learned bias, where the machine learns the biases of those using it from the nature of the information it is asked to focus on and the questions it is tasked with answering.3 A third risk arises from the other end of the telescope: that human operators will assume that the computer is correct, even where it is producing results that appear odd, or that are slowly pulling in a particular direction: the “numbers don’t lie”.4
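
The second of those risks, learned bias, can be illustrated with a deliberately crude sketch. The toy "model" below simply reproduces whatever disparity exists in the historical decisions it is trained on; the data and the "postcode" attribute (standing in for any proxy for a protected characteristic) are invented for illustration.

```python
# Purely illustrative sketch of "learned bias": a toy model reproduces whatever
# disparity exists in the historical decisions it is trained on. The data and
# the "postcode" attribute (a stand-in for any proxy for a protected
# characteristic) are invented for illustration.
from collections import defaultdict

# Hypothetical historical decisions: group "A" was flagged far more often.
history = (
    [{"postcode": "A", "flagged": True}] * 8
    + [{"postcode": "A", "flagged": False}] * 2
    + [{"postcode": "B", "flagged": True}] * 2
    + [{"postcode": "B", "flagged": False}] * 8
)

# "Training": record the historical flag rate for each group.
outcomes = defaultdict(list)
for record in history:
    outcomes[record["postcode"]].append(record["flagged"])

def predict(postcode: str) -> bool:
    """Flag a new case if its group was flagged more often than not in the past."""
    past = outcomes[postcode]
    return sum(past) / len(past) > 0.5

print(predict("A"))  # True  - inherits the higher historical flag rate
print(predict("B"))  # False
```

Nothing in the code refers to a protected characteristic, yet the output inherits the historical disparity; that is the sense in which the machine "learns the biases of those using it" from the information and questions it is given.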

7. The difficulty of outside challengers being able to garner enough information about any
particular automated decision-making process is obvious. But the relative infancy of the
technology means that it may well be difficult for the public authority fully to comprehend and assess the impacts of the automated processes they wish to adopt.

8. This means that, from both sides of a judicial review challenge to automated decision-
making, new and imaginative thinking is likely to be required about the sorts of legal
arguments to be made.

Claimants: contextualising Tameside


9. Where the concerns about automated decision-making are premised upon the risks
posed by the algorithms used – which are not themselves likely to be known by the
claimant in detail, but which are likely to be able to be evidenced in principle from
the research which is available in the public domain about analogous AI programmes
or general issues arising from automated decision-making – the traditional public law
mechanism for assessing the extent to which the public authority has considered the
risks is the duty of enquiry: the principle derived from the speech of Lord Diplock in Secretary of State for Education and Science v Tameside MBC [1977] AC 1014 at 1065. This is
the duty that falls upon a decision-maker to “take reasonable steps to acquaint himself
with the relevant information” in order to enable him to answer the question that he
has to answer.
1. See J. Cobbe, “Administrative law and the machines of government: judicial review of automated public-sector decision-making” (2019) 39 LS 636.
2. See e.g. the fascinating and high-profile Invisible Women (Chatto & Windus, 2019) by Caroline Criado-Perez.
3. See e.g. the House of Commons Science and Technology Select Committee report Algorithms in decision making (23 May 2018, HC 351).
4. See Cobbe (n. 1 above).

10. The Tameside duty is not always an especially attractive one for a claimant, because of
the irrationality standard the courts have tended to apply to it. The recent summary of
the legal principles in Balajigari v Secretary of State for the Home Department [2019]
EWCA Civ 673, [2019] 1 WLR 4647 at [70] was in the following terms:
The general principles on the Tameside duty were summarised by Haddon-Cave J in R (Plantagenet Alliance Ltd) v Secretary of State for Justice [2014] EWHC 1662 (QB) at paras. 99–100. In that
passage, having referred to the speech of Lord Diplock in Tameside, Haddon-Cave J summarised
the relevant principles which are to be derived from authorities since Tameside itself as follows.
First, the obligation on the decision-maker is only to take such steps to inform himself as are
reasonable. Secondly, subject to a Wednesbury challenge, it is for the public body and not the
court to decide upon the manner and intensity of enquiry to be undertaken: see R (Khatun) v
Newham LBC [2004] EWCA Civ 55; [2005] QB 37, at [35] (Laws LJ). Thirdly, the court should
not intervene merely because it considers that further enquiries would have been sensible or
desirable. It should intervene only if no reasonable authority could have been satisfied on the
basis of the enquiries made that it possessed the information necessary for its decision. Fourthly,
the court should establish what material was before the authority and should only strike down a
decision not to make further enquiries if no reasonable authority possessed of that material
could suppose that the enquiries they had made were sufficient. Fifthly, the principle that
the decision-maker must call his own attention to considerations relevant to his decision, a
duty which in practice may require him to consult outside bodies with a particular knowledge
or involvement in the case, does not spring from a duty of procedural fairness to the applicant
but rather from the Secretary of State’s duty so to inform himself as to arrive at a rational conclusion. Sixthly, the wider the discretion conferred on the Secretary of State, the more important
it must be that he has all the relevant material to enable him properly to exercise it.

11. There are recent examples of such claims succeeding in particular circumstances, where the necessary legal questions could not have been properly answered without the decision-maker considering a particular evidence-base or risk posed: for example, the risk of past and future violations of international humanitarian law in R (Campaign Against Arms Trade) v Secretary of State for International Trade [2019] EWCA Civ 1020, [2019] HRLR 14, or the economic impact/financial viability of operating assumptions adopted in R (Law Centres Federation Ltd (t/a Law Centres Network)) v Lord Chancellor [2018] EWHC 1588 (Admin).

12. The content of the duty must, however, take some measure of colour from its context. Where a public authority has adopted automated decision-making against a context of known risks of adverse impacts (and possibly risks of overstated benefits), there is a logical basis for asking whether and to what extent the authority has properly researched and assessed those risks. The risks take various forms: technological, methodological and ethical.

13. There are sensible ways of framing the Tameside duty as a result.

14. One, well-established and routine, is the public sector equality duty (PSED) – s. 149 of the Equality Act 2010 – given the established risk of unconscious bias and discrimination in automation.

15. Another is the duty on a data controller – as the public authority, or at the least its contractor, will be – to carry out a data protection impact assessment under Art. 35 of EU Regulation No. 2016/679, i.e. the General Data Protection Regulation (GDPR) (or s. 64 of the Data Protection Act (DPA) 2018 in the law enforcement context).

16. Article 35 relevantly provides:


1. Where a type of processing in particular using new technologies, and taking into account
the nature, scope, context and purposes of the processing, is likely to result in a high risk to
the rights and freedoms of natural persons, the controller shall, prior to the processing,
carry out an assessment of the impact of the envisaged processing operations on the protection of personal data. A single assessment may address a set of similar processing operations
that present similar high risks.

3. A data protection impact assessment referred to in paragraph 1 shall in particular be required in the case of:
(a) systematic and extensive evaluation of personal aspects relating to natural persons
which is based on automated processing, including profiling, and on which decisions are
based that produce legal effects concerning the natural person or similarly significantly
affect the natural person;
(b) processing on a large scale of special categories of data referred to in Article 9(1), or of
personal data relating to criminal convictions and offences referred to in Article 10; or
(c) a systematic monitoring of a publicly accessible area on a large scale.
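
Purely as an illustration of how those triggers operate in combination, the sketch below encodes the quoted Art. 35(1) and 35(3) conditions as a simple screening check. The ProcessingOperation fields and the example at the end are invented for this article; a real DPIA screening exercise is, of course, considerably more nuanced.

```python
# Illustrative sketch only: the Art. 35 GDPR triggers quoted above expressed as
# a simple screening check. The field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class ProcessingOperation:
    automated_evaluation_with_legal_effects: bool  # Art. 35(3)(a)
    large_scale_special_category_data: bool        # Art. 35(3)(b)
    large_scale_public_monitoring: bool            # Art. 35(3)(c)
    new_technology_high_risk: bool                 # the general Art. 35(1) test

def dpia_required(op: ProcessingOperation) -> bool:
    """Return True if any of the quoted triggers is engaged."""
    return any([
        op.automated_evaluation_with_legal_effects,
        op.large_scale_special_category_data,
        op.large_scale_public_monitoring,
        op.new_technology_high_risk,
    ])

# Example: an automated verification tool producing decisions with legal
# effects for applicants would engage Art. 35(3)(a).
print(dpia_required(ProcessingOperation(True, False, False, True)))  # True
```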

17. The legal requirement to conduct such an assessment must have some inter-relationship with the content of the Tameside duty, just as the PSED does, because it is not possible rationally to dismiss the need to consider the issues as unnecessary when they operate as a free-standing legal requirement.

18. That is not to say that a data protection impact assessment will necessarily reveal substantive unlawfulness in the adoption of automated decision-making itself, because
the courts have indicated a hands-off approach to scrutiny of such assessments. In
R (Bridges) v Chief Constable of South Wales Police [2019] EWHC 2341 (Admin) at
[146] (a case under Pt 3 of the DPA 2018), it was stated:
What is required is compliance itself, i.e. not simply an attempt to comply that falls within a
range of reasonable conduct. However, when determining whether the steps taken by the
data controller meet the requirements of section 64, the Court will not necessarily substitute
its own view for that of the data controller on all matters. The notion of an assessment brings
with it a requirement to exercise reasonable judgement based on reasonable enquiry and consideration. If it is apparent that a data controller has approached its task on a footing that is demonstrably false, or in a manner that is clearly lacking, then the conclusion should be that there has been a failure to meet the section 64 obligation. However, when conscientious assessment has been brought to bear, any attempt by a court to second-guess that assessment will overstep the mark.

But that is quite a different point to the Tameside one as to whether and the extent to
which the impact assessment reveals that the authority has been asking itself the right
questions on an appropriately informed basis.

19. Moreover, data protection law can further inform the content of the duty because of
other obligations the public authority will be under that will have required positive consideration of the issues raised by automated decision-making. The public authority data
controller is under an obligation to have informed the data subject of a wide variety of
matters, when it obtains their data from them, including “the existence of automated
decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in
those cases, meaningful information about the logic involved, as well as the significance
and the envisaged consequences for the data subject”: Art. 13(2)(f) GDPR. Materially the
same obligation applies in respect of data subjects whose personal data has been
obtained from another and not from them. In that context, the controller is required
to provide the information – whether it is direct to the subject or by way of generally
provided privacy notice – set out in Art. 14(1)–(2) GDPR. (These are less specific for
law enforcement processing under Pt 3 of DPA 2018.)

20. If complied with, such information should provide a toehold for those wishing to
understand the nature of the decision-making. If not complied with, then it is
another basis for a free-standing legal complaint and a string to the Tameside bow.

21. Seeking out, or forcing the disclosure of, these sorts of assessments and notices may, at worst, provide a slightly better picture of what is in fact being done, enabling consideration of the research and assessment undertaken. At best, it may reveal unlawful gaps in the work done, and obvious ones at that. In some cases, it may be helpful to consider the assessment of the evidence relied upon by reference to the standards described in great detail by Green J in R (British American Tobacco) v Secretary of State for Health [2016] EWHC 1169 (Admin) at [276]–[404].

22. Seeking to adapt the Tameside tools to reflect a changed landscape – factually and
legally – is a potential way for those seeking to challenge the adoption and use of
automated decision-making to force a clearer picture to emerge and to expose
what has and has not been considered by public authorities in their approaches.

Defendants: reading across the precautionary principle


23. The application of the general principles of judicial review will tend to favour defendant public authorities, and so the impetus for imaginative development of existing
legal doctrines is likely to be less pressing. Nonetheless, one possibility arises by
analogy.

24. The precautionary principle is well-established in environmental law – and indeed is prescribed, albeit not defined, in Art. 191 TFEU – and in public health matters. As the Court of Appeal recently summarised it in R (Langton) v Secretary of State for Environment, Food and Rural Affairs [2019] EWCA Civ 1562 at [53], in the context of badger culling, the “essence of that principle is that measures should be taken, where there is uncertainty about the existence of risks, without having to wait until the reality and seriousness of those risks becomes fully apparent”.

25. It could be said that there is the potential for this to be read across to the use of AI and automated decision-making, although it will doubtless have different degrees of plausibility in different circumstances. The context is not, of course, quite the same
as the ordinary use of the precautionary principle. But there is an analogy with the
balance to be struck between the lack of evidence on risks of harm as against the
speculative hoped-for benefits of the use of technology. The question is one of how
much scope public authorities should have in fixing that balance and the acceptable
level of risk.

26. At the upper end of the spectrum may be automated decision-making as used by law
enforcement agencies (or security and intelligence agencies), which seeks to track and
identify risks and vulnerabilities from the patterns of data that may be materially invisible to the naked eye. The nature of the research and wider evidence may, at this stage,
be unclear as to the precise degree of effectiveness of such techniques. The public
interest generally favours measures that prevent crime or reduce the risk of it. The
potential for adverse impact on those subject to the algorithms may be difficult to
assess, and the precautionary principle may be said to justify seeking to use them:
the risks and seriousness of the harms that the AI is intended to address may outweigh
the risks and seriousness of the harms that using AI may cause.
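
As a purely illustrative gloss on what "patterns of data that may be materially invisible to the naked eye" can mean in practice, the sketch below flags a subject whose latest activity count departs sharply from their own baseline. The data, the threshold and the idea of weekly event counts are invented for this article and are not drawn from any real policing or intelligence tool.

```python
# Purely illustrative sketch of flagging a pattern "invisible to the naked eye":
# a subject whose latest count departs sharply from their own baseline is
# flagged for review. Data and threshold are invented for illustration.
import statistics

weekly_counts = {
    "subject_1": [2, 3, 2, 4, 3],
    "subject_2": [2, 2, 3, 2, 2],
    "subject_3": [2, 3, 2, 2, 9],   # a shift easy to miss when reviewing in bulk
}

def is_anomalous(series, z_threshold=2.0):
    """Flag a series whose latest value is far from its own historical mean."""
    baseline, latest = series[:-1], series[-1]
    mean = statistics.mean(baseline)
    spread = statistics.pstdev(baseline) or 1.0  # avoid division by zero
    return abs(latest - mean) / spread > z_threshold

for subject, series in weekly_counts.items():
    if is_anomalous(series):
        print(f"{subject}: flagged for review")  # only subject_3 is flagged
```

Whether acting on such a flag is justified, given uncertain evidence of both benefit and harm, is precisely the balance discussed in the authorities that follow.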

27. A willingness to adopt an extended application of the precautionary principle was seen in R (EU Lotto Ltd) v Secretary of State for Digital, Culture, Media and Sport [2018] EWHC 3111 (Admin), [2019] 1 CMLR 41, where the Divisional Court upheld a ban on betting on the Euromillions lottery draw for reasons that included consumer protection (preventing those interested in the “soft” gambling of lottery playing being drawn into the “harder” gambling of betting). There was little direct evidence of such a link or harm. The court accepted that (para. 89):
Measures taken to protect the public from dangers to health are a prime example; the State
does not have to await the accrual or manifestation of actual harm before acting and it can act
to forestall that adverse eventuality. As a matter of logic there is no reason why good government should not involve precautionary measures in a range of different policy fields beyond
health.

28. A similar approach had been taken in R (Lumsdon) v Legal Services Board [2015] UKSC
41, [2016] AC 697, applying the principle to permit the QASA system of regulatory
assessment of advocates in criminal trials. The Supreme Court there held (paras 58–
59) that:

58. In a case concerned with an authorisation scheme designed to protect public health, the
court required it to ensure that authorisation could be refused only if a genuine risk to public
health was demonstrated by a detailed assessment using the most reliable scientific data available and the most recent results of international research: Criminal proceedings against Greenham (C-95/01) EU:C:2004:71; [2004] 3 CMLR 33, paras 40–42. As in Commission of the European
Communities v Netherlands, the Court acknowledged that such an assessment could reveal
uncertainty as to the existence or extent of real risks, and that in such circumstances a
member state could take protective measures without having to wait until the existence
and gravity of those risks were fully demonstrated. The risk assessment could not however
be based on purely hypothetical considerations. The approach adopted in these cases is analogous to that adopted in relation to EU measures establishing authorisation schemes designed to protect public health, as for example in the Alliance for Natural Health case, discussed earlier.

59. It is not, however, necessary to establish that the measure was adopted on the basis of
studies which justified its adoption: see, for example, Stoß v Wetteraukreis (C-316/07) EU:
C:2010:504; [2011] 1 CMLR 20, para 72.

29. As the court put it in EU Lotto (para. 91):


In any given case there is inevitably a correlation between the remoteness of the risk being
protected against and the cogency of the evidence required to justify intervention. The
more remote the risk the more cogent must be the evidence of risk.

30. An imaginative use of the precautionary principle authorities, by analogy, may enable
public authorities to re-characterise any legal challenge as, in effect, a risk balancing
exercise where neither the benefits nor the harms are the subject of clear evidence
or research. How that plays out will depend on the context in which the automated
decision-making takes place. The catch of adopting it is that purely hypothetical
benefits will not be enough, and a basis in the science and the research is required.
The weaker the existing evidence, the more may be required by way of promises to
keep matters under review. Here again there is an analogy with the PSED, under
which the case law has repeatedly recognised and accepted the relevance of a defendant public authority agreeing to carry out an ex post facto review to address possible
deficiencies in the evidence-base at the policy-making stage.5

31. Whether approaching judicial review challenges to automated decision-making as a claimant or a defendant, the changing technological context suggests that an automated application of traditional approaches to public law may be less effective.

5. See e.g. R (UNISON) v Lord Chancellor [2015] EWCA Civ 935, [2016] ICR 1 at [121], per Underhill LJ.
