Automated Decision-making and Judicial Review*

Christopher Knight
To cite this article: Christopher Knight (2020): Automated Decision-making and Judicial Review, Judicial Review, DOI: 10.1080/10854681.2020.1732740

*This article is a slightly amended version of a paper delivered at the Public Law Project’s “Judicial Review Trends and Forecasts 2019” conference on 30 October 2019.
2. But there is a lack of public access to the details and methods used in such automated decision-making, and it is very likely that such techniques are also being used in areas not currently known about at all. To some extent, that lack of public awareness is inevitable: algorithmic code will never realistically be accessible to the general public, because of the degree of technical knowledge required to assess what is done. There will also be understandable intellectual property concerns on the part of the private contractors whose expertise has created the models used.
4. It will doubtless always be the assumption on the part of a public authority that automated decision-making will be quicker and more cost-effective, and that it may at least reduce the risks of human error. Those aims are understandable and legitimate. But the public authorities buying the technology, and the public who are its guinea pigs, are entitled to ask to what extent those aims will in fact be met, and over what time period.
5. Similarly, the constant theme of the reports and wider research into artificial intelligence (AI) and automated decision-making is the risks that they pose to fair and legitimate decisions, and the extent to which actual or potential adverse impacts might arise. There is more than enough research to indicate the issue; less that provides answers one way or the other, and answers are always likely to be very context-specific anyway.1
6. One established risk is that of accidental bias on the part of the algorithms. Such biases may have been built into the system because it has been designed and tested by people of a particular gender, ethnicity and class.2 Another risk is of a learned bias, where the machine learns the biases of those using it from the nature of the information it is asked to focus on and the questions it is tasked with answering.3 A third risk arises from the other end of the telescope: that human operators will assume that the computer is correct, even where it is producing results that appear odd, or that are slowly pulling in a particular direction: the “numbers don’t lie”.4
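To make the mechanism of a learned bias concrete, the following short sketch (in Python, using wholly hypothetical data and a generic statistical model; nothing in it is drawn from any system discussed in this paper) shows how a model trained on historically skewed human decisions simply absorbs that skew through a proxy variable:

    # Illustrative sketch only: hypothetical data showing how a model trained on
    # historically skewed decisions can learn to penalise a proxy for a protected
    # characteristic. Nothing here describes any real system.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000

    merit = rng.normal(0, 1, n)        # a legitimate, relevant factor
    proxy = rng.integers(0, 2, n)      # e.g. a postcode correlated with ethnicity

    # Hypothetical past human decisions: group 1 was refused more often even at
    # the same level of "merit", i.e. the historical decisions were biased.
    past_grant = (merit - 0.8 * proxy + rng.normal(0, 0.5, n)) > 0

    model = LogisticRegression().fit(np.column_stack([merit, proxy]), past_grant)

    # The fitted weights show that the model has absorbed the historical bias:
    # the proxy feature receives a substantial negative weight of its own.
    print("weight on legitimate factor:", round(model.coef_[0][0], 2))
    print("weight on proxy feature:    ", round(model.coef_[0][1], 2))

The point is simply that no deliberate choice to discriminate is required: the bias is inherited from the data on which the system is trained.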
7. The difficulty outside challengers face in garnering enough information about any particular automated decision-making process is obvious. But the relative infancy of the technology means that it may well be difficult for the public authority itself fully to comprehend and assess the impacts of the automated processes it wishes to adopt.
8. This means that, from both sides of a judicial review challenge to automated decision-
making, new and imaginative thinking is likely to be required about the sorts of legal
arguments to be made.
10. The Tameside duty is not always an especially attractive one for a claimant, because of
the irrationality standard the courts have tended to apply to it. The recent summary of
the legal principles in Balajigari v Secretary of State for the Home Department [2019]
EWCA Civ 673, [2019] 1 WLR 4647 at [70] was in the following terms:
The general principles on the Tameside duty were summarised by Haddon-Cave J in R (Plantagenet Alliance Ltd) v Secretary of State for Justice [2014] EWHC 1662 (QB) at paras. 99–100. In that
passage, having referred to the speech of Lord Diplock in Tameside, Haddon-Cave J summarised
the relevant principles which are to be derived from authorities since Tameside itself as follows.
First, the obligation on the decision-maker is only to take such steps to inform himself as are
reasonable. Secondly, subject to a Wednesbury challenge, it is for the public body and not the
court to decide upon the manner and intensity of enquiry to be undertaken: see R (Khatun) v
Newham LBC [2004] EWCA Civ 55; [2005] QB 37, at [35] (Laws LJ). Thirdly, the court should
not intervene merely because it considers that further enquiries would have been sensible or
desirable. It should intervene only if no reasonable authority could have been satisfied on the
basis of the enquiries made that it possessed the information necessary for its decision. Fourthly,
the court should establish what material was before the authority and should only strike down a
decision not to make further enquiries if no reasonable authority possessed of that material
could suppose that the enquiries they had made were sufficient. Fifthly, the principle that
the decision-maker must call his own attention to considerations relevant to his decision, a
duty which in practice may require him to consult outside bodies with a particular knowledge
or involvement in the case, does not spring from a duty of procedural fairness to the applicant
but rather from the Secretary of State’s duty so to inform himself as to arrive at a rational conclusion. Sixthly, the wider the discretion conferred on the Secretary of State, the more important
it must be that he has all the relevant material to enable him properly to exercise it.
11. There are recent examples of such claims succeeding in particular circumstances, where the necessary legal questions could not properly have been answered without the decision-maker considering a particular evidence-base or risk: for example, past and future violations of international humanitarian law in R (Campaign Against Arms Trade) v Secretary of State for International Trade [2019] EWCA Civ 1020, [2019] HRLR 14, or the economic impact/financial viability of the operating assumptions adopted in R (Law Centres Federation Ltd (t/a Law Centres Network)) v Lord Chancellor [2018] EWHC 1588 (Admin).
12. The content of the duty must, however, take some measure of colour from its context. Where a public authority has adopted automated decision-making against a background of known risks of adverse impacts (and possibly of overstated benefits), there is a logical basis for asking whether and to what extent the authority has properly researched and assessed those risks. The risks take various forms: technological, methodological and ethical.
13. There are sensible ways of framing the Tameside duty as a result.
14. One well-established and routine way is the public sector equality duty (PSED) – s. 149 of the Equality Act 2010 – given the established risk of unconscious bias and discrimination in automation.
15. Another is the duty on a data controller – as the public authority, or at the least its contractor, will be – to carry out a data protection impact assessment under Art. 35 of EU Regulation No. 2016/679, i.e. the General Data Protection Regulation (GDPR) (or s. 64 of the Data Protection Act (DPA) 2018 in the law enforcement context).
17. The legal requirement to conduct such an assessment must have some inter-relationship with the content of the Tameside duty, just as the PSED does, because the need to consider the issues cannot rationally be dismissed as unnecessary when that consideration operates as a free-standing legal requirement.
18. That is not to say that a data protection impact assessment will necessarily reveal substantive unlawfulness in the adoption of automated decision-making itself, because the courts have indicated a hands-off approach to scrutiny of such assessments. In
R (Bridges) v Chief Constable of South Wales Police [2019] EWHC 2341 (Admin) at
[146] (a case under Pt 3 of the DPA 2018), it was stated:
What is required is compliance itself, i.e. not simply an attempt to comply that falls within a
range of reasonable conduct. However, when determining whether the steps taken by the
data controller meet the requirements of section 64, the Court will not necessarily substitute
its own view for that of the data controller on all matters. The notion of an assessment brings
with it a requirement to exercise reasonable judgement based on reasonable enquiry and consideration. If it is apparent that a data controller has approached its task on a footing that is demonstrably false, or in a manner that is clearly lacking, then the conclusion should be that there has been a failure to meet the section 64 obligation. However, when conscientious assessment has been brought to bear, any attempt by a court to second-guess that assessment will overstep the mark.
But that is quite a different point to the Tameside one as to whether and the extent to
which the impact assessment reveals that the authority has been asking itself the right
questions on an appropriately informed basis.
19. Moreover, data protection law can further inform the content of the duty because of other obligations the public authority will be under that will have required positive consideration of the issues raised by automated decision-making. The public authority data controller is under an obligation to have informed the data subject of a wide variety of matters, when it obtains their data from them, including “the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject”: Art. 13(2)(f) GDPR. Materially the same obligation applies in respect of data subjects whose personal data has been obtained from another source and not from them. In that context, the controller is required to provide the information – whether direct to the subject or by way of a generally provided privacy notice – set out in Art. 14(1)–(2) GDPR. (These requirements are less specific for law enforcement processing under Pt 3 of the DPA 2018.)
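Purely by way of hypothetical illustration (the field names and wording below are invented for this purpose and are not prescribed by the GDPR), the Art. 13(2)(f) information might be recorded in a privacy notice along the following lines:

    # Hypothetical, illustrative record of the Art. 13(2)(f) GDPR information a
    # controller might include in a privacy notice. The field names and wording
    # are invented for illustration and are not prescribed by the Regulation.
    article_13_2_f_notice = {
        "automated_decision_making_used": True,
        "profiling_involved": True,
        "meaningful_information_about_logic": (
            "Applications are scored by a statistical model using factors X, Y "
            "and Z; scores below a set threshold are referred for refusal."
        ),
        "significance_and_envisaged_consequences": (
            "The score materially affects the outcome; a low score may lead to "
            "refusal of the application."
        ),
    }

Even a skeletal statement of this kind would give a data subject, or an outside challenger, something concrete against which to test what the authority has actually considered.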
20. If complied with, such information should provide a toehold for those wishing to
understand the nature of the decision-making. If not complied with, then it is
another basis for a free-standing legal complaint and a string to the Tameside bow.
21. Seeking out, or forcing the disclosure of, these sorts of assessments and notices may, at worst, provide a slightly better picture of what is in fact being done, enabling consideration of the research and assessment undertaken. At best, it may reveal unlawful gaps in the work done, and obvious ones at that. In some cases, it may be helpful to consider the assessment of the evidence relied upon by reference to the standards described in great detail by Green J in R (British American Tobacco) v Secretary of State for Health [2016] EWHC 1169 (Admin) at [276]–[404].
22. Seeking to adapt the Tameside tools to reflect a changed landscape – factually and
legally – is a potential way for those seeking to challenge the adoption and use of
automated decision-making to force a clearer picture to emerge and to expose
what has and has not been considered by public authorities in their approaches.
25. It could be said that there is the potential for this to be read across to the use of AI and automated decision-making, although it will doubtless have different degrees of plausibility in different circumstances. The context is not, of course, quite the same as the ordinary use of the precautionary principle. But there is an analogy with the balance to be struck between the lack of evidence on risks of harm as against the speculative hoped-for benefits of the use of the technology. The question is one of how much scope public authorities should have in fixing that balance and the acceptable level of risk.
26. At the upper end of the spectrum may be automated decision-making as used by law
enforcement agencies (or security and intelligence agencies), which seeks to track and
identify risks and vulnerabilities from the patterns of data that may be materially invisible to the naked eye. The nature of the research and wider evidence may, at this stage,
be unclear as to the precise degree of effectiveness of such techniques. The public
interest generally favours measures that prevent crime or reduce the risk of it. The
potential for adverse impact on those subject to the algorithms may be difficult to
assess, and the precautionary principle may be said to justify seeking to use them:
the risks and seriousness of the harms that the AI is intended to address may outweigh
the risks and seriousness of the harms that using AI may cause.
28. A similar approach had been taken in R (Lumsdon) v Legal Services Board [2015] UKSC
41, [2016] AC 697, applying the principle to permit the QASA system of regulatory
assessment of advocates in criminal trials. The Supreme Court there held (paras 58–
59) that:
58. In a case concerned with an authorisation scheme designed to protect public health, the
court required it to ensure that authorisation could be refused only if a genuine risk to public
health was demonstrated by a detailed assessment using the most reliable scientific data available and the most recent results of international research: Criminal proceedings against Greenham (C-95/01) EU:C:2004:71; [2004] 3 CMLR 33, paras 40–42. As in Commission of the European
Communities v Netherlands, the Court acknowledged that such an assessment could reveal
uncertainty as to the existence or extent of real risks, and that in such circumstances a
member state could take protective measures without having to wait until the existence
and gravity of those risks were fully demonstrated. The risk assessment could not however
be based on purely hypothetical considerations. The approach adopted in these cases is analogous to that adopted in relation to EU measures establishing authorisation schemes designed to protect public health, as for example in the Alliance for Natural Health case, discussed earlier.
59. It is not, however, necessary to establish that the measure was adopted on the basis of
studies which justified its adoption: see, for example, Stoß v Wetteraukreis (C-316/07) EU:C:2010:504; [2011] 1 CMLR 20, para 72.
30. An imaginative use of the precautionary principle authorities, by analogy, may enable
public authorities to re-characterise any legal challenge as, in effect, a risk balancing
exercise where neither the benefits nor the harms are the subject of clear evidence
or research. How that plays out will depend on the context in which the automated
decision-making takes place. The catch of adopting it is that purely hypothetical
benefits will not be enough, and a basis in the science and the research is required.
The weaker the existing evidence, the more may be required by way of promises to
keep matters under review. Here again there is an analogy with the PSED, under
which the case law has repeatedly recognised and accepted the relevance of a defendant public authority agreeing to carry out an ex post facto review to address possible
deficiencies in the evidence-base at the policy-making stage.5
5 See e.g. R (UNISON) v Lord Chancellor [2015] EWCA Civ 935, [2016] ICR 1 at [121], per Underhill LJ.