A Theological Account of Artificial Moral Agency
Ximian Xu
School of Divinity, and the Centre for Technomoral Futures of the
Edinburgh Futures Institute, University of Edinburgh, Edinburgh, UK
Abstract
This article seeks to explore the idea of artificial moral agency from a theological perspective. By
drawing on the Reformed theology of archetype-ectype, it will demonstrate that computational
artefacts are the ectype of human moral agents and, consequently, have a partial moral agency.
In this light, human moral agents mediate and extend their moral values through computational
artefacts, which are ontologically connected with humans and related only to limited, particular
moral issues. This moral leitmotif opens up a way to deploy carebots into Christian pastoral
care while maintaining the human agent’s uniqueness and responsibility in pastoral caregiving
practices.
Keywords
Archetype and ectype, artificial intelligence, carebots, Christian pastoral care, computational artefacts, Herman Bavinck

Corresponding author:
Ximian Xu, School of Divinity, University of Edinburgh, New College, Mound Place, Edinburgh EH1 2LX, UK.
Email: Simeon.Xu@ed.ac.uk
The terms ‘moral agency’ and ‘moral agent’ carry a weighty tradition and are so ubiquitous in usage that they have become clichés and are understood in different ways. It is hard to arrive at a unanimous definition of ‘moral agency’ or ‘moral agent’. More often than not, however, we do come across general accounts of the moral agent. The Routledge Encyclopedia of Philosophy, for example, sets forth a generic idea of the moral agent:
Moral agents are those agents expected to meet the demands of morality. Not all agents are
moral agents. Young children and animals, being capable of performing actions, may be
agents in the way that stones, plants and cars are not. But though they are agents they are not
automatically considered moral agents. For a moral agent must also be capable of conforming to
at least some of the demands of morality.1
This passage points us to three idiosyncrasies that need to be mulled over before defining ‘moral agency’ and ‘moral agent’. First, the expectation placed on the moral agent shows that the one who expects knows that the agent has the potential to meet moral demands. In this sense, the moral status of an agent is, to a certain extent, presupposed. Second, the agent’s moral agency is determined by moral demands, which means that the agent is morally responsible for meeting these demands. Although moral demands vary across communities and over time, it holds true that moral demands per se make agents responsible for their own actions. Third, moral agency reflects a capability to act in a moral manner in order to meet these demands. Moral agency does not mean that the agent is capable of acting in accordance with all moral demands. An agent may fail to live up to some moral demands, and such failure marks the limits of her capability to act morally.
This generic portrayal of moral agency is vague and, consequently, opens up a way for
the extensive usage of ‘moral agent’ and ‘moral agency’ not only in speaking of humans
and animals but also in the representations of machines and computational artefacts, espe-
cially artificial intelligence (AI). Whatever moral demands are, computational artefacts
are expected to meet these demands, and many believe that computational artefacts are
capable of acting morally.
The idea that computational artefacts qualify as agents is not novel. More than two
decades ago, Ian Kerr suggested that computational artefacts and systems can be consid-
ered agents in a legal sense in electronic commerce.2 Yet the idea of artificial moral agent
(AMA) is not unanimously approved. Kerr maintains that these electronic agents are not
moral agents.3 As with Kerr, Aimee van Wynsberghe and Scott Robbins contest that the
idea of AMA is delusive because machines can never fully emulate human ethical reason-
ing.4 On the contrary, some scholars contend that computational artefacts can be fully
moral agents. To cite an instance, John Sullins asserts that smart machines and compu-
tational artefacts are fully moral agents when they perform human-level duties, are
autonomous and intentional, and fully understand their responsibilities in performing
their duties.5 To further complicate the debates over AMA, others suggest that moral
agency is tangled up with consciousness. Scholars like Richard Spinello stress that AMA is untenable because computational artefacts cannot have human-level consciousness.6
1. Vinit Haksar, ‘Moral Agents’, in Edward Craig (ed.), Routledge Encyclopedia of Philosophy
(London: Routledge, 1998), https://doi.org/10.4324/9780415249126-L049-1.
2. Ian R. Kerr, ‘Spirits in the Material World: Intelligent Agents as Intermediaries in Electronic
Commerce’, Dalhousie Law Journal 22.2 (1999), pp. 190–249.
3. Kerr, ‘Spirits in the Material World’, p. 216.
4. Aimee van Wynsberghe and Scott Robbins, ‘Critiquing the Reasons for Making Artificial
Moral Agents’, Science and Engineering Ethics 25.3 (2019), p. 722, https://doi.org/10.1007/s11948-018-0030-8.
5. John Sullins, ‘When is a Robot a Moral Agent?’, in Michael Anderson and Susan Leigh
Anderson (eds.), Machine Ethics (Cambridge: Cambridge University Press, 2011),
pp. 151–61.
6. Richard A. Spinello, ‘Karol Wojtyla on Artificial Moral Agency and Moral Accountability’,
The National Catholic Bioethics Quarterly 11.3 (2011), pp. 469–501.
7. For example, Kenneth Einar Himma, ‘Artificial Agency, Consciousness, and the Criteria for
Moral Agency: What Properties Must an Artificial Agent Have to Be a Moral Agent?’, Ethics
and Information Technology 11 (2009), pp. 19–29.
8. For further on this see Margaret A. Boden, AI: Its Nature and Future (Oxford: Oxford
University Press, 2016), pp. 1–20. I leave aside the discussion on AI as designed by an
alien and on artificial general intelligence as well as artificial superintelligence. Such fictional
AI is out of tune with current mainstream AI ethics which focuses on humans as the designers
of AI and on ethical issues surrounding the application of AI in human daily lives.
9. A typical example is the Turing Test; see Alan M. Turing, ‘Computing Machinery and
Intelligence’, Mind: A Quarterly Review of Psychology and Philosophy 59.236 (1950),
pp. 433–60.
human agency. Furthermore, ‘pastoral care’, another theme discussed in this article, pre-
supposes a human-and-human relationship or communication. Hence, human-designed
and human-like computational artefacts as well as AI are conducive to the exploration
of issues surrounding AMA and AI-powered pastoral care.
In what follows, I will first examine Luciano Floridi and Jeff Sanders’s endorsement of
AMA through exploration of their Method of Abstraction, followed by critical analysis of
such computerisation of morality with a particular eye on Deborah Johnson’s contribu-
tion to debates over AMA. Second, the theology of archetype-ectype will be unfolded
in relation to theological anthropology. By doing so, it will come to be seen that the ques-
tion of AMA is the question of God’s creation and the God-human relationship writ large.
Third, I will expand on the idea of partial artificial moral agency in the sense of ectype
and on the moral connection between computational artefacts and humans. Finally, I
will demonstrate how the idea of ectypal artificial moral agency offers some guiding prin-
ciples for the deployment of computational artefacts into Christian pastoral care.
10. Luciano Floridi and J. W. Sanders, ‘On the Morality of Artificial Agents’, Minds and
Machine 14.3 (2004), pp. 349–50. This paper can also be found at Luciano Floridi, ‘On
the Morality of Artificial Agents’, in Michael Anderson and Susan Leigh Anderson (eds.),
Machine Ethics (Cambridge: Cambridge University Press, 2011), pp. 184–212.
11. Luciano Floridi and Jeff W. Sanders, ‘Artificial Evil and the Foundation of Computer Ethics’,
Ethics and Information Technology 3.1 (2001), pp. 55–66.
12. Floridi and Sanders, ‘On the Morality of Artificial Agents’, p. 354.
According to their Method of Abstraction, a level of abstraction consists of a ‘finite but non-empty set of observables, which are expected to be the building blocks in a theory characterised by their very choice’.13 Floridi and Sanders argue that the level of abstraction at which we ordinarily discuss moral agents relies largely on the conviction that human beings are moral agents. This level of abstraction, however, is low and includes too many details about human moral agents (HMAs). In order to steer clear of this anthropocentrically defined meaning of moral agent, they claim that a higher level of abstraction must be adopted so that fewer details about moral agents need be considered.
Floridi and Sanders argue that the level of abstraction at which AMA can be conceived
needs to be upgraded by considering another three criteria: interactivity (interaction with
environments), autonomy (capability to change state independently), and adaptability
(learning to operate in a new way).14 Machine Learning is cited in support. Machine
Learning can interact with its environment, is autonomous and non-deterministic, and
can learn to change its model of operation to adapt to new circumstances.15 In light of
this upgraded, higher level of abstraction, a moral agent can be defined as an agent
that ‘is capable of morally qualifiable action’ which ‘can cause moral good or evil’.16
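To see how thin these criteria are in practice, consider a minimal sketch in Python. It is my own illustration rather than anything offered by Floridi and Sanders, and every name and value in it is hypothetical; the point is simply that an agent satisfying interactivity, autonomy, and adaptability at their higher level of abstraction can be remarkably simple.

```python
# A minimal, hypothetical sketch (my own, not Floridi and Sanders's) of an agent
# that meets their three criteria in a thin, behavioural sense.
import random


class AdaptiveAgent:
    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold  # internal state that governs behaviour
        self.history = []           # record of (observation, action) pairs

    def perceive(self, observation: float) -> str:
        """Interactivity: the action taken depends on input from the environment."""
        action = "act" if observation > self.threshold else "wait"
        self.history.append((observation, action))
        return action

    def tick(self) -> None:
        """Autonomy: the agent changes its own state without external intervention."""
        self.threshold += random.uniform(-0.01, 0.01)

    def adapt(self, feedback: float) -> None:
        """Adaptability: feedback alters the rule by which the agent operates."""
        self.threshold -= 0.1 * feedback


if __name__ == "__main__":
    agent = AdaptiveAgent()
    for observation in (0.2, 0.7, 0.9):
        print(observation, agent.perceive(observation))
        agent.tick()
        agent.adapt(feedback=0.3)
```

On Floridi and Sanders’s definition, such an agent would count as ‘capable of morally qualifiable action’ whenever its outputs can cause moral good or evil; the worry pursued below is what this thinness leaves out.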
By identifying computational artefacts as moral agents, Floridi and Sanders rightly
recognise their moral importance for human life. Needless to say, technology has a bearing on humans and changes the way humans live. Yet their methodology of recast-
ing the concept of moral agent invites criticism.
First, presupposed in Floridi and Sanders’s methodology is the computerisation of
morality. Taking the Method of Abstraction from Computer Science, they implicitly
equate the essence of morality with information processing. They believe that moral
observables can unequivocally and forthrightly reveal moral nature, and that moral models can be built in much the same way as scientific models. The essential difference between morality and science thus falls through the cracks. Morality is complex, and the moral rules that underlie moral observables may vary across time. Hence, computerising or modelling morality at levels of abstraction amounts to nothing other than a simplification of the agent’s moral life.
Second, Floridi and Sanders’s redefinition of moral agent is a non-starter since they
make one particular level of abstraction dominant among others and blur the boundaries
between levels of abstraction by adopting univocal senses of the criteria for formalising
models. As noted earlier, they draw on Machine Learning and stress its interactivity,
autonomy, and adaptability as criteria for justifying the idea of AMA. In this regard,
they are oblivious to the fact that these criteria mean different things when applied to AMAs and to HMAs. Joanna J. Bryson’s observation on the design of Machine
Learning can help us here:
The mere fact that part of the process of design has been automated does not mean that the system itself is not designed. The choice of an [Machine Learning] algorithm, the data fed into it to train it, the point at which it is considered adequately trained to be released, how that point is detected by testing, and whether that testing is ongoing if the learning continues during the system’s operation—all of these things are design decisions that not only must be made but also can easily be documented.17

13. Floridi and Sanders, ‘On the Morality of Artificial Agents’, p. 355.
14. Floridi and Sanders, ‘On the Morality of Artificial Agents’, pp. 357–58.
15. Floridi and Sanders, ‘On the Morality of Artificial Agents’, pp. 361–62.
16. Floridi and Sanders, ‘On the Morality of Artificial Agents’, p. 364.
17. Joanna J. Bryson, ‘The Artificial Intelligence of the Ethics of Artificial Intelligence: An
Introductory Overview for Law and Regulation’, in Markus D. Dubber, Frank Pasquale,
and Sunit Das (eds.), The Oxford Handbook of Ethics of AI (Oxford: Oxford University
Press, 2020), p. 6.
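Bryson’s point can be made concrete with a brief sketch, my own illustration rather than anything she provides; the field names and example values are hypothetical. It simply records, in one place, the design decisions she lists, each of which remains a human choice.

```python
# A hypothetical sketch (not Bryson's) of recording the design decisions she lists
# alongside a Machine Learning system; every value here is illustrative only.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class MLDesignRecord:
    algorithm: str                    # which learning algorithm was chosen
    training_data: str                # what data was fed into it to train it
    release_criterion: str            # when it counts as adequately trained
    test_procedure: str               # how that point is detected by testing
    continual_learning: bool          # does learning continue during operation?
    ongoing_monitoring: str = "none"  # if so, how is testing kept up?
    designers: list = field(default_factory=list)

    def to_json(self) -> str:
        """Document the decisions in a form that can be filed and audited."""
        return json.dumps(asdict(self), indent=2)


record = MLDesignRecord(
    algorithm="gradient-boosted decision trees",
    training_data="consented chat transcripts, 2020-2022 (illustrative)",
    release_criterion="validation accuracy above an agreed threshold",
    test_procedure="held-out test set reviewed by the design team",
    continual_learning=False,
    designers=["design team lead", "data steward"],
)
print(record.to_json())
```

Each field corresponds to a decision that ‘not only must be made but also can easily be documented’; documenting them keeps the designed character of the system, and the humans answerable for it, in view.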
18. Frances S. Grodzinsky, Keith W. Miller, and Marty J. Wolf, ‘The Ethics of Designing
Artificial Agents’, Ethics and Information Technology 10 (2008), pp. 115–21, https://doi.
org/10.1007/s10676-008-9163-9.
19. Deborah G. Johnson, ‘Computer Systems: Moral Entities But Not Moral Agents’, Ethics and
Information Technology 8 (2006), pp. 197–98.
20. Deborah G. Johnson and Keith W. Miller, ‘Un-making Artificial Moral Agents’, Ethics and
Information Technology 10 (2008), p. 129.
Computer systems and other artifacts have intentionality, the intentionality put into them by the
intentional acts of their designers. The intentionality of artifacts is related to their functionality.
Computer systems (like other artifacts) are poised to behave in certain ways in response to input.
… The output (the resulting behavior) is a function of how the system has been designed and the
input I gave it.23
Accordingly, the intentionality of artefacts rests with the human designer’s intention-
ality, which is actualised through intendings to act. At the same time, this inbuilt inten-
tionality of computers means that they can, to a certain degree, operate independently of
humans.
Johnson contends that computational artefacts do not have intendings to act precisely
because intendings to act arise from freedom. ‘The intending to act is the locus of
freedom; it explains how two agents with the same desires and beliefs may behave differ-
ently’.24 Computational artefacts are designed and produced in a standardised way and
are expected to operate in a specific way. As such, computational artefacts can never
be fully moral agents since they cannot be completely extricated from the human
designer’s intendings to act.
That said, Johnson stops short of denying computational artefacts any moral status and is of the view that computational artefacts belong to the human moral world.
Computer systems (and other artifacts) can be part of the moral agency of humans insofar as they
provide efficacy to human moral agents and insofar as they can be the result of human moral
agency. In this sense, computer systems can be moral entities but not alone moral agents.25
Johnson unfolds this viewpoint elsewhere in two respects. Firstly, being part of human
moral agency means that computational artefacts should always be ‘conceptually tethered
to human agents’ in such a sense that it is humans who create, design, and use computa-
tional artefacts.26 Secondly, computational artefacts as moral entities have surrogate
agency. Surrogate agents are employed to perform tasks on behalf of and in the interests of clients. Human surrogate agents and computational surrogate agents differ in that
the former ‘have a first-person perspective independent of their surrogacy role’,
whereas computational artefacts ‘do not have interests, properly speaking, nor do they
have a self or a sense of self’.27 Hence, as surrogate moral agents, computational artefacts
only pursue the interests of their human users. As such, humans should constantly take on
moral responsibility for the operations of their artificial surrogate agents.
Johnson rightly guards morality against computerisation, since morality is more complex than model construction. Furthermore, she cautions us against the view that technology and computational artefacts are morally neutral. In fact, artefacts are embedded with moral values while being designed. Nonetheless, Johnson leaves two points unclear. In what metaphysical sense shall we understand the tether between computational artefacts and humans? This question concerns the metaphysical foundation for drawing the distinction between AMAs and HMAs. And is it possible to understand the connection between AMAs and HMAs in a non-utilitarian sense? Johnson meticulously delineates how humans are morally intertwined with computational artefacts while designing and using artefacts for utilitarian purposes, that is, human intendings to act through artefacts. However, the term ‘surrogate’ conveys the impression that the only sense in which humans and AMAs are connected is utilitarian. In this light, it seems impossible to uncover the ontological connection between what the human being is and what the AMA is, and the divide between HMAs and AMAs overwhelms their connection and resemblance.
Mahi Hardalupas proposes the idea of partial moral agency, which keeps the close ties between HMAs and AMAs while differentiating these two kinds of moral agenthood. She suggests four conditions for judging full moral agenthood: (1) the action can be evaluated by moral rules; (2) the agent acts according to moral rules; (3) the agent could follow different rules; and (4) the agent has moral motivators, that is, it believes either the action or the rules it follows to be moral.28 Machines and computational artefacts are currently partial moral agents because they can fulfil only some of these conditions, especially the first three.
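Read schematically, Hardalupas’s four conditions invite a mechanical test of agenthood. The following sketch is my own rendering, not hers, and the classification logic is only illustrative; it shows how such a test might look, and hints at why it risks reducing moral agency to rule-checking, a worry taken up in the next paragraph.

```python
# A hypothetical, schematic rendering (mine, not Hardalupas's) of classifying an
# agent by which of her four conditions it satisfies.
CONDITIONS = {
    1: "the action can be evaluated by moral rules",
    2: "the agent acts according to moral rules",
    3: "the agent could follow different rules",
    4: "the agent has moral motivators (believes the action or rules to be moral)",
}


def classify(conditions_met: set[int]) -> str:
    """Return a moral-agenthood label given the condition numbers satisfied."""
    if conditions_met >= set(CONDITIONS):
        return "full moral agent"
    if conditions_met:
        return "partial moral agent"
    return "not a moral agent"


# On this reading, current computational artefacts satisfy at most the first
# three conditions and therefore come out as partial moral agents.
print(classify({1, 2, 3}))     # partial moral agent
print(classify({1, 2, 3, 4}))  # full moral agent
```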
Hardalupas does not unfold the four conditions; neither does she discuss whether
humans can create machines that are able to fulfil all of these conditions in the future.
It is also unclear whether her four conditions would eventually result in a rule-based mor-
ality, a variant of computerisation of morality. That said, the idea of partial moral agency
is a better conceptual apparatus than surrogate agency through which to construe the fact
that computational artefacts are part of human moral agency. This is so for two reasons.
Firstly, unlike ‘surrogate agency’, which implies more separation than connection, ‘partial moral agency’ underscores that AMAs are part of human moral agency. Secondly,
‘partial moral agency’ stresses that AMAs can never escape moral responsibility. I
proceed to tease out the concept of partial moral agency from a theological perspective.
By doing so, the two ambiguous points in Johnson’s thought can be clarified.
27. Deborah G. Johnson and Thomas M. Powers, ‘Computers as Surrogate Agents’, in Jeroen van
den Hoven and John Weckert (eds.), Information Technology and Moral Philosophy
(Cambridge: Cambridge University Press, 2008), p. 257, https://doi.org/10.1017/CBO9780511498725.014.
28. Mahi Hardalupas, ‘A Systematic Account of Machine Moral Agency’, in Vincent C. Müller
(ed.), Philosophy and Theory of Artificial Intelligence 2017 (Cham: Springer, 2018), p. 253.
Theology of Archetype-Ectype
In the Reformed tradition, the ideas of archetype and ectype are not esoteric but appeared
in tandem with the rise of Reformed prolegomena.29 From the sixteenth century onwards,
the archetype-ectype thinking occupied a significant place in Reformed theology and
other Protestant traditions.
Franciscus Junius (1545–1602), who studied in Geneva with John Calvin (1509–
1564), was the first Protestant theologian to distinguish between archetypal theology
(theologia archetypa) and ectypal theology (theologia ectypa). Junius contended that
while archetypal theology refers to God’s self-knowledge, ectypal theology means all
knowledge of God revealed to creatures.30 Moreover, he stressed that the distinction
between the archetypal and the ectypal rests in the qualitative distinction between the
Creator and creatures.
For this one [ectypal theology] is created, it is dispositional; nor is it absolute except in its own
mode, but rather finite, discrete, and divinely communicated. It is, as it were, a true and definite
image of that theology [archetypal theology] which we have explained is uncreated, essential or
formal, most absolute, infinite, at once complete, and incommunicable.31
In this passage, Junius explicitly introduces a crucial rationale that underlies the dis-
tinction between archetypal and ectypal theology, that is, the ontological distinction
between the created and the uncreated. His idea of archetype-ectype, along with this
rationale, was formative to Protestant theology. Protestant orthodox theologians, includ-
ing both Lutheran and Reformed theologians, took note of the idea of archetype-ectype
while developing their own theology.32
However, most theologians of the post-Reformation era wrote of the ideas of arche-
type and ectype in theological prolegomena. Francis Turretin (1623–1687) was one of
the few theologians who deployed the archetype-ectype thinking in constructing theo-
logical anthropology. In his Institutio Theologiae Elencticae (1679–1685), one of the
greatest works of dogmatics in the Reformed tradition, Turretin contends:
29. Willem J. Van Asselt, ‘The Fundamental Meaning of Theology: Archetypal and Ectypal
Theology in Seventeenth-century Reformed Thought’, Westminster Theological Journal
64.2 (2002), pp. 320–21. On the background to the idea of archetypal and ectypal theology,
see Richard A. Muller, Post-Reformation Reformed Dogmatics, Volume One: Prolegomena
to Theology, 2nd edn (Grand Rapids, MI: Baker, 2003), pp. 225–28.
30. Franciscus Junius, A Treatise on True Theology: With the Life of Franciscus Junius, trans.
David C. Noe (Grand Rapids, MI: Reformation Heritage Books, 2014), pp. 107–13.
31. Junius, A Treatise on True Theology, p. 117.
32. The Lutheran theologian John Gerhard uses the idea of archetypal and ectypal theology to
articulate what true theology is; Johann Gerhard, On the Nature of Theology and on
Scripture, ed. Benjamin T. G. Mayes, trans. Richard J. Dinda (Saint Louis, MO: Concordia,
2009), pp. 22–24; for a helpful analysis of Gerhard’s idea of archetypal and ectypal theology,
see Robert D. Preus, A Study of Theological Prolegomena, The Theology of Post-Reformation
Lutheranism, I (St. Louis, MO: Concordia, 1970), pp. 112–14.
image signifies either the archetype (archetypon) itself (after whose copy something is made) or
the things themselves in God (in the likeness of which man was made); or the ectype itself,
which is made after the copy of another thing, or the similitude itself (which is in man and
the relation to God himself). In the former sense, man is said to have been made in the
image of God; in the latter, however, the very image of God.33
It is clear that Turretin correlates the imago Dei with the notion of archetype-ectype so
as to articulate an ontological distinction yet connection between God and human beings.
Even so, he does not expand on how this ontological implication underpins the being of
humans.
The turn-of-the-century Dutch theologian Herman Bavinck (1854–1921) took a
further step to use the ontological implication of the archetype-ectype thinking to
account for the being of humans. He spells out the archetype-ectype thinking in conjunc-
tion with the imago Dei. First of all, Bavinck argues that the whole human being, encom-
passing both the soul and the body, does not have or bear the imago Dei but rather is
the imago Dei.34 In order to flesh out the ontological meaning of ‘is’, he draws on
the archetype-ectype thinking: ‘“Image” expresses that God is the archetype and the
human being is the ectype; “likeness” adds that this image corresponds in all parts to
the original’.35 It is clear that Bavinck trades on the archetype-ectype thinking to high-
light the ontological chasm between God and humans. By the notion of archetype-ectype,
he takes care not to place the human being on a par with God. He argues elsewhere that
God is ‘the imago increate or archetype’ and that the human being is ‘the imago
creata or ectype’.36
Bavinck reformulates this ontological distinction between the archetype and the
ectype with ‘being’ and ‘becoming’. He asserts: ‘The idea of God itself implies immut-
ability. … He cannot change for better or worse, for he is the absolute, the complete, the
true being. Becoming is an attribute of creatures, a form of change in space and time’.37
To Bavinck’s mind, human becoming is related to human morality insofar as the imago
Dei refers primarily to the spiritual and moral quality of human nature, albeit that the
imago Dei includes both spiritual and physical dimensions.38 As God’s ectype, human
beings should continue to become moral in order that they can correspond in all parts
to God by displaying God’s attributes. In this vein, the ontological chasm between
God and humans is concomitant with their moral connection and resemblance.
The fact that the human being is the imago Dei and the ectype that corresponds in all
parts with God also means that the human being does emulate God’s creativity in an
33. Francis Turretin, Institutes of Elenctic Theology, ed. James T. Dennison Jr, trans. George
Musgrave Giger, 3 vols. (Phillipsburg: P&R Publishing, 1992–1997), 5.10.3.
34. Herman Bavinck, Reformed Dogmatics, Volume 2: God and Creation, ed. John Bolt, trans.
John Vriend (Grand Rapids, MI: Baker, 2004), p. 530.
35. Bavinck, God and Creation, p. 532.
36. Herman Bavinck, Gereformeerde Dogmatiek, Tweede Deel, 4th edn (Kampen: J. H. Kok,
1928), p. 493.
37. Bavinck, God and Creation, p. 158; emphasis added.
38. Bavinck, God and Creation, pp. 549–54.
ectypal sense. To put this viewpoint in Philip Hefner’s words, human creativity exhibits
that the human being is God’s created co-creator ‘whose purpose is to be the agency,
acting in freedom, to birth the future that is most wholesome for the nature that has
birthed us’.39 It is worth noting that, given the ontological chasm between the archetype
and the ectype, there must be essential differences between the creative activities of God
and humans––that is, God creates out of nothing, but humans create out of something.
Viewed in this light, human artefacts are always derived from what God has already
created. The qualitative distinction between divine creation and human creation entails an essential difference between humans as the consequence of God’s creation and artefacts as the consequence of human creation. As will be seen, this distinction between the consequences of divine and human creation lays a metaphysical and moral foundation for the concept of the AMA’s partial moral agency.
To sum up, this archetype-ectype thinking shows the inseparable bond between ontol-
ogy and morality. Being God’s ectype carries the connotations of both simulating God’s
creation and becoming moral throughout human life. As such, human action, including
human creation in an ectypal sense, bears moral implications.
Two syllogisms concerning artificial moral agency can be drawn from this account. The first is derived from the distinction between the consequences of divine and human creation:
1. the major premise: human moral agency is the consequence of God’s creation;
2. the minor premise: the moral agency of artefacts is the consequence of human cre-
ative work;
3. the conclusion: artificial moral agency differs from human moral agency due to
the essential difference between divine and human creative work.
39. Philip Hefner, The Human Factor: Evolution, Culture, and Religion (Minneapolis: Fortress,
1993), p. 27. A criticism has been levelled against ‘created co-creator’ in that this idea seems
to blur ontological boundaries between the divine and the human; see, for example, Gregory
R. Peterson, ‘The Created Co-Creator: What It Is and Is Not’, Zygon: Journal of Religion and
Science 39.4 (2004), p. 829. A detailed discussion on this is beyond the scope of this article.
Yet Hefner makes it clear that ‘the co-creator has no equality with God the creator’; see The
Human Factor, pp. 38–39.
This syllogism takes issue with Floridi and Sanders’ computerisation of morality through
modelling and levels of abstraction in that the latter methodology is rooted in the convic-
tion that AMAs equate to HMAs.
The second syllogism is derived from human creative work and its moral significance
in relation to God’s creation:
1. the major premise: The human being is the ectype of God and thus imitates God’s
creation;
2. the minor premise: God’s creation is coupled with the mediation of morality to the
human being as his ectype;
3. the conclusion: Human creation of computational artefacts is concomitant with
the mediation of morality.
This syllogism tallies with Johnson’s argument that AMAs should conceptually be teth-
ered to HMAs. Yet, admittedly, this syllogism expands and enriches the meaning of ‘teth-
ered’. That is, the mediation of morality in the human creation of computational artefacts
conveys a connection between AMAs and HMAs that is dynamic rather than mechanical. At
the same time, this syllogism is not content with the AMA as a surrogate agent. Rather,
the AMA mediates human morality.
In light of these two syllogisms, we can unpack the idea of partial artificial moral
agency in three aspects. First, partial artificial moral agency is predicated upon the fact
that the computational artefact is the ectype of humanity. The meaning of ectype epito-
mises how computational artefacts take shape in the human mind. Anne Foerst puts it
well:
Researchers under the engineering goal who attempt to construct ‘smart’ gadgets have to use a
model of intelligence that is somehow familiar to them; the obvious choice would be them-
selves, as they know their own intelligence best. Choosing oneself as a model of intelligence
for one’s project influences the whole process of construction, and self-understanding and
technological success reinforce each other.40
This is all the more so in the creation of AI (robots). In the 1980s, researchers grew dissatisfied with virtual AI systems and sought instead to design embodied AI, such as humanoid AI robots. This shift in AI research was partly due to the failure of virtual AI systems at the time to deal with object manipulation, sensation, and locomotion. As such, physical embodiment became necessary for the performance of such functions by AI systems. Needless to say, human embodiment is the most important model for designing embodied AI capable of interacting with its environment.
The idea of computational artefacts as the ectype of humanity means that humans
mediate their moral values into these artefacts while creating them. As Hefner notes, tech-
nology is a mirror of humanity, showing the human striving for survival, the reality of human
40. Anne Foerst, God in the Machine: What Robots Teach Us about Humanity and God
(New York: Plume, 2005), p. 67.
nature, the human desire for the other world, and human values.41 Understanding arte-
facts in the ectypal sense indicates that computational artefacts are not merely part of
human moral agency but also the extensions of human morality. As will be seen, this
extension implies that human pastoral care can be mediated through AI-powered pastoral
carebots. It is in this sense of extension that human-machine relationships can be properly
understood. For example, the desire for artificial companions at bottom exhibits the lack
of human companions. Seen from this perspective, the extension of human moral agency
in AMAs also helps us to explore the role of AI in pastoral care. We will turn to this
subject later.
Second, artificial moral agency is partial because it is ectypal, limited, and conse-
quently only related to particular moral issues. A clarification needs to be made here.
Conjoining the ectypal and the partial (limited) never implies that humans as the
ectype of God have only partial rather than full moral agency. As noted earlier, theologically speaking, full artificial moral agency would mean that human creation is on a par with God’s creation out of nothing. Artificial moral agency as partial shows that humans
cannot fully mediate their moral agency to computational artefacts in their creative
work in the same way as God did. Partial AMA reveals the limitations of human creative
work. One of the limitations of human creation is that the personal nature of human mor-
ality cannot be programmed into artificial moral agency. Robert Sparrow, Professor of
Philosophy based in the Monash Data Futures Institute in Australia, draws a distinction
between scientific and moral matters:
Scientific questions are objective in the familiar sense that the true value of scientific claims does
not depend on who is making them. This means that such questions are fundamentally imper-
sonal. … [E]thical decisions are tied to particular people—they are decisions for them in a non-
contingent sense.42
Computational artefacts are standardised and, therefore, are unable to deal with con-
textual variables and different human reactions across time. Likewise, as will be
unpacked later, AI-powered pastoral carebots are incapable of addressing all personal
dilemmas. Any attempt to offer a standard ethical decision for all ethical dilemmas is oblivious to the personal nature of moral issues and thus doomed to fail.
Third, partial artificial moral agency as ectypal brings to light the fact that it is always
the HMA who is responsible for ethical decisions by virtue of the ontological connection
between the archetype and the ectype. This ontological connection supplies what is needed to respond to the controversial notion of the ‘responsibility gap’. In his well-known essay, Andreas Matthias turns our attention to automated machines and AI systems (especially Machine Learning) that do not need human intervention. He argues that the automated operation of computational artefacts casts doubt on our understanding of moral responsibility.
41. Philip Hefner, ‘Technology and Human Becoming’, Zygon: Journal of Religion and Science
37.3 (2002), pp. 657–60.
42. Robert Sparrow, ‘Why Machines Cannot Be Moral’, AI & Society 36 (2021), p. 689.
Now it can be shown that there is an increasing class of machine actions, where the traditional
ways of responsibility ascription are not compatible with our sense of justice and the moral
framework of society because nobody has enough control over the machine’s actions to be
able to assume the responsibility for them. These cases constitute what we will call the respon-
sibility gap.43
Johnson’s concept of surrogate agency cannot settle the question of whether a surrogate agent may slip out of control. By contrast, in light of the archetype-ectype thinking, this ontological connection emphasises the partial agency of the surrogate AMA such that the responsibility gap is closed.44 In this vein, human pastoral caregivers cannot escape pastoral responsibility. We will return to this subject shortly. Human agents cannot escape moral responsibility by deploying computational artefacts to make ethical decisions on their behalf. It is always the HMA who designs and uses computational artefacts to address ethical questions.
This triple moral implication, derived from the above two syllogisms, shows that the
idea that the AMA has partial and ectypal moral agency opens up a theological way to
deal with moral questions related to the extensive applications of computational artefacts
in human daily lives. One notable application is AI-powered caregiving practice. In what follows, I shall use Christian pastoral care as a case to illustrate how the AMA is morally involved in human life.
43. Andreas Matthias, ‘The Responsibility Gap: Ascribing Responsibility for the Actions of
Learning Automata’, Ethics and Information Technology 6 (2004), p. 177.
44. My stance is not geared toward an optimistic attitude toward emerging technology. Daniel
Tigard observes that techno-optimists would like to bridge the responsibility gap since
they ‘would prefer to harness the newfound benefits of technology and proceed with its
deployment’. Daniel W. Tigard, ‘There is No Techno-Responsibility Gap’, Philosophy &
Technology 34 (2021), p. 590; https://doi.org/10.1007/s13347-020-00414-7.
45. William Young, ‘Virtual Pastor: Virtualization, AI, and Pastoral Care’, Theology and Science
20.1 (2022), pp. 6–22.
Amid ongoing debates over the deployment of carebots in caregiving practices and healthcare, the idea of ectypal and partial AMA can offer three guiding principles for coping with moral issues in relation to the deployment of AI in religious pastoral care. I proceed to focus on carebots in Christian pastoral care.
The three principles of AI-powered Christian pastoral care are raised, respectively, in light of the three observations made earlier on partial artificial moral agency.
The first principle is that Christian communities need to be ready to deploy AI-powered carebots into Christian pastoral care, since such carebots extend the HMA’s agency in pastoral caregiving practices through human ectypal creativity in designing them. In other words, pastoral carebots as the ectype of human pastoral caregivers extend human agency
in pastoral care. To be sure, AI-driven systems can liberate ministers from some routine
work of pastoral care. For example, some Christian believers may expect ministers to
send out Bible verses every day so that they can be strengthened to endure occasional troubles and difficulties. We can imagine an AI-driven automated system capable of sending a daily Bible verse that responds to one’s troubles or to topical events that are likely to trouble us (e.g., the COVID-19 pandemic). In this way, ministers can focus more attention on others’ critical needs for pastoral care, say, pastoral care at the end of life.
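A minimal sketch of such a system is given below. It is hypothetical throughout: the keywords, verse list, and delivery function are illustrative assumptions rather than a description of any existing service. The point is how modest the automation is, and how much of the weightier work of pastoral care it leaves to human ministers.

```python
# A hypothetical sketch of the automated verse-sending system imagined above.
# Keywords, verse selections, and the delivery mechanism are illustrative only.
VERSES = {
    "anxiety": "Do not worry about anything... (Phil. 4:6)",
    "illness": "The LORD sustains them on their sickbed... (Ps. 41:3)",
    "grief": "Blessed are those who mourn... (Matt. 5:4)",
}
DEFAULT_VERSE = "The LORD is my shepherd... (Ps. 23:1)"


def select_verse(concern: str) -> str:
    """Pick a minister-curated verse whose keyword appears in the stated concern."""
    concern = concern.lower()
    for keyword, verse in VERSES.items():
        if keyword in concern:
            return verse
    return DEFAULT_VERSE


def daily_send(concerns: dict[str, str]) -> None:
    """Send one verse per person per day; a human minister reviews the output."""
    for member, concern in concerns.items():
        print(f"To {member}: {select_verse(concern)}")  # stand-in for delivery


daily_send({"A.": "anxiety about exams", "B.": "recovering from a long illness"})
```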
This principle carries a crucial implication: since the AMA is the extension of the HMA, pastoral care provided by AI-driven systems must have an impact on human ministers, all the more so because AI-powered pastoral carebots are designed to perform and augment caregiving practices after the model of human pastoral caregivers. In examining caregiv-
ing practices of carebots, Shannon Vallor reminds us that caregiving is not merely import-
ant for care-receivers but also has ethical significance for caregivers precisely because
caregiving practices are embedded with moral goods.46 A case in point is reciprocity
in caregiving practices. Vallor maintains that we should consider reciprocity a virtue,
‘for understanding how to reciprocate well, in the right ways, at the right times, and as
appropriate to particular circumstances and people, is part of what it means to become
a good person’.47 In this light, reciprocity as a virtue means that the caregiver’s morality
is being shaped through caregiving practices. The same holds in Christian pastoral care. The
debates over whether or not AI-driven systems can be deployed into pastoral care
often revolve around care-receivers. Yet it is worth noting that pastoral caregivers them-
selves are being morally shaped in the course of caregiving practices. Paul the apostle
writes, ‘Rejoice with those who rejoice, weep with those who weep’ (Rom. 12:15,
NRSV). Christian pastoral care emphasises more ‘rejoice and weep with’ than ‘rejoice
and weep’ itself. In pastoral caregiving practice, caregivers and care-receivers are
united. Seen from this perspective, human pastoral caregivers cannot be completely replaced when AI systems are deployed in Christian pastoral care. It is always the HMA
as a pastoral caregiver who performs her pastoral and moral actions toward
care-receivers.
46. Shannon Vallor, ‘Carebots and Caregivers: Sustaining the Ethical Ideal of Care in the
Twenty-First Century’, Philosophy & Technology 24.3 (2011), pp. 251–56.
47. Vallor, ‘Carebots and Caregivers’, p. 257.
48. Aimee Van Wynsberghe, ‘Designing Robots for Care: Care Centred Value-Sensitive
Design’, Science and Engineering Ethics 19.2 (2013), p. 408.
49. Van Wynsberghe, ‘Designing Robots for Care’, p. 411.
50. Van Wynsberghe, ‘Designing Robots for Care’, pp. 415–16.
51. Van Wynsberghe, ‘Designing Robots for Care’, p. 424.
Moreover, entities which do not understand the facts about human experience and mortality that
make tears appropriate will be unable to fulfil this caring role. Sometimes the only appropriate
response to another’s suffering is the acknowledgement that we too share these frailties, as for
instance, when our friend’s suffering moves us to tears. Entities which do not share these frail-
ties are therefore incapable of responding appropriately to them.52
Sparrow and Sparrow do not deny the possibility of human-level carebots altogether
but leave this question open. I am, however, less convinced that carebots built on algorithms and silicon-based systems are capable of sharing human mortality and frailties.53
This unique bodily feature of human caregiving practices brings to light the partiality
of the carebot as a caregiver and an AMA, laying emphasis on the responsibility that
human caregivers should take on in pastoral care. There is no responsibility gap in pas-
toral care. In this respect, Amy Michelle DeBaets reminds us that the Christian idea of
love––which underscores the mutuality in love––helps us conceive of a carebot not as
the sole caregiver. Rather, carebots should be designed to keep human-and-human rela-
tionships in healthcare and to maintain the mutual love between caregivers and
care-receivers.54 Viewed in this light, the carebot’s agency in pastoral caregiving is
partial precisely because it is the mutual love between the caregiver and the care-receiver
that needs to be nurtured through pastoral care. Whilst considering the role of carebots in
Christian pastoral care, we should ponder how the so-called responsibility gap is closed
by such mutual love and how the HMA should not escape but rather take on the respon-
sibility to provide pastoral care for others.
Conclusion
What is the moral status of computational artefacts? This article has articulated a theo-
logical account of AMA. It rejects an optimistic position that places the AMA and the HMA in the same category. At the same time, it declines to dismiss the AMA altogether. The theology of archetype-ectype offers an ontological lens through which to grasp the moral connection between the AMA and the HMA. That is, the AMA is the ectype of, and ontologically connected with, the HMA, and so artificial moral agency is partial and human moral values are mediated and extended through computational artefacts.
This opens up a vista for further discussions over the role of computational artefacts in
human moral life. In particular, I use Christian pastoral care to illustrate that human
52. Robert Sparrow and Linda Sparrow, ‘In the Hands of Machines? The Future of Aged Care’,
Minds and Machines 16 (2006), p. 154.
53. Jobst Landgrebe and Barry Smith’s latest study provides one of the most cogent arguments
against human-level AI, showing the essential distinction between humans and AI as well as
computational artefacts; Jobst Landgrebe and Barry Smith, Why Machines Will Never Rule
the World: Artificial Intelligence without Fear (New York: Routledge, 2023).
54. Amy Michelle DeBaets, ‘The Robot Will See You Now: Reflections on Technologies in
Healthcare’, in Scott A. Midson (ed.), Love, Technology and Theology (London: T&T
Clark, 2020), pp. 93–108.
pastoral care is extended through the AMA’s limited pastoral caregiving practices and
that the HMA is always responsible for pastoral care. Needless to say, further steps
need to be taken to explore the deployment of AMAs into Christian pastoral care. It
should be recognised, however, that the idea of ectypal and partial artificial moral
agency offers some guiding principles for the deployment of computational artefacts
into pastoral care as well as other spheres of human life.
Acknowledgements
A short version of this article was presented at the 2022 annual conference of the Society for the Study of Christian Ethics at Westcott House, Cambridge. I am indebted to the Alan Turing Institute for a Post-doctoral Enrichment Award, which allowed me to accomplish this study. I am also grateful for
the feedback from the two anonymous reviewers.
Funding
The author received a Post-doctoral Enrichment Award from the Alan Turing Institute to conduct this
research.
ORCID iD
Ximian Xu https://orcid.org/0000-0002-3159-1224