Samir Chopra, Laurence F. White, A Legal Theory for Autonomous Artificial Agents, Chapter 5: Personhood for Artificial Agents
persons depends then, in part, on how much flexibility we take the law to have in decisions regarding its ontology.
Typically, a legal person has the capacity to sue and be sued, to hold property in her or its own name, and to enter contracts. Legal persons also enjoy various immunities and protections in courts of law, such as the right to life and liberty, however qualified. Such a statement is not typically found at a single location in a particular legal system's code of legislation, but rather describes the way the term person functions within that legal system, and is in consonance with the way the law commonly views those subject to it.
Not all legal persons have the same rights and obligations. Typically, to fully enjoy legal rights and be fully subject to legal obligations, one must be a free human of the age of majority and sound mind (i.e., be sui juris [Garner 2004]). Some rights—such as the right to marry, to drive or vote, to purchase alcohol or tobacco, or to sue otherwise than by means of a parent or guardian—depend on a person either being human, or if human having attained an age of majority (which varies across jurisdictions and subject matter). For example, corporations cannot marry or (usually) vote; children cannot vote or purchase alcohol; but even new corporations can purchase tobacco. The enjoyment of rights and capacities by corporations is restricted by statute or case law: as well as being able generally only to act through their agents, they have the power to transact business only in fulfillment of the objects specified in their charter or other constitutional documents, and any other action is in theory void or voidable for being ultra vires.4
Considering artificial agents as legal persons is, by and large, a matter of decision rather than discovery, for the best argument for denying or granting artificial agents legal personality will be pragmatic rather than conceptual: the law might or might not require this change in status given the functionality and social role of artificial agents. But pragmatism can be wedded to normativity: the case for artificial agents' legal personality can come to acquire the aura of an imperative depending on the nature of our relationships with them, and the roles they are asked to fulfill in our future social orderings. Thus,
inflected by the subtle and long-held convictions and beliefs of the presiding judges or concerned legislators (Menand 2002, 36). For the law, "[N]o single principle dictates when the legal system must recognize an entity as a legal person, nor when it must deny legal personality" (Allen and Widdison 1996, 35).
Legal scholars have identified a raft of considerations—pragmatic and philosophical—that the law might use in its answer to the question of whether to accord legal personality to a new class of entity. Some theorists reject the need for an analysis based on some metaphysically satisfactory conception of the person; yet others claim humanity (or membership in our species, or satisfaction of metaphysical and moral criteria) is the basis of moral and legal claims on others and the basis of legal personality (Naffine 2003).
Those theorists, such as legal positivists, who consider important examples of legal personality where the law does not require the putative person to be human or even conscious, reflect the classical meanings of "person" as a mask that allows an actor to do justice to a role (Calverley 2008); other theorists, perhaps informed by a natural law sensibility, seek to assimilate legal personality to the philosophical notion of a person (Naffine 2003). Thus in considering personhood for artificial agents it is crucial to keep in mind the kind of personality under consideration.
Arguments for advancing personhood for artificial agents need not show how they may function as persons in all the ways that persons may be understood by a legal system, but rather that they may be understood as persons for a particular purpose or set of legal transactions. For the law does not always characterize entities in a particular way for all legal purposes. For instance, a particular kind of object may be considered property for the purposes of the Due Process Clause of the Fourteenth Amendment to the U.S. Constitution, and yet not be considered property that can be passed by will.
So, too, an entity might be considered a person for some legal purposes and not for others. And being a nonperson for some legal purposes does not automatically entail the complete nonpossession of legal rights. While at English common law, for example, before the reforms of the nineteenth century,6 a married woman was not, for most civil-law purposes, accorded legal personality separate from that of her husband,7 nevertheless, for ecclesiastical law purposes, she already had full rights to sue and be sued in her own name, and in addition had been susceptible to criminal prosecution in the ordinary way.8 Similarly, in the Visigothic
code, slaves, who under Roman law, from which the Visigothic code derived, were not considered legal persons, were nevertheless entitled to bring complaints against freemen in certain circumstances, apparently on their own account and not just on account of their masters.9 U.S. corporations enjoy some of the rights of persons but not all (they may, for instance, own stock, but not adopt children). Or the criminal code may identify a different set of persons than inheritance law, which might include fetuses as persons.10
At first sight the Restatement (Third) of Agency stands in the way of any argument that an artificial agent could be a person. It states: "To be capable of acting as a principal or an agent, it is necessary to be a person, which in this respect requires capacity to be the holder of legal rights and the object of legal duties. Accordingly, it is not possible for an inanimate object or a nonhuman animal to be a principal or an agent under the common-law definition of agency."11 But as noted in chapter 2, despite appearances, the Restatement cannot be understood as shutting the door on legal agency for artificial agents. The discussions in this chapter should serve to show that it does not present a fatal objection to personhood for them either.
motivation to obey the law. That motivation could be one built into the agent's basic drives, or dependent on other drives (such as the desire to maximize wealth, which could result in appropriate behavior, assuming the law is reliably enforced by monetary fines or penalties). Rational artificial agents that act so as to optimize their goal-seeking behavior would presumably not indulge in the self-destructive behavior of an agent that disobeys the punitive force of legal sanctions. On a construal of understanding and obedience of legal obligations as rational behavior, this capacity appears amenable to technical solutions.
Work in deontological logics or logics of obligations suggests the possibility of agent architectures that use as part of their control mechanisms a set of prescribed obligations, with modalities made available to the agent under which some obligations are expressed as necessarily to be followed; others as only possibly to be followed (von Wright 1951; Hilpinen 2001; Pacuit, Parikh, and Cogan 2006). These obligations can be made more sophisticated by making them knowledge-dependent, such that an agent is obligated to act contingent on its knowing particular propositions (Pacuit, Parikh, and Cogan 2006). If these propositions are a body of legal obligations, we may speak coherently of the agent taking obligatory actions required by its knowledge of its legal obligations.
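As a toy illustration of the idea of knowledge-dependent obligations, consider the following Python sketch. It is our own construction, not drawn from the cited formalisms, and all names in it are invented: an obligation binds the agent only once it knows all the propositions that trigger it, and only obligations marked with the "necessary" modality compel action.

```python
# Minimal sketch of knowledge-dependent obligations: an obligation
# binds only when the agent knows its trigger propositions.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Obligation:
    trigger: frozenset      # propositions the agent must know
    action: str             # the action then required
    strength: str           # deontic modality: "necessary" or "possible"

@dataclass
class NormGovernedAgent:
    knowledge: set = field(default_factory=set)
    obligations: list = field(default_factory=list)

    def learn(self, proposition: str) -> None:
        self.knowledge.add(proposition)

    def required_actions(self) -> list:
        # An obligation compels action only when all its trigger
        # propositions are known and its modality is "necessary".
        return [o.action for o in self.obligations
                if o.trigger <= self.knowledge and o.strength == "necessary"]

agent = NormGovernedAgent()
agent.obligations.append(
    Obligation(frozenset({"contract_formed", "payment_due"}),
               "remit_payment", "necessary"))
agent.learn("contract_formed")
assert agent.required_actions() == []       # obligation not yet triggered
agent.learn("payment_due")
assert agent.required_actions() == ["remit_payment"]
```

A fuller treatment would represent the deontic modalities formally, but even this sketch shows how "obligated to act contingent on knowing particular propositions" can be given a concrete operational reading.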
Similar capabilities are sought to be realized in so-called explicit ethical agents (Arkoudas and Bringsjord 2005; Moor 2006; M. Anderson and S. L. Anderson 2007). Agents similar to these could in principle be capable of acting in accordance with norms that act as "global constraints on evaluations performed in the decision module" (Boman 1999), conferring duties on other agents (Gelati, Rotolo, and Sartor 2002), and functioning in an environment governed by norms (Dignum 1999). More ambitious efforts in this direction include agents designed to function in a domain akin to Dutch administrative law, and "able to participate in legal conversation, while . . . forced to stick to [legal] commitments and conventions" (Heesen, Homburg, and Offereins 1997).
At the risk of offending humanist sensibilities, a plausible case could be made that artificial agents are more likely to be law-abiding than humans because of their superior capacity to recognize and remember legal rules (Hall 2007). Artificial agents could be highly efficient act utilitarians, capable of the kinds of calculations that that moral theory requires (M. Anderson and S. L. Anderson 2007). Once instilled with knowledge of legal obligations and their ramifications, they would need to be "upgraded" to reflect changes in laws; more sophisticated architectures could conceivably
Susceptibility to Punishment
These considerations suggest another argument against legal personality for artificial agents: given their limited susceptibility to punishment, how could the legal system sanction an errant artificial agent? One answer can be found by considering the modern corporation, which is accorded legal personality, although it cannot be imprisoned, because it can be punished by being subjected to financial penalties. Artificial agents that controlled money independently would be susceptible to financial sanctions, for they would be able to pay damages (for negligence or breach of contract, for example) and civil penalties or fines for breach of the (quasi-)criminal law from their own resources.
In principle, artificial agents could also be restrained by purely technical means, by being disabled, or banned from engaging in economically rewarding work for stipulated periods. Conceivably, those who engaged them in such work could be punished, much as those who put children to work can be subjected to criminal penalties. Deregistration of an agent or confiscation of its assets might also be used as a sanction, just as winding-up is used to end the life of companies in certain situations, or confiscation
Contract Formation
Moving on from punishment, we note that artificial agents can be capable of manifesting the intention to form contracts. When we interact with artificial agents that operate shopping websites, we are able to form contracts because those agents, in a systematic and structured way, make and accept offers and acceptances of goods and services in exchange for money. Legal personality might not be necessary in order to explain, in doctrinal terms, how this behavior gives rise to a contract between the user and the operator of the artificial agents, but there is no doubting the ability of artificial agents to bring about the formation of contracts.
Artificial agents will face deeply rooted skepticism about whether such seemingly inanimate objects could ever meet the conditions for personhood in the broader, philosophical sense. Objections of this kind are irrelevant in respect of dependent legal personality such as is possessed by corporations, ships, or temples (or, perhaps, living human beings not sui juris, such as children or those not of sound mind). These objections, however, relate squarely to the possibility of independent legal personality.
Philosophical understandings of the moral person often inform an intuition in the legal context that "natural" legal persons are mature adult humans and the rest mere "legal fictions." Suggestions that a particular entity's legal personality is a legal fiction are often just arguments against the possibility of its moral personality; this is best displayed in the case of corporations, readily accepted in law as persons, but less readily so in the philosophical sense. Philosophical theorizing about persons attempts, thus, to point out human distinctiveness from mere things, for such a distinction leads to the concept of persons as objects of ethical discourse and worthy of respect as subjects guided by laws and moral concerns. Thus persons have a dual nature inflected by relationship with the law: while they are the subject of legal attributions of responsibility, they enjoy the position of being the basic objects of moral concern and benevolence, as worthy of regard and caring (Rorty 1988).
Still, the philosophical development of various conceptions of the metaphysical or moral person suggests that whatever the concept of person, and the desirability of it including all humans, it cannot exclude beings other than humans. For philosophical views of personhood often cleave the concepts of "human" from "person." For instance, in stating, "All rational beings are persons," Kant made rationality, not humanity, essential to personhood. Or consider Locke's view that persons are rational selves, not merely rational men, because "man" has reference to corporeal form, which is not part of the meaning of "person." A person for Locke, rather, is "a thinking intelligent Being, that has reason and reflection, and can consider itself as itself, the same thinking thing in different
times and places; which it does only by that consciousness, which is inseparable from thinking, and as it seems to me essential to it" (1996, book 2, chap. 27, sec. IX).
Significantly, Locke noted person "is a forensic term, appropriating actions and their merit; and so belongs only to intelligent agents, capable of a law" (Locke 1996, book 2, chap. 27, sec. IX). By "capable of a law," Locke suggests a person is necessarily capable of understanding its legal obligations and any punishment that might be inflicted for breach thereof (Naffine 2003); person may be the name for the entity recognized by others as legally or morally accountable. In Locke's account, a person is an enduring self-reflective entity, one to whom responsibility and blame can be assigned for temporally distant events. This understanding of the person is both backward- and forward-looking in terms of ascribing responsibility for events that occurred in the past and for expected stability in future social expectations from those around us (Rorty 1988). Our earlier discussion of the capability of artificial agents to display sensitivity to, and act in conformance with, their legal obligations would suggest that this particular philosophical requirement could be met by them.
Thus, prima facie, we do not consider the objections to the notion of according personality to artificial agents to be insurmountable, for we do not accept a priori "a single uniform rule that the category of persons is co-extensive with the class of human beings" (Weinreb 1998). Such rejections of personality for artificial agents implicitly build on the chauvinism—grounded in a dominant first-person perspective or in (quasi-)religious grounds—common to arguments against the possibility of artificial intelligence.31
While the philosophical cleavage between the concepts of "person" and "human" is a long-standing one (Strawson 1959; Ayer 1963), nevertheless, "If Venusians and robots come to be thought of as persons, at least part of the argument that will establish them will be that they function as we do: that while they are not the same organisms as we are, they are in the appropriate sense the same type of organism or entity" (Rorty 1976).
Objections to the possibility of personality for artificial agents are often grounded in a particular conception of the human "type," one the law is not necessarily committed to. These objections highlight an important epistemic asymmetry. We, or at least the competent computer scientists among us, know how computers work, but we do not yet know well
enough how human brains work, and neuroscience offers only partial empirical confirmation of our best hypotheses (Machamer, Grush, and McLaughlin 2001). We lack detailed knowledge of our cognitive architecture; arguably, we know more at the logical level than at the physical level, as the difficulties of neuroscientific investigations amply demonstrate (Machamer, Grush, and McLaughlin 2001). But in the case of artificial agents, we possess fine-grained knowledge of their physical and algorithmic architecture. This familiarity breeds contempt for the artificial agent, and it is this familiarity that Dennett's example of a sufficiently complex and adaptive agent described in chapter 1 attempts to dispel.32
Such an epistemic asymmetry leads to repeated violations of the following rules, originally suggested in the context of determining animal rights: "Rule One: Only with the utmost effort can we ever hope to place ourselves fairly in nature. Rule Two: We must be at our most skeptical when we evaluate arguments that confirm the extremely high opinion that we have of ourselves. Rule Three: We must play fair and ignore special pleading when we assess mental abilities" (Wise 2000, 121).
In general, objections to the possibility of artificial agents attaining personhood are similar to general arguments against the possibility of artificial intelligence, which frequently postulate "something missing" in a computational architecture disqualifying it from being "sufficiently like us." For instance, it is the absence of these qualities that supposedly makes artificial agents not susceptible to punishment (because they lack a moral sense) or incapable of taking discretionary decisions (because they lack free will and autonomy). These objections find common ground in a skepticism that human attributes can be the subjects of a naturalistic understanding.33 Our refutation of these objections is informed by an adherence to the spirit of Wise's three rules.
Free Will
Perhaps the most damning such objection is that an artificial agent cannot possess free will because "it is just a programmed machine." The UETA, for example, notes an electronic agent is to be understood as a "machine," functioning as a tool for the persons using it, with "no independent volition of its own."34 From this claim, the case for artificial agents' personality appears irreparably damaged, for a programmed machine could presumably never display the qualities that we, as apparently freely choosing human beings, appear to have.
There are two responses to the objection. The first is that understanding artificial agents as the subject of the intentional stance enables viewing them as the originators of actions (and thus as the subjects of "volition"). Second, there is an important reductive way to view free will that considerably demystifies it. An operative assumption for the concept of free will is that "there is a well-defined distinction between systems whose choices are free and those which are not" (Sloman 1992). But a closer examination of agent architectures reveals no one particular distinction. Instead, there are many different distinctions, all of which correspond to particular design decisions that present themselves to the designer of the system in question. Compare, for instance, an agent that can simultaneously store and compare different motives with an agent that has only one motive at a time. Or compare agents all of whose motives are generated by a single top-level goal (e.g., "buy this book") with agents (such as humans) with several independent sources of motivation, for example, thirst, hunger, sex, curiosity, ambition, or aesthetic preferences (Sloman 1992).
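The design distinctions just mentioned can be made concrete. The following sketch is our own illustration, with invented names rather than anything proposed by Sloman; it contrasts an agent whose motivation derives entirely from one fixed top-level goal with an agent that stores several independent motives at once and selects among them by comparison.

```python
# Two agent designs differing along one "degrees of free will" distinction:
# a single top-level goal versus several independent, comparable motives.

class SingleGoalAgent:
    """All motivation derives from one fixed top-level goal."""
    def __init__(self, goal):
        self.goal = goal

    def choose_action(self, _situation):
        return f"pursue:{self.goal}"   # no competing motive to weigh

class MultiMotiveAgent:
    """Holds several motives at once and selects among them by weight."""
    def __init__(self, motives):
        self.motives = dict(motives)   # motive -> baseline urgency

    def choose_action(self, situation):
        # Urgencies are re-weighted by the situation before choosing, so
        # the same agent may act differently in different contexts.
        weighted = {m: u * situation.get(m, 1.0)
                    for m, u in self.motives.items()}
        return f"pursue:{max(weighted, key=weighted.get)}"

book_buyer = SingleGoalAgent("buy this book")
human_like = MultiMotiveAgent({"thirst": 0.2, "curiosity": 0.9, "ambition": 0.5})
print(book_buyer.choose_action({}))                # pursue:buy this book
print(human_like.choose_action({"thirst": 6.0}))   # pursue:thirst
```

The point of the contrast is that whether an agent "has free will" dissolves into which of these design features it instantiates, rather than turning on a single binary property.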
Rather than speaking of a binary concept of free will, as something that is either present or not with no shadings in between, we may speak of systems of greater or lesser "degrees of free will" (Franklin 1995). One way to ascertain whether an artificial agent has a degree of free will is therefore to determine to what extent it instantiates design features that let us make these distinctions. Our assessment of ourselves as possessors of free will is plausibly viewed as just a report on a particular positioning of our capacities along such a spectrum of free will, for it is very doubtful that any human decisions are free of any external influence whatsoever and are entirely self-caused. Indeed, to look at the problem of free will closely is to notice that human beings' actions are subject to the same objections (Copeland 1993).
A plausible account of human free will is that an action is free if caused through reasoning and deliberation on the part of the agent. In this sense, artificial agents could possess free will. For free will is compatible with a kind of determinism; what is crucial is the role of second-order volitions (Frankfurt 1971).35 Persons can have beliefs and desires about their beliefs and desires (about what they might want them to be) and can act according to these higher-level beliefs and desires; such agents must be the causal agents for their actions so guided, and it is in this agency that their free will resides.
For an artificial agent to display such attributes, it must be capable of being described as a second-order intentional system. Most fundamentally, "An apparatus that learns to make appropriate decisions in the process of adapting to its surroundings may . . . properly be said to have selected among the alternative choices on the basis of its own deep-seated and indigenous beliefs and desires."36 The decisions of such artifacts could be characterized as intrinsic and voluntary in the sense of being free of extrinsic coercion. Artificial agents also may be said to possess free will insofar as "had they evolved otherwise, they would presumably have behaved differently" (Wein 1992, 153).
If an agent takes an action, we have four choices: to ascribe the causal responsibility for the action to the agent, to its designer, to its operator or user, or to no one at all. The fourth option can be ruled out unless we are willing to admit the existence of effects without causes; the second seems increasingly implausible if the human designer is unaware of the action being committed, and the range of actions demarcated for the artificial agent is sufficiently large and only determined by a sophisticated decision procedure. In some cases, the agent might even act contrary to the operator or user's implicit or explicit expectations. In these cases, causal agency is plausibly ascribed to the agent.
An agent programmed to take an action A, which actually takes that action, is the cause of that action. The reasons for an artificial agent—capable of being the subject of the intentional stance—taking an action are best described in terms of its own desires and beliefs. Consider the case of actions taken by corporations. Just as we may describe actions in terms of the physical movements of human beings, we can describe corporate actions as done for reasons by human beings, and also describe the same actions as done for corporate reasons, which are qualitatively different from whatever reasons individual humans may have for doing what they do (French 1984, 44ff.). Human agency resides in a person's reasons for acting being the cause of her doing so (Davidson 1971; Davidson 1980). Artificial agents are capable of being described in just this way. If an entity is able to reason about its past, modify its behavior, plan its future, and learn from experience (all characteristics present in artificial agent architectures), then perhaps the reluctance when it comes to ascribing "free will" is merely terminological.
Autonomy
This discussion of free will directly impinges on the issue of autonomy, for an argument for autonomy is an argument for free will: autonomous acts are freely chosen acts. There is an important intuition at the core of this requirement: that an autonomous agent is able to consult, and evaluate, itself in its decision-making, and take corrective action when it so desires. Still, it is wise not to ascribe to humans too much autonomy, for while we ascribe to ourselves the ability to make judgments and exercise autonomy in our decision making, it is guided and influenced by external forces much like those that influence our supposedly free will.
Artificial agents are plausibly reckoned as the originators of their actions, with their autonomy more accurately recognized as a scalar concept, as different agents manifest greater or lesser amounts of autonomy. The commentary to the UETA acknowledges that "an electronic agent, by definition, is capable within the parameters of its programming of initiating, responding or interacting with other parties or their electronic agents once it has been activated by a party, without further attention of that party."38 It may be plausibly argued that artificial agents could make autonomous decisions similar in all relevant respects to the ones humans make (Wein 1992, 141).
Some philosophical definitions of autonomy would set the bar too high for many, if not most, human beings (Wise 2000, 246). For example, Kant's definition of autonomous action requires an agent to possess the
Moral Sense
Fundamentally, what the possession of a free will and autonomy are most crucial to is the possibility of artificial agents possessing a moral sense. A
sense we find the reasons for doing so are similar to those applying to artificial agents: because they are the kinds of entities that can take intentional actions and be thought of as intentional agents (French 1984, 90ff.).
Failures of morality on the part of artificial agents could be understood as failures of reasoning: the failure to hold certain beliefs or desires, or to act consistently with those beliefs and desires. If we could use a language of morally inflected beliefs and desires in describing and predicting the behavior of an artificial agent, then it would make sense to discuss the behavior of that artificial agent as morally good or bad.
Perhaps to be a moral person, an entity must be capable of expressing regret or remorse or both and of thereby suffering punishment. But even such attributions can be "cashed out" in intentional terms (French 1984, 90ff.). Consider regret, which can be viewed as the capacity to view oneself as the person who did x and to feel or wish that he had not done x. Here, the inner emotion remains inaccessible; what is accessible is an outward manifestation (for example, the expression of regret or remorse), the ascription of which is made coherent by its consistency with other ascriptions (French 1984, 90ff.). But these outward manifestations are precisely those that would be of interest to us in the case of artificial agents. In such a context, what could it mean to forgive a computer? It might mean that "we would not localize the explanation for malfunction in the way the computer had adapted to its environment, but, perhaps, in the unusual character of the circumstances or in deficiencies in the environment in which it learned its typical response patterns" (Bechtel 1985a, 305).
The possibility that artificial agents could possess a moral sense is not an idle one. A large and growing body of work suggests they can be thought of as moral agents via a variety of empirical and philosophical considerations (Wallach and Allen 2008; Gips 1995; Floridi and Sanders 2004; Allen, Varner, and Zinser 2000; Coleman 2001). In particular, an agent might be imagined that "will act like a moral agent in many ways" because it is "conscious, to the extent that it summarizes its actions in a unitary narrative, and . . . has free will, to the extent that it weighs its future acts using a model informed by the narrative; in particular, its behavior will be influenced by reward and punishment" (Hall 2007, 348).
Ascriptions of a moral sense are, as we noted when discussing the possible legal liability of an artificial agent, often linked with the possibility of ascribing them responsibility. But these considerations might be
If such norming does not emerge, or does not solve the identification problem well enough, then agents could be identified via a registry, similar to that for corporations, where "registration makes the corporation identifiable. For computers to be treated as legal persons, a similar system of registration would need to be developed. . . . [A] system of registration could require businesses who wish to rely on computer contracts to register their computer as their 'agent'" (Allen and Widdison 1996, 42). Such a "Turing register" would enable the registration and recognition of agents and their principals, much as companies are registered today (Wettig and Zehendner 2003; Allen and Widdison 1996; Weitzenboeck 2001; Karnow 1996; Karnow 1994). The cost of establishing such a register would be significant and would need to be weighed against the benefits of doing so (Kerr 1999; Miglio et al. 2002), a consideration present in all public policy interventions. It may be that the number and complexity of artificial agents, and the diversity of their socioeconomic interactions, eventually makes the case for such intervention overwhelming.
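The core mechanics of a Turing register would be simple. The following Python sketch is purely illustrative, with hypothetical names and identifiers of our own invention rather than any scheme from the cited proposals: it binds an agent's identifier to its registered principal, so that a counterparty can resolve whom an agent acts for before relying on its contracts.

```python
# Illustrative "Turing register": bind agent identifiers to principals,
# and let counterparties look the binding up, as with a company register.
from dataclasses import dataclass

@dataclass(frozen=True)
class Registration:
    agent_id: str       # unique identifier, analogous to a company number
    principal: str      # the registered principal (e.g., a business)

class TuringRegister:
    def __init__(self):
        self._entries = {}

    def register(self, agent_id, principal):
        if agent_id in self._entries:
            raise ValueError(f"{agent_id} is already registered")
        self._entries[agent_id] = Registration(agent_id, principal)

    def principal_of(self, agent_id):
        # A counterparty's lookup before relying on the agent's contracts.
        entry = self._entries.get(agent_id)
        return entry.principal if entry else None

register = TuringRegister()
register.register("agent-0001", "Acme Retail Ltd.")
print(register.principal_of("agent-0001"))   # Acme Retail Ltd.
print(register.principal_of("agent-9999"))   # None: unregistered agent
```

A real register would of course add authentication, public inspection, and deregistration, which is where the cost-benefit questions raised above would bite.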
In sum, none of the philosophical objections to personhood for artificial agents—most but not all of them based on a "missing something" argument—can be sustained, in the sense that artificial agents can be plausibly imagined that display that allegedly missing behavior or attribute. If this is the case, then in principle artificial agents should be able to qualify for independent legal personality, since it is the closest legal analogue to the philosophical conception of a person.
By the same token, the history of legal decisions about legal personhood reveals that ultimately, tests relevant to philosophical personhood may be irrelevant to the determination of legal personality for artificial agents. What would be of determinative importance would be the substantive issue before the courts, and what the desired result might be from a policy standpoint. For more than anything else, the jurisprudence regarding legal personhood appears to be crucially result-oriented.
personhood and because legal personhood may bring rights and protections in its wake. The most contentious debates over legal personhood arise when considerable disagreement exists over whether the entity in question can be regarded as human40 and whether clearly nonhuman entities can be considered persons.
In the case of slavery, the status of that class of human beings wavered between property and persons, revealing that legal rulings reflected social attitudes and were marked by expediency.41 Judges ruled the common-law crime of murder extended to killing slaves and, while doing so, stressed slaves' humanity.42 The law also treated slaves as persons by stressing their humanity when the need was felt to try them for crimes,43 despite arguments by slaves that they were not legal "persons" and therefore not subject to the criminal law.44 Judges, however, ruled the common law of assault and battery, in the context of owners'45 and even nonowners' assaults on slaves, did not apply. Courts argued slaves qua slaves could not enjoy the general grants of rights and privileges that other humans enjoyed46 because their essential natures rendered them "subject to despotism";47 that they could not be persons because it represented an "inherent contradiction";48 that perhaps they were more akin to animals, or types of chattel, or to real estate;49 and yet other courts took refuge in the difference between humanness and legal personhood to deny legal personality to slaves.50 This variety of attitudes indicates the personhood of slaves was a contested notion, one bent to accommodate putative social needs.
The history of corporate personhood reveals a similar mixture of attitudes and motivations; the legal personality of corporations is only uncontroversial when statutes explicitly define "persons" as including corporations (Note 2001). The rulings of the U.S. Supreme Court concerning corporate personhood are accordingly notable in their variety. The Court first asserted corporate personhood with respect to property rights in Santa Clara County v. Southern Pacific Railroad,51 by saying corporations counted as "persons" within the scope of protection of the Fourteenth Amendment's Due Process Clause. Indeed, the Court said it "[did] not wish to hear argument on the question whether the provision in the Fourteenth Amendment to the Constitution . . . applies to these corporations. We are all of [the] opinion that it does."52

But later incarnations of the Court were not so confident. Justice Douglas, dissenting in Wheeling Steel Corp. v. Glander,53 suggested the Fourteenth Amendment, written to eliminate race discrimination, was
and signaling of social values in legal statements such as statutes and judicial opinions (Sunstein 1996; Posner 2000, 2ff.) and the ability of the law to shape behavior by creating social norms (Sunstein 1996), personhood jurisprudence could be interpreted as making normative statements about the worth of the objects included and excluded (as in slavery rulings or in the status of women in nineteenth-century England) (Balkin 1997). Legal rulings from the slavery era that showed some humans were regarded by the law as less than human, or less than full legal persons, shaped a society's view of humanity and reflected a society's prejudices (Note 2001).

In deciding that artificial agents are persons, courts or legislatures would send a message about their commonality with us. A refusal to do so would express worries about whether doing so might cheapen human personhood, especially as it might be taken to mean that artificial agents would possess qualities that we take to be especially human.
The jurisprudence of personhood in abortion cases demonstrates the substantive weight of legal fictions, for here judges insist that persons are legal fictions (Note 2001). But if personhood could be manipulated and interpreted simply as a legal fiction, no such insistence would be necessary (Note 2001). For denying or granting legal personality to particular entities indicates a position on the societal valuation of the entity in question. If legal personhood is understood as a zero-sum game, where personhood decisions influence interests other than those of the entity in question, then the conferral of personhood on nonhuman entities risks cheapening the personhood of natural persons;64 grants of legal personality to corporations could be viewed as cheapening the social meaning of humans' legal personality if "equality of constitutional rights plus an inequality of legislated and de facto powers leads inexorably to the supremacy of artificial over real persons" (Mayer 1990).

Legal ambivalence over corporate personality and about human uniqueness in an increasingly corporate world could rest on concerns that assigning personhood to corporations may work as an illocutionary act, bringing a particular state of affairs into existence by proclamation (Schane 1987), and perhaps only secondarily on the associated conceptual difficulties.65 Debates about corporate personhood reflect a tension between "the desire to stimulate the economy by granting constitutional protections to corporations and the fear unchecked corporate growth may have socially deleterious effects or that unchecked recognition of corporate personhood may cheapen our own" (Note 2001, 1766).
The most salutary effect of our discussions thus far on the possibility of personhood for artificial agents might have been to point out the con-
tions in such a way that the only coherent understanding of its social role emerges by treating it as a person. The question of legal personality suggests the candidate entity's presence in our networks of legal and social meanings has attained a level of significance that demands reclassification. An entity is a viable candidate for legal personality in this sense if it fits within our networks of social, political, and economic relations in such a way it can coherently be a subject of legal rulings.

Thus, the real question is whether the scope and extent of artificial agent interactions have reached such a stage. Answers will reveal what we take to be valuable and useful in our future society as well, for we will be engaged in determining what roles artificial agents should be playing for us to be convinced the question of legal personality has become a live issue. Perhaps artificial agents can only become persons if they enter into social relationships that go beyond purely commercial agentlike relationships to genuinely personal relationships (like medical care robots or companion robots). And even in e-commerce settings, an important part of forming deeper commercial relationships will be whether trust will arise between human and artificial agents; users will need to be convinced "an agent is capable of reliably performing required tasks" and will pursue their interests rather than those of a third party (Serenko, Ruhi, and Cocosila 2007).
Autopoietic legal theory, which emphasizes the circularity of legal concepts, suggests, too, that artificial agents' interactions will play a crucial role in the determination of legal personality: "[E]ntities are described as legal persons when the legal system attributes legally meaningful communications to them. . . . [W]ithin the legal system, legal persons are those entities that produce legal acts. . . . A natural person is capable of many types of legal acts. . . . A wild animal is not capable of any . . . legal acts. Hence, the legal system treats natural persons, but not wild animals, as legal persons" (Teubner 1988). If it is a sufficient condition for personality that an entity engage in legal acts, then an artificial agent participating in the formation of contracts becomes a candidate for legal personality by virtue of its participation in those transactions.

Personhood may be acquired in the form of capacities and sensibilities acquired through initiation into the traditions of thought and action embodied in language and culture; personhood may be the result of the maturation of beings, whose attainment depends on the creation of an evolving intersubjectivity (Bakhurst 2005). Artificial agents may be more convincingly thought of as persons as their role within our lives increases and
5.7. Conclusion
While artificial agents are not yet regarded as moral persons, they are coherently becoming subjects of the intentional stance, and may be thought of as intentional agents. They take actions that they initiate, and their actions can be understood as originating in their own reasons. An artificial agent with the right sorts of capacities—most importantly, that of being an intentional system—would have a strong case for legal personality, a case made stronger by the richness of its relationships with us and by its behavioral patterns. There is no reason in principle that artificial agents could not attain such a status, given their current capacities and the arc of their continued development in the direction of increasing sophistication.
The discussion of contracting suggested the capabilities of artificial agents, doctrinal convenience and neatness, and the economic implications of various choices would all play a role in future determinations of the legal status of artificial agents. Such "system-level" concerns will continue to dominate for the near future. Attributes such as the practical ability to perform cognitive tasks, the ability to control money, and considerations such as cost-benefit analysis, will further influence the decision whether to accord legal personality to artificial agents. Such cost-benefit analysis will need to pay attention to whether agents' principals will have enough economic incentive to use artificial agents in an increasing array of transactions that grant agents more financial and decision-making responsibility; whether principals will be able, both technically and economically, to grant agents adequate capital assets to be full economic and legal players in tomorrow's marketplaces; whether the use of such artificial agents will require the establishment of special registers or the taking out of insurance to cover losses arising from malfunction in contractual settings; and even the peculiar and specialized kinds and costs of litigation that the use of artificial agents will involve. Factors