
Samir Chopra, Laurence F. White, A Legal Theory for Autonomous Artificial Agents, Chapter 5: Personhood for Artificial Agents


A Legal Theory for Autonomous Artificial Agents
Chopra, Samir & White, Laurence F.
Ann Arbor: University of Michigan Press, 2011.
Project MUSE, https://muse.jhu.edu/book/2391

Chapter 5 / Personhood for Artificial Agents

5.1. Could Artificial Agents Be Legal Persons?

The logical culmination of our inquiries is the question of whether artificial agents could be accorded legal personhood;1 they would then
enter law’s ontology, to take their place alongside humans and corpora-
tions as subjects of legal rights and obligations. This standard under-
standing of legal persons is derived from John Chipman Gray’s classic
text, The Nature and Sources of Law.2 Gray noted, “In books of Law, as in
other books, and in common speech, ‘person’ is often used as meaning a
human being” (Gray 2006, 27) but also pointed out the distinction be-
tween such an “intuitive” understanding and the legal notion of “per-
son,” which may exclude some humans and include some nonhumans.
Gray termed such a distinction a “dogmatic fiction,” one introduced to
ensure doctrinal coherence in the law. Considering personhood brings
the question of what constitutes a conventional designation, as opposed
to recognition of a preexisting state of affairs, into sharp focus.
Roman law has been said to be “systematically ignorant of the bio-
logical status of its subjects” (French 1984, 35). Such a view of the law’s
ontology holds it does not recognize existing persons; rather persons are
creations or artifacts of the law and do not have existence outside the le-
gal system. The contrary view is the law does not create its own subjects,
instead “it can only determine which societal facts are in conformity with
its requirements.”3 Whether artificial agents can be considered legal persons depends then, in part, on how much flexibility we take the law to have in decisions regarding its ontology.
Typically, a legal person has the capacity to sue and be sued, to hold
property in her or its own name, and to enter contracts. Legal persons
also enjoy various immunities and protections in courts of law such as the
right to life and liberty, however qualified. Such a statement is not typically found at a single location in a particular legal system’s code of legislation, but rather describes the way the term person functions within that
legal system, and is in consonance with the way the law commonly views
those subject to it.
Not all legal persons have the same rights and obligations. Typically,
to fully enjoy legal rights and be fully subject to legal obligations, one
must be a free human of the age of majority and sound mind (i.e., be sui
juris [Garner 2004]). Some rights—such as the right to marry, to drive or
vote, to purchase alcohol or tobacco, or to sue otherwise than by means
of a parent or guardian—depend on a person either being human, or if
human having attained an age of majority (which varies across jurisdic-
tions and subject matter). For example, corporations cannot marry or
(usually) vote; children cannot vote or purchase alcohol; but even new
corporations can purchase tobacco. The enjoyment of rights and capaci-
ties by corporations is restricted by statute or case law: As well as being
able generally only to act through their agents, they have the power to
transact business only in fulfillment of the objects specified in their charter or other constitutional documents, and any other action is in theory
void or voidable for being ultra vires.4
Considering artificial agents as legal persons is, by and large, a matter of decision rather than discovery, for the best argument for denying or granting artificial agents legal personality will be pragmatic rather than conceptual: The law might or might not require this change in status given the functionality and social role of artificial agents. But pragmatism can be wedded to normativity: the case for artificial agents’ legal personality can come to acquire the aura of an imperative depending on the nature of our relationships with them, and the roles they are asked to fulfill in our future social orderings. Thus,

In making an estimation of the jurisprudential aftermath of awaited breakthroughs in automation technology, we ought not to rely on “logic” alone, for what is significant is not the intellective capacity of machines, but the scope and impact of the machines’ interaction with people. What is suggestive is not the acumen of “intelligent” systems or any lack thereof, but the impact an automation process has on society. . . . [T]he automated devices with which we now interact are legally significant because they have engendered case law that anticipates the legal principles that may come to govern displacement of human activity by intelligent artifacts. (Wein 1992, 137)

Most fundamentally, the granting of legal personality is a decision to grant an entity a bundle of rights and concomitant obligations. It is the
nature of the rights and duties granted and the agent’s abilities that
prompt such a decision, not the physical makeup, internal constitution,
or other ineffable attributes of the entity. That some of these rights and
duties could follow from the fact that its physical constitution enabled
particular powers, capacities, and abilities is not directly relevant to the
discussion. What matters are the entities’ abilities, and which rights and
duties we want to assign. It may be the move from the status of legal
agent without full legal personality to one with legal personality would
present itself as the logical outcome of the increasing responsibility
artificial agents would be accorded as their place in the legal system is cemented and as they acquire the status of genuine objects of the law.
When that happens, the debate over their moral standing will already
have advanced to, or beyond, the point that debates over the moral
standing of entities like corporations, collectivities, groups and the like
have already reached.
In general, the recognition of legal personality by legislatures or courts takes place in response to legal, political, or moral pressure. The legal system, in so doing, seeks to ensure its internal functional coherence. Legal entities are recognized as such in order to facilitate the working of the law in consonance with social realities. Thus, arguments for the establishment of new classes of legal entity, while informed by the metaphysically or morally inflected notion of person present in philosophical discourse, often deviate from it. A crucial determinant in courtroom arguments is historical or legal precedent and pragmatic consideration of society’s best interests. Decisions to award legal personality thus illustrate, very aptly, Oliver Wendell Holmes’s famous dictum that “general propositions do not decide concrete cases.”5 These decisions reflect instead a “vortex of discursive imperatives”: precedent, principles, policy, all impinged on by utilitarian, moral and political considerations, and inflected by the subtle and long-held convictions and beliefs of the presiding judges or concerned legislators (Menand 2002, 36). For the law, “[N]o single principle dictates when the legal system must recognize an entity as a legal person, nor when it must deny legal personality” (Allen and Widdison 1996, 35).
Legal scholars have identified a raft of considerations—pragmatic and philosophical—that the law might use in its answer to the question of whether to accord legal personality to a new class of entity. Some theorists reject the need for an analysis based on some metaphysically satisfactory conception of the person; yet others claim humanity (or membership in our species, or satisfaction of metaphysical and moral criteria) is the basis of moral and legal claims on others and the basis of legal personality (Naffine 2003).
Those theorists, such as legal positivists, who consider important examples of legal personality where the law does not require the putative person to be human or even conscious, reflect the classical meanings of “person” as a mask that allows an actor to do justice to a role (Calverley 2008); other theorists, perhaps informed by a natural law sensibility, seek to assimilate legal personality to the philosophical notion of a person (Naffine 2003). Thus in considering personhood for artificial agents it is crucial to keep in mind the kind of personality under consideration.
Arguments for advancing personhood for artificial agents need not
show how they may function as persons in all the ways that persons may
be understood by a legal system, but rather that they may be understood
as persons for a particular purpose or set of legal transactions. For the law
does not always characterize entities in a particular way for all legal pur-
poses. For instance, a particular kind of object may be considered prop-
erty for the purposes of the Due Process Clause of the Fourteenth
Amendment to the U.S. Constitution, and yet not be considered prop-
erty that can be passed by will.
So, too, an entity might be considered a person for some legal pur-
poses and not for others. And being a nonperson for some legal purposes
does not automatically entail the complete nonpossession of legal rights.
While at English common law, for example, before the reforms of the
nineteenth century,6 a married woman was not, for most civil-law pur-
poses, accorded legal personality separate from that of her husband,7 nev-
ertheless, for ecclesiastical law purposes, she already had full rights to sue
and be sued in her own name, and in addition had been susceptible to
criminal prosecution in the ordinary way.8 Similarly, in the Visigothic
code, slaves, who under Roman law, from which the Visigothic code derived, were not considered legal persons, were nevertheless entitled to
bring complaints against freemen in certain circumstances, apparently
on their own account and not just on account of their masters.9 U.S. cor-
porations enjoy some of the rights of persons but not all (they may, for in-
stance, own stock, but not adopt children). Or the criminal code may
identify a different set of persons than inheritance law, which might in-
clude as persons fetuses.10
At first sight the Restatement (Third) of Agency stands in the way of
any argument that an arti‹cial agent could be a person. It states: “To be
capable of acting as a principal or an agent, it is necessary to be a person,
which in this respect requires capacity to be the holder of legal rights and
the object of legal duties. Accordingly, it is not possible for an inanimate
object or a nonhuman animal to be a principal or an agent under the
common-law definition of agency.”11 But as noted in chapter 2, despite
appearances, the Restatement cannot be understood as shutting the door
on legal agency for arti‹cial agents. The discussions in this chapter
should serve to show that it does not present a fatal objection to person-
hood for them either.

Being Human and Being a Person


A prima facie consideration in determining whether artificial agents could be accorded legal personality is the question whether being a living human being is a necessary or sufficient condition for being a legal person. Neither condition has obtained, either in present-day legal systems or historically.
As far as the sufficiency of being a living human being is concerned,
in Roman law the pater familias, or free head of the family, was the sub-
ject of legal rights and obligations on behalf of his household; his wife
and children were only indirectly the subject of legal rights, and his
slaves were not legal persons at all (Nékám 1938, 22, n. 12). Similarly, in
the law applicable in the United States in the era of slavery, slaves were
considered nonpersons, merely property of their owners (Washington
Bar 2002). (The law and jurisprudence of slavery is instructive too, in re-
vealing the interest-dependent nature of the rulings affecting person-
hood [Friedman 2005, chap. 4]; it shows personhood is treated both as a
conventional legal fiction and as an assessment of the moral worth of an
entity [Note 2001].)

In present-day legal systems, too, being human may not be sufficient to be counted as a legal person. For instance, human fetuses are not considered legal persons for most purposes and brain death has been defined by statute as bringing about the legal end of human life.12 Such judgments are not without controversy; fierce disagreement still exists over
whether a brain-dead patient on life support is still a living human being
and therefore worthy of ethical treatment (Rich 2005), despite being a
legal nonperson. And an important component of the debate about the
morality both of abortion and of stem cell research has been the question
whether persons are subjected to these procedures and experiments (Fein-
berg 1986; Warren 1996; Note 2001; Berg 2007; Edwards 1997; Tollefsen
2001; Humber and Almeder 2003).
As far as the necessity of being human for being a legal person is con-
cerned, many classes of entity that are not humans are, or have been, ac-
corded legal personality by one or other legal systems. An obvious exam-
ple is the business corporation, but many other bodies, such as
incorporated associations, as well as government and quasi-government
agencies, are also invested with legal personality.13 Admiralty law treats a
ship as a legal person capable of being sued in its own right.14 Other legal
systems have recognized temples, dead persons, spirits, and even idols as
legal persons (Allen and Widdison 1996, n. 59). To use Gray’s term, a
“dogmatic fiction” is employed to bestow legal personality and render legal doctrine coherent.15
In these settings, the designation “legal person” is the conclusion of
particular legal arguments, not a rhetorical reason for those legal conclu-
sions. Here the law decides to treat nonhuman entities analogously to
human persons in certain ways and circumstances, for example, as parties
in a lawsuit, or as possessing the juridical ability to assert various rights
and powers. Such legal moves are in consonance with a philosophical
tradition (Luhmann 1995; Latour 2005) that does not restrict its
identi‹cation of actors only to human entities. Thus, “[T]here is no com-
pelling reason to restrict the attribution of action exclusively to humans
and to social systems. . . . Personifying other non-humans is a social real-
ity today and a political necessity for the future” (Teubner 2007).
In the legal personhood of nonhumans is found the strongest argu-
ment that to ascribe legal personhood to an entity is to do no more than
to make arrangements that facilitate a particular set of social, economic,
and legal relationships. That these arrangements require a canonical list of abilities to be possessed by those entities is not a part of such an understanding of legal personhood.

Dependent and Independent Legal Personality


Distinguishing between two kinds of legal personality discerned in legal practice—dependent and independent—will aid in demonstrating that only the first kind is likely to be accorded to artificial agents unless or until they attain a very high degree of autonomy, while many of the usual objections to legal personhood for artificial agents can be seen as directed exclusively against the second kind.
A dependent legal person can only act through the agency of an-
other legal person in exercising some or all of its legal rights. An inde-
pendent legal person is not subject to any such restriction and is said to
be sui juris. Such a distinction aligns with Gray’s distinction between the
subject of rights and administrators of rights (2006, 29). The former may
be animals, unborn human beings, or even the dead (as noted by Gray,
these have historically been considered persons in some legal systems);
but such entities cannot administer rights, for that requires acting to
achieve ends. Examples of dependent legal persons include children;
adults who are not of sound mind; abstract legal entities such as corpora-
tions; and even inanimate objects such as ships and temples (Gray 2006,
28). Children have a limited capacity to enter legal contracts, and they
must sue or be sued via a parent (or guardian ad litem [Garner 2004, 320])
who decides on the best interest of the child with respect to the litiga-
tion. The law, however, acknowledges that children gradually develop
their mental faculties, and in recognition of this fact gradually extends
the field of decisions in the medical sphere that they can take without the
consent of their guardians.16
Furthermore, adults who are not of sound mind may enter contracts
through an agent who has been appointed, either under a durable power
of attorney17 or by a competent court, and they may sue or be sued
through a guardian or similar appointee. A corporation likewise is de-
pendent on the actions of other legal persons, whether members of its
governing organs or employees or other agents, in order for it to engage
in legal acts.18 Similarly, inanimate objects such as ships or temples are
dependent on the actions of other legal persons, whether owners,
trustees, masters, or the like, to represent them and give them legal life.

Hypothetical forms of legal personhood for animals or trees (Stone 1972; Nosworthy 1998) would also be dependent forms of personhood, requiring a suitable representative to be appointed in order to exercise the rights to be granted to those legal subjects.19
Thus, the class of dependent legal persons contains a spectrum of in-
tellectual and physical capabilities, from the total mental incapacity of
those persons who are not officially brain-dead but are in a vegetative or
comatose state, to the near-independence of a seventeen-year-old of
sound mind. As far as dependent legal personality is concerned, the most
common form of legal person other than humans, the corporation, can
only act by its agents (or its board of directors or general meeting); by it-
self it is completely helpless. So a technical inability to perform a task
personally is no bar to being accorded dependent legal personality.

5.2. According Dependent Legal Personality to Artificial Agents

If legal systems can accord dependent legal personality to children, adults who are not of sound mind, corporations, ships, temples, and even idols, there is nothing to prevent the legal system from according this form of legal personality to artificial agents. Social and economic expedience has always played a large role in decisions to grant such legal personality to these classes of entity. The paradigmatic example in the U.S. context is the gradual recognition of the corporation as a legal entity (Naffine 2003).
What would matter in such a personhood decision regarding artificial agents would be whether there was a felt need for this kind of legal personality to be accorded. One example of such a need is the desire to render uniform the legal treatment of contracts entered into by artificial agents and humans; given the great importance of e-commerce in today’s business world, and the increasing number of agent-to-agent contracting transactions that take place on a daily basis, such a move would greatly facilitate a standardized understanding of these interactions. Other motivations could arise from the increasing delegation of responsibility for automated decision-making in the administrative law context (Citron 2008). Here administrative decision-making powers could be coherently delegated to the artificial agents of the administrative agency, who besides their legal agency could be granted a form of dependent legal personality. Such delegation would need to conform to administrative law doctrines regulating lawful delegation (Citron 2008).
As another possible reason to grant artificial agents dependent legal personality in a circumscribed context, consider whether the law could appoint them as limited-purpose trustees, who would own and manage property or assets on behalf of and for the benefit of beneficiaries under “simple trusts designed to minimize the need for discretion and judgment” (Solum 1992, 1253). This approach, by dispensing with the need for a human trustee for every trust, would conceivably “save administration costs and reduce the risk of theft or mismanagement” (Solum 1992, 1253). But caution is warranted even in the case of such limited-discretion trusts, for “there must be some procedure to provide for a decision in the case of unanticipated trouble. The law should not allow [artificial agents] to serve as trustees if they must leave the trust in a lurch whenever an unanticipated lawsuit is filed” (Solum 1992, 1253). Such problems could be solved if every artificial agent accorded legal personality required a human (or corporate) representative or director to be registered with it, to cope with the agent’s capacities being too limited to enable it to act competently in some cases.
If an artificial agent could be registered much like a corporation, its principal(s) could also be required to provide the agent with patrimony in the form of capital or assets to enable it to meet its financial obligations, and perhaps to file financial returns on its behalf (Lerouge 2000; Weitzenboeck 2001). Such capital requirements and transparency about the financial health of the agent would protect third parties engaged in contracting with it by considerably diminishing their risk (Sartor 2002). Bestowing an artificial agent with capital, or at least making its financial position transparent, would provide an economic answer to the question, “What good would be achieved by deeming agents persons if users would still bear all the risk of loss?” (Bellia 2001, 1067).
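A registration scheme of the kind just described could be modeled as a simple registry record pairing the agent with a responsible representative and a minimum patrimony. The following is a purely illustrative sketch, not a proposal from the sources cited: the field names, the `RegisteredAgent` class, and the capital threshold are all invented.

```python
from dataclasses import dataclass

# Hypothetical statutory capital floor -- an invented figure for illustration.
MINIMUM_CAPITAL = 10_000.0

@dataclass
class RegisteredAgent:
    """Invented registry entry for an artificial agent accorded
    dependent legal personality, as sketched in the text."""
    agent_id: str
    representative: str  # registered human or corporate representative
    capital: float       # patrimony provided by the agent's principal(s)

    def meets_capital_requirement(self) -> bool:
        # Third parties could verify this before contracting with the
        # agent, considerably diminishing their risk of loss.
        return self.capital >= MINIMUM_CAPITAL
```

On this sketch, a third party would check `meets_capital_requirement()` before contracting, playing the role that capital requirements and financial transparency play in the argument above.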
Conceivably, too, agents with “limited liability” might be developed (Wettig and Zehendner 2003; Wettig and Zehendner 2004). Such treatment would acknowledge their limited legal and financial competence while preserving their dependent legal personality. As with the case of corporate transactions, those doing business with such agents would need to ensure either they had sufficient assets to be worth suing in their own right, or that appropriate financial guarantees were obtained from their representatives or associated corporations.
The example of the limited-purpose trustee shows artificial agents with dependent legal personality for particular contexts and applications are a real possibility. Another particularly germane example of this would be the case of agents engaged in electronic contracting. Not only is according artificial agents legal personality a possible solution to the contracting problem, it is conceptually preferable to the other agency law approach of legal agency without legal personality, because it provides a more complete analogue with the human case, where a third party who has been deceived by an agent about the agent’s authority to enter a transaction can sue the agent for damages.20
One possible doctrinal development would be to consider artificial agents as legal persons for the purposes of contracting alone. Such a move would establish considerable precedential weight for the view artificial agents should be considered legal persons in other domains. For example, artificial agents might come to be seen as data processors or data controllers and not simply tools or instrumentalities for the purposes of the EU’s Data Protection Directive.21 In such contexts contracting agents would be treated as persons and agents both, so that their principal’s activities would be more coherently constrained by the applicable law.

5.3. According Independent Legal Personality to Artificial Agents

By contrast with dependent legal personality, independent legal personality depends crucially on attainment of significant intellectual capacities. If artificial agents are to be candidates for this form of personhood, then, a highly sophisticated technological attainment will have been reached.
There are several plausible conditions for independent legal persons;
their plausibility is a function of how crucial the satisfaction of such a
condition might be for the subject of a comprehensive suite of rights and
obligations within a modern legal system. The possession of these capac-
ities renders an entity competent in its own right within a legal system;
we will return later to the trickier question of when a given entity should
be considered a “moral person” in philosophical terms.22
The plausible conditions for an entity to be a candidate for independent legal personality are fivefold. First, an independent legal person must have intellectual capacity and rationality such that the person can be said to be sui juris (Note 2001; Gray 2006). Without such capacity, the person would always depend on agents or guardians. Second, it must display the ability to understand, and obey reliably, the legal obligations it is under. Without this level of understanding, and reliable obedience, the legal system would need to constantly supervise and correct the entity’s behavior, much as a parent does a child. Third, candidate entities must display susceptibility to punishment in order to enforce legal obligations. Without such susceptibility, the entity could not be deterred from noncompliance with its legal obligations. This reliance on a susceptibility to punishment is closely related to the philosophical conditions for a moral person (Rorty 1988); a legal person must show awareness that taking particular actions could result in outcomes inimical to its overall objectives (and possibly, a larger social good) and thus be capable of restraining itself. Fourth, the entity must possess the ability to form contracts: without forming contracts, the entity would be an inert subject unable to perform the most basic of economic functions. Fifth, the entity must possess the ability to control money and own property, so as to make use of its legal rights in the economic sphere, as well as to be able to pay fines (including civil penalties)23 and compensation.

Being Sui Juris


To be sui juris is to possess all the rights a full citizen might have, to not
be under the power of another, whether as a slave or as a minor. Every
adult of full age is presumed to be sui juris, to possess the rationality that
children and those of unsound mind do not have, that is, the intellectual
competence we term “mature common sense.” The according of this sta-
tus to normal human beings at the age of majority (Garner 2004) also
conventionally marks the end of the process of maturation of the child.24
Being sui juris can therefore be understood as having a level of intelli-
gence and understanding not markedly different from that of adult hu-
mans of various ages.
An objection to the possibility of an artificial agent being sui juris is that the law would not permit artificial agents to function as legal persons unless they had the kind of general-purpose intelligence that would enable them to take discretionary decisions (Solum 1992, 1248). But a methodological principle for assessing such competence for legal purposes is that artificial agents would need to empirically demonstrate they were capable of displaying the right kind of judgment. Those artificial agents who could be coherently understood as subjects of the intentional stance would especially be capable of displaying such judgment, as these assessments would rely on their externally manifested behavior.
Furthermore, all discretionary decisions, whether taken by human or artificial agents, are bounded by an explicit internal limitation on the scope of the discretion being exercised (for example, in the case of employees’ discretion to spend up to a certain amount of the employer’s money) and by applicable norms, standards, and rules external to the grant of discretion. Artificial agents capable of devising contractual terms and making purchases are capable of taking discretionary decisions within defined boundaries in precisely this way. The definition of electronic agents in the comment to the UETA notes that “an electronic agent . . . is capable within the parameters of its programming of initiating, responding or interacting with other parties or their electronic agents once it has been activated by a party, without further attention of that party.”25 This parameter-bounded performance represents a basic autonomy and discretionary capacity. The UETA, in a nod to the technical sophistication of the architectures of learning agents, does allow for the possibility of a learning mechanism within agents, thus denying the imagined rigidity of artificial agents.26
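Parameter-bounded discretion of this kind can be made concrete with a toy purchasing agent: once activated, it enters transactions without further attention from its principal, but only within an explicit internal limit on the scope of its discretion. This is an invented illustration (the class, the spending limit, and the purchase scenario are not drawn from the UETA or the sources cited).

```python
class PurchasingAgent:
    """Toy electronic agent that, once activated, initiates purchases
    without further attention from its principal -- but only within
    the parameters of its programming (here, a spending limit)."""

    def __init__(self, spending_limit: float):
        self.spending_limit = spending_limit  # explicit internal limit on discretion
        self.spent = 0.0

    def decide(self, item: str, price: float) -> bool:
        # A discretionary decision taken within defined boundaries:
        # the agent may commit funds only up to its limit.
        if self.spent + price <= self.spending_limit:
            self.spent += price
            return True   # enters the transaction
        return False      # declines: the transaction is outside its parameters
```

The agent exercises genuine choice over which transactions to enter, while the grant of discretion (the spending limit) bounds that choice in precisely the way the passage describes for human employees.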
In general, the level of rationality of particular artificial agents is an empirical matter of fact, dependent on their functionality and not their constitution. The most commonly accepted definitions of rationality converge on the notion of optimal—given resource constraints—goal-directed behavior (Nozick 1993). Definitions of rationality in formal models of human reasoning stress the achievement of some context-specific minima or maxima, such as the constraint in formal models of belief change that a rational agent minimizes the loss of older beliefs when confronted with new, contradictory information (Gärdenfors 1990). In rational choice theory in the social sciences, the agent acts to maximize utility given the resources at its disposal (Elster 1986). In the economic analysis of law, “[B]ehavior is rational when it conforms to the model of rational choice whatever the state of mind of the chooser” (Posner 2007). Ascriptions of rationality such as these make no reference to the constitution of the entities involved, whether individuals or organizations. They refer instead to capacities and behaviors: The rationality of the entity is revealed by the ease with which it is possible to describe the entity as acting on the basis of its reasons to achieve its ends.
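This resource-bounded, goal-directed notion of rationality admits a minimal computational reading: among the actions an agent can afford, it picks the one with the highest utility. The sketch below is an invented toy (the actions, utilities, and costs are illustrative), not a formalization proposed by the authors cited.

```python
def rational_choice(actions, utility, cost, budget):
    """Return the affordable action with the highest utility: optimal
    goal-directed behavior given resource constraints."""
    feasible = [a for a in actions if cost[a] <= budget]
    if not feasible:
        return None  # no action is within the agent's resources
    return max(feasible, key=lambda a: utility[a])

# Invented example: a bidding agent with a resource budget.
actions = ["wait", "bid_low", "bid_high"]
utility = {"wait": 0, "bid_low": 3, "bid_high": 8}
cost = {"wait": 0, "bid_low": 1, "bid_high": 10}
choice = rational_choice(actions, utility, cost, budget=5)  # -> "bid_low"
```

Nothing in the function refers to what the agent is made of; rationality here is a property of the pattern of choices given goals and constraints, which is the point of the passage above.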
Ascriptions of rationality are thus made on a case-by-case basis depending on the operational context. If an artificial agent acts to optimally achieve its chosen goals and outcomes in a particular context, while not compromising its functional effectiveness, it is coherently described as rational. Even a chess-playing program like Deep Blue is rational in this sense: it possesses a set of goals (checkmating its opponent and avoiding defeat) and takes appropriate actions within its operational and environmental constraints (time limits for the game, computational power) to achieve them (Hsu 2002). The rationality of an artificial agent like an automated trading system is similarly describable. The rationality of artificial agents should prompt empirical evaluation: Does the artificial agent take actions guided by reasons that lead it to achieve its goals in a given environment, subject to its resource constraints? An ascription of rationality that follows will be made according to observations of the functioning of the agent and its eventual success or failure in meeting its operational objectives.
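The empirical test just described can be sketched in code. The following toy (all names, actions, and numbers are invented here, not drawn from the source) ascribes rationality by checking whether an agent's observed choices maximized utility within its resource constraints:

```python
# Hypothetical sketch: ascribing rationality empirically by checking
# whether an agent's observed choices optimized its goals given its
# resource budget. Action names and values are purely illustrative.

def choose_action(actions, utility, cost, budget):
    """Pick the highest-utility action the agent can afford."""
    affordable = [a for a in actions if cost[a] <= budget]
    if not affordable:
        return None
    return max(affordable, key=lambda a: utility[a])

def looks_rational(history, utility, cost, budget):
    """An observer's test: did each observed choice optimize utility
    given the constraints? A stand-in for 'empirical evaluation'."""
    return all(
        choice == choose_action(options, utility, cost, budget)
        for options, choice in history
    )

utility = {"trade": 5, "wait": 1, "overreach": 9}
cost = {"trade": 2, "wait": 0, "overreach": 10}
history = [(["trade", "wait", "overreach"], "trade")]
print(looks_rational(history, utility, cost, budget=3))  # True: "overreach" was unaffordable
```

The point mirrors the text: the ascription depends only on observed behavior against goals and constraints, not on the agent's constitution.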
In criminal law, being sui juris has its counterpart in the notion that a subject of the law must understand the nature of the act it commits. But artificial agents could display their understanding of their actions if we were able to make, via the adoption of the intentional stance, a set of predictions the success of which is contingent upon ascribing the understanding of the acts (and the holding of the associated beliefs) to the agent in question.
Sensitivity to Legal Obligations
The legal standard for independent legal personality that requires artificial agents to understand and reliably obey the legal obligations they are under is implicitly based on empirical benchmarks for such understanding: Whether the system in question understands a sales contract it has entered, for instance, could be demonstrated by its taking the appropriate action in response to entry into the contract (for instance, by fulfilling its side of the contract, and by taking appropriate action to ensure the other side fulfills its side). A system capable of being treated as an intentional system could attain such benchmarks, and indeed, such competence would form part of the reasons for considering it a worthy subject of the intentional stance. The relevant beliefs that would have to be attributed to it in this case would pertain to the content of the contract it was entering into.
While a sui juris artificial agent will plausibly display its understanding of legal obligations, it will reliably obey them only if it has a strong motivation to obey the law. That motivation could be one built into the agent's basic drives, or dependent on other drives (such as the desire to maximize wealth, which could result in appropriate behavior, assuming the law is reliably enforced by monetary fines or penalties). Rational artificial agents that act so as to optimize their goal-seeking behavior would presumably not indulge in the self-destructive behavior of an agent that disobeys the punitive force of legal sanctions. On a construal of understanding and obedience of legal obligations as rational behavior, this capacity appears amenable to technical solutions.
Work in deontological logics or logics of obligations suggests the pos-
sibility of agent architectures that use as part of their control mechanisms
a set of prescribed obligations, with modalities made available to the
agent under which some obligations are expressed as necessarily to be fol-
lowed; others as only possibly to be followed (von Wright 1951; Hilpinen
2001; Pacuit, Parikh, and Cogan 2006). These obligations can be made
more sophisticated by making them knowledge-dependent such that an
agent is obligated to act contingent on its knowing particular proposi-
tions (Pacuit, Parikh, and Cogan 2006). If these propositions are a body
of legal obligations, we may speak coherently of the agent taking obliga-
tory actions required by its knowledge of its legal obligations.
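A minimal sketch of such an architecture (the class names, actions, and conditions are invented for illustration) represents prescribed obligations with a modality marking whether they are necessarily or merely possibly to be followed, and with knowledge-dependent conditions determining when each binds:

```python
# Illustrative sketch (not from the source) of an obligation-aware control
# mechanism: some obligations are necessarily binding, others merely
# permissible, and knowledge-dependent obligations fire only given what
# the agent knows. All names are hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Obligation:
    action: str
    necessary: bool                   # "necessarily" vs. merely "possibly" to be followed
    condition: Callable[[set], bool]  # binds only given the agent's known propositions

def obligatory_actions(obligations, knowledge):
    """Actions the agent must take, contingent on its knowledge."""
    return [o.action for o in obligations if o.necessary and o.condition(knowledge)]

obligations = [
    Obligation("pay_invoice", True, lambda k: "contract_signed" in k),
    Obligation("donate", False, lambda k: True),  # permissible, never compelled
]
print(obligatory_actions(obligations, {"contract_signed"}))  # ['pay_invoice']
print(obligatory_actions(obligations, set()))                # []
```

If the conditioned propositions encode a body of legal obligations, the agent's obligatory actions track its knowledge of those obligations, as the text describes.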
Researchers have sought to realize similar capabilities in so-called explicit ethical agents (Arkoudas and Bringsjord 2005; Moor 2006; M. Anderson and S. L. Anderson 2007). Agents similar to these could in principle be
capable of acting in accordance with norms that act as “global con-
straints on evaluations performed in the decision module” (Boman
1999), conferring duties on other agents (Gelati, Rotolo, and Sartor
2002), and functioning in an environment governed by norms (Dignum
1999). More ambitious efforts in this direction include agents designed to
function in a domain akin to Dutch administrative law, and “able to par-
ticipate in legal conversation, while . . . forced to stick to [legal] commit-
ments and conventions” (Heesen, Homburg, and Offereins 1997).
At the risk of offending humanist sensibilities, a plausible case could be made that artificial agents are more likely to be law-abiding than humans because of their superior capacity to recognize and remember legal rules (Hall 2007). Artificial agents could be highly efficient act utilitarians, capable of the kinds of calculations that that moral theory requires (M. Anderson and S. L. Anderson 2007). Once instilled with knowledge of legal obligations and their ramifications, they would need to be "upgraded" to reflect changes in laws; more sophisticated architectures could conceivably search for changes in legal obligations autonomously. A hypothetical example might be an automobile controlled by an artificial agent incorporating a GPS unit, which knows applicable speed restrictions and parking regulations and is programmed to obey those requirements. Consider, for instance, a rudimentary version of such a system built into the Japanese Nissan GT-R car. Unless the car is driven on a preapproved racetrack, a system warning light comes on if it is driven at more than a designated top speed (Kanemura 2008). A vehicle with more elaborate abilities and awareness of applicable speed limits was recently reported by Google (Gage 2010). Such an agent might update itself by communicating with a central database of applicable speed limits and parking restrictions, maintained either by relevant arms of government or by a private-sector information provider.
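The hypothetical speed-governing agent might be sketched as follows; the road categories, limits, and lookup table are invented stand-ins for the central database the text imagines:

```python
# Hypothetical sketch of the speed-governing agent described above. The
# dictionary stands in for a central database of speed limits maintained
# by government or a private provider; all values are invented.

SPEED_LIMITS_KMH = {"residential": 30, "arterial": 60, "motorway": 100}

def governed_speed(requested_kmh, road_type, on_racetrack=False):
    """Clamp the commanded speed to the applicable limit, unless the
    vehicle is on a preapproved racetrack."""
    if on_racetrack:
        return requested_kmh
    limit = SPEED_LIMITS_KMH.get(road_type, 50)  # assumed default for unknown roads
    return min(requested_kmh, limit)

print(governed_speed(80, "residential"))      # 30
print(governed_speed(80, "motorway"))         # 80
print(governed_speed(200, "motorway", True))  # 200
```

Updating the agent would amount to refreshing the lookup table from the central database, which is how conformance could be kept current as laws change.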
Legal scholars often remind us that architecture, market pressure, social norms, and laws all function to regulate behavior, and that architecture and law may come together in the maxim "Code is law" (Lessig 2000). Nowhere is this more true than in the case of artificial agents, for their architectural makeup could bring about the desired conformance with the legal system that regulates them.
Susceptibility to Punishment
These considerations suggest another argument against legal personality for artificial agents: Given their limited susceptibility to punishment, how could the legal system sanction an errant artificial agent? One answer can be found by considering the modern corporation, which is accorded legal personality although it cannot be imprisoned, because it can be punished by being subjected to financial penalties. Artificial agents that controlled money independently would be susceptible to financial sanctions, for they would be able to pay damages (for negligence or breach of contract, for example) and civil penalties or fines for breach of the (quasi-)criminal law from their own resources.
In principle, artificial agents could also be restrained by purely technical means, by being disabled, or banned from engaging in economically rewarding work for stipulated periods. Conceivably, those who engaged them in such work could be punished, much as those who put children to work can be subjected to criminal penalties. Deregistration of an agent or confiscation of its assets might also be used as a sanction, just as winding-up is used to end the life of companies in certain situations, or confiscation is used concerning the proceeds of crime.27 Particularly errant or malevolent agents (whether robots or software agents) could even be destroyed or forcibly modified under judicial order, as dangerous dogs are destroyed by the authorities today.28 A final analogy would be with antivirus software, which destroys millions of copies of malware every day on behalf of users engaging in self-help remedies against malicious agents. It seems implausible to suggest the state would insist on a monopoly over this form of "punishment," even if malevolent agents were accorded personhood.
But the problem might be more fundamental, in that perhaps the punishment of artificial agents would not fulfill any of the usual functions of punishment: deterrence, the according of "just deserts," and education or example-setting (Solum 1992, 1248).
However, obedience to obligations can be engineered in an artificial agent. Such artificial agents could respond to the threat of punishment by modifying their behavior, goals, and objectives appropriately. A realistic threat of punishment can be palpably weighed in the most mechanical of cost-benefit calculations.
As for the "just deserts" function of punishment, it is not clear how punishment could accord just deserts to an agent that lacked the qualities of persons that make them deserving of such punishments. However, "The problem of punishment is not unique to artificial intelligences. . . . Corporations are recognized as legal persons and are subject to criminal liability despite the fact that they are not human beings. . . . [P]unishing a corporation results in punishment of its owners, but perhaps there would be similar results for the owners of an artificial intelligence" (Solum 1992, 1248). Thus, for certain categories of legal persons, just deserts may simply be beside the point. Moreover, this objection would arguably not be fatal for those artificial agents that were capable of controlling money and therefore paying any fines imposed.
But even this perspective does not take the argument for punishment of artificial agents far enough, for artificial agents built using evolutionary algorithms (Bäck 1996) or similar mechanisms that reward legal compliance or ethical behavior, and that respond to situations imbued with a moral dimension, would exhibit a sensibility that would engage the "just deserts" function of punishment. The artificial agent's history of responding correctly when confronted with a choice between legal or ethical acts, whose commission is rewarded, and illegal or unethical acts, whose commission results in an appropriately devised penalty, would be appropriate grounds for understanding it as possessing a moral susceptibility to punishment (we assume the agent is able to report appropriate reasons for having made its choices). An agent rational enough to understand and obey its legal obligations would be rational enough to modify its behavior so as to avoid punishment, at least where this punishment resulted in an outcome inimical to its ability to achieve its goals. While this may collapse the deterrence and just deserts functions of punishment, the two are related in any case, for an entity capable of being deterred is capable of suffering retribution.
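The selection-by-reward idea can be illustrated with a deterministic toy. (Evolutionary algorithms in the sense of Bäck 1996 are population-based and stochastic; this single-individual hill climb, with invented policies and payoffs, merely shows how retaining reward-improving mutations yields a lawful policy.)

```python
# Toy illustration (not the authors') of a policy shaped by rewarding
# lawful choices (1) and penalizing unlawful ones (0). Mutations that
# make the policy behave more lawfully survive; others are discarded.

def fitness(policy):
    """Reward each lawful choice, penalize each unlawful one."""
    return sum(1 if choice == 1 else -1 for choice in policy)

def evolve(policy, generations=5):
    """Keep only mutations that improve lawfulness (a (1+1)-style climb)."""
    for _ in range(generations):
        for i in range(len(policy)):
            mutant = policy[:i] + (1 - policy[i],) + policy[i + 1:]
            if fitness(mutant) > fitness(policy):
                policy = mutant
    return policy

start = (0, 1, 0, 0, 1, 0, 1, 0)  # a policy that often acts unlawfully
best = evolve(start)
print(best)           # (1, 1, 1, 1, 1, 1, 1, 1): fully law-abiding
print(fitness(best))  # 8
```

The surviving policy's history of rewarded lawful choices is exactly the kind of record the text suggests could ground a moral susceptibility to punishment.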
Finally, as for the educative function of punishment, while punishment of an artificial agent might not be educative for humans, it would nevertheless be educative for other artificial agents, given sufficient intelligence. After all, examples of corporate punishment are taken very seriously by other corporations.
Contract Formation
Moving on from punishment, we note that artificial agents can be capable of manifesting the intention to form contracts. When we interact with artificial agents that operate shopping websites, we are able to form contracts because those agents, in a systematic and structured way, make and accept offers of goods and services in exchange for money. Legal personality might not be necessary in order to explain, in doctrinal terms, how this behavior gives rise to a contract between the user and the operator of the artificial agents, but there is no doubting the ability of artificial agents to bring about the formation of contracts.
Property Ownership and Economic Capacity
The concept of “legal person” is intimately linked with the concept of
property. Indeed, the capacity to own property has been one of the con-
stitutive rights of legal personhood, and in the U.S. context, the
Supreme Court’s most consistent rulings on corporate personhood have
occurred in the area of property rights.29 The genesis of the granting of
personality to corporations—in the United States for the purposes of the
Fourteenth Amendment30—is instructive, for it followed closely on
grants of charters to corporations to own property. The ability to own
property thus formed one of the bases for constituting the corporate legal
subject.
Furthermore, in the historical context, the categories of legal persons and property in the case of humans have been generally mutually exclusive across legal systems and over time (the treatment of slaves as property is the most graphic illustration of their lack of personhood). Developing a full-blown concept of a person has thus necessitated a separation between "legal person" and property, which made legal institutions "clarify the distinctions and tensions between the definition of human, person and property" (Calverley 2008). However, in the case of dependent legal persons such as corporations that are joint-stock companies, whose capital is jointly owned by their members in the form of shares, legal persons can be the subject of ownership (Gevurtz 2000).
Lastly, the concept of patrimony in civil law systems similarly bears a
close relationship to the concept of person: A patrimony (i.e., an estate)
must belong to a particular owner who is a natural or legal person (An-
drade et al. 2007).
The enduring importance of ownership to the concept of a legal person indicates an important condition of being accorded independent legal personality: the technical ability to control money, that is, to pay money; to receive and hold money and other valuable property such as securities; and to administer money with financial prudence. Given the importance of artificial agents in the affairs of banks and finance houses, and the level of automation already observed in this regard, this appears unproblematic.
An artificial agent could derive the money or other property needed to be a candidate for independent legal personality via ordinary gainful employment on behalf of users or operators. The agent might receive payment from hosts in exchange for services rendered, in the form of credits at electronic cash accounts. Conceivably the operator and the agent could divide the income from the employment between themselves, with the agent free to dispose of its share of the income as it saw fit. This might be in the operator's interests if it increased the motivation of the agent, and therefore indirectly the operator's income, or even simply if the agent was more efficient at spending money to facilitate its own activities than the operator. This raises the question of what the agent could conceivably do with the money; what ends would it have that could be realized by access to money? In emerging electronic marketplaces where agent-to-agent transactions are increasingly important, some agent-centric ends might be realized by such availability.
Despite the competencies just noted, legal personhood for artificial agents is not a foregone conclusion, for several objections to such a status are possible.
5.4. Philosophical Objections to Personhood for Artificial Agents
Artificial agents will face deeply rooted skepticism about whether such seemingly inanimate objects could ever meet the conditions for personhood in the broader, philosophical sense. Objections of this kind are irrelevant in respect of dependent legal personality such as is possessed by corporations, ships, or temples (or, perhaps, living human beings not sui juris such as children or those not of sound mind). These objections, however, relate squarely to the possibility of independent legal personality.
Philosophical understandings of the moral person often inform an intuition in the legal context that "natural" legal persons are mature adult humans and the rest mere "legal fictions." Suggestions that a particular entity's legal personality is a legal fiction are often just arguments against the possibility of its moral personality; this is best displayed in the case of corporations, readily accepted in law as persons, but less readily so in the philosophical sense. Philosophical theorizing about persons thus attempts to point out human distinctiveness from mere things, for such a distinction leads to the concept of persons as objects of ethical discourse and worthy of respect as subjects guided by laws and moral concerns. Thus persons have a dual nature infected by their relationship with the law: while they are the subject of legal attributions of responsibility, they enjoy the position of being the basic objects of moral concern and benevolence, as worthy of regard and caring (Rorty 1988).
Still, the philosophical development of various conceptions of the metaphysical or moral person suggests that whatever the concept of person, and the desirability of it including all humans, it cannot exclude beings other than humans. For philosophical views of personhood often cleave the concepts of "human" from "person." For instance, in stating "All rational beings are persons," Kant made rationality, not humanity, essential to personhood. Or consider Locke's view that persons are rational selves, not merely rational men, because "man" has reference to corporeal form, which is not part of the meaning of "person." A person for Locke, rather, is "a thinking intelligent Being, that has reason and reflection, and can consider itself as itself, the same thinking thing in different times and places; which it does only by that consciousness, which is inseparable from thinking, and as it seems to me essential to it" (1996, book 2, chap. 27, sec. IX).
Significantly, Locke noted person "is a forensic term, appropriating actions and their merit; and so belongs only to intelligent agents, capable of a law" (Locke 1996, book 2, chap. 27, sec. IX). By "capable of a law," Locke suggests a person is necessarily capable of understanding its legal obligations and any punishment that might be inflicted for breach thereof (Naffine 2003); person may be the name for the entity recognized by others as legally or morally accountable. In Locke's account, a person is an enduring self-reflective entity, one to whom responsibility and blame can be assigned for temporally distant events. This understanding of the person is both backward- and forward-looking in terms of ascribing responsibility for events that occurred in the past and for expected stability in future social expectations from those around us (Rorty 1988). Our earlier discussion of the capability of artificial agents to display sensitivity to, and act in conformance with, their legal obligations would suggest that this particular philosophical requirement could be met by them.
Thus, prima facie, we do not consider the objections to the notion of according personality to artificial agents insurmountable, for we do not accept a priori "a single uniform rule that the category of persons is co-extensive with the class of human beings" (Weinreb 1998). Such rejections of personality for artificial agents implicitly build on the chauvinism, grounded in a dominant first-person perspective or in (quasi-)religious commitments, common to arguments against the possibility of artificial intelligence.31
While the philosophical cleavage between the concept of “person”
and “human” is a long-standing one (Strawson 1959; Ayer 1963), never-
theless, “If Venusians and robots come to be thought of as persons, at
least part of the argument that will establish them will be that they func-
tion as we do: that while they are not the same organisms as we are, they
are in the appropriate sense the same type of organism or entity” (Rorty
1976).
Objections to the possibility of personality for artificial agents are often grounded in a particular conception of the human "type," one the law is not necessarily committed to. These objections highlight an important epistemic asymmetry. We, or at least the competent computer scientists among us, know how computers work, but we do not yet know well enough how human brains work, and neuroscience offers only partial empirical confirmation of our best hypotheses (Machamer, Grush, and McLaughlin 2001). We lack detailed knowledge of our cognitive architecture; arguably, we know more at the logical level than at the physical level, as the difficulties of neuroscientific investigations amply demonstrate (Machamer, Grush, and McLaughlin 2001). But in the case of artificial agents, we possess fine-grained knowledge of their physical and algorithmic architecture. This familiarity breeds contempt for the artificial agent, and it is this familiarity that Dennett's example of a sufficiently complex and adaptive agent described in chapter 1 attempts to dispel.32
Such an epistemic asymmetry leads to repeated violations of the following rules, originally suggested in the context of determining animal rights: "Rule One: Only with the utmost effort can we ever hope to place ourselves fairly in nature. Rule Two: We must be at our most skeptical when we evaluate arguments that confirm the extremely high opinion that we have of ourselves. Rule Three: We must play fair and ignore special pleading when we assess mental abilities" (Wise 2000, 121).
In general, objections to the possibility of artificial agents attaining personhood are similar to general arguments against the possibility of artificial intelligence, which frequently postulate "something missing" in a computational architecture disqualifying it from being "sufficiently like us." For instance, it is the absence of these qualities that supposedly makes artificial agents not susceptible to punishment (because they lack a moral sense) or incapable of taking discretionary decisions (because they lack free will and autonomy). These objections find common ground in a skepticism that human attributes can be the subjects of a naturalistic understanding.33 Our refutation of these objections is informed by an adherence to the spirit of Wise's three rules.
Free Will
Perhaps the most damning such objection is that an artificial agent cannot possess free will because "it is just a programmed machine." The UETA, for example, notes an electronic agent is to be understood as a "machine," functioning as a tool for the persons using it, with "no independent volition of its own."34 From this claim, the case for an artificial agent's personality appears irreparably damaged, for a programmed machine could presumably never display the qualities that we, as apparently freely choosing human beings, appear to have.
There are two responses to the objection. The first is that understanding artificial agents as the subject of the intentional stance enables viewing them as the originators of actions (and thus as the subjects of "volition"). Second, there is an important reductive way to view free will that considerably demystifies it. An operative assumption for the concept of free will is that "there is a well-defined distinction between systems whose choices are free and those which are not" (Sloman 1992). But a closer examination of agent architectures reveals no one particular distinction. Instead, there are many different distinctions, all of which correspond to particular design decisions that present themselves to the designer of the system in question. Compare, for instance, an agent that can simultaneously store and compare different motives with an agent that has only one motive at a time. Or compare agents all of whose motives are generated by a single top-level goal (e.g., "buy this book") with agents (such as humans) with several independent sources of motivation, for example, thirst, hunger, sex, curiosity, ambition, or aesthetic preferences (Sloman 1992).
Rather than speaking of a binary concept of free will, as something that is either present or not with no shadings in between, we may speak of systems of greater or lesser "degrees of free will" (Franklin 1995). One way to ascertain whether an artificial agent has a degree of free will is therefore to determine to what extent it instantiates design features that let us make these distinctions. Our assessment of ourselves as possessors of free will is plausibly viewed as just a report on a particular positioning of our capacities along such a spectrum of free will, for it is very doubtful that any human decisions are free of any external influence whatsoever and are entirely self-caused. Indeed, to look at the problem of free will closely is to notice that human beings' actions are subject to the same objections (Copeland 1993).
A plausible account of human free will is that an action is free if caused through reasoning and deliberation on the part of the agent. In this sense, artificial agents could possess free will. For free will is compatible with a kind of determinism; what is crucial is the role of second-order volitions (Frankfurt 1971).35 Persons can have beliefs and desires about their beliefs and desires (about what they might want them to be) and can act according to these higher-level beliefs and desires; such agents must be the causal agents for their actions so guided, and it is in this agency that their free will resides.
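The structure of Frankfurt-style second-order volition can be schematized in a toy sketch (an invented illustration, not the authors' proposal): the agent's higher-order attitudes select which first-order desire becomes effective in action.

```python
# Schematic sketch of second-order volition: the agent holds desires
# about its own desires and acts from the first-order desire its
# higher-order attitude endorses. Desire names and strengths are invented.

first_order = {"speed": 0.9, "obey_limit": 0.6}      # strengths of first-order desires
second_order = {"speed": False, "obey_limit": True}  # which desires it wants to move it

def effective_desire(first_order, second_order):
    """Return the strongest desire endorsed at the second order,
    falling back to raw desire if nothing is endorsed."""
    endorsed = {d: s for d, s in first_order.items() if second_order.get(d)}
    pool = endorsed or first_order
    return max(pool, key=pool.get)

print(effective_desire(first_order, second_order))  # obey_limit
```

The agent acts from "obey_limit" despite "speed" being the stronger raw desire, because its higher-order attitude endorses only the former; the choice is caused by the agent's own hierarchy of attitudes.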
For an artificial agent to display such attributes, it must be capable of being described as a second-order intentional system. Most fundamentally, "An apparatus that learns to make appropriate decisions in the process of adapting to its surroundings may . . . properly be said to have selected among the alternative choices on the basis of its own deep-seated and indigenous beliefs and desires."36 The decisions of such artifacts could be characterized as intrinsic and voluntary in the sense of being free of extrinsic coercion. Artificial agents also may be said to possess free will insofar as "had they evolved otherwise, they would presumably have behaved differently" (Wein 1992, 153).
If an agent takes an action, we have four choices: to ascribe the causal responsibility for the action to the agent, to its designer, to its operator or user, or to no one at all. The fourth option can be ruled out unless we are willing to admit the existence of effects without causes; the second seems increasingly implausible if the human designer is unaware of the action being committed, and the range of actions demarcated for the artificial agent is sufficiently large and only determined by a sophisticated decision procedure. In some cases, the agent might even act contrary to the operator or user's implicit or explicit expectations. In these cases, causal agency is plausibly ascribed to the agent.
An agent programmed to take an action A, which actually takes that action, is the cause of that action. The reasons for an artificial agent (capable of being the subject of the intentional stance) taking an action are best described in terms of its own desires and beliefs. Consider the case of actions taken by corporations. Just as we may describe actions in terms of the physical movements of human beings, we can describe corporate actions as done for reasons by human beings, and also describe the same actions as done for corporate reasons, which are qualitatively different from whatever reasons individual humans may have for doing what they do (French 1984, 44ff.). Human agency resides in a person's reasons for acting being the cause of her doing so (Davidson 1971; Davidson 1980). Artificial agents are capable of being described in just this way. If an entity is able to reason about its past, modify its behavior, plan its future, and learn from experience (all characteristics present in artificial agent architectures), then perhaps the reluctance when it comes to ascribing "free will" is merely terminological.
When speaking of free will, an intuitive argument that "It is just a programmed machine" appears to have particular force, for the programmed consideration of choices does not appear to meet the intuitive understanding of free will. But the intuitive understanding behind this objection is, as David Hume recognized, a rejection of the naturalistic worldview in relation to humans, for the same objection might be made to free will for humans, governed as we are by natural laws (Hume 1993, sec. VIII). But this does not prevent us from ascribing responsibility to humans, if it is apparent the person committing an act could have chosen to act otherwise. We recognize the existence of such choices all the time. Acting because of having preferred one course of conduct to the other, also felt to be attractive for whatever reason, is all that is required to show intention or volition (Finnis 1995).
In any case, the "It is just a programmed machine" objection is incoherent when examined closely. Too many similarities can be drawn between the combination of our biological design and social conditioning, and the programming of agents, for us to take comfort in the proclamation that we are not programmed while artificial agents unequivocally are. Indeed, neuroscientific research suggests that decision outcomes can be encoded in brain activity of the prefrontal and parietal cortex before entering consciousness, thus casting into doubt the very idea that free will is being exercised consciously even by human beings (Soon et al. 2008). Law and neuroscience diverge at such an important point, for "Legal authorities seem to want a holy grail: a firm dividing line . . . between responsible and irresponsible agents. . . . Such a grail will never be found . . . because of fundamental differences between law and neuroscience. . . . Human brains achieve their goals automatically by following rules which operate outside of our conscious awareness. . . . The fallacy in the classical theories of behavior and free will is the belief that a conscious choice is needed before any action is taken. . . . [D]eeming an individual responsible is not an empirical statement about the functioning of their brain but rather a judgment made within a legal and social framework" (Waldbauer and Gazzaniga 2001).
Thus the crucial issue in the case of artificial agents is whether they can be viewed as practical reasoners, rather than as possessing the property of conscious choice-making. Such a question would invariably require an assessment of their rationality, a quality amenable to empirical assessment, as we noted above.
The programming of the choices of an agent, if made subject to context-sensitive variables and sophisticated decision-theoretic considerations,37 fails to look qualitatively and quantitatively different from a system acting in accordance with biological laws and impinged on by a variety of social, political, and economic forces (i.e., from humans like us). Most convincingly, it is clear a fundamentally human capacity like linguistic ability is the result of societal and environmental programming, as well as innate capabilities.
As Hume suggested, our free will consists in acting in the presence of
choices, and not in being free from the constraints of the natural order.
Any other notion of free will requires us to adopt the implausible view
that there could be uncaused actions. The ascription of free will to us,
and the concomitant denial of it to programmed machines, carry consid-
erable rhetorical weight; it is doubtful they have much philosophical
force in the situation at hand.
Autonomy
This discussion of free will directly impinges on the issue of autonomy, for
an argument for autonomy is an argument for free will: autonomous acts
are freely chosen acts. There is an important intuition at the core of this
requirement: that an autonomous agent is able to consult, and evaluate,
itself in its decision-making, and take corrective action when it so de-
sires. Still, it is wise not to ascribe to humans too much autonomy, for
while we ascribe to ourselves the ability to make judgments and exercise autonomy in our decision-making, that decision-making is guided and influenced by external forces much like those that influence our supposedly free will.
Artificial agents are plausibly reckoned as the originators of their actions, with their autonomy more accurately recognized as a scalar concept, as different agents manifest greater or lesser amounts of autonomy.
The commentary to the UETA acknowledges that “an electronic agent,
by definition, is capable within the parameters of its programming of initiating, responding or interacting with other parties or their electronic
agents once it has been activated by a party, without further attention of
that party.”38 It may be plausibly argued that artificial agents could make
autonomous decisions similar in all relevant respects to the ones humans
make (Wein 1992, 141).
Some philosophical definitions of autonomy would set the bar too
high for many, if not most, human beings (Wise 2000, 246). For example,
Kant’s definition of autonomous action requires an agent to possess the capacity to understand what others can and ought to do in a situation requiring action, and to act only after rationally analyzing alternative courses of action, while keeping in mind that these choices are informed by an understanding of other agents’ capacities and how it would want other agents to act (Kant 1998, 41ff.). Very few adult humans would be
considered autonomous in this manner all of the time.
Artificial agents may also appear incapable of capturing all the nuances encapsulated in Kant’s definition. But even in the case of humans, personhood is assigned across a wide spectrum of autonomy, and no one single definition of autonomy appears to be operative. Comatose, brain-
damaged patients who are nonautonomous, unconscious, and nonsen-
tient are considered persons because of membership in the human species
(Wise 2000, 244; Glenn 2003). And very few adults of sound mind, who
would ordinarily be considered moral persons, consistently act au-
tonomously in the rigorous sense of being entirely unaffected by external
considerations.
Human beings reveal their autonomy in their ends and their actions
being unambiguously identifiable as theirs. A young adult’s decision to
attend medical school can be viewed as an autonomous decision if, after having discounted the effect of parental pressure and societal expectation, we are still able to identify that end as hers. In such a case her decision to do so is rightfully identified as autonomous though not wholly independent of external pressure.
And here again there is epistemic asymmetry: the complexity of
identifying all the pressures acting on human agents leads us to ascribe
decision-making and action-taking autonomy to the identifiable unitary
entity we term a “human being”; the alternative is to lack any under-
standing of the human’s actions short of tracing out increasingly complex
causal chains that terminate in explanatorily useless physical matters of
fact. When artificial agents become complex enough, we will find it easier to make such ascriptions of autonomy. Thus an artificial agent’s relationship to its programmer is one worth studying: As in the case of self-consciousness, is the programmer in the position of “knowing best”? If not, it is increasingly likely that autonomy will be ascribed to that agent.

Moral Sense
Fundamentally, what the possession of a free will and autonomy is most crucial to is the possibility of artificial agents possessing a moral sense. A being without a moral sense can plausibly be regarded as a nonperson from a philosophical or moral perspective. But the importance of the possession of a moral sense to the question of legal personhood should not be overstated. Psychopaths, who plausibly lack a moral sense because of
their lack of empathy and remorse, are not denied independent legal per-
sonhood as a result of their condition, nor are they considered criminally
insane under prevailing legal theories. In fact, “Psychopaths do meet cur-
rent legal and psychiatric standards for sanity. They understand the rules
of society and the conventional meanings of right and wrong. They are
capable of controlling their behavior, and they are aware of the potential
consequences of their acts. Their problem is that this knowledge fre-
quently fails to deter them from antisocial behaviour” (Hare 1999, 143).
Other humans such as infants and small children, who have little or
no moral sense, and few legal responsibilities, are also accorded (if only
dependent) legal personality. Here, recognition of species resemblance
and similarities in potential dispositions between children and adult hu-
mans underlies the ascription of legal personality and the granting of le-
gal rights. Similarly, mentally incapacitated adults may have a limited or
no moral sense and yet are accorded dependent legal personality. And
inanimate dependent legal persons such as ships and temples have no
moral sense at all.
If we consider the possession of a moral sense to be contingent on the
possession of, and rational acting upon, a privileged set of beliefs and de-
sires, the moral ones, we have a means of ascribing a moral sense to an
artificial agent. For it is plausible to consider that our interpretation of
human beings as moral agents is dependent on our adopting a “moral
stance” toward them: we ascribe a moral belief (“John believes helping
the physically incapacitated is a good thing”) and on the basis of this as-
cription, predict actions (“John would never refuse an old lady help”) or
explain actions (“He helped her cross the street because he wanted to
help a physically incapacitated person”). To display a moral sense, then,
would be to provide evidence of the direction of action by a set of beliefs
and desires termed “moral.”
If we could predict an artificial agent’s behavior on the basis that it rationally acts upon its moral beliefs and desires, the adoption of such a moral stance toward it is a logical next step. An artificial agent’s behavior could be explained in terms of the moral beliefs we ascribe to it: “The robot avoided striking the child because it knows that children cannot fight back.” Intriguingly, when it is said that corporations have a moral sense we find the reasons for doing so are similar to those applying to artificial agents: because they are the kinds of entities that can take intentional actions and be thought of as intentional agents (French 1984, 90ff.).
Failures of morality on the part of artificial agents could be understood as failures of reasoning: the failure to hold certain beliefs or desires, or to act consistently with those beliefs and desires. If we could use a language of morally inflected beliefs and desires in describing and predicting the behavior of an artificial agent, then it would make sense to discuss the behavior of that artificial agent as morally good or bad.
Perhaps to be a moral person, an entity must be capable of express-
ing regret or remorse or both and of thereby suffering punishment. But
even such attributions can be “cashed out” in intentional terms (French
1984, 90ff.). Consider regret, which can be viewed as the capacity to
view oneself as the person who did x and to feel or wish that he had not
done x. Here, the inner emotion remains inaccessible; what is accessible
is an outward manifestation (for example, the expression of regret or re-
morse), the ascription of which is made coherent by its consistency with
other ascriptions (French 1984, 90ff.). But these outward manifestations
are precisely those that would be of interest to us in the case of artificial
agents. In such a context, what could it mean to forgive a computer? It
might mean that “we would not localize the explanation for malfunction
in the way the computer had adapted to its environment, but, perhaps, in
the unusual character of the circumstances or in deficiencies in the environment in which it learned its typical response patterns” (Bechtel
1985a, 305).
The possibility that artificial agents could possess a moral sense is not
an idle one. A large and growing body of work suggests they can be
thought of as moral agents via a variety of empirical and philosophical
considerations (Wallach and Allen 2008; Gips 1995; Floridi and Sanders
2004; Allen, Varner, and Zinser 2000; Coleman 2001). In particular, an
agent might be imagined that “will act like a moral agent in many ways”
because it is “conscious, to the extent that it summarizes its actions in a
unitary narrative, and . . . has free will, to the extent that it weighs its future acts using a model informed by the narrative; in particular, its behavior will be influenced by reward and punishment” (Hall 2007, 348).
Ascriptions of a moral sense are, as we noted when discussing the possible legal liability of an artificial agent, often linked with the possibility of ascribing them responsibility. But these considerations might be independent of personhood and agency (Stahl 2006). What is crucial is whether responsibility ascriptions serve a socially desirable end and bring
about positive social outcomes and consequences, especially when indi-
vidual human responsibility can be hard to ascribe. Such a rationale may
be employed in holding corporations responsible; holding Exxon respon-
sible for an oil spill leads to socially desirable ends for those parts of soci-
ety that came into contact with it (Stahl 2006). A similar possibility is
ever-present in the case of artificial agents.

The Problem of Identification


There is one practical objection to the possibility of legal personhood for
artificial agents: how are they to be identified? (Bellia 2001, 1067). This difficulty was considered briefly in our discussion of attribution of knowledge when the notion of “readily accessible data” was at stake, for how the agent is defined will clearly affect what data is considered accessible.
Consider an artificial agent instantiated by software running on hardware. It is not clear whether the subject agent is the hardware, the software, or some combination of the two. To make things worse, the hardware and software may be dispersed over several sites and maintained by different individuals. Similarly, for artificial agents implemented in software, it is not evident which of its two forms, the source code or the executable, should be considered the agent. Our identification difficulties do not end with the choice of the executable as the
agent, for unlimited copies of the agent can be made at very low cost
(Sartor 2002). Perhaps each instance of the agent could be a separate
person, especially if capable of different interactions and functional roles.
Or consider an agent system, consisting of multiple copies of the same
program in communication, which might alternately be seen as one en-
tity and a group of entities (consider, for instance, “botnets,” groups of
“zombie” computers controlled by multiple copies of malware and used
by hackers to mount denial-of-service attacks or by spammers to send
spam email [Zetter 2009]).
These problems are not insurmountable. Similar problems of iden-
tity are evident in entities like football teams, universities, and corpora-
tions, and nevertheless, a coherent way of referring to them emerges over
time based on shared meanings within a community of speakers. Thus so-
cial norming over which conventions to follow in referring to the entity
in question can establish implicit identity conditions.
If such norming does not emerge, or does not solve the identification problem well enough, then agents could be identified via a registry, similar to that for corporations, where “registration makes the corporation identifiable. For computers to be treated as legal persons, a similar system
of registration would need to be developed. . . . [A] system of registration
could require businesses who wish to rely on computer contracts to regis-
ter their computer as their ‘agent’” (Allen and Widdison 1996, 42). Such
a “Turing register” would enable the registration and recognition of
agents and their principals, much as companies are registered today
(Wettig and Zehendner 2003; Allen and Widdison 1996; Weitzenboeck
2001; Karnow 1996; Karnow 1994). The cost of establishing such a register would be significant and would need to be weighed against the benefits of doing so (Kerr 1999; Miglio et al. 2002), a consideration present in all public policy interventions. It may be that the number and complexity of artificial agents, and the diversity of their socioeconomic interactions, eventually make the case for such intervention overwhelming.
In sum, none of the philosophical objections to personhood for
artificial agents—most but not all of them based on a “missing something” argument—can be sustained, in the sense that artificial agents can be plausibly imagined that display that allegedly missing behavior or attribute. If this is the case, then in principle artificial agents should be able
to qualify for independent legal personality, since it is the closest legal
analogue to the philosophical conception of a person.
By the same token, the history of legal decisions about legal person-
hood reveals that ultimately, tests relevant to philosophical personhood
may be irrelevant to the determination of legal personality for artificial
agents. What would be of determinative importance would be the sub-
stantive issue before the courts, and what the desired result might be from
a policy standpoint. For more than anything else, the jurisprudence re-
garding legal personhood appears to be crucially result-oriented.

5.5. The Significance of Personhood Jurisprudence

Legal notions of personhood, in general, remain indeterminate, for courts have sometimes treated personhood as a commonsense concept39 and sometimes as a formal legal fiction; the history of personhood jurisprudence reveals no unanimity in its conception of the legal person. In large part, this is because legal rulings affect social understandings of personhood and because legal personhood may bring rights and protections
in its wake. The most contentious debates over legal personhood arise
when considerable disagreement exists over whether the entity in ques-
tion can be regarded as human40 and whether clearly nonhuman entities
can be considered persons.
In the case of slavery, the status of that class of human beings wavered between property and persons, revealing that legal rulings reflected
social attitudes and were marked by expediency.41 Judges ruled the com-
mon-law crime of murder extended to killing slaves and while doing so,
stressed slaves’ humanity.42 The law also treated slaves as persons by
stressing their humanity when the need was felt to try them for crimes,43
despite arguments by slaves they were not legal “persons” and therefore
not subject to the criminal law.44 Judges, however, ruled the common law
of assault and battery, in the context of owners’45 and even nonowners’
assaults on slaves, did not apply. Courts argued slaves qua slaves could not
enjoy the general grants of rights and privileges that other humans en-
joyed46 because their essential natures rendered them “subject to despo-
tism”;47 that they could not be persons because it represented an “inher-
ent contradiction”;48 that perhaps they were more akin to animals, or types of chattel, or real estate;49 and yet other courts took refuge in the
difference between humanness and legal personhood to deny legal per-
sonality to slaves.50 This variety of attitudes indicates the personhood of
slaves was a contested notion, one bent to accommodate putative social
needs.
The history of corporate personhood reveals a similar mixture of attitudes and motivations; the legal personality of corporations is only uncontroversial when statutes explicitly define “persons” as including corporations (Note 2001). The rulings of the U.S. Supreme Court
concerning corporate personhood are accordingly notable in their variety. The Court first asserted corporate personhood with respect to property rights in Santa Clara County v. Southern Pacific Railroad,51 by saying
corporations counted as “persons” within the scope of protection of the
Fourteenth Amendment’s Due Process Clause. Indeed, the Court said it
“[did] not wish to hear argument on the question whether the provision
in the Fourteenth Amendment to the Constitution . . . applies to these
corporations. We are all of [the] opinion that it does.”52
But later incarnations of the Court were not so confident. Justice Douglas, dissenting in Wheeling Steel Corp. v. Glander,53 suggested the Fourteenth Amendment, written to eliminate race discrimination, was not aimed at corporations,54 and extending due process property rights to corporations by including them in the meaning of the amendment’s Due Process Clause clashed with other references to “persons” or
“citizens”55—for corporations were not “born or naturalized,”56 and were
not “citizens” within the meaning of the Privileges or Immunities Clause
of the Fourteenth Amendment.57
Still, the Supreme Court also ruled corporations were persons for the
purpose of the Fourth Amendment’s protections against unreasonable
searches,58 the First Amendment’s Free Speech Clause,59 the Fifth
Amendment’s Double Jeopardy Clause,60 and the Sixth Amendment’s
Jury Right Clause.61 But it has refused to extend personhood to corpora-
tions when rights seemed to derive from interests exclusive to humans.
For example, it rejected the claim corporations were U.S. citizens,62 and
persons for the purpose of Fifth Amendment protections against self-in-
crimination.63
The Supreme Court has employed various theories to underwrite
these rulings (Mayer 1990; Note 2001). Under the “artificial entity” or
“creature” theory it has held rights that inhere in humans as humans may
not be extended to nonhuman entities; using the “group” theory, it has
emphasized the human individuals that constitute the corporation, that
corporations are entitled to legal personhood as doing so protects the
rights of the constituent human persons; under the “natural entity” or
“person” theory, which views the corporation as an autonomous entity,
with existence separate from its creation by the state or by the individu-
als that constitute it, it has attempted to extend to corporations the full
panoply of legal rights (Schane 1987; Rivard 1992; Note 2001).
These different approaches suggest the Court’s corporate personhood
jurisprudence is result-oriented (Rivard 1992), that as the American econ-
omy became increasingly dependent on corporations, modern corporations
became dependent on Bill of Rights protections, and courts adjusted the
boundaries of legal personhood to accommodate the modern corporation’s
need for these protections (Mayer 1990). Thus courts disdained philoso-
phy and appeared to be motivated entirely by pragmatism with judges se-
lecting those theories of personhood that suited the outcomes they desired
on a case-by-case basis, an attitude that suggests ultimately that “[p]erson-
hood is . . . a conclusion, not a question” (Rivard 1992).
But decisions on personhood have far-reaching political implications if personhood is viewed as extending protections to a previously unprotected class of entities. Given the social meaning, and the embodiment and signaling of social values in legal statements such as statutes and judicial opinions (Sunstein 1996; Posner 2000, 2ff.) and the ability of the
law to shape behavior by creating social norms (Sunstein 1996), person-
hood jurisprudence could be interpreted as making normative statements
about the worth of the objects included and excluded (as in slavery rul-
ings or in the status of women in nineteenth-century England) (Balkin
1997). Legal rulings from the slavery era that showed some humans were
regarded by the law as less than human, or less than full legal persons,
shaped a society’s view of humanity and reflected a society’s prejudices
(Note 2001).
In deciding that artificial agents are persons, courts or legislatures would send a message about their commonality with us. A refusal to do so would express worries about whether doing so might cheapen human personhood, especially as it might be taken to mean that artificial agents would possess qualities that we take to be especially human.
The jurisprudence of personhood in abortion cases demonstrates the
substantive weight of legal fictions, for here judges insist that persons are legal fictions (Note 2001). But if personhood could be manipulated and interpreted simply as a legal fiction, no such insistence would be necessary (Note 2001). For denying or granting legal personality to particular
entities indicates a position on the societal valuation of the entity in
question. If legal personhood is understood as a zero-sum game, where
personhood decisions influence interests other than those of the entity in
question, then the conferral of personhood on nonhuman entities risks
cheapening the personhood of natural persons;64 grants of legal personal-
ity to corporations could be viewed as cheapening the social meaning of
humans’ legal personality if “equality of constitutional rights plus an inequality of legislated and de facto powers leads inexorably to the supremacy of artificial over real persons” (Mayer 1990).
Legal ambivalence over corporate personality and about human
uniqueness in an increasingly corporate world could rest on concerns
that assigning personhood to corporations may work as an illocutionary
act, bringing a particular state of affairs into existence by proclamation
(Schane 1987), and perhaps only secondarily on the associated conceptual difficulties.65 Debates about corporate personhood reflect a tension
between “the desire to stimulate the economy by granting constitutional
protections to corporations and the fear unchecked corporate growth
may have socially deleterious effects or that unchecked recognition of
corporate personhood may cheapen our own” (Note 2001, 1766).
There is similar anxiety when it comes to personhood for artificial agents, for the concerns are almost the same. There is a desire to grant
them increasing amounts of power, to delegate increasing amounts of responsibility, to benefit from research into increasingly advanced artificial agents, and to incentivize the production of artificial agents that may be accorded greater responsibilities. Corresponding to these desires, the dominant anxiety concerns the role of humanity in an increasingly technologized society (witness, for instance, the intense anxiety over human cloning). The extension of legal personhood to artificial agents might be felt to lead to the “devaluation of what it means to be human” (Fischer 1997, 569). It may be that the easier route to granting artificial agents the status of legal persons is to insist that this is a legal fiction, one intended for doctrinal convenience and to facilitate e-commerce. But such considerations might be overridden by their larger social role, and it might be impossible to make a legal ruling of personhood for artificial agents without implications for more fundamental issues of personhood.
The debates over slavery remind us of uncomfortable parallels with
the past, for the abusive, pejorative labels flung at programs (“the computer is just a dumb machine”; “it does what you tell it to do”; “all it does
is garbage in, garbage out”), the comparison of dull human beings to com-
puters (“he had a robotic demeanor”), the knee-jerk reactions from people
anxious to assert human uniqueness (“a program will never do anything
creative”; “a program can’t see a beautiful sunset”), reflect ongoing tension over humanity’s role in an increasingly technologized world.
In the case of artificial agents, the best philosophical arguments do not rule out personhood; instead they acknowledge the theoretical possibility of personhood for artificial agents (Chopra and White 2004; Rorty 1988; Note 2001; Berg 2007; Goldberg 1996; Solum 1992; Rivard 1992; Calverley 2008; Glenn 2003; Naffine 2003; Willick 1985; Kitcher 1979); thus the decision to accord or refuse legal personality (both dependent and, as a function of increasing competence, independent) would ultimately be a result-oriented one for courts and legislatures alike, and cannot rest solely on conceptual claims.

5.6. Recognizing Artificial Agents as Persons

The most salutary effect of our discussions thus far on the possibility of personhood for artificial agents might have been to point out the conceptual difficulties in ascriptions of personhood—especially acute in accounts of personhood based on psychological characteristics that might give us both too many persons and too few (Wilson 1984)—and its parasitism on our social needs. The grounding of the person in social needs
and legal responsibilities suggests personhood is socially determined, its
supposed essence nominal, subject to revision in light of different usages
of person (Bakhurst 2005, 463). Recognizing personhood may consist of a
set of customs and practices, and so while paradigmatic conceptions of
persons are based on human beings, a grounding that “tacitly informs all
our thinking about persons, including our speculations about those of a
supposedly non-human variety” (Bakhurst 2005, 463), the various con-
nections of the concept of person with legal roles concede personhood is
a matter of interpretation of the entities in question, explicitly depen-
dent on our relationships and interactions with them.
Personhood thus emerges as a relational, organizing concept that reflects a common form of life and common felt need. For artificial agents to become legal persons, a crucial determinant would be the formation of genuinely interesting relationships,66 both social and economic, for it is the complexity of the agent’s relational interactions that will be of crucial importance.
Personhood is a status marker of a class of agents we, as a species, are
interested in and care about. Such recognition is a function of a rich
enough social organization that demands such discourse as a cohesive
presence and something that enables us to make the most sense of our fel-
low beings. Beings that do not possess the capacities to enter into a
sufficiently complex set of social relationships are unlikely to be viewed as moral or legal persons by us. Perhaps when the ascription of second-order intentionality becomes a preferred interpretational strategy in dealing with artificial agents, relationships will be more readily seen as forming between artificial agents and others, and legal personhood is more
likely to be assigned.
Fundamentally, the question of extending legal personality to a par-
ticular category of thing remains one of assessing its social importance:
“whether an entity will be considered by the community and its lawmak-
ers to be of such social importance that it deserves or needs legal protec-
tion in the form of the conferral of legal personality” (Nosworthy 1998).
The evaluation of the need for legal protection for the entity in question
is sensitive, then, to the needs of the community. The entity in question
might interact with, and impinge on, social, political, and legal institutions in such a way that the only coherent understanding of its social role
emerges by treating it as a person. The question of legal personality sug-
gests the candidate entity’s presence in our networks of legal and social
meanings has attained a level of significance that demands reclassification. An entity is a viable candidate for legal personality in this sense if it fits within our networks of social, political, and economic relations in
such a way it can coherently be a subject of legal rulings.
Thus, the real question is whether the scope and extent of artificial agent interactions have reached such a stage. Answers will reveal what we take to be valuable and useful in our future society as well, for we will be engaged in determining what roles artificial agents should be playing for us to be convinced the question of legal personality has become a live issue. Perhaps artificial agents can only become persons if they enter into social relationships that go beyond purely commercial agentlike relationships to genuinely personal relationships (as with medical care robots or companion robots). And even in e-commerce settings, an important part of forming deeper commercial relationships will be whether trust will arise between human and artificial agents; users will need to be convinced “an agent is capable of reliably performing required tasks” and will pursue their interests rather than those of a third party (Serenko, Ruhi, and Cocosila 2007).
Autopoietic legal theory, which emphasizes the circularity of legal
concepts, suggests, too, that artificial agents’ interactions will play a crucial role in the determination of legal personality: “[E]ntities are described as legal persons when the legal system attributes legally meaningful communications to them. . . . [W]ithin the legal system, legal persons
are those entities that produce legal acts. . . . A natural person is capable
of many types of legal acts. . . . A wild animal is not capable of any . . . legal acts. Hence, the legal system treats natural persons, but not wild animals, as legal persons” (Teubner 1988). If it is a sufficient condition for personality that an entity engage in legal acts, then an artificial agent
participating in the formation of contracts becomes a candidate for legal
personality by virtue of its participation in those transactions.
Personhood may be acquired in the form of capacities and sensibilities acquired through initiation into the traditions of thought and action embodied in language and culture; personhood may be the result of the maturation of beings, whose attainment depends on the creation of an evolving intersubjectivity (Bakhurst 2005). Artificial agents may be more convincingly thought of as persons as their role within our lives increases and as we develop such intersubjectivity with them. As our experience with children shows, we slowly come to accept them as responsible human beings. Thus we might come to consider artificial agents as dependent legal
persons for reasons of expedience, while ascriptions of full moral person-
hood, independent legal personality, and responsibility might await the
attainment of more sophisticated capacities on their part.

5.7. Conclusion

While artificial agents are not yet regarded as moral persons, they are coherently becoming subjects of the intentional stance, and may be
thought of as intentional agents. They take actions that they initiate,
and their actions can be understood as originating in their own reasons.
An artificial agent with the right sorts of capacities—most importantly,
that of being an intentional system—would have a strong case for legal
personality, a case made stronger by the richness of its relationships with
us and by its behavioral patterns. There is no reason in principle that
artificial agents could not attain such a status, given their current capacities and the arc of their continued development in the direction of increasing sophistication.
The discussion of contracting suggested the capabilities of artificial agents, doctrinal convenience and neatness, and the economic implications of various choices would all play a role in future determinations of the legal status of artificial agents. Such “system-level” concerns will continue to dominate for the near future. Attributes such as the practical ability to perform cognitive tasks, the ability to control money, and considerations such as cost-benefit analysis, will further influence the decision whether to accord legal personality to artificial agents. Such cost-benefit analysis will need to pay attention to whether agents’ principals will have enough economic incentive to use artificial agents in an increasing array of transactions that grant agents more financial and decision-making responsibility, whether principals will be able, both technically and economically, to grant agents adequate capital assets to be full economic and legal players in tomorrow’s marketplaces, whether the use of such artificial agents will require the establishment of special registers or the taking out of insurance to cover losses arising from malfunction in contractual settings, and even the peculiar and specialized kinds and costs of litigation that the use of artificial agents will involve. Factors such as efficient risk allocation, whether it is necessary to introduce personality in order to explain all relevant phenomena, and whether alternative explanations gel better with existing theory, will also carry considerable legal weight in deliberations over personhood. Most
fundamentally, such an analysis will evaluate the transaction costs and
economic bene‹ts of introducing arti‹cial agents as full legal players in a
sphere not used to an explicit acknowledgment of their role.
Many purely technical issues remain unresolved as yet: secure protocols for agent negotiation, electronic payments, interoperability of artificial agents, authentication between agents (the need for electronic X.509 and XML digital signatures), and so on (Bain and Subirana 2003a; Bain and Subirana 2003b; Bain and Subirana 2003c; Bain and Subirana 2004; Brazier et al. 2003; Brazier et al. 2004).
To engender full trust in such entities as players in the marketplace will also require the development of reputation mechanisms (similar to third-party security certificates issued today) (Bain and Subirana 2003a). A significant development would be the continued advancement of the Semantic Web, often felt to be the ideal environment for agent-oriented computing because of its emphasis on the presence of machine-readable data formats (Bain and Subirana 2004).
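The reputation mechanisms just mentioned can be given a minimal sketch. Assuming a trusted third party that attests to an agent's transaction record, a reputation "certificate" can be modeled as a signed record that a counterparty verifies before dealing with the agent. All names here are hypothetical, and a shared-secret HMAC stands in for the public-key X.509 signatures a real marketplace would use:

```python
import hashlib
import hmac
import json

# Hypothetical certifying authority's key. Real systems would use
# public-key signatures (e.g., X.509 certificates), not a shared secret.
AUTHORITY_KEY = b"authority-secret-key"

def issue_certificate(agent_id: str, completed: int, disputes: int) -> dict:
    """The authority signs a reputation record for an artificial agent."""
    record = {"agent_id": agent_id, "completed": completed, "disputes": disputes}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(AUTHORITY_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_certificate(cert: dict) -> bool:
    """A counterparty checks the authority's signature before transacting."""
    record = {k: v for k, v in cert.items() if k != "signature"}
    payload = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(AUTHORITY_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["signature"])

cert = issue_certificate("shopbot-7", completed=120, disputes=2)
print(verify_certificate(cert))   # True: the record is untampered
cert["disputes"] = 0              # the agent tries to hide its disputes
print(verify_certificate(cert))   # False: the signature no longer matches
```

The design point is that trust attaches to the attesting third party, not to the agent's own claims, which is what the third-party security certificates cited above accomplish.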
Economic considerations might ultimately be the most important in any decision whether to accord artificial agents legal personality. Seldom is a law proposed today in an advanced democracy without some semblance of a utilitarian argument that its projected benefits would outweigh its estimated costs. As the range and nature of electronic commerce transactions handled by artificial agents grows and diversifies, these considerations will increasingly come into play. Our discussion of the contractual liability implications of the agency law approach to the contracting problem was a partial example of such an analysis.
Whatever the resolution of the arguments considered above, the issue of legal personality for artificial agents may not come ready-formed into the courts, or the courts may be unable or unwilling to do more than take a piecemeal approach, as in the case of extending constitutional protections to corporations. Rather, a system for granting legal personality may need to be set out by legislatures, perhaps through a registration system or “Turing register,” as discussed above.
A final note on these entities that challenge us by their quickening presence in our midst. Philosophical discussions on personal identity often take recourse in the pragmatic notion that ascriptions of personal identity to human beings are of most importance in a social structure where that concept plays the important legal role of determining responsibility and agency. We ascribe a physical and psychological coherence to a rapidly changing object, the human being, because otherwise very little social interaction would make sense. Similarly, it is unlikely that, in a future society where artificial agents wield significant amounts of executive power, anything would be gained by continuing to deny them legal personality. At best it would be a chauvinistic preservation of a special status for biological creatures like us. If we fall back repeatedly on making claims about human uniqueness and the singularity of the human mind and moral sense in a naturalistic world order, then we might justly be accused of being an “autistic” species, unable to comprehend the minds of other types of beings.
