Military frameworks: technological know-how and the legitimization of warfare
John Kaag and Whitley Kaufman
To cite this article: Kaag, John and Kaufman, Whitley (2009) 'Military frameworks: technological know-how and the legitimization of warfare', Cambridge Review of International Affairs, 22:4, 585–606
DOI: 10.1080/09557570903325496
URL: http://dx.doi.org/10.1080/09557570903325496
Abstract
It is the elusive target of policymakers, ethicists and military strategists: the target of a ‘just war’. Since the advent of precision-guided munitions in the mid-1970s, commentators have claimed that surgical-strike technology would advance the cause of jus in bello, ending the longstanding tension between effective military engagement and
morality. Today, many policymakers accept that the ethical dilemmas that arise in the
‘fog of war’ can be negotiated by the technical precision of weaponry. This is, at best, only
partially accurate. At worst, its misplaced optimism risks numbing the moral sense of
strategists and, just as importantly, the sensibilities of the general populace. We argue that
the development of precision-guided munitions (PGM), stand-off weaponry and military
robotics may force policymakers and strategists to experience new ethical tensions with an
unprecedented sensitivity and may require them to make specific policy adjustments. In the
move toward more quantitative approaches to political science and international affairs it
is often forgotten that military ethics, and the ethics of military technologies, turn on the
question of human judgment. We argue that the ethical implications of the revolution in
military affairs (RMA) are best investigated by way of a detailed discussion of the tenuous
relationship between ethical decision-making and the workings of military technology.
1. The theoretical foundations of this study were first briefly outlined in John Kaag (2008). The current article, however, departs from that article in significant ways in its detailed and exclusive focus on the ethical implications of military (rather than homeland security) technologies. The issue of intelligence-gathering and PGM technologies, first broached by Kaag (2008), has been developed more fully in the seventh section of the current article.
turns on precision rather than magnitude, now threatens our moral sensibilities.
This danger manifests itself in two distinct, yet related, ways.
First, we risk confusing technical capabilities and normative judgments by
assuming that precision weaponry facilitates ethical decision-making. Here
‘facilitate’, derived from the Latin facilis, means to make easier. Second, we are in danger of
allowing techne to facilitate ethics in a more dramatic sense. Here we might
consider facilis as stemming from the verb facere, meaning to do or make. We risk
our ethical standards when military technologies are purported to make the
thoughtful determinations that have always been the sine qua non of ethics.
The employment of robotics on the battlefield stands as an extreme case of this
problem. Military robotics remains in its early form of research and development,
but recent reports on battle-ready robots should give ethicists pause. In effect,
strategists and theorists have begun to argue that we can make the issue of military
ethics an easy one by placing ‘ethical mechanisms’ in our machinery, thereby
shifting moral responsibility onto techne itself. We argue that the implementation
of these robotics must be preceded by a careful reminder of what ethical judgment
entails, that warfare must be regarded as a strictly human activity and that moral
responsibility can never be transferred to the technology that is employed therein.
be cultivated in light of this fact. It is not the case that our precision must be
refined in order to account for a particular ethical judgment; rather Aristotle
insists that ‘it is the nature of these [ethical] matters’ to remain more complex than
any set of rubrics generated by techne (1137b17). This is one reason for Plato to
argue that Gorgias, and all rhetoricians who attempt to make ethics into a type of
science, are unable to claim expert status in the field of ethics. There are no experts
in a field that is defined by new and changing situations. Heidegger tried to
extend this point in the 1950s when technocrats began to make their way into the
circles of power in Washington and in Europe. At this time, there was a strong yet
misguided belief that strategic experts, detached from the emotional setting of the
battlefield, could wage successful and just wars.
Aristotle is dubious, stating that cases concerning the human good ‘do not fall
under any science [techne] or under a given rule but the individual himself in each
case must be attuned to what suits the occasion’ (1104a10). Moral behaviour
happens in situ, in a human interaction with a particular setting and circumstance.
In her analysis of The Nicomachean ethics, Martha Nussbaum explains that
From Kahn’s statement, it follows that if strategists make and discuss objectively
quantitative estimates of casualties and destruction in a given attack, they are
more likely to avoid the tragedies of war (Ghamari-Tabrizi 2005, 203–204). In one
sense, Kahn’s position seems plausible. Tragedies present people being
destroyed by forces that are beyond their control. The development of military
technology and the corresponding ability to accurately estimate casualties allow
military planners to order aerial strikes with a greater sense of their
consequences, thereby achieving a greater degree of control over a given situation
in the field.
In another sense, however, Kahn’s position on techne and quantitative estimates
appears to miss its mark. As Nussbaum and others have noted, tragedy shows good
people doing bad things. Sometimes the act that the individual intentionally did is
not the same as the thing that is actually accomplished. Regardless of the degree of
precision, strategists must continue to be aware of the possible, indeed inevitable,
disjunction between the intended consequences of attacks and the outcomes of
military confrontation in actu. As Clausewitz noted in the early 1800s, theoretical or
ideal plans of attack, despite their specificity and precision, will remain out of synch
with the sui generis circumstances of particular campaigns. Techne cannot
overcome this Clausewitzian ‘friction’, a term that becomes the central theme of
On war. For the sake of our discussion of precision-guided munitions, it is worth
noting that the concept of ‘friction’ (Gesamtbegriffe einer allgemeinen Friktion)
is coupled with the phrase ‘the fog of war,’ for the friction between plans and actions
turns on the inevitable limitations of human foresight (Clausewitz 1980, 264; Watts
2004). A reliance on mathematics and technical precision does not help us out of
ambiguous judgements, for, as Clausewitz explains, ‘The road of reason . . . seldom
allows itself to be reduced to a mathematical line by principles and opinions . . . The
actor in War therefore soon finds he must trust himself to the delicate tact of
judgement’ (Clausewitz 1980, 314). Clausewitz believed that strategists who
assume that scientific approaches to military strategy can fully overcome fog and
friction are making a serious tactical error; we merely extend this point by
suggesting that such an assumption results in serious moral hazards as well.
In addition to this point concerning unintended consequences, tragedy presents
an even more disturbing situation, namely the occurrence of a tragic conflict. In
such intractable instances, an audience looks on as an action is committed by a
person whose ethical character would, under other circumstances, reject such an
act. Antigone would not normally disobey the mandate of the state, but in her
particular situation she is forced to do so in order to fulfil her sense of filial piety.
Tragedy in this case performs the conflict between ideals in light of given
circumstances. Antigone’s act is not physically compelled, nor is its enactment due
to ignorance or misinformation. ‘Fog’ is not to blame in this case. Instead, this
type of tragedy turns on the inherent pitfalls of moral decision-making, pitfalls that
cannot be obviated by the discussion of quantitative measurements. Tragic conflicts
are interesting and instructive because they remind their audiences that ethical
decision-making is an unshakably human endeavour that occurs in unique and
ever-changing situations. The meaning of right and wrong in these particular
situations cannot be determined by a general metric applied to all cases but only by
way of a unique interpretation of virtue; indeed, these theatrical scenes continue to
fascinate audiences precisely because they cannot be ‘figured out’ in a
scientific manner. In spite of this fact, it seems that Kahn’s position on the general
ethics of techne received wide acceptance in the wake of the attacks of September 11,
2001 and as the US global war on terror (GWT) quickly got underway.
Before advancing our argument, it seems wise to pause for a moment to
consider the implications of the assertion that has just been made. We hold, in the
spirit of Aristotle and Augustine, that the ethics of war turn on the issue of human
judgement, and that judgment, by virtue of its sui generis circumstances and
emotionally laden character, should be regarded as indeterminate. To say that
judgement is indeterminate (aorista) is in no way to succumb to the type of
relativism that bars the way of making normative claims. In our reading, Aristotle
is no relativist. Instead, he is a middle-range theorist who believes that good
judgement can be achieved only through a human’s practised attentiveness to
particular situations, a knowledge of previous forms of judgement, and the
refinement of ideals that, while never fully attained, can serve as causes to pursue.
Aristotle’s understanding of virtue has been widely criticized for not providing
solid guidelines for right action. As JL Mackie noted in 1977,
[T]hough Aristotle’s account is filled out with detailed descriptions of many of the
virtues, moral as well as intellectual, the air of indeterminacy persists. We learn the
names of the pairs of contrary vices that contrast with each of the virtues, but very
little about where or how to draw the dividing lines, where or how to fix the mean.
As Sidgwick says, he ‘only indicates the whereabouts of virtue’. (Mackie 1977, 186)
Mackie is right in the sense that Aristotle’s Ethics is not going to set out a hard
and fast set of guidelines for ethical conduct. He is wrong, however, to suggest
that we should disparage or dismiss Aristotle on these grounds. While Aristotle
might be reluctant to prescribe certain rules to guide our action, he is quite happy to
tell us what is not permissible in making ethical judgements—such as making ethics
into a techne. It is the prohibition against the mechanization of judgement that
serves as the theoretical groundwork for our current project.
So what is this essence of technology and why must tacticians remain wide-eyed
to the dangers that accompany this essence? In ‘The question concerning
technology’, Heidegger reminds his audience that technology should be
understood in two distinct, yet related, ways. First, we must regard it as an
instrumentum, as a mere means to an end. Instruments are not ends in themselves,
but are rather employed at the service of human objectives. This brings us to the
second way of understanding technical capabilities: these capabilities must
always be understood as associated with human purposes and pursuits.
According to Heidegger, technology has always been associated with episteme,
as a means of knowing and as a means of being at home in the world. Modern
technology, however, differs from previous forms of instrumentum in the way that
it pursues knowledge and makes its home in modernity (Heidegger 1993).
The goal of modern technology is the establishment of order and limits
rhetoric as a tool to ‘single out’ individuals as potential targets. In light of this fact,
strategists now face the temptation of relying on technical precision to make moral
distinctions in the targeting cycle.
Heidegger suggests that such a danger is real and present. Modernity has
already allowed technology to reveal the meaning of the natural world; we are
suggesting that technocrats who optimistically speak of ‘military transformation’
would allow PGM to reveal important meanings in the worlds of security, politics
and warfare. For Heidegger, scientific and empirical manipulations ‘designate
nothing less than the way in which everything presences that is wrought upon by
the revealing that challenges’ (Heidegger 1993, 323). The risks of this sort of
manipulation are front and centre in Heidegger’s later work, especially in ‘The
question concerning technology’, the ‘Letter on humanism’ and ‘The turning’.
Heidegger believes that in modernity’s approach to understanding nature we
have reduced it to its instrumental uses. The river is no longer understood as free
flowing, but rather is only understood as the amount of electricity it can generate
when it is dammed up. That is to say that in the face of technological manipulation
the river becomes merely or solely (bloss) a source of power. Similarly, the open
plateau is no longer understood in its openness, but only as ‘being-cordoned-off’
for the purposes of farming; the tree is not understood in its bare facticity, but only
as a form of cellulose that can be used and employed. While Heidegger seems to
flirt with romanticism in his comments, he does make a sound point: the
technologies that are used to put nature in order become the only means of
understanding nature’s emergence. This discussion concerning the ‘enframing’
of nature may appear far afield from a discussion of the ethical implications of
technologies of violence, oppression and militarism. Appearances can be
deceiving. Heidegger believes that the unquestioned technological manipulations
that place nature on hand and under our control are the same sort of
manipulations that allow mass atrocities to occur on the social and political scene.
That is precisely the claim that we are making in this paper in regard to the
advancement of surgical strike capabilities. In a quotation that is often cited, and
even more often misunderstood, Heidegger states that ‘Agriculture is now a
motorized food industry, the same thing in its essence as the production of corpses
in the gas chambers and the extermination camps, the same thing as blockades
and the reduction of countries to famine, the same thing as the manufacture of
hydrogen bombs’ (cited in Spanos 1993, 315). Heidegger has been criticized since
the early 1950s for this comment, for it seems to trivialize the brutality of the
Holocaust by making a comparison between genocide and agriculture.
While this cryptic remark deserves scrutiny along these lines, it does seem to
suggest that being mesmerized by technological expediency can blind us to, or
distract us from, other ways of knowing that do not turn on the rhetoric of utility.
This is the case in the use of PGM as much as it is the case in the employment of
atomic weapons. The promise of the hydrogen bomb is to create an amount of
destruction that is orders of magnitude greater than conventional or fission
bombs. Such power can tempt engineers and strategists to develop and test these
weapons without attending to the on-the-ground implications of these devices.
Combating this form of moral myopia is, in a certain sense, rather easy, for the
developers of these weapons did not purport to save lives, but rather to destroy
them. The case of PGM is slightly different. The promise of PGM is to kill or
neutralize the greatest number of targets while minimizing the risk to innocents
and modern military personnel. Such a promise seems like a good one (if the
targets are justly selected), but unfortunately this is a promise that technology
itself cannot keep. Only human beings can make good on this ethical commitment.
Despite this fact, the development of precision technologies has enabled the
rhetoric of safe, cheap and efficient ‘small wars’. As Michael Adas argues, the
elision between surgical strike technology and the rhetoric of efficient warfare is
just the most recent version of the longstanding partnership between technology
and imperialism (Adas 2006). This reliance on technical capabilities is not easily
criticized by ethicists, since these weapons are developed in the name
of ethics. When the mouthpieces of war machines coopt the language of ethics and
justice, ethicists face greater and more nuanced challenges.
In light of this discussion, a related question arises: Do military professionals
understand the moral challenges of particular battle-spaces by way of ethical
training or only through the technical frameworks of the weaponry employed?
Heidegger restates the broad point concerning technological ‘enframing’ in his
later writing: ‘When modern physics exerts itself to establish the world’s formula,
what occurs thereby is this: the being of entities has resolved itself into the method
of the totally calculable’ (Heidegger 1998, 327). The current revolution in military
affairs driven by the US Department of Defense (DoD) has encouraged modern physics and technology to
exert itself in order to establish a formula for modern warfare. The 2006
Quadrennial Defence Review (QDR), which provides objectives and projections
for US strategy, aims at ‘minimizing costs to the United States while imposing
costs on adversaries, in particular by sustaining America’s scientific and
technological advantage over potential competitors’ (QDR 2006, 5). This comment
indicates that US military tactics are tacitly employing an egoistic version of a
utilitarian standard as their ethical norm (the ‘good’ is achieved in minimizing
costs while maximizing benefits to allies and ‘friends’). Many disadvantages of
this moral framework have been repeatedly voiced by critics of utilitarianism—
one of which is the fact that the metric of utility changes in reference to US military
personnel, innocent civilians and enemy combatants. There is, however, one
supposed advantage of utilitarianism, namely that it is ‘totally calculable’.
The calculations of utilitarian measurements are allied closely with the
calculations of technical precision, and, in the case of the QDR objective,
strategists seem to indicate that ‘technological advantage’ can aid in making this
moral calculation of cost– benefit analysis. The philosophical underpinnings of the
QDR are reflected in, and seem to motivate, the research and development of
technologies such as military robotics that will fully replace the human soldier in
battlefield situations.
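To make the worry about ‘total calculability’ concrete, consider a minimal sketch of our own (not drawn from the article or the QDR, and with entirely hypothetical categories, weights and numbers) of what such an egoistic cost–benefit appraisal would look like once it is reduced to arithmetic. The point of the sketch is the authors' point: the computation is trivial, while everything morally contentious is hidden in the choice of weights.

```python
# Toy illustration only: a naive 'egoistic utilitarian' strike appraisal.
# All categories, casualty estimates and weights below are hypothetical.
# Once they are fixed, the 'moral' verdict reduces to arithmetic; the
# contested judgments live entirely in the weights themselves.

EXPECTED_HARM = {            # hypothetical expected casualties by category
    "own_personnel": 0.1,
    "allied_civilians": 0.4,
    "enemy_civilians": 2.0,
    "enemy_combatants": 5.0,
}

WEIGHTS = {                  # note how the metric of utility shifts by category
    "own_personnel": -100.0,
    "allied_civilians": -50.0,
    "enemy_civilians": -5.0,
    "enemy_combatants": 1.0,
}

MILITARY_VALUE_OF_TARGET = 40.0   # hypothetical value assigned to the target

def strike_utility(harm, weights, target_value):
    """Sum the weighted harms and add the value of neutralizing the target."""
    return target_value + sum(weights[k] * harm[k] for k in harm)

if __name__ == "__main__":
    score = strike_utility(EXPECTED_HARM, WEIGHTS, MILITARY_VALUE_OF_TARGET)
    # On this scheme a positive score would 'license' the strike.
    verdict = "permissible" if score > 0 else "impermissible"
    print(f"utility of strike: {score:.1f} -> {verdict}")
```

Nothing in the arithmetic answers why an enemy civilian should count a tenth as much as an allied one; that is precisely the judgment the calculation presupposes rather than makes.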
Current examples include the Air Force Predator Drones, a type of unmanned aerial vehicle (UAV)
with both surveillance and combat capability, which have been used to kill important al-Qaeda
operatives with Hellfire missiles. Of equal importance is the
land-based bomb disposal robot, crucial against improvised explosive devices
(IEDs) in Iraq, the cause of the large majority of US casualties. Robots are
currently used to disarm bombs, explore caves and buildings and scout dangerous
areas so that human soldiers can be spared from these dangerous tasks. Some
Israeli military robots are equipped with submachine guns and with robotic arms
capable of throwing grenades; however, as with US robots, the decision whether
to use these weapons is in the hands of a remote human operator. While these
remotely operated machines are important technological advances and in some
ways are already dramatically changing the way war is fought, it is misleading to
call them battlefield ‘robots’, and they do not appear to raise especially complex or
novel ethics or policy questions beyond what has already been discussed in
reference to PGM.
However, we now face (so we are told) the prospect of genuinely autonomous
robot soldiers and vehicles, those that involve ‘artificial intelligence’ (AI) and
hence do not need human operators. The Future Combat Systems Project, already
underway at a projected cost of US$300 billion, aims to develop a ‘robot army’ by
2012, including a variety of unmanned systems with the capacity to use lethal
force against enemies, requiring the ability to locate, identify an enemy, determine
the enemy’s level of dangerousness and use the appropriate level of force to
neutralize the target, though it is unclear what degree of autonomy these
unmanned systems will have. The US military is now one of the major sources of
funding for robotics and artificial intelligence research (Sparrow 2007, 62). While
at present true robot soldiers remain mere vapourware, this has not stopped
enthusiasts of futuristic warfare from speculating about the imminent
transformation of war. John Pike, recently writing in the Washington Post, declares
that ‘Soon—years, not decades from now—American armed robots will patrol on
the ground as well [as in the air], fundamentally transforming the face of battle’
(2009, B03). Wallach and Allen tell us that current technology is ‘converging on the
creation of (ro)bots whose independence from direct human oversight, and whose
potential impact on human well-being, are the stuff of science fiction’ (Wallach
and Allen 2008, 3). According to a 2005 article in the New York Times, ‘The
Pentagon predicts that robots will be a major fighting force in the American
military in less than a decade, hunting and killing enemies in combat’ (Weiner
2005). Whereas Isaac Asimov’s famous Laws of Robotics mandated that no robot
may injure a human, these robots will, in contrast, be programmed for the very
opposite purpose: to harm and kill human beings, ie, the enemy.
The deployment of genuinely autonomous armed robots in battle, capable of
making independent decisions as to the application of lethal force without human
control, and often without any direct human oversight at all, would constitute a
genuine military as well as moral revolution. It would involve entrusting the
ultimate ethical question to a machine—who should live and who should die?
Of course, machines already make lethal ‘decisions’. An ordinary land mine, for
example, uses lethal force against soldiers or vehicles by detecting their presence
based on pressure, sound or magnetism; advanced mines are even capable of
distinguishing between enemy and friendly vehicles. However, the very lack of
discrimination of anti-personnel mines is the reason that the 1999 Ottawa Treaty
prohibited the use of such weapons, since they do not reliably distinguish
between soldiers and civilians (or even animals) and can be deadly long after the
conflict is finished. Hence the development of genuinely robotic lethal decision-
makers, capable of making rational decisions as to what constitutes a legitimate
target, would in theory surmount this objection and would constitute an
unprecedented step in military technology.
A machine capable of making reliable moral judgments would presumably
require ‘strong AI’, that is, it would have to achieve actual intelligence equivalent or superior to
our own, a project that to date remains a speculative possibility. It seems
therefore quite premature to consider the ethical ramifications of genuinely
autonomous lethal robot soldiers. Indeed, the very project threatens to be self-
defeating if the underlying motivation for robot soldiers is to replace humans in
situations that are dangerous or otherwise undesirable. For a machine that
achieved equivalent mental capacity to human beings could arguably claim
equivalent moral status as well and as such have equal right to be protected from
the dangers of warfare (it should be noted that the Czech word from which we
derive ‘robot’ means serf or slave). Of course the robots might be better suited for
dangerous missions, having built-in armour and weaponry to protect them.
However, some proponents (such as Arkin) call for designing these robots without
an instinct of self-preservation; even if this is possible, the denial of a right of self-
protection to a moral agent is itself ethically problematic. Alternatively, it is
possible that such autonomous machines would lack some crucial element
required for attaining moral status and hence could be treated as mere machines
not protected by the rights of soldiers. However, we do not even know whether a
being is capable of moral decision making without being itself a moral agent.
It thus seems pointless even to try to answer such questions at this stage until we
know whether such beings are possible and what they would be like
(e.g. whether they would have desires and purposes just like us, or whether
they would be capable of suffering) (Sparrow 2007, 71–73). A prior moral issue
involves asking just what the goals are in developing such robot soldiers:
To protect humans from harm? To save money? To wage war more effectively?
To make war more ethical and humane to both sides? Clearly, the purpose with
which we engage on this project will influence the nature of the robots created and
their ethical legitimacy.
The rhetoric and the predictions for an imminent AI ‘robot army’ run so far
ahead of any actual engineering capabilities for the near future that it seems that
the disproportionate attention is more a product of the seductive fascination of
technology than of realistic engineering possibility. These robot soldiers offer the
dream of a transformed way of waging war. In a 2005 New York Times article,
Gordon Johnson of the Joint Forces Command at the Pentagon is quoted as stating
the advantages of robots over human soldiers: ‘They don’t get hungry. They’re not
afraid. They don’t forget their orders. They don’t care if the guy next to them has
been shot. Will they do a better job than humans? Yes’ (Weiner 2005). Roboticist
Ronald Arkin hopes that robots, with their (hypothetically) superior perceptual
skills, will be better able to discriminate in the fog of war and also make ethically
superior decisions (Arkin 2007, 6). John Pike suggests that the very existence of
war and genocide is to be blamed on human weakness, and makes utterly
fantastic claims for the ability of robot soldiers to usher in a new millennium of
permanent peace, including the end of genocide as well. For Pike, the problem
with human soldiers is not merely their physical limitations and their cost but
even more fundamentally their psychological limitations, including particularly
their vulnerability to human emotions such as sympathy and compassion that
makes them hesitant to kill. Pike cites the celebrated 1947 study by SLA Marshall
as support for the proposition that most soldiers will not even fire their weapons
at the enemy. However, Marshall’s evidence has long been discredited as
speculative at best, and as sheer invention at worst. Note that even if soldiers
are hesitant to fire, it is unclear whether that hesitation is due to sympathy, fear or
even mundane factors such as the need to clean one’s weapon.
The widespread fascination with the possibility of robot soldiers and the
credulous acceptance in the media of claims about their imminent arrival, long
before there is any realistic possibility of producing them, suggests that what is
really at work is what historian David Noble has labelled the ‘religion of
technology’ (Noble 1999). Noble argues that the Western (and especially
American) obsession with technology has long been a sort of secular religion.
That is, its aims (however purportedly scientific) have paralleled the religious goal
would be the utter ruthless efficiency with which they would be able to take
human life.
Arkin follows a long tradition in artificial intelligence of locating human
limitations in our emotions that prevent us from reasoning clearly; for him
emotions distort our reasoning faculty and produce biases such as the ‘scenario
fulfilment’ fallacy in which people select information to conform to their pre-
existing expectations (Arkin 2007, 6). Robotics appears to offer a path toward
escaping the fog of war through eliminating those elements of the human thought
process that interfere with the clarity and precision of reason. This outlook
reflects the influence of Cartesian dualism and its radical distinction between
reason/mind and emotion/body. This extreme and implausible dualism seems to
be motivated by the technophiles’ goal of separating out the sources of ambiguity
in human judgment from those elements that can be made clear and distinct, so
that a perfect reasoning machine can be made that is not subject to human foibles.
In fact, it remains an open question, to put it mildly, whether an autonomous
intelligent agent could be created without endowing it with emotions comparable to our own.
Mechanizing judgment
Advocates of robot soldiers will no doubt argue that the problem lies in the
ambiguity of the prior rules; by more precisely specifying who is a legitimate
target, we can avoid the need for flexibility. But such a claim is unconvincing, for
the above example arguably demonstrates intrinsic ambiguity in morality rather
than ambiguity due to perceptual limitations or lack of clarity in the rules. For
whether someone counts as a legitimate target is necessarily a matter of degree; at
one end of the spectrum is the man firing the gun; at the other end is the civilian
playing no role in the attack. In between is a continuum of cases varying by the
level of involvement or support being provided in the attack. While radioing in
directions to a mortar team is probably sufficient to render one a combatant
(despite not carrying arms), other cases are not so easy, for instance civilians who
merely warn the mortar crew that Americans are coming, or civilians who provide
food or water to the crew, or even merely offer them words of support. It is
unlikely that any set of rules can be prescribed in advance to determine when
lethal force is permissible.
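The continuum described above can be made vivid with a small sketch of our own (not drawn from the article, and with entirely hypothetical roles, scores and threshold): any rule specified in advance must collapse degrees of involvement into a binary verdict, and wherever the line is drawn, the hard cases simply migrate to one side of it or the other.

```python
# Toy illustration only: a rule 'prescribed in advance' forces a continuum
# of involvement into a binary verdict. The involvement scores and the
# cutoff are hypothetical; shift the cutoff and the borderline cases flip,
# which is exactly the ambiguity the rule was supposed to eliminate.

INVOLVEMENT = {                    # hypothetical degrees of involvement, 0..1
    "fires the mortar": 1.0,
    "radios in directions": 0.8,
    "warns crew of approach": 0.5,
    "brings food and water": 0.3,
    "shouts encouragement": 0.1,
    "uninvolved bystander": 0.0,
}

CUTOFF = 0.6                       # any fixed value here is morally arbitrary

def rule_based_verdict(score, cutoff=CUTOFF):
    """A rule fixed in advance: 'combatant' if involvement exceeds the cutoff."""
    return "combatant" if score >= cutoff else "civilian"

if __name__ == "__main__":
    for role, score in INVOLVEMENT.items():
        print(f"{role:26s} -> {rule_based_verdict(score)}")
```

The sketch does not show that no threshold can be written down; it shows that writing one down settles nothing, since the moral question is precisely where, and whether, such a line should be drawn in the case at hand.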
Nor is this the end of the intrinsic moral ambiguity of such situations. David
Bellavia recounts an incident in the Iraq War in which the Mahdi Militia in Iraq
were using a small child of five or six as a forward observer. In such a situation,
even though there was no doubt about the boy’s role and his essential function in
targeting the Americans, nonetheless the American soldiers declined to target the
child on moral grounds; as Bellavia and Bruning explain, ‘Nobody wants a child
on his conscience’ (Bellavia and Bruning 2007, 10). The fact that a robot soldier
would presumably lack a ‘conscience’ and be able to kill the five-year-old is hardly
evidence of its superiority to human soldiers, at least in moral terms. Moreover,
the question of age raises yet another continuum problem: at what age does a person
become sufficiently morally accountable to be a legitimate target? It is unlikely
that any rule can be formulated in advance to cover such situations; the ability to
respond flexibly and contextually to such ambiguity is a reflection of the human
capacity to exercise moral judgment in complex situations.
Nor can the problem of perceptual ambiguity be eliminated by deploying
robots in place of humans, despite frequent assertions to the contrary. As Max
Boot states, ‘[t]he US military operates a bewildering array of sensors to cut
through the fog of war’ (Boot 2003). There is no doubt that machines can
what sort of research this would be, and it seems more likely that any such
‘research’ would rather confirm the essential complexity and ambiguity of
judgments of proportionality (Arkin 2007, 12). The very project of formalizing
ethics for use by autonomous entities thus would seem to beg the question in a
grand fashion. Of course, the goal of such researchers may be the far more modest
one of experimenting to see if an algorithmic ethical system can produce morally
satisfactory decisions. However, Arkin’s initial and implausible assumption that
one can evade the problem of moral controversy by following the ‘agreed upon
and negotiated’ laws of war suggests that the project is not merely
provisional but reflects a deep ideological commitment to the denial of
moral ambiguity and the attainability of a technologically reproducible ethics
(Arkin 2007, 9). If so, the very project of robotic ethics would produce an
ethical system that is precise, determinate and clear—yet morally unacceptable.
The danger is that what is ‘operationalizable’ will become what is morally
permissible by means of the technological imperative.
Indeed, the inherent controversy and ambiguity of moral judgment would
even hard moral choices, conducted by superior and infallible machines. Wallach
and Allen express the concern that we have started on the ‘slippery slope toward
the abandonment of moral responsibility by human decision makers’
(Wallach and Allen 2008, 40). Or even worse, it may be that the value of robot
soldiers is that they will be unconstrained by human weaknesses such as
compassion that limit military effectiveness, and hence ruthless and unconstrained in their use of force against the enemy.
There is an alternative view of the role of robots in war, though it has had far
less attention because it is less dramatic and glamorous. Instead of envisioning
robots as idealized replacements for human soldiers, one might see the role of
robotics as assisting human decision-making capacity. As Roger Clarke argues,
‘The goal should be to achieve complementary intelligence rather than to continue
pursuing the chimera of unneeded artificial intelligence.’ While computers excel
at computational problems, humans are unsurpassed in what one might broadly
call ‘common sense’, which, as Clarke explains, includes ‘unstructured’ or ‘open-
textured’ decision-making requiring judgment rather than calculation
(Clarke 1993, 64). Humans are, as Wallach and Allen assert, ‘far superior to
computers in managing information that is incomplete, contradictory, or
unformatted, and in making decisions when the consequences of actions cannot
be determined’ (Wallach and Allen 2008, 142). In other words, human superiority
will remain in the field of ethics itself and above all the ethics of the battlefield,
where situations are complex, changing and unpredictable, and the rules
themselves open-ended. None of this is to deny the crucial role for remote-
controlled or semi-autonomous robot units taking over tasks that are especially
dangerous; but moral judgments about the taking of life or even the destruction of
property must remain the domain of the human soldier, at least for the foreseeable
future. For all that technology can do to improve human life, there is no reason at
present to believe that it can solve ethical problems that have challenged humans
for thousands of years, or to eliminate the fog of war.
must be understood in the wider scope of the global war on terror. On 27 December
2001, a week before Wolfowitz’s comment, Rumsfeld (at that point, Wolfowitz’s
immediate superior) announced the establishment of Guantanamo Bay as a
holding site for detainees. The interrogation techniques used at this site, many of
which were previously used to ‘break’ trainees at the Armed Forces SERE
(Survival Evasion Resistance Escape) School, were sanctioned by DoD officials in
the months surrounding Wolfowitz’s comments concerning the use of PGM.
We are not claiming that the development of military technologies directly
causes abuse or torture; this would be overstating the point. We are, however,
suggesting that the demand to use precision-guided munitions and military
robotics in a moral way will place unprecedented pressure on interrogators to
garner the intelligence to identify appropriate targets. This may have already
resulted in compromising the standards set by the Geneva Convention for the
treatment of prisoners of war, or, more likely, the wholesale dismissal of these
standards. Exposing the relationship between military technology and interrog-
ation practices is not meant to shift responsibility away from the strategists and
commanders who enact morally questionable policies. Instead, we echo writers
such as Bauman and Coward by observing that the structures of technology and
bureaucracy can contribute to the articulation of new forms of violence while
masking the unique and deeply problematic character of this violence (Coward
2009, 45). There is in this case a complex symbiosis between stand-off and
precision weaponry and the intelligence-gathering techniques that might inform
its use. This point is driven home when we come to recognize that even the
initiation of the Iraq War, undoubtedly the most technologically advanced war
ever waged, was justified in part by intelligence allegedly obtained through savage interrogation procedures.
Stephen Grey, who investigated Central Intelligence Agency (CIA) detention
centres, alleges that the supposed connection between al-Qaeda and Saddam
Hussein was corroborated by intelligence gathered from Ibn al-Shaykh al-Libi,
who provided this information only after being tortured in prisons in Egypt
(Agence France Press 2006).
Much more could be said about this topic in light of the history of philosophy.
For example, Friedrich Schiller wrote his Aesthetic letters in 1794, in the midst of
another age of war. He suggests that human beings forfeit their humanity in two
distinct ways. On the one hand, they could turn to savagery, in which one
prioritizes feeling and emotion over reason and science, in his words,
‘when feeling predominates over principle’. On the other, they could
become barbarians, prioritizing science and techne at the expense of
human feeling and sentiment, in Schiller’s words, ‘when principle destroys
feeling’ (Schiller 2004, 34). Schiller’s warning comes home to us when we examine
the relationship between advanced military technologies, forms of techne that aim
to remove all human feeling and sentiment from the battlefield, and recent
methods of intelligence-gathering, methods that appear to break basic ethical
principles. Indeed, such an investigation may expose the unique way in which
barbarism and savagery enable one another in the course of modern warfare.
Conclusion
In the dialogue Protagoras, Plato recounts the myth of Prometheus bringing
technology to humankind. The gift of techne threatened to result in the destruction
of all humans, since humans lacked any standards for the proper use of these
dangerous powers. Zeus, fearing the possible extermination of humans, sent
Hermes to deliver them the gift of justice (dike) to bring order and conciliation to
men as a necessary supplement to technology. Moreover, Zeus insisted that the
knowledge of justice be distributed among all people, and not given merely to a
small number of experts, for, he says, ‘cities cannot be formed if only a few have a
share of these as of other arts [technon]’ (Plato 1990, 321–323).
Plato’s warning about the relation between techne and ethics is even more valid
in an age when technology can cause far more damage far more quickly than was
imaginable to the ancient Greeks. The seductive power of technology promises
‘war on the cheap’, cheap both in blood and in treasure, and, even more
importantly, it holds out the possibility of a war purified of all moral tragedy.
Technology perpetually threatens to coopt ethics. Efficient means tend to become
ends in themselves through the ‘technological imperative’, in which it comes to seem
morally permissible to use a tool merely because we have it (often on the
fallacious grounds that if we don’t use it, someone else will). Or the
very ease of striking a target becomes the rationale for doing so; the technology
determines what counts as a legitimate military target rather than vice versa.
The allure of the technocratic ideal reverses Plato’s warning by promising that
ethics can be made into a field of expert knowledge, circumventing the difficult
process of moral deliberation and judgment. The fantasy of ‘robot soldiers’ is but
the extreme of all of these trends; here moral choice is taken out of the hands of
human soldiers and indeed of humans altogether, and the technocratic expert is
the technology itself, the machine making accurate moral choices. We have argued
here that technology can never eliminate the challenge of difficult moral choices
and moral dilemmas, though it is in the very nature of technology to continually
tempt us to think it can do so. This dangerous illusion results in inappropriately
low thresholds for the decision to go to war, a failure to engage in moral
deliberation on such tricky moral issues as targeted assassination, and the
paradox of pushing us into even greater moral wrongs such as torture in order to
provide the precise intelligence needed for technology to be successful. Techne is
even a threat to democracy itself, insofar as it permits leaders to manipulate the
public with the promise of a perfectly just war due to modern intelligence and
‘smart’ weaponry. But moral judgement will always be difficult and controversial
in all circumstances, and above all in war, where the cost in human life and
welfare is so high and where ‘collateral damage’ is inevitable. Technology has
great potential to make war less destructive and to avoid harming innocent
bystanders. Yet technology can never be a substitute for ethics itself; the decision
to go to war, and the means of fighting war, will always belong in human hands.
References
Adas, Michael (2006) Dominance by design: technological imperatives and America’s civilizing
mission (Cambridge, Massachusetts: Belknap Press of Harvard University Press)
Agence France Press (2006) ‘Confession that formed the base for invasion of Iraq was
gathered under torture’, 27 October, <http://www.commondreams.org/headlines06/1027-04.htm>, accessed 23 January 2009
Aristotle (2002) The Nicomachean ethics, transl Sarah Broadie and C Rowe (Oxford: Oxford
University Press)
Kaag, John (2008) ‘Another question concerning technology: the ethical implications of
homeland defense and security technologies’, Homeland Security Affairs, 4:1
Kahn, Herman (1960) On thermonuclear war (Princeton, New Jersey: Princeton University
Press)
Mackie, J (1977) Ethics: inventing right and wrong (New York: Penguin)
Moravec, Hans (2000) Robot: from mere machine to transcendent mind (New York: Oxford
University Press)
Noble, David (1999) The religion of technology (New York: Penguin Books)
Nussbaum, Martha (2001) The fragility of goodness (Cambridge, UK: Cambridge University
Press)
Pike, John (2009) ‘Coming to the battlefield: stone-cold robot killers’, Washington Post,
4 January 2009.
Plato (1990) Protagoras, transl Walter Lamb (London: Loeb Classics)
Ramsey, Paul (2002) The just war: force and political responsibility (New York: Rowman &
Littlefield)
Russell, Frederick (1977) The just war in the Middle Ages (Cambridge, UK: Cambridge
University Press)
Russell, Frederick (1987) ‘Love and hate in medieval warfare: the contribution of Saint
Augustine’, Nottingham Medieval Studies, 31, 108–124
Schiller, Friedrich (2004) Aesthetic education of man, transl R Snell (New York: Courier
Publications)
Singer, Peter (2009) Wired for war: the robotics revolution and conflict in the 21st century
(New York: Penguin Press)
Solomon, Lewis D (2007) Paul D Wolfowitz: visionary intellectual, policymaker and strategist
(New York: Greenwood).
Spanos, William (1993) Heidegger and criticism (Minneapolis: University of Minnesota Press)
Sparrow, Robert (2007) ‘Killer robots’, Journal of Applied Philosophy, 24:1, 62 – 77
US Department of Defense (2002) ‘Deputy Secretary Wolfowitz’s interview with the
New York Times’, news transcript, 7 January, <http://www.defenselink.mil/transcripts/transcript.aspx?transcriptid=2039>, accessed 3 January 2009
Wallach, Wendell and Colin Allen (2008) Moral machines (New York: Oxford University
Press)
Watts, Barry (2004) Clausewitzian friction and future war (Washington: Institute for National
Strategic Studies)
Weiner, Tim (2005) ‘New model army soldier rolls closer to battle’, New York Times,
16 February 2005
Wright, Evan (2008) Generation kill (New York: Berkley Caliber)