
CCS345-ETHICS AND AI

UNIT II
ETHICAL INITIATIVES IN AI

Syllabus

International ethical initiatives - Ethical harms and concerns - Case study: healthcare robots, Autonomous Vehicles, Warfare and weaponisation.
International ethical initiatives

The potential effects of AI raise a range of ethical concerns, from:

- the fundamental human rights of citizens within a society to the security and utilisation of gathered data;
- the bias and discrimination unintentionally embedded into an AI by a homogeneous group of developers;
- a lack of public awareness and understanding about the consequences of their choices and usage of any given AI, leading to ill-informed decisions and subsequent harm.

Many independent initiatives have been launched internationally to explore these – and other – ethical quandaries. The initiatives explored in this section are outlined in Table 1.


Ethical harms and concerns tackled by these initiatives

All of the initiatives listed above agree that AI should be researched, developed, designed, deployed, monitored, and used in an ethical manner – but each has different areas of priority. This section analyses and groups the initiatives by the type of issues they aim to address, and then outlines some of the proposed approaches and solutions to protect against those harms.

A number of key issues emerge from the initiatives, which can be broadly split into the
following categories:

1. Human rights and well-being

Is AI in the best interests of humanity and human well-being?

2. Emotional harm

Will AI degrade the integrity of the human emotional experience, or facilitate emotional or mental harm?

3. Accountability and responsibility

Who is responsible for AI, and who will be held accountable for its actions?

4. Security, privacy, accessibility, and transparency

How do we balance accessibility and transparency with privacy and security, especially when it comes to data and personalisation?
5. Safety and trust


What if AI is deemed untrustworthy by the public, or acts in ways that threaten the safety of
either itself or others?

6. Social harm and social justice

How do we ensure that AI is inclusive, free of bias and discrimination, and aligned
with public morals and ethics?

7. Financial harm

How will we manage AI that negatively affects economic opportunity and employment, and either takes jobs from human workers or decreases the opportunity and quality of these jobs?

8. Lawfulness and justice

How do we go about ensuring that AI - and the data it collects - is used, processed,

and managed in a way that is just, equitable, and lawful, and subject to appropriate

governance and regulation? What would such regulation look like? Should AI be

granted 'personhood'?

9. Control and the ethical use – or misuse – of AI

How might AI be used unethically - and how can we protect against this? How do we ensure that AI remains under complete human control, even as it develops and 'learns'?

10. Environmental harm and sustainability

How do we protect against the potential environmental harm associated with the development and use of AI? How do we produce it in a sustainable way?

11. Informed use

What must we do to ensure that the public is aware, educated, and informed about their use of, and interaction with, AI?

12. Existential risk

How do we avoid an AI arms race, pre-emptively mitigate and regulate potential harm,
and ensure that advanced machine learning is both progressive and manageable?

Overall, these initiatives all aim to identify and form ethical frameworks and systems
that establish human beneficence at the highest levels, prioritise benefit to both human
society and the environment (without these two goals being placed at odds), and
mitigate the risks and negative impacts associated with AI.


General principles for the ethical and values-based design, development, and implementation of autonomous and intelligent systems (as defined by the IEEE's Ethically Aligned Design, First Edition, March 2019)

Areas of key impact comprise sustainable development; personal data rights and agency over
digital identity; legal frameworks for accountability; and policies for education and
awareness. They fall under the three pillars of the Ethically Aligned Design conceptual
framework: Universal human values; political self-determination and data agency; and
technical dependability.

Harms in detail

Taking each of these harms in turn, this section explores how they are being conceptualised by initiatives, and some of the challenges that remain.

Human rights and well-being

All initiatives adhere to the view that AI must not impinge on basic and fundamental human rights, such as human dignity, security, privacy, freedom of expression and information, protection of personal data, equality, solidarity and justice (European Parliament, Council and Commission, 2012).

Emotional harm

What is it to be human? AI will interact with and have an impact on the human emotional
experience in ways that have not yet been qualified; humans are susceptible to emotional
influence both positively and negatively, and 'affect' – how emotion and desire influence
behaviour – is a core part of intelligence. Affect varies across cultures, and, given different
cultural sensitivities and ways of interacting, affective and influential AI could begin to
influence how people view society itself. The IEEE recommend various ways to mitigate this
risk, including the ability to adapt and update AI norms and values according to who they are
engaging with, and the sensitivities of the culture in which they are operating.

There are various ways in which AI could inflict emotional harm, including false intimacy,
overattachment, objectification and commodification of the body, and social or sexual
isolation. These are covered by several of the aforementioned ethical initiatives, including the Foundation for Responsible Robotics, the Partnership on AI, the AI Now Institute (especially regarding affective computing), the Montréal Declaration, and the European Robotics Research Network (EURON) Roadmap (for example, their section on the risks of humanoids).

Environmental harm and sustainability

The production, management, and implementation of AI must be sustainable and avoid environmental harm. This also ties into the concept of well-being; a key recognised aspect of well-being is environmental, concerning the air, biodiversity, climate change, soil and water quality, and so on (IEEE, 2019). The IEEE (EAD, 2019) state that AI must do no harm to Earth's natural systems or exacerbate their degradation, and must contribute to realising sustainable
stewardship, preservation, and/or the restoration of Earth's natural systems. The UNI Global
Union state that AI must put people and the planet first, striving to protect and even enhance
our planet's biodiversity and ecosystems (UNI Global Union, n.d.). The Foundation for
Responsible Robotics identifies a number of potential uses for AI in coming years, from
agricultural and farming roles to monitoring of climate change and protection of endangered
species. These require responsible, informed policies to govern AI and robotics, say the
Foundation, to mitigate risk and support ongoing innovation and development.

Informed use: public education and awareness

Members of the public must be educated on the use, misuse, and potential harms of AI, via
civic participation, communication, and dialogue with the public. The issue of consent – and
how much an individual may reasonably and knowingly give – is core to this. For example,
the IEEE raise several instances in which consent is less clear-cut than might be ethical: what
if one's personal data are used to make inferences they are uncomfortable with or unaware of?
Can consent be given when a system does not directly interact with an individual? This latter
issue has been named the 'Internet of Other People's Things' (IEEE, 2019). Corporate environments also raise the issue of power imbalance; many employees have not given clear consent regarding how their personal data – including data on health – are used by their employer.
To remedy this, the IEEE (2017) suggest employee data impact assessments to deal with these
corporate nuances and ensure that no data is collected without employee consent. Data must
also be only gathered and used for specific, explicitly stated, legitimate purposes,
kept up-to-date, lawfully processed, and not kept for a longer period than
necessary. In cases where subjects do not have a direct relationship with the system gathering
data, consent must be dynamic, and the system designed to interpret data preferences and
limitations on collection and use.

Case studies

Case study: healthcare robots

Artificial Intelligence and robotics are rapidly moving into the field of healthcare and will
increasingly play roles in diagnosis and clinical treatment. For example, currently, or in the
near future, robots will help in the diagnosis of patients; the performance of simple surgeries;
and the monitoring of patients' health and mental wellness in short and long-term care
facilities. They may also provide basic physical interventions, work as companion carers,
remind patients to take their medications, or help patients with their mobility. In some
fundamental areas of medicine, such as medical image diagnostics, machine learning has
been proven to match or even surpass our ability to detect illnesses.

Embodied AI, or robots, are already involved in a number of functions that affect people's
physical safety. In June 2005, a surgical robot at a hospital in Philadelphia malfunctioned
during prostate surgery, injuring the patient. In June 2015, a worker at a Volkswagen plant in
Germany was crushed to death by a robot on the production line. In June 2016, a Tesla car
operating in autopilot mode collided with a large truck, killing the car's passenger (Yadron
and Tynan, 2016).

As robots become more prevalent, the potential for future harm will increase, particularly in
the case of driverless cars, assistive robots and drones, which will face decisions that have
real consequences for human safety and well-being. The stakes are much higher with
embodied AI than with mere software, as robots have moving parts in physical space (Lin et
al., 2017). Any robot with moving physical parts poses a risk, especially to vulnerable people
such as children and the elderly.

Safety

Again, perhaps the most important ethical issue arising from the growth of AI and robotics in
healthcare is that of safety and avoidance of harm. It is vital that robots should not harm
people, and that they should be safe to work with. This point is especially important in areas
of healthcare that deal with vulnerable people, such as the ill, elderly, and children.

Digital healthcare technologies offer the potential to improve the accuracy of diagnoses and treatments, but investment in clinical trials is required to thoroughly establish a technology's long-term safety and performance. The debilitating side-effects of vaginal mesh implants and the continued legal battles against manufacturers (The Washington Post, 2019) stand as an example against shortcutting testing, despite the delays this introduces to healthcare innovation. Investment in clinical trials will be essential to safely implement the healthcare innovations that AI systems offer.

User understanding

The correct application of AI by a healthcare professional is important to ensure patient safety. For instance, the precise surgical robotic assistant, the da Vinci, has proven a useful tool in minimising surgical recovery time, but requires a trained operator (The Conversation, 2018).

A shift in the balance of skills in the medical workforce is required, and healthcare providers are preparing to develop the digital literacy of their staff over the next two decades (NHS' Topol Review, 2019). With genomics and machine learning becoming embedded in diagnoses
and medical decision-making, healthcare professionals need to become digitally literate to
understand each technological tool and use it appropriately. It is important for users to trust
the AI presented but to be aware of each tool's strengths and weaknesses, recognising when
validation is necessary. For instance, a generally accurate machine learning study to predict
the risk of complications in patients with pneumonia erroneously considered those with
asthma to be at low risk. It reached this conclusion because asthmatic pneumonia patients
were taken directly to intensive care, and this higher-level care circumvented complications.
The inaccurate recommendation from the algorithm was thus overruled (Pulmonology
Advisor, 2017).
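To make this pitfall concrete, the sketch below (in Python, using numpy and scikit-learn on entirely synthetic, hypothetical data – not the actual study's model or records) shows how a classifier trained on outcomes that already reflect the escalated care given to asthmatic patients can learn that asthma implies a lower complication risk:

# Minimal sketch, assuming synthetic data: how confounded training data can mislead a model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
asthma = rng.integers(0, 2, n)                           # 1 = asthmatic pneumonia patient

base_risk = 0.15 + 0.10 * asthma                         # assumed true underlying complication risk
observed_risk = np.where(asthma == 1, 0.05, base_risk)   # recorded risk after escalated (ICU) care
complication = rng.random(n) < observed_risk

model = LogisticRegression().fit(asthma.reshape(-1, 1), complication)
print(model.predict_proba([[0], [1]])[:, 1])             # lower predicted risk for asthma == 1

The model's 'asthma means low risk' rule is an artefact of how those patients were treated in the historical data, which is exactly why clinicians needed to recognise the error and overrule the recommendation.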

However, it's questionable to what extent individuals need to understand how an AI system
arrived at a certain prediction in order to make autonomous and informed decisions. Even if
an in-depth understanding of the mathematics were made obligatory, the complexity and learned nature of machine learning algorithms often prevents understanding of how a conclusion has been reached from a dataset — a so-called 'black box' (Schönberger, 2019). In
such cases, one possible route to ensure safety would be to license AI for specific medical
procedures, and to 'disbar' the AI if a certain number of mistakes are made (Hart, 2018).

Data protection

Personal medical data needed for healthcare algorithms may be at risk. For instance, there are
worries that data gathered by fitness trackers might be sold to third parties, such as insurance
companies, who could use those data to refuse healthcare coverage (National Public Radio,
2018).

Hackers are another major concern, as providing adequate security for systems accessed by a
range of medical personnel is problematic (Forbes, 2018).

Pooling personal medical data is critical for machine learning algorithms to advance healthcare interventions, but gaps in information governance form a barrier against responsible and ethical data sharing. Clear frameworks for how healthcare staff and researchers use data, such as genomics, in a way that safeguards patient confidentiality are necessary to establish public trust and enable advances in healthcare algorithms (NHS' Topol Review, 2019).

Legal responsibility

Although AI promises to reduce the number of medical mishaps, when issues occur, legal
liability must be established. If equipment can be proven to be faulty then the manufacturer is
liable, but it is often tricky to establish what went wrong during a procedure and whether
anyone, medical personnel or machine, is to blame. For instance, there have been lawsuits
against the da Vinci surgical assistant (Mercury News, 2017), but the robot continues to be
widely accepted (The Conversation, 2018).

In the case of 'black box' algorithms where it is impossible to ascertain how a conclusion is
reached, it is tricky to establish negligence on the part of the algorithm's producer (Hart,
2018).

For now, AI is used as an aid to expert decisions, and so experts remain the liable party in most cases. For instance, in the aforementioned pneumonia case, if the medical staff had relied solely on the AI and sent asthmatic pneumonia patients home without applying their specialist knowledge, then that would be a negligent act on their part (Pulmonology Advisor, 2017; International Journal of Law and Information Technology, 2019).

Soon, the omission of AI could be considered negligence. For instance, in less developed
countries with a shortage of medical professionals, withholding AI that detects diabetic eye
disease and so prevents blindness, because of a lack of ophthalmologists to sign off on a
diagnosis, could be considered unethical (The Guardian, 2019; International Journal of Law
and Information Technology, 2019).

Bias

Non-discrimination is one of the fundamental values of the EU (see Article 21 of the EU Charter of Fundamental Rights), but machine learning algorithms are trained on datasets that often have proportionally less data available about minorities, and as such can be biased (Medium, 2014). This can mean that algorithms trained to diagnose conditions are less likely to be accurate for patients from minority ethnic groups; for instance, in the dataset used to train a model for detecting skin cancer, less than 5 percent of the images were from individuals with dark skin, presenting a risk of misdiagnosis for people of colour (The Atlantic, 2018).
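One practical safeguard is a simple dataset and performance audit broken down by group. The Python sketch below uses synthetic, hypothetical numbers (not the actual dermatology dataset) to show how under-representation and a per-group accuracy gap can be surfaced before deployment:

# Minimal sketch, assuming synthetic data: audit subgroup representation and per-group accuracy.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
skin_tone = rng.choice(["light", "dark"], size=n, p=[0.96, 0.04])  # <5% dark-skin images
truth = rng.integers(0, 2, n)                                       # 1 = malignant (synthetic labels)

# Assume the model is less accurate on the under-represented group.
acc_by_group = {"light": 0.92, "dark": 0.78}
correct = np.array([rng.random() < acc_by_group[g] for g in skin_tone])
pred = np.where(correct, truth, 1 - truth)

for group in ("light", "dark"):
    mask = skin_tone == group
    print(f"{group}: {mask.mean():.1%} of data, accuracy {(pred[mask] == truth[mask]).mean():.1%}")

Flagging the imbalance is only the first step; remedies such as collecting more representative data or reweighting during training would still be needed.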

To ensure the most accurate diagnoses are presented to people of all ethnicities, algorithmic biases must be identified and understood. Even with a clear understanding of model design, this is a difficult task because of the aforementioned 'black box' nature of machine learning. However, various codes of conduct and initiatives have been introduced to spot biases earlier. For instance, the Partnership on AI, an ethics-focused industry group, was launched by Google, Facebook, Amazon, IBM and Microsoft (The Guardian, 2016) — although, worryingly, this board is not very diverse.

Equality of access

Digital health technologies, such as fitness trackers and insulin pumps, provide patients with the opportunity to actively participate in their own healthcare. Some hope that these technologies will help to redress health inequalities caused by poor education, unemployment, and so on. However, there is a risk that individuals who cannot afford the necessary technologies, or do not have the required 'digital literacy', will be excluded, so reinforcing existing health inequalities (The Guardian, 2019).

The UK's National Health Service's Widening Digital Participation programme is one example of how a healthcare service has tried to reduce health inequalities, by helping millions of people in the UK who lack the skills to access digital health services. Programmes such as this will be critical in ensuring equality of access to healthcare, but also in increasing the data from minority groups needed to prevent the biases in healthcare algorithms discussed above.

Quality of care

'There is remarkable potential for digital healthcare technologies to improve accuracy of diagnoses and treatments, the efficiency of care, and workflow for healthcare professionals' (NHS' Topol Review, 2019).

If introduced with careful thought and guidelines, companion and care robots, for example,
could improve the lives of the elderly, reducing their dependence, and creating more
opportunities for social interaction. Imagine a home-care robot that could: remind you to take
your medications; fetch items for you if you are too tired or are already in bed; perform
simple cleaning tasks; and help you stay in contact with your family, friends and healthcare
provider via video link.

However, questions have been raised over whether a 'cold', emotionless robot can really
substitute for a human's empathetic touch. This is particularly the case in long-term caring of
vulnerable and often lonely populations, who derive basic companionship from caregivers.
Human interaction is particularly important for older people, as research suggests that an
extensive social network offers protection against dementia. At present, robots are far from
being real companions. Although they can interact with people, and even show simulated
emotions, their conversational ability is still extremely limited, and they are no replacement
for human love and attention. Some might go as far as saying that depriving the elderly of
human contact is unethical, and even a form of cruelty.

And does abandoning our elderly to cold machine care objectify (degrade) them, or their human caregivers? It's vital that robots don't make elderly people feel like objects, or leave them with even less control over their lives than when they were dependent on humans — otherwise they may feel like they are 'lumps of dead matter: to be pushed, lifted, pumped or drained, without proper reference to the fact that they are sentient beings' (Kitwood, 1997).


In principle, autonomy, dignity and self-determination can all be thoroughly respected by a machine application, but it's unclear whether the application of these values in the sensitive field of medicine will be deemed acceptable. For instance, a doctor used a telepresence device to give a prognosis of death to a Californian patient; unsurprisingly, the patient's family were outraged by this impersonal approach to healthcare (The Independent, 2019). On the other
hand, it's argued that new technologies, such as health monitoring apps, will free up staff time
for more direct interactions with patients, and so potentially increase the overall quality of
care (The Guardian, Press Association, Monday 11 February 2019).

Deception

A number of 'carebots' are designed for social interactions and are often touted as providing an emotional, therapeutic role. For instance, care homes have found that a robotic seal pup's animal-like interactions with residents brighten their mood, decrease anxiety and actually increase the sociability of residents with their human caregivers. However, the line between reality and imagination is blurred for dementia patients, so is it dishonest to introduce a robot as a pet and encourage a social-emotional involvement (KALW, 2015)? And if so, is it morally justifiable?

Companion robots and robotic pets could alleviate loneliness amongst older people, but this would require them to believe, in some way, that a robot is a sentient being who cares about them and has feelings — a fundamental deception. Turkle et al. (2006) argue that 'the fact that our parents, grandparents and children might say 'I love you' to a robot who will say 'I love you' in return, does not feel completely comfortable; it raises questions about the kind of authenticity we require of our technology'. Wallach and Allen (2009) agree that robots designed to detect human social gestures and respond in kind all use techniques that are arguably forms of deception. For an individual to benefit from owning a robot pet, they must continually delude themselves about the real nature of their relation with the animal. What's more, encouraging elderly people to interact with robot toys risks infantilising them.

Autonomy

It's important that healthcare robots actually benefit the patients themselves, and are not just designed to reduce the care burden on the rest of society — especially in the case of care and companion AI. Robots could empower disabled and older people and increase their independence; in fact, given the choice, some might prefer robotic over human assistance for certain intimate tasks such as toileting or bathing. Robots could be used to help elderly people live in their own homes for longer, giving them greater freedom and autonomy. However, how much control, or autonomy, should a person be allowed if their mental capability is in question? If a patient asked a robot to throw them off the balcony, should the robot carry out that command?

Liberty and privacy



As with many areas of AI technology, the privacy and dignity of users need to be carefully considered when designing healthcare service and companion robots. Working in people's homes means that robots will be privy to private moments such as bathing and dressing; if these moments are recorded, who should have access to the information, and how long should recordings be kept?

The issue becomes more complicated if an elderly person's mental state deteriorates and they become confused — someone with Alzheimer's could forget that a robot was monitoring them, and could perform acts or say things thinking that they are in the privacy of their own home. Home-care robots need to be able to balance their user's privacy and nursing needs, for example by knocking and awaiting an invitation before entering a patient's room, except in a medical emergency.
To ensure their charge's safety, robots might sometimes need to act as supervisors, restricting
their freedoms. For example, a robot could be trained to intervene if the cooker was left on, or
the bath was overflowing. Robots might even need to restrain elderly people from carrying
out potentially dangerous actions, such as climbing up on a chair to get something from a
cupboard. Smart homes with sensors could be used to detect that a person is attempting to
leave their room, and lock the door, or call staff — but in so doing the elderly person would
be imprisoned.

Moral agency

'There's very exciting work where the brain can be used to control things, like maybe they've
lost the use of an arm…where I think the real concerns lie is with things like behavioural
targeting: going straight to the hippocampus and people pressing 'consent', like we do now,
for data access'. (John Havens)

Robots do not have the capacity for ethical reflection or a moral basis for decision-making,
and thus humans must currently hold ultimate control over any decision-making. An example
of ethical reasoning in a robot can be found in the 2004 dystopian film 'I, Robot', where Will Smith's character disagreed with how the robots of the fictional time used cold logic to save his life over that of a child. If more automated healthcare is pursued, then the question of
moral agency will require closer attention. Ethical reasoning is being built into robots, but
moral responsibility is about more than the application of ethics — and it is unclear whether
robots of the future will be able to handle the complex moral issues in healthcare (Goldhill,
2016).

Trust

Larosa and Danks (2018) write that AI may affect human-human interactions and
relationships within the healthcare domain, particularly that between patient and doctor, and
potentially disrupt the trust we place in our doctor.

'Psychology research shows people mistrust those who make moral decisions by calculating
costs and benefits — like computers do' (The Guardian, 2017). Our distrust of robots may also come from the number of robots running amok in dystopian science fiction. News stories of computer mistakes — for instance, of an image-identifying algorithm mistaking a turtle for a gun (The Verge, 2017) — alongside worries over the unknown, privacy and safety, are all reasons for resistance against the uptake of AI (Global News Canada, 2016).

Firstly, doctors are explicitly certified and licensed to practice medicine, and their license
indicates that they have specific skills, knowledge, and values such as 'do no harm'. If a robot
replaces a doctor for a particular treatment or diagnostic task, this could potentially threaten
patient-doctor trust, as the patient now needs to know whether the system is appropriately
approved or 'licensed' for the functions it performs.

Secondly, patients trust doctors because they view them as paragons of expertise. If doctors were seen as 'mere users' of the AI, we would expect their role to be downgraded in the public's eye, undermining trust.

Thirdly, a patient's experiences with their doctor are a significant driver of trust. If a patient has an open line of communication with their doctor, and engages in conversation about care and treatment, then the patient will trust the doctor. Conversely, if the doctor repeatedly ignores the patient's wishes, then these actions will have a negative impact on trust. Introducing AI into this dynamic could increase trust — if the AI reduced the likelihood of misdiagnosis, for example, or improved patient care. However, AI could also decrease trust if the doctor delegated too much diagnostic or decision-making authority to the AI, undercutting the position of the doctor as an authority on medical matters.

As the body of evidence supporting the therapeutic benefits of each technological approach grows, and as more interactive robotic systems enter the marketplace, trust in robots is likely to increase. This has already happened for robotic healthcare systems such as the da Vinci surgical robotic assistant (The Guardian, 2014).

Employment replacement

As in other industries, there is a fear that emerging technologies may threaten employment (The Guardian, 2017); for instance, there are carebots now available that can perform up to a third of nurses' work (Tech Times, 2018). Despite these fears, the NHS' Topol Review (2019) concluded that 'these technologies will not replace healthcare professionals but will enhance them ('augment them'), giving them more time to care for patients'. The review also outlined how the UK's NHS will nurture a learning environment to ensure digitally capable employees.

Case study: Autonomous Vehicles

Autonomous Vehicles (AVs) are vehicles that are capable of sensing their environment and
operating with little to no input from a human driver. While the idea of self-driving cars has
been around since at least the 1920s, it is only in recent years that technology has developed
to a point where AVs are appearing on public roads.

According to automotive standardisation body SAE International (2018), there are six levels of driving automation, ranging from Level 0 (no driving automation), through Level 1 (driver assistance), Level 2 (partial automation), Level 3 (conditional automation) and Level 4 (high automation), to Level 5 (full driving automation).

Some of the lower levels of automation are already well-established and on the market, while higher-level AVs are undergoing development and testing. However, as we transition up the levels and place more responsibility on the automated system than on the human driver, a number of ethical issues emerge.

Societal and Ethical Impacts of AVs

'We cannot build these tools saying, 'we know that humans act a certain way, we're going to kill them – here's what to do'.' (John Havens)

Public safety and the ethics of testing on public roads

At present, cars with 'assisted driving' functions are legal in most countries. Notably, some Tesla models have an Autopilot function, which provides level 2 automation (Tesla, n.d.).
Drivers are legally allowed to use assisted driving functions on public roads provided they
remain in charge of the vehicle at all times. However, many of these assisted driving
functions have not yet been subject to independent safety certification, and as such may pose
a risk to drivers and other road users.

In Germany, a report published by the Ethics Commission on Automated Driving highlights that it is the public sector's responsibility to guarantee the safety of AV systems introduced and licensed on public roads, and recommends that all AV driving systems be subject to official licensing and monitoring (Ethics Commission, 2017).

Near-miss accidents

At present, there is no system in place for the systematic collection of near-miss accidents.
While it is possible that manufacturers are collecting this data already, they are not under any
obligation to do so — or to share the data. The only exception at the moment is the US state
of California, which requires all companies that are actively testing AVs on public roads to
disclose the frequency at which human drivers were forced to take control of the vehicle for
safety reasons (known as 'disengagement').
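As a simple illustration of what that disclosure enables, the hypothetical Python snippet below (made-up figures, not real company data) turns raw counts into a comparable disengagement rate:

# Minimal sketch with made-up numbers: the disengagement-rate metric enabled by California's reports.
autonomous_miles = 450_000      # hypothetical miles driven in autonomous mode in one year
disengagements = 90             # hypothetical count of human takeovers for safety reasons

rate = disengagements / autonomous_miles * 1_000
print(f"{rate:.2f} disengagements per 1,000 autonomous miles")

Comparable figures like this are what make it possible to track safety performance across companies and over time.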

Case study: Warfare and weaponisation

Although partially autonomous and intelligent systems have been used in military technology
since at least the Second World War, advances in machine learning and AI signify a turning
point in the use of automation in warfare.

AI is already sufficiently advanced and sophisticated to be used in areas such as satellite imagery analysis and cyber defence, but the true scope of applications has yet to be fully
realised. A recent report concludes that AI technology has the potential to transform warfare
to the same, or perhaps even a greater, extent than the advent of nuclear weapons, aircraft,
computers and biotechnology (Allen and Chan, 2017). Some key ways in which AI will
impact militaries are outlined below.

Lethal autonomous weapons

As automatic and autonomous systems have become more capable, militaries have become more willing to delegate authority to them. This is likely to continue with the widespread adoption of AI, leading to an AI-inspired arms race. The Russian Military Industrial Committee has already approved an aggressive plan whereby 30% of Russian combat power will consist of entirely remote-controlled and autonomous robotic platforms by 2030. Other countries are likely to set similar goals. While the United States Department of Defense has enacted restrictions on the use of autonomous and semi-autonomous systems wielding lethal force, other countries and non-state actors may not exercise such self-restraint.

Drone technologies

Standard military aircraft can cost more than US$100 million per unit; a high-quality quadcopter Unmanned Aerial Vehicle, however, currently costs roughly US$1,000, meaning that for the price of a single high-end aircraft, a military could acquire around 100,000 drones. Although current commercial drones have limited range, in the future they could have similar ranges to ballistic missiles, thus rendering existing platforms obsolete.
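A quick check of the arithmetic behind that comparison, using the figures stated above:

# Cost comparison using the figures quoted in the text.
aircraft_cost = 100_000_000   # US$ per high-end military aircraft
drone_cost = 1_000            # US$ per high-quality quadcopter UAV

print(aircraft_cost // drone_cost)   # 100000 drones for the price of one aircraft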

Robotic assassination

Widespread availability of low-cost, highly capable, lethal, and autonomous robots could make targeted assassination more widespread and more difficult to attribute. Automatic sniping robots could assassinate targets from afar.


Mobile robotic Improvised Explosive Devices

As commercial robotic and autonomous vehicle technologies become widespread, some groups will leverage them to make more advanced Improvised Explosive Devices (IEDs). Currently, the technological capability to rapidly deliver explosives to a precise target from many miles away is restricted to powerful nation states. However, if long-distance package delivery by drone becomes a reality, the cost of precisely delivering explosives from afar would fall from millions of dollars to thousands or even hundreds. Similarly, self-driving cars could make suicide car bombs more frequent and devastating, since they would no longer require a suicidal driver.

Employing AI in warfare raises several legal and ethical questions. One concern is that
automated weapon systems that exclude human judgment could violate International
Humanitarian Law, and threaten our fundamental right to life and the principle of human
dignity. AI could also lower the threshold of going to war, affecting global stability.

Robots also have no concept of what it means to kill the 'wrong' person. 'It is only because
humans can feel the rage and agony that accompanies the killing of humans that they can
understand sacrifice and the use of force against a human. Only then can they realise the
'gravity of the decision' to kill' (Johnson and Axinn 2013, p. 136).

However, others argue that there is no particular reason why being killed by a machine would be a subjectively worse, or less dignified, experience than being killed by a cruise missile strike. 'What matters is whether the victim experiences a sense of humiliation in the process of getting killed. Victims being threatened with a potential bombing will not care whether the bomb is dropped by a human or a robot' (Lim et al., 2019). In addition, not all humans have the emotional capacity to conceptualise sacrifice or the relevant emotions that accompany risk. In the heat of battle, soldiers rarely have time to think about the concept of sacrifice, or generate the relevant emotions to make informed decisions each time they deploy lethal force.

Additionally, who should be held accountable for the actions of autonomous systems — the commander, the programmer, or the operator of the system? Schmitt (2013) argues that the responsibility for committing war crimes should fall on both the individual who programmed the AI and the commander or supervisor (assuming that they knew, or should have known, that the autonomous weapon system had been programmed and employed in a war crime, and that they did nothing to stop it from happening).
