Viewpoint
Toward a Ban on Lethal Autonomous Weapons: Surmounting the Obstacles
A 10-point plan toward fashioning a proposal to ban some—if not all—lethal autonomous weapons.

Wendell Wallach
DOI: 10.1145/2998579

[Figure: The Modular Advanced Armed Robotic System, an unmanned ground vehicle for reconnaissance, surveillance, and target acquisition missions. Image courtesy of QinetiQ North America.]

From April 11–15, 2016, at the United Nations Office at Geneva, the Convention on Certain Conventional Weapons (CCW) conducted a third year of informal meetings to hear expert testimony regarding a preemptive ban on lethal autonomous weapons systems (LAWS). A total of 94 states attended the meeting, and at the end of the week they agreed by consensus to recommend the formation of an open-ended Group of Governmental Experts (GGE). A GGE is the next step in forging a concrete proposal upon which the member states could vote. By the end of 2016, a preemptive ban had been called for by 19 states.
Furthermore, meaningful human control, a phrase first proposed by advocates for a ban, has been adopted by nearly all the states, although the phrase's meaning is contested. Thus a ban on LAWS would appear to have gained momentum. Even the large military powers, notably the U.S., have publicly stated that they will support a ban if that is the will of the member states. Behind the scenes, however, the principal powers express their serious disinclination to embrace a ban. Many of the smaller states will follow their lead. The hurdles in the way of a successful campaign to ban LAWS remain daunting, but are not insurmountable.

The debate to date has been characterized by a succession of arguments and counterarguments by proponents and opponents of a ban. This back and forth should not be interpreted as either a stalemate or a simple calculation as to whether the harms of LAWS can be offset by their benefits. For all states that are signatories to the laws of armed conflict,a any violation of the principles of international humanitarian law (IHL)b must trump utilitarian calculations. Consequently, those who believe the benefits of LAWS justify their use, and who therefore oppose a ban, are intent that LAWS not become a special case within IHL. Demonstrating that LAWS pose unique challenges for IHL has been a core strategy for supporters of a ban.

a LOAC, also known as international humanitarian law (IHL), is codified in the Geneva Conventions and their Additional Protocols. The laws seek to limit the effects of armed conflict, particularly to protect non-combatants.

b Four principles of IHL provide protection for civilians: distinction, necessity and proportionality, humane treatment, and non-discrimination.



The more than 3,100 AI/robotics researchers who signed Autonomous Weapons: An Open Letter From AI & Robotics Researchersc reflect a broad consensus among citizens, and even active military personnel, in favor of a preemptive ban.4 This consensus is partially attributable to speculative, futuristic, and fictional scenarios. But perhaps even science fiction represents a deep intuition that unleashing LAWS is not a road humanity should tread.

Researchers who have waded into the debate over banning LAWS have come to appreciate the manner in which geopolitics, security concerns, the arcana of arms control, and linguistic obfuscations can turn a relatively straightforward proposal into an extremely complicated proposition. A ban on LAWS does not fit easily, or perhaps at all, into traditional models for arms control. If a ban, or even a moratorium, on the development of LAWS is to progress, it must be approached creatively.

I have long supported a ban. While a review of the extensive debate as to whether LAWS should be banned is well beyond the scope of this Viewpoint, I wish to share a few creative proposals that could move the campaign to ban LAWS forward. Many of these proposals were expressed during my testimony at the CCW meeting in April and during a side luncheon event.d Before introducing those proposals, let me first point out some of the obstacles to fashioning an arms control agreement for LAWS.

c Available at http://bit.ly/1V9bls5

d The full April 12, 2016, testimony, entitled Predictability and Lethal Autonomous Weapons Systems (LAWS), is available at http://bit.ly/2mjmuwH. An extended article accompanied this testimony; it was circulated to all the CCW member states by the chair of the meeting, Ambassador Michael Biontino of Germany, and was also published in Robin Geiss, Ed., Lethal Autonomous Weapons Systems: Technology, Definition, Ethics, Law & Security. Federal Foreign Office, 2017, p. 295–312. The luncheon event on April 11, 2016, was sponsored by the United Nations Institute for Disarmament Research (UNIDIR).

Why Banning LAWS Is Problematic
˲ Unlike most other weapons that have been banned, some uses of LAWS are perceived as morally acceptable, if not morally obligatory. The simple fact that LAWS can be substituted for, and thus save the lives of, one's own soldiers is the most obvious moral good. Unfortunately, this same moral good lowers the barriers to initiating new wars. Some nations will be emboldened to start wars if they believe they can achieve political objectives without the loss of their troops.

˲ It is unclear whether armed military robots should be viewed as weapon systems or weapon platforms, a distinction that has been central to many traditional arms control treaties. Range, payload, and other features are commonly used in arms control agreements to restrict the capabilities of a weapon system. A weapon platform can be regulated by restricting where it can be located. For example, agreements to restrict nuclear weapons will specify the number of warheads and the range of the missiles upon which they are mounted, and even where the missiles can be stationed. With LAWS, what is actually being banned?

˲ Arms control agreements often focus on working out modes of verification and inspection regimes to determine whether adversaries are honoring the ban. The difference between a lethal and a non-lethal robotic system may be little more than a few lines of code or a switch, which would be difficult to detect and could be removed before, or added after, an inspection; the sketch following this list illustrates the point. Proposed verification regimes for LAWS6 would be extremely difficult and costly to enforce. Military strategists do not want to restrict their own options while those of bad actors remain unrestricted.

˲ LAWS differ in kind from the various weapon systems that have to date been banned without requiring an inspection regime. Consider, for example, the relatively recent bans on blinding lasers or anti-personnel weapons, which are often offered as a model for arms control for LAWS. These bans rely on representatives of civil society, non-governmental organizations such as the International Committee of the Red Cross, to monitor and stigmatize violations. So also will a ban on LAWS. However, blinding lasers and anti-personnel weapons were relatively easy to define. After the fact, the use of such weapons can be proven in a straightforward manner. Lethal autonomy, on the other hand, is not a weapon system. It is a feature set that can be added to many, if not all, weapon systems. Furthermore, the uses of autonomous killing features are likely to be masked.

˲ LAWS will be relatively easy to assemble using technologies developed for civilian applications. Thus their proliferation and availability to non-state actors cannot be effectively stopped.
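
To make the verification problem concrete, consider the following sketch. It is hypothetical Python pseudocode, not drawn from any real weapon system; the class, method, and flag names are all invented for illustration. The point is that full lethal autonomy can hinge on a single configuration value that leaves no physical trace for inspectors.

    # Illustrative sketch only: all names are invented; no real system or API.
    from dataclasses import dataclass

    @dataclass
    class Track:
        target_id: str
        hostile: bool

    class TargetingSystem:
        def __init__(self, autonomous_engage: bool = False):
            # One boolean separates a human-in-the-loop system
            # from a lethal autonomous one.
            self.autonomous_engage = autonomous_engage

        def request_human_authorization(self, track: Track) -> bool:
            # Stand-in for a real-time decision by designated personnel.
            print(f"Awaiting human authorization for {track.target_id}")
            return False  # this sketch denies by default

        def engage(self, track: Track) -> bool:
            if not track.hostile:
                return False
            if self.autonomous_engage:
                return True  # the machine decides: lethal autonomy
            return self.request_human_authorization(track)

    # The configuration an inspector might see ...
    system = TargetingSystem(autonomous_engage=False)
    # ... and the post-inspection "upgrade": one line of code.
    system.autonomous_engage = True

An inspection regime would have to audit software state rather than hardware, and any such audit is invalidated by the next software update.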

In forging arms-control agreements, definitional distinctions have always been important. Contentions that definitional consensus cannot be reached for autonomy or meaningful human control, that LAWS depend upon advanced AI, and that such systems are merely a distant speculative possibility repeatedly arose during the April discussions at the U.N. in Geneva, and generally served to obfuscate, not clarify, the debate. A circular and particularly unhelpful debate has ensued over the meaning of autonomy, with proponents and opponents of a ban struggling to establish a definition that serves their cause. For example, the U.K. delegation insists that autonomy implies near-humanlike capabilitiese and that anything short of this is merely an automated weapon. The Campaign to Stop Killer Robots favors a definition where autonomy is the ability to perform a task without immediate intervention from a human. Similarly, definitions for meaningful human control range from a military leader specifying a kill order in advance of deploying a weapon system to having the real-time engagement of a human in the loop of selecting and killing a human target.

e While the U.K. representatives did not use this language, it does succinctly capture the delegation's statements that all computerized systems are merely automated until they display advanced capabilities.



The leading military powers contend that they will maintain effective control over the LAWS they deploy.f But even if we accept their sincerity, this totally misses the point. They have no means of ensuring that other states and non-state actors will follow suit.

More is at stake in these definitional debates than whether to preemptively ban LAWS. Consider a Boston Dynamics Big Dog loaded with explosives and directed by GPS to a specific location, where it is programmed to explode. Unfortunately, during the time it takes to travel to that location, the site is transformed from a military outpost into a makeshift hospital for injured civilians. A strong definition of meaningful human control would require that the location be given a last-minute inspection before the explosives could detonate. Big Dog, in this example, is a dumb LAW, which we should perhaps fear as much as speculative future systems with advanced intelligence. Dumb LAWS, however, do invite comparisons to widely deployed existing weapon systems, such as cruise missiles, whose impact on an intended target military leaders have little or no ability to alter once the missile has been launched. In other words, a ban on dumb LAWS quickly converges with other arms control campaigns, such as those directed at limiting cruise missiles and ballistic missiles.5 States will demand a definition of LAWS that distinguishes them from existing weapon systems.

Delegates at the CCW are cognizant that in the 1990s they failed to ban the dumbest, most indiscriminate, and autonomous weapons of all: anti-personnel mines. Nevertheless, anti-personnel land mines were eventually banned through an independent process that led up to the Mine Ban Treaty (the Ottawa Treaty); 162 countries have committed to fully comply with that treaty.g

f See, for example, the U.S. Department of Defense Directive 3000.09, entitled "Autonomy in Weapon Systems." The Directive is dated November 21, 2012, and is signed by Deputy Secretary of Defense Ashton B. Carter, who was appointed Secretary of Defense by President Obama on December 5, 2014; http://bit.ly/1myJikF

g The U.S., Russia, and China are not signatories to the Ottawa Treaty, although the U.S. has pledged to largely abide by its terms.

A second failure to pass restrictions on the use of a weapon system whose ban has garnered popular support might damage the whole CCW approach to arms control. This knowledge offers the supporters of a ban a degree of leverage, presuming: the ban truly has broad and effective public support; LAWS can be distinguished from existing weaponry that is widely deployed; and creative means can be forged to develop the framework for an agreement.

A 10-Point Plan
Many of the barriers to fitting a ban on LAWS into traditional approaches to arms control can be overcome by adopting the following approach.

1. Rather than focus on establishing a bright line or clear definition for lethal autonomy, first establish a high-order moral principle that can garner broad support. My candidate for that principle is: Machines, even semi-intelligent machines, should not be making life-and-death decisions. Only moral agents should make life-and-death decisions about humans. Arguably, something like this principle is already implicit, but not explicit, in existing international humanitarian law, also known as the laws of armed conflict (LOAC).3 A higher-order moral principle makes explicit what is off limits, while leaving open the discussion of marginal cases where a weapon system may or may not be considered to be making life-and-death decisions.

2. Insist that meaningful human control over a life-and-death decision requires real-time authorization from designated military personnel for a LAW to kill a combatant or destroy a target that might harbor combatants and non-combatants alike. In other words, it is not sufficient for military personnel to merely delegate a kill order in advance to an autonomous weapon, or to merely be "on the loop"h of systems that can act without a real-time go-ahead.

3. Petition leaders of states to declare that LAWS violate existing IHL. In the U.S. this would entail a Presidential Order to that effect.i,14

4. Review marginal or ambiguous cases to set guidelines for when a weapon system is truly autonomous and when its actions are clearly the extension of a military commander's will and intention. Recognize that any definition of autonomy will leave some cases ambiguous.

5. Underscore that some present and future weapon systems will occasionally act unpredictably and that most LAWS will be difficult, if not impossible, to test adequately.

6. Present compelling cases for banning at least some, if not all, LAWS. In other words, highlight situations in which nearly all parties will support a ban. For example, no nation should want LAWS that can launch nuclear warheads.

7. Accommodate the fact that there will be necessary exceptions to any ban. For example, defensive autonomous weapons that target unmanned incoming missiles are already widely deployed.j These include the U.S. Aegis Ballistic Missile Defense System and Israel's Iron Dome.

8. Recognize that future technological advances may justify additional exceptions to a ban. Probably the use of LAWS to protect refugee non-combatants would be embraced as an exception. Whether the use of LAWS in a combat zone where there are no non-combatants should be treated as an exception would need to be debated. Offensive autonomous weapon systems that do not target humans, but only target, for example, unmanned submarines, might be deemed an exception.

9. Utilize the unacceptable LAWS to campaign for a broad ban, and a mechanism for adding future exceptions.

10. Demand that the onus of ensuring that LAWS will be controllable, and that those who deploy LAWS will be held accountable, lies with those parties who petition for, and deploy, an exception to the ban.

h "On the loop" is a term that first appeared in the United States Air Force Unmanned Aircraft Systems Flight Plan 2009–2047. The plan states: "Increasingly humans will no longer be 'in the loop' but rather 'on the loop'—monitoring the execution of certain decisions. Simultaneously, advances in AI will enable systems to make combat decisions and act within legal and policy constraints without necessarily requiring human input."

i Wallach, W. Establishing limits on autonomous weapons capable of initiating lethal force. Unpublished but widely circulated proposal, 2012.

j In practice, a weapon designed for defensive purposes might be used offensively, so the distinction between the two should emphasize the use of defensive weaponry to target unmanned incoming missiles.

Unpredictable Behavior: Why Some LAWS Must Be Banned
A ban will not succeed unless there is a compelling argument for restricting at least some, if not all, LAWS. In addition to the ethical arguments for and against LAWS, concern has been expressed that autonomous weapons will occasionally behave unpredictably and therefore might violate IHL, even when this is not the intention of those who deploy the system. The ethical arguments against LAWS have already received serious attention over the past several years, including within the ACM. During my testimony at the CCW in April 2016, I fleshed out why the prospect of unanticipated behavior should be taken seriously by member states. The points I made are fairly well understood within the community of AI and robotics engineers, and they go beyond weaponry to our ability to predict, test, verify, validate, and ensure the behavior and reliability of software, and indeed of any complex system. In addition, debugging and securing software can be a costly and never-ending challenge.

Factors that influence a system’s pre-
dictability. Predictability for weaponry
means that within the task limits for
which the system is designed, the an-
ticipated behavior will be realized,
yielding the intended result. However,
nothing less than a law of physics is
absolutely predictable. There are only
degrees of predictability, which in the-
ory can be represented as a probability.
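
As a back-of-the-envelope illustration (the failure rates and decision counts here are hypothetical, chosen only to show the shape of the problem, and the independence assumption is itself optimistic), suppose each autonomous decision behaves as intended with probability p. Over a mission involving n such decisions, the chance that every one goes as intended is p^n, which collapses quickly even for seemingly reliable systems:

    # Toy calculation: per-decision reliability compounds over a mission.
    # All numbers are hypothetical.
    def prob_all_as_intended(p: float, n: int) -> float:
        """Chance that n independent decisions all behave as intended."""
        return p ** n

    for p in (0.999, 0.9999):
        for n in (100, 1_000, 10_000):
            print(f"p={p}, n={n}: {prob_all_as_intended(p, n):.4f}")

    # For example, p=0.999 and n=1,000 yields ~0.37: a one-in-a-thousand
    # per-decision failure rate leaves only about a 37% chance of a
    # mission in which every decision goes as intended.

The specific numbers matter less than the multiplicative structure: degrees of predictability compound, and correlated failures would make matters worse, not better.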
Many factors influence the predictability of a system's behavior, and whether operators can properly anticipate that behavior.

˲ An unanticipated event, force, or resistance can alter the behavior of even highly predictable systems.

˲ Many if not most autonomous systems are best understood as complex adaptive systems. Within systems theory, complex adaptive systems act unpredictably on occasion, have tipping points that lead to fundamental reorganization, and can even display emergent properties that are difficult, if not impossible, to explain.

˲ Complex adaptive systems fail for a variety of reasons, including incompetence or wrongdoing; design flaws and vulnerabilities; underestimating risks and failing to plan for low-probability events; unforeseen high-impact events (Black Swans);12 and what Charles Perrow characterized as uncontrollable and unavoidable "normal accidents" (discussed more fully later in this Viewpoint).

˲ Reasonable testing procedures will not be exhaustive and can fail to reveal whether a complex adaptive system will behave in an uncertain manner. Furthermore, the testing of complex systems is costly and affordable only by a few states, which tend to be under pressure to cut military expenditures. To make matters worse, each software error fixed and each new feature added can alter a system's behavior in ways that can require additional rounds of extensive testing. No military can support the time and expense entailed in testing systems that are continually being upgraded.

˲ Learning systems can be even more problematic. Each new task or strategy learned can alter a system's behavior and performance. Furthermore, learning is not just a process of adding and altering information; it can alter the very algorithm that processes the information. Placing a system on the battlefield that can change its own programming significantly raises the risk of uncertain behavior. Retesting dynamic systems that are constantly learning is impossible; the sketch following this list suggests why.

˲ For some complex adaptive systems, various mathematical proofs or formal verification procedures have been used to ensure appropriate behaviors. Existing approaches to formal verification will not be adequate for systems with learning or planning capabilities functioning in complex socio-technical contexts. However, new formal verification procedures may be developed. The success of these will be an empirical question, but ultimately political leaders and military planners must judge whether such approaches are adequate for ensuring that LAWS will act within the constraints of IHL.

˲ While increasing autonomy, improving intelligence, and machine learning can boost a system's accuracy in performing certain tasks, they can also increase the unpredictability of how the system performs overall.

˲ Unpredictable behavior from a weapon system will not necessarily be lethal. But even a low-risk autonomous weapon will occasionally kill non-combatants, start a new conflict, or escalate hostilities.
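
Why is retesting a fielded learner a moving target? The following minimal sketch (hypothetical Python; the tiny online learner and the "certification" fingerprint are invented for illustration) shows that the configuration certified before deployment is no longer the configuration making decisions after a handful of field interactions:

    # Illustrative sketch only: a tiny online learner whose action
    # preferences, and hence behavior, shift with every field update.
    import hashlib
    import json
    import random

    class OnlineLearner:
        def __init__(self, actions):
            self.values = {a: 0.0 for a in actions}  # learned estimates

        def act(self):
            # Choose the currently highest-valued action.
            return max(self.values, key=self.values.get)

        def update(self, action, reward, lr=0.5):
            # Learning alters the parameters that drive decisions.
            self.values[action] += lr * (reward - self.values[action])

    def fingerprint(learner):
        # Stand-in for a test or certification record of system state.
        blob = json.dumps(learner.values, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()[:12]

    random.seed(0)
    agent = OnlineLearner(["hold", "engage"])
    certified = fingerprint(agent)  # the configuration that was "tested"

    # Field experience after deployment invalidates the certified state.
    for _ in range(10):
        agent.update(agent.act(), reward=random.random())

    print(certified, "->", fingerprint(agent))  # no longer matches

Any test regime certifies, at best, the fingerprint on the left; the system that actually fights carries the one on the right.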

Coordination, Normal Accidents, and Trust. Military planners often underestimate the risks and costs entailed in implementing weapon systems. Analyses often presume a high degree of reliability in the equipment deployed and ease in integrating that equipment into a combat unit. Even autonomous weapons will function as components within a team that will include humans fulfilling a variety of roles, other mechanical or computational systems, and an adequate supply chain serving combat and non-combat needs.

Periodic failures or system accidents are inevitable for extremely complex systems. Charles Perrow labeled such failures "normal accidents."8 The near meltdown of a nuclear reactor at Three Mile Island in Pennsylvania on March 28, 1979, is a classic example of a normal accident. Normal accidents will occur even when no one does anything wrong. They can also occur in a joint cognitive system—where both operators and software are selecting courses of action—when it is impossible for the operators to know the appropriate action to take in response to an unanticipated event or action by a computational system. In the latter case, the operators do the wrong thing because they misunderstand what the semi-intelligent system is trying to do. This was the case on December 6, 1999, when, after a successful landing, confusion reigned and a Global Hawk unmanned air vehicle veered off the runway, its nose collapsing in the adjacent desert and incurring $5.3 million in damages.7

In a joint cognitive system, when anything goes wrong, the humans are usually judged to be at fault. This is largely because of assumptions that the actions of the system are automated, while humans are presumed to be the adaptive players on the team. A commonly proposed solution to the failure of a joint cognitive system is to build more autonomy into the computational system. This strategy, however, does not solve the problem. It becomes ever more challenging for a human operator to anticipate the actions of a smart system as the system, and the environments in which it operates, become more complex. Expecting operators to understand how a sophisticated computer thinks, and to anticipate its actions so as to coordinate the activities of the team, increases the responsibility of the operators.

Difficulty anticipating the actions of other team members (human or computational) in turn undermines trust, an essential and often overlooked element of military preparedness. Heather Roff and David Danks …


It behooves the member states of the CCW not to be shortsighted in their evaluation of what will be a very broad class of military applications. The CCW must not appear to green-light autonomous systems that can detonate weapons of mass destruction. Given the high level of risk, powerful munitions, such as autonomous ballistic missiles or autonomous submarines capable of launching nuclear warheads, must be prohibited. Deploying systems that can alter their own programming is also foolhardy; this last proviso would rule out many learning systems that, for example, improve their planning capabilities.

States and military leaders may differ on the degree of unpredictability and the level of risk they will accept in weapon systems. The risks posed by less powerful LAWS will, in all probability, be deemed acceptable by military strategists, particularly in comparison to the similar risks posed by often-unreliable humans. Nevertheless, it may be difficult to accurately quantify whether a specific LAW is more or less reliable than a human. While autonomous vehicles can be demonstrated to likely cause far fewer deaths than human drivers, similar benchmarks for accidents occurring during combat will be hard to collect and will be less than convincing; a standard statistical rule of thumb, sketched below, suggests the scale of evidence required. Perhaps realistic simulated tests might demonstrate that LAWS outperform humans in similar exercises. More importantly, the world has adjusted to accidents caused by humans. Public opinion is likely to be less forgiving of unintended wars or deaths of non-combatants caused by LAWS.
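
One way to see why combat benchmarks will be hard to collect is the "rule of three" from statistics (my application of it to LAWS is illustrative, and the failure-rate thresholds are hypothetical): after n failure-free trials, the 95% upper confidence bound on the per-trial failure probability is roughly 3/n, so demonstrating a very low failure rate requires an enormous number of incident-free engagements.

    # Rule of three: with zero failures observed in n independent trials,
    # the 95% upper confidence bound on failure probability is ~3/n.
    def failure_free_trials_needed(max_failure_rate: float) -> int:
        return int(3 / max_failure_rate)

    for rate in (1e-3, 1e-5, 1e-7):
        print(f"to bound the failure rate below {rate:g}: "
              f"~{failure_free_trials_needed(rate):,} incident-free engagements")

    # 1e-3 -> ~3,000; 1e-5 -> ~300,000; 1e-7 -> ~30,000,000.

Real engagements are neither independent nor identically distributed, which only widens the evidence gap.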

Regardless of the level of risk deemed acceptable, it is essential to recognize the degree of unpredictable risk actually posed by various autonomous weapons configurations. Empirical tools should be employed to adequately determine the risk posed by each type of LAW and whether that risk exceeds acceptable levels.

Most parties will agree that the unpredictability, and therefore the risks, posed by LAWS capable of dispatching high-powered munitions, including nuclear weapons, are unacceptable. The decision for states should not be whether any autonomous systems must be prohibited, but rather how broadly …
