Artificial Intelligence: Autonomous
Technology (AT), Lethal Autonomous
Weapons Systems (LAWS) and Peace
Time Threats
By Regina Surber, Scientific Advisor, ICT4Peace Foundation and the
Zurich Hub for Ethics and Technology (ZHET)
Copyright 2018, ICT4Peace Foundation
Zurich, 21 February 2018

Table of Contents
1     Introduction
2     Artificial Intelligence (AI)
3     Autonomous Technology (AT)
4     Lethal Autonomous Weapons Systems (LAWS)
5     The debate at the United Nations Convention on Certain Conventional Weapons (UN CCW)
6     Further ways to weaponize AT
7     Peace-time threats of not-weaponized AT
    7.1       Mass disinformation generated by intelligent technology
    7.2       Autonomously generated profiles
    7.3       Autonomous technology in light of emerging resource-scarcity on our planet
8     Arguments for shaping an international interdisciplinary debate
    8.1       The polity of the cyberspace
    8.2       The subtle linguistics and the human-machine analogy
    8.3       A moral argument for a sustainable environment
9     Conclusion
10    ANNEX: Existing guidelines on responsible AI, AT and Robotics research

List of Abbreviations

AAAI                     Association for the Advancement of Artificial Intelligence
ACM                      Association for Computing Machinery
AI                       Artificial Intelligence
AT                       Autonomous Technology
CBM                      Confidence Building Measures
CCW                      Convention on Certain Conventional Weapons
DNA                      Deoxyribonucleic Acid
EPSRC                    Engineering and Physical Sciences Research Council
EURON                    European Robotics Research Network
FLI                      Future of Life Institute
GAN                      Generative Adversarial Network
GGE                      Group of Governmental Experts
HRW                      Human Rights Watch
IDSIA                    Istituto Dalle Molle di Studi sull’Intelligenza Artificiale / Dalle Molle
                         Institute for Artificial Intelligence Research
IEEE                     Institute of Electrical and Electronics Engineers
IHL                      International Humanitarian Law
IHRL                     International Human Rights Law
LAWS                     Lethal Autonomous Weapons Systems
UN                       United Nations

1    Introduction

The main purpose of this paper is to inform the international community on the risks of
Autonomous Technology (AT) for global society. AT can be said to be the essence of Lethal
Autonomous Weapons Systems (LAWS), which have triggered a legal and policy debate within
the international arms control framework of the United Nations Convention on Certain
Conventional Weapons (UN CCW) that is now entering its fifth year. Since LAWS pose a serious
challenge to existing International Humanitarian Law (IHL) due to their capacity to replace a
human operator on a weapons platform, the CCW’s tasks of, i.a., ensuring that the concepts
of legal accountability and human responsibility do not become void, and of assessing whether
LAWS are legal under IHL, are of utmost importance.

However, LAWS are not the only manifestation of the security risks of AT. This paper will
demonstrate further ways of the actual and potential weaponization of AT that are currently
not yet fully addressed by the UN organizations. Moreover, AT poses risks to global society
not only if weaponized; it can pose tremendous systemic risks to global society and humanity
even when not weaponized. This potentially dangerous transformative power of AT, which is
beyond the scope of the CCW’s mandate, will be the thematic core of this paper. Based on a
risk assessment of not-weaponized AT, the paper will present thought-provoking impulses that
can shape an international interdisciplinary debate on the risks of AT specifically and of
emerging technologies more generally.

In addition, this paper highlights risks underlying the application of terms originally referring
to human traits to technological artefacts, such as ‘intelligence’, ‘autonomy’, ‘decision-making
capacity’ or ‘agent’. It will argue that this unquestioned so-called ‘anthropomorphism’ leads to
a premature overvaluation of technology and a simultaneous potential devaluation of human
beings, and will present ideas for linguistic substitutes.

At the same time, the paper will illustrate that the ‘classical’ understanding of ‘autonomy’ as
human ‘personal autonomy’ has, in fact, donated its meaning to the current technological use
of the term. However, this fact risks being obfuscated by the broadening pool of diverse
definitions and understandings of ‘autonomy’ for technological artefacts. Thereby, the paper
will unearth the current paradigm shift in human technological creation and self-
understanding that underlies the ongoing debate on AT and LAWS: The fact that humans are
creating technological artefacts that may lose their instrumental character because we
gradually give away control and responsibility for the outcomes of their usage. Locating the
core challenge of AT, AI and any emerging technology in this still subtle but pervasive change
in the understanding of the human-technology relationship, this paper will also provide
conclusions and recommendations that are of a more general and long-term character.

The paper will be structured as follows: Chapters 2 and 3 will describe the current
understandings, uses as well as the risks of those uses of the terms ‘Artificial Intelligence’ (AI)
and ‘Autonomous Technology’ (AT). Chapter 4 will introduce the term ‘Lethal Autonomous
Weapons System’ (LAWS), which will lead into Chapter 5 on the international discussions
within the UN CCW and this UN debate’s limitations. Chapter 6 will present further ways of
weaponizing AT that are ignored by the UN CCW, yet need immediate attention. Chapter 7
shows threats of AT for global society during peace-time. Chapter 8 presents three arguments
for shaping an international debate on AT, AI and LAWS. Chapter 9 concludes and presents a
list of recommendations. Eleven lists of principles for ethical/responsible research on AI, AT
and Robotics can be found in the annex.

2     Artificial Intelligence (AI)

The two letters ‘AI’ stand for the most financially lucrative scientific field that currently
exists.1 Moreover, they represent something that is often regarded as the fuel of the fourth
industrial revolution, which is taking place at a pace unprecedented in human history.2
However, the question of what AI really is most often receives a rather vague and elusive
answer. The reason for this lack of clarity may be two-fold.

First, the term ‘Artificial Intelligence’ includes the term ‘intelligence’. ‘Intelligence’ was
originally used as a characteristic of humans. However, despite a long history of research and
debate, there exists neither a general understanding of this natural trait nor a standard
definition.3

Precisely due to the growing research on AI, there exist strong incentives to define what the
term ‘intelligence’ shall mean. This need is especially acute when considering artificial
systems that are significantly different from humans. This is why researchers at the Swiss AI
Lab IDSIA (Istituto Dalle Molle di Studi sull’Intelligenza Artificiale) created a single
definition based on a collection of 70 definitions of ‘intelligence’ by dictionaries,
psychologists and AI researchers. They state that ‘intelligence measures an agent’s ability to
achieve goals in a wide range of environments.’ This general ability includes the ability to understand, to learn

1 The Economist, 2017, Coding Competition, The Battle in AI, The Economist Online, December 7, 2017, available at:
https://www.economist.com/news/leaders/21732111-artificial-intelligence-looks-tailor-made-incumbent-tech-giants-worry-battle (accessed on December 11, 2017).
2 See e.g. Wan, James, 2018, Artificial Intelligence is the fourth industrial revolution, Lexology.com, January 18, 2018,

available at: https://www.lexology.com/library/detail.aspx?g=fccf419c-6339-48b0-94f9-2313dd6f5186 (accessed on January
31, 2018); Kelnar, David, 2016, The fourth industrial revolution: a primer on Artificial Intelligence (AI), Medium.com,
December 2, 2016, available at: https://medium.com/mmc-writes/the-fourth-industrial-revolution-a-primer-on-artificial-
intelligence-ai-ff5e7fffcae1 (accessed on January 31, 2018); Wright, Ian, 2017, Artificial Intelligence and Industry 4.0 – Taking
the Plunge, Engineering.com, October 19, 2017, available at:
https://www.engineering.com/AdvancedManufacturing/ArticleID/15871/Artificial-Intelligence-and-Industry-40--Taking-the-
Plunge.aspx (accessed on January 31, 2018). Some experts also say that we are currently in the middle of a digital revolution,
see e.g. Helbing, Dirk, 2017, A Presentation on Responsible Innovation and Ethics in Engineering, November 11, 2017,
available at: https://www.youtube.com/watch?v=Jyv3QpRp9LA (accessed on February 7, 2018). Research by McKinsey
suggests that AI could potentially transform global society ‘… ten times faster and 300 times the scale, or roughly 3000
times the impact,’ Dobbs, R., Manyika, J. and Woetzel, J., 2015, The four global forces breaking all trends,
McKinsey&Company, available at: https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-
insights/the-four-global-forces-breaking-all-the-trends (accessed on February 3, 2018).
3 Helbing, Dirk, 2018, Personal Interview, February 9, 2018. For a list of 70 definitions of ‘Intelligence’ see Legg, Shane, and

Hutter, Marcus, 2007, A Collection of Definitions of Intelligence, Frontiers in Artificial Intelligence and Applications Vol 157,
17-24.

and to adapt, since those are the features that enable an agent to solve a problem in a wide
range of environments.4
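
Legg and Hutter also formalize this verbal definition in their related work cited in footnote
5. As a sketch for orientation only (notation simplified from that work), the ‘universal
intelligence’ of an agent \pi can be written in LaTeX notation as

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}

where E is a set of computable environments, K(\mu) is the Kolmogorov complexity of
environment \mu (so that simpler environments carry more weight), and V_{\mu}^{\pi} is the
expected value that agent \pi achieves in environment \mu. The formula mirrors the verbal
definition: an agent’s performance (its ‘ability to achieve goals’) is aggregated over ‘a wide
range of environments’.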

It must be highlighted that the driving force behind the above-mentioned definition was to
create a definitional reference point useful for both humans and technological artefacts.5
This ignores the fact that the term ‘intelligence’ was originally used to refer to a natural
human capacity; and without a clear understanding of this human trait, we risk an
overvaluation of technology and a devaluation of human beings.6

And second, a reason for confusion about the meaning of AI may lie in the fact that the term
AI is used to refer to two distinct but interrelated understandings. The distinction between
these two possible understandings of AI will be highlighted here by two definitions of AI.
However, we do not claim universal validity for these definitions, as they would merely
enlarge the existing pool of possible choices of such definitions. Yet, they should provide the
reader with a first sense of caution when dealing with the application of originally ‘human
terms’ such as ‘intelligence’ or ‘autonomy’ to technological artefacts. At first glance, it might
seem accurate and comprehensive to apply originally human terms to technological artefacts,
since the latter are increasingly capable of performing ‘actions’ that resemble those of
humans. However, the elaborations in this paper will show that this could prove to be risky.

On the one hand, AI refers to a scientific field, whose modern history started with the
development of stored-program electronic computers,7 but whose intellectual roots can
already be found in Greek mythology.8 As a scientific field, AI can be regarded as the attempt
to answer the question of how the human brain gives rise to thoughts and feelings. AI as a
research field began with the idea that ‘… every aspect of learning or any other feature of
intelligence can in principle be so precisely described that a machine can be made to simulate
it.’9 Therefore, AI refers to ‘… the study of the computations that make it possible to perceive,
reason, and act’;10 it is the ‘… effort to make computers think …;’11 and it is the ‘… art of
creating machines that perform functions that require intelligence when performed by
people.’12

4 Legg and Hutter, 2007, 8.
5 Legg, Shane, and Hutter, Marcus, 2006, A formal measure of artificial intelligence, Proc. 15th Annual Machine Learning
Conference of Belgium and The Netherlands, 73-80, 73.
6 Helbing, 2018.
7 A computer that stores program instructions in electronic memory.
8 See e.g. on the bronze man Talos from Crete, who can be regarded as incorporating the idea of an intelligent robot:

Apollodorus, The Library, Book 1, Chapter 9, Section 26, Frazer, Sir James George (trnsl.), 1921, Cambridge, MA: Harvard
University Press; London: William Heinemann Ltd; Apollonius Rhodius, Argonautica, Book 4, Section 1638 et seq., Seaton,
R.C. (trnsl.), 1912, London: William Heinemann; Cohen, J., 1966, Human Robots in Myth and Science, London: Allen and
Unwin.
9 McCarthy, J., Minsky, M., Rochester, N., & Shannon, C. E., 1955, A proposal for the Dartmouth summer research project on

artificial intelligence, 1.
10 Winston, Patrick Henry, 1992, Artificial Intelligence, 3rd ed., Boston, MA: Addison-Wesley Longman, emphasis added.
11 Haugeland, John, 1985, Symbolic Computation: Artificial Intelligence: The Very Idea, Cambridge, MA: The MIT Press,

emphasis added.
12 Kurzweil, Raymond, 1990, The Age of Intelligent Machines, Chapter 1: The Roots of Artificial Intelligence, 2, emphasis

added.

Bearing in mind the above-mentioned risk of devaluating humans in creating a definition of
(artificial) intelligence without a human reference, AI shall here be understood as

     (1) a scientific undertaking that is aiming to create software or machines that exhibit traits
         that resemble human reasoning, problem-solving, perception, learning, planning, and/
         or knowledge.

Core parts of research on AI include: ‘knowledge engineering,’ which aims at creating software
and machines that have abundant information relating to the world; ‘machine learning’, which
is the modern probabilistic approach to AI and that studies algorithms that ‘learn’ to predict
from data; ‘reinforcement learning’, a sub-discipline of machine learning and currently the
most promising approach for general intelligence that studies algorithms that learn to act in
an unknown environment through trial and error; ‘deep learning’13, a very fast-moving and
successful approach to machine learning based on neural networks, which has enabled recent
breakthroughs in computer vision and speech recognition;14 ‘machine perception’, which deals
with the capability of using sensory inputs to deduce aspects of the world, ‘computer vision’,
the capability of analyzing visual inputs; and ‘robotics’, which deals with robots and the
computer systems for their control.15
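
To make the ‘trial and error’ idea behind reinforcement learning concrete, consider the
following minimal sketch (illustrative Python; the corridor environment and all names are
invented for this illustration, not drawn from the cited sources). A tabular Q-learning agent
starts with no model of its environment and discovers a goal-reaching policy purely from
reward signals:

    import random

    # Toy 'unknown environment': states 0..4 on a corridor; action 0 moves
    # left, action 1 moves right; only reaching state 4 yields a reward.
    n_states, n_actions = 5, 2
    Q = [[0.0] * n_actions for _ in range(n_states)]   # learned value estimates
    alpha, gamma, epsilon = 0.5, 0.9, 0.2              # learning parameters

    def step(state, action):
        nxt = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
        return nxt, (1.0 if nxt == n_states - 1 else 0.0)

    for episode in range(200):
        s = 0
        while s != n_states - 1:
            # Trial and error: explore at random sometimes, otherwise
            # exploit the current value estimates.
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: Q[s][i])
            s2, r = step(s, a)
            # Nudge the estimate toward reward plus discounted future value.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2

    # Learned policy: move right in every state.
    print([max(range(n_actions), key=lambda i: Q[s][i]) for s in range(n_states)])

The ‘agenda’ of such an agent is nothing more than its reward signal; the point of the sketch
is that the resulting behavior is found by the algorithm through interaction, not enumerated
by the programmer.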

On the other hand, the term AI is also used to refer to the ‘knowledge’ or ‘capacity’
embedded in software or hardware architectures that are the result of the research on AI (1).
Such capacities of software or hardware, e.g. the capacity to ‘recognize’ faces or voices or to
‘drive’ without a human behind the steering wheel, can be understood as artificially created
intelligence – or AI. In this sense, AI can be regarded as a resource or a commodity, because
it can be traded. Tech giants around the world are competing over the brilliance of their
respective algorithms.16 Therefore, AI can be regarded both as a formless potential
foundation of wealth and as a resource for political leverage.17

In this sense, AI can also be understood as

     (2) the formless capacity embedded in software and hardware architecture which enables
         the latter to exhibit traits that resemble human reasoning, problem-solving, perception,
         learning, planning, and/ or knowledge.

13 For a more detailed description of deep learning, see p. 5.
14 Leike, J., AI Safety Syllabus, 80000hours.org, available at: https://80000hours.org/ai-safety-syllabus/ (accessed on
February 3, 2018).
15 See e.g., Techopedia.com, Artificial Intelligence, available at: https://www.techopedia.com/definition/190/artificial-

intelligence-ai (accessed on January 31, 2018).
16 The Economist, 2017, Battle of the brains, Google leads in the race to dominate artificial intelligence, December 7, 2017,

available at: https://www.economist.com/news/business/21732125-tech-giants-are-investing-billions-transformative-
technology-google-leads-race (accessed on January 31, 2018).
17 See e.g. CNBC, 2017, Putin: Leader in artificial intelligence will rule the world, September 4, 2017, available at:

https://www.cnbc.com/2017/09/04/putin-leader-in-artificial-intelligence-will-rule-world.html (accessed on February 7,
2018); Metz, Cade, 2017, Google is already late to China’s AI revolution, February 2, 2017, Wired.com, available at:
https://www.wired.com/2017/06/ai-revolution-bigger-google-facebook-microsoft/ (accessed on February 7, 2018),
Armbruster, Alexander, 2017, Künstliche Intelligenz: Google-Manager Eric Schmidt warnt vor China, Frankfurter Allgemeine
Zeitung Online, November 2, 2017, available at: http://www.faz.net/aktuell/wirtschaft/kuenstliche-intelligenz/kuenstliche-
intelligenz-google-manager-eric-schmidt-warnt-vor-china-15273843.html (accessed on February 7, 2018).

Current AI in the second sense of the term (2) is known as ‘narrow’ or ‘weak’ AI, in that it is
designed to perform a narrow task, such as only driving a car or only recognizing faces. The
long-term goal of many researchers, however, is to create so-called ‘general’ or ‘strong’ AI,
sometimes also called ‘artificial human-level intelligence’.18 General AI is the formless capacity
embedded in general-purpose systems that is comparable to that of the human mind.19 If
general AI were achieved, this might also lead to ‘artificial superintelligence’, which can be
defined as ‘… any intellect that greatly exceeds the cognitive performance of humans in
virtually all domains of interest.’20

3     Autonomous Technology (AT)

AT is a result of research in the fields of AI and robotics, but also draws on other disciplines
such as mathematics, psychology and biology.21 Currently, there exists no clear understanding
and no universally valid definition of the term ‘autonomous’ or of AT in the context of AI and
robotics. However, various attempts at a definition do exist.

Sometimes a purely operational understanding of ‘autonomy’ is used. In this sense, the term
‘autonomous’ may refer to any outcome of a machine or software that is created without
human intervention. This could include, e.g., a toaster’s ejection of a bread slice once the
bread is warm. In this form, autonomy would be equivalent to automation22 and would not be
limited to digital technology but could be applied to analog technology or mechanics as
well.23 Hence, this understanding does not locate AT exclusively within the research field of
modern AI.

Some experts use a narrower understanding and limit the use of the attribute ‘autonomous’
to more complex technological processes. They argue that AT extends beyond conventional
automation and can solve application problems by using materially different algorithms and
software system architectures.24 This perspective is narrower and clearly locates the
emergence of AT within the research of modern AI.

In this sense, the key benefit of AT is the ability of an autonomous system to ‘… explore the
possibilities for action and decide ‘what to do next’ with little or no human involvement, and
to do so in unstructured situations which may possess significant uncertainty. This process is,
in practice, indeterminate in that we cannot foresee all possible relevant information ….
‘What to do next’ may include … a step in problem-solving, a change in attention, the creation

18 Müller, Vincent C., and Bostrom, Nick, 2016, Future Progress in Artificial Intelligence: A Survey of Expert Opinion, In:
Müller, Vincent C., (ed.), Fundamental Issues of Artificial Intelligence, Synthese Library; Berlin: Springer, 553-571, 553.
19 Adams, S., Arel, I., Bach, J., Coop, R., Furlan, R., Goertzel, B., Hall, J., Samsonovich, A., Scheutz, M., Schlesinger, M., Shapiro,

S., and Sowa, J. F., 2012, Mapping the landscape of human-level artificial general intelligence, AI Magazine, 33(1), 25-42.
20 Bostrom, N., 2014, Superintelligence: Paths, dangers, strategies, Oxford: Oxford University Press, Ch. 2.
21 Atkinson, David J., 2015, Emerging Cyber-Security Issues of Autonomy and the Psychopathology of Intelligent Machines,

Foundation of Autonomy and Its (Cyber) Threats: From Individuals to Interdependence: Papers from the 2015 Spring
Symposium, 6-13, 7.
22 Christen, Markus, Burri, Thomas, Chapa, Joseph, Salvi, Raphael, Santoni de Sio, Filippo, and Sullins, John, 2017, An

Evaluation Scheme for the Ethical Use of Autonomous Robotic Systems in Security Applications, Digital Society Initiative (DSI)
of the University of Zurich, DSI White Paper Series, White Paper No. 1, 36.
23 Helbing, 2018.
24 Land mines are an often-cited example of an automated weapon, see e.g. Ibid., 46.

or pursuit of a goal, and many other activities ….’25 In other words, a system is ‘autonomous’
if it can change its behavior during operation in response to events that are unanticipated,26
e.g. a self-driving car’s reaction to a traffic jam, a therapist chatbot’s27 answer to a person
lamenting about her disastrous day, or a missile defense system that intercepts an incoming
hostile missile, like Israel’s Iron Dome.
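
The contrast between the operational and the narrower understanding can be summarized in
code. The following is a purely conceptual sketch (illustrative Python; both examples and all
names are invented and describe no real system): the first function is automation in the
toaster sense, a fixed rule enumerated in advance; the second adjusts its own behavior as
observations arrive, which is the feature the narrower understanding reserves for ‘autonomy’:

    # Automation: a fixed, fully specified rule. Every possible response
    # was enumerated by the designer in advance.
    def automated_toaster(temperature_c: float) -> bool:
        return temperature_c >= 65.0    # eject once warm; nothing else can happen

    # Narrower 'autonomy': behavior shifts in response to data the designer
    # never enumerated, here via a running estimate of 'normal' conditions.
    class AdaptiveSystem:
        def __init__(self, baseline: float):
            self.baseline = baseline

        def react(self, reading: float) -> bool:
            respond = reading > 1.5 * self.baseline
            # Fold the new observation into the notion of 'normal', so the
            # same input can trigger different behavior at different times.
            self.baseline = 0.9 * self.baseline + 0.1 * reading
            return respond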

The theoretical AI approach that is at the core of AT in its narrow understanding, and that
enables technological systems to perform the above-mentioned actions without a human
operator, is deep learning. Deep learning software tries to imitate the activity of layers of
neurons in the human brain. Through improvements in mathematical methods and continuously
increasing computing power, it is possible to model a huge number
of layers of virtual neurons. Through an inflow of a vast amount of data, the software can
recognize patterns in this data and ‘learn’ from it.28 This is key for ‘autonomous’ systems’
reaction to unanticipated changes: due to new data inflow, the software can recognize new
patterns and adapt to a changing ‘environment’. Thereby, an autonomous system can modify
its actions in order to follow its goal or agenda.
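
As an illustration of such ‘layers of virtual neurons’ learning a pattern from data, here is a
deliberately tiny sketch (illustrative Python with NumPy; a toy, not a description of any
production system). A network with one hidden layer learns the XOR pattern, which no single
neuron can represent, purely by adjusting its weights against example data:

    import numpy as np

    rng = np.random.default_rng(0)

    # Training data: the XOR pattern.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # One hidden layer of four 'virtual neurons'.
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

    for step in range(5000):
        # Forward pass: the data flows through the layers.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # Backward pass: shift every weight to reduce the prediction error.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= h.T @ d_out;  b2 -= d_out.sum(axis=0)
        W1 -= X.T @ d_h;    b1 -= d_h.sum(axis=0)

    print(out.round(2))   # approaches [[0], [1], [1], [0]]

Real deep learning systems differ from this toy mainly in scale – vastly more layers, weights
and data – which is part of why the internal ‘trigger’ for a given output resists human
inspection, as discussed next.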

It is crucial to highlight that deep learning mechanisms are so complex that a human being
cannot comprehend why a technological process based on deep learning creates the outcome
it does.29 Hence, outputs of autonomous systems may come as a surprise not only due to their
core capacity of choosing a course of action undetermined by a human operator, but also due
to the impossibility of locating the technological ‘trigger’ for a certain output.

At this stage one could address the fear that software or machines could independently create
something that may resemble free will. It is a fact that autonomous systems may perform
actions that are both unanticipated and ex post untraceable. However, the initial programming
of the software with the potential for future ‘autonomous behavior’ is the engineer’s and
programmer’s decision, and not an unavoidable fact. It is up to humans to discuss and set
standards that ensure the development of beneficial and safe technology.

Since there exists no agreement on whether automated systems (e.g. a toaster) should already
be regarded as autonomous (no human operator controls the ejection of the warm bread),
some experts find it useful to think of ‘autonomy as a continuum’30 or of ‘degrees of
25 Atkinson, 2015, 7, italics added. For further elaborations on limiting the use of the term ‘autonomy’ to its more complex forms,
see Russell, Stuart J. and Norvig, Peter, 2014, Artificial intelligence: a modern approach, Third Edition, Pearson Education:
Harlow; Van der Vyver, J.-J. et al., 2004, Towards genuine machine autonomy, in: Robotics and Autonomous Systems, Vol.
46, No. 3, 151-157.
26 Watson, David P., and Scheidt, David H., 2005, Autonomous Systems, Johns Hopkins APL Technical Digest 26(4), 368-376,

368.
27 See e.g. the 24/7 Woebot that chats in order to improve someone’s mood, available at: https://woebot.io/ (accessed on

February 14, 2018).
28 Burkhalter, Patrick, 2018, Personal Interview, February 14, 2018.
29 Ibid. See also Knight, Will, 2017a, The Dark Secret at the Heart of AI, MIT Technology Review, April 11, 2017, available at:

https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/ (accessed on February 16, 2018).
30 Asaro, Peter, 2009, How just could a robot war be?, Proceedings of the 2008 Conference on Current Issues in Computing

and Philosophy, 50-64, 51; Nicholas Marsh, Defining the Scope of Autonomy: Issues for the Campaign to Stop Killer Robots 2
(2014), available at: http://file.prio.no/Publication_files/Prio/Marsh%20(2014)%20-
%20Defining%20the%20Scope%20of%20Autonomy,%20PRIO%20Policy%20Brief%202-2014.pdf (accessed on February 14,
2018); Michael Biontino, Summary of Technical Issues: CCW Expert Meeting on Lethal Autonomous Weapons Systems 1

autonomy’.31 They would characterize automated or semi-autonomous processes as
‘autonomous’, though to a lower degree than ‘fully autonomous’ systems.32 This takes into
account a blurring of definitional borders and reflects the lack of a clear definition of
‘autonomy’ in AI and robotics, but does not fill this gap.

There is also no agreement on whether a system could be classified as ‘autonomous’ if only
certain aspects of its capacities function without human intervention. Some experts argue
that, e.g., a system that can function independently of external energy sources (autarkic), or
one that can adapt its programmed behavior based on previously acquired data (‘learning’),
could already be regarded as ‘autonomous’.33

Some experts also claim that the attribute ‘autonomous’ is used for a technological artefact
when it becomes (nearly) impossible for a human being to intervene in a technological process.
In this sense, ‘autonomy’ is not a term that covers a set of clearly defined characteristics (e.g.
an artificial agent’s capacity to ‘learn’, to be autarkic, to function independently from human
control), but one that describes the result of a technological process for which the human
cannot or does not want to bear responsibility.34

This view is influenced by the highly important, and thus not negligible, fact that the term
‘autonomy’ has a rich philosophical history and refers to an unquantifiable attribute intrinsic
to human personhood. There are two distinct but interrelated understandings of ‘autonomy’
as a human attribute.

‘Personal autonomy’, on the one hand, refers to self-governance or the capacity to decide for
oneself and follow a course of action in one’s life, independent of moral content. 35 This
necessarily leads to personal responsibility for the course of action taken.

On the other hand, ‘moral autonomy,’ usually traced back to Immanuel Kant, can be
understood as the capacity of an individual human to deliberate, understand and give oneself
the moral law. For Kant, it is by virtue of our autonomy that we are moral beings that can take
on moral responsibility. At the same time, we are moral to the extent that we are
autonomous.36

This second classical understanding of moral autonomy, connected with the fact that the term
‘autonomy’ is used when referring to software and machines, may have prematurely
supported the idea of, and fueled discussions about, ‘autonomous’ robots that may also behave
(2014), available at
http://www.unog.ch/80256EDD006B8954/%28httpAssets%29/6035B96DE2BE0C59C1257CDA00553F03/$file/Germany_LA
WS_Technical_Summary_2014.pdf (accessed on February 14, 2018).
31 Christen et al., 2017, 10.
32 Schörnig, Niklas, 2017, Automatisierung in der Militär- und Waffentechnik, 27. ETH-Arbeitstagung zur Sicherheitspolitik,

Autonome Waffensysteme und ihre Folgen für die Sicherheitspolitik, February 3, 2017.
33 Christen et al., 10.
34 Helbing, 2018.
35 Dryden, Jane, Internet Encyclopedia of Philosophy, Autonomy, available at: https://www.iep.utm.edu/autonomy/

(accessed on February 1, 2018).
36 Kant, Immanuel, 1998 (1785), Groundwork for the Metaphysics of Morals, Cambridge: Cambridge University Press.

morally and ethically.37 Both a precise technological understanding and careful linguistic
usage may minimize or eliminate the risk of a (potentially unconscious) terminological
confusion.38

However, it is barely possible to completely strip a term of its ‘classical’ meaning. And given
that ‘autonomy’, when used to characterize technological processes, is applied precisely when
those processes create outcomes over which humans struggle to retain control – in other
words, when humans actually do give away the capacity to decide on an action that leads to a
technological process’ outcome – there clearly exists an overlap between the ‘classical’
understanding of personal autonomy and the technological use of the term.

Due to this common contextual denominator of personal autonomy and ‘autonomy’ for
technological artefacts, one could argue that the international debate about a definition of
‘autonomy’ for artefacts clearly distinguished from personal autonomy is misguided. The
reason is that the technological use of the term ‘autonomy’ precisely uses this term in order
to highlight a notion of ‘self-governance’ of an artefact. And whether or not this ‘self-
governance’ is in fact technologically possible, one must not ignore that research endeavors to
create ‘autonomous’ systems bear an immense risk of going hand in hand with losing human
control over outputs (deep learning) and relinquishing human responsibility for outcomes. And
this risk is independent of the term itself. In other words, a distinct definition of ‘autonomy’
for artefacts, measurable and potentially existing to degrees, obfuscates the fact that humans
are creating technological instruments that may lose their instrumental character because we
gradually give away responsibility for the outcomes of their usage.

Consequently, agreeing that ‘autonomy’ for artefacts is a term willingly borrowed from its
‘classical’ usage of personal self-governance, and intrinsically linked to responsibility, would
shed a different light on the creation of autonomous artifacts and thus lead to a different
question: Why are we aiming at limiting the space for responsible human action instead of
increasing it? It is highly important not to lose oneself in technological definitions of
‘autonomy’. ‘Autonomy’ for artefacts is a term that could function as an excuse for
relinquished human responsibility for ‘ugly’ and potentially immoral outcomes, i.a., the killing
of human beings in the case of LAWS.

4    Lethal Autonomous Weapons Systems (LAWS)

AT can supplant human beings in decision-making processes in certain areas. This can have an
enormous potential for good (e.g. autonomously driving cars for visually impaired people,
37 Arkin, Ronald, 2009, Ethical Robots in Warfare, IEEE Technology and Society Magazine 28(1), 30-33; Arkin, Ronald, 2010,
The case for ethical autonomy in unmanned systems, Journal of Military Ethics 9(4), 332-341; Arkin, Ronald, 2017, A
roboticist’s perspective on lethal autonomous weapon systems, UNODA Occasional Papers No. 30, New York: United
Nations, 35-37; Anderson, M., Anderson, S., and Armen, C., 2004, Towards Machine Ethics, AAAI-04 Workshop on Agent
Organizations: Theory and Practice, San Jose, CA; Anderson, M., Anderson, S., and Berenz, V., 2016, Ensuring Ethical
Behavior from Autonomous Systems, Proc. AAAI Workshop: Artificial Intelligence Applied to Assistive Technologies and
Smart Environments, available at: http://www.aaai.org/ocs/index.php/WS/AAAIW16/paper/view/12555 (accessed on
February 4, 2018); Moor, J., 2006, The Nature, Importance, and Difficulty of Machine Ethics, IEEE Intelligent Systems,
July/August, 18-21; McLaren, B., 2005, Lessons in Machine Ethics from the Perspective of Two Computational Models of
Ethical Reasoning, 2005 AAAI Fall Symposium on Machine Ethics, AAAI Technical Report FS-05-06.
38 See 6.2.

surgical robots39). However, in addition to promising applications of AT, autonomous software
can be (and arguably already is) integrated into robots that can select and engage a (military)
target (e.g. infrastructure and potentially also combatants) without a human override.40 Such
systems are often called Lethal Autonomous Weapons Systems (LAWS); as yet, there exists no
agreed definition of LAWS. One reason for this lack of definition is that, as highlighted above,
there exists no general understanding of the term ‘autonomy’ in AI and robotics.

The general idea is that a LAWS, once activated, would, with the help of sensors and
computationally intense algorithms, search for, identify, select and attack targets without
further human intervention. Whether a weapon may still be called a LAWS if a human being
can overpower or veto its ‘decision’ is also debated.41 However, military operational necessity
seems precisely to require weapons systems that can keep functioning once human
communication links break down.42 Furthermore, state-of-the-art research on AI is currently
creating software which can ‘learn’ entirely on its own43 and even ‘learn’ to ‘learn’ on its
own.44 Hence, (precursor) technologies for creating fully ‘human-out-of-the-loop’45 weapons
systems already exist.
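
The ‘human-out-of-the-loop’ notion (see the distinctions quoted in footnote 41) can be made
precise with a small conceptual sketch (illustrative Python; the names and the modelling are
invented for this illustration and describe no real system). The decisive difference between
the supervision modes is what happens when the human says nothing in time:

    from enum import Enum, auto
    from typing import Optional

    class Supervision(Enum):
        HUMAN_IN_THE_LOOP = auto()      # a human must approve each action
        HUMAN_ON_THE_LOOP = auto()      # a human supervises and may veto
        HUMAN_OUT_OF_THE_LOOP = auto()  # no human input or veto possible

    def action_proceeds(mode: Supervision, human_response: Optional[bool]) -> bool:
        """human_response: True = approve, False = veto, None = no response in time."""
        if mode is Supervision.HUMAN_IN_THE_LOOP:
            return human_response is True       # default-deny: silence blocks the action
        if mode is Supervision.HUMAN_ON_THE_LOOP:
            return human_response is not False  # default-allow: silence permits the action
        return True                             # out of the loop: no human input matters

Seen through this sketch, a broken communication link turns an ‘on-the-loop’ system into an
‘out-of-the-loop’ one in practice: the human response is permanently absent, and the action
proceeds regardless.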

From a military perspective, LAWS have many advantages over classical automated or remotely
controlled systems: LAWS would not depend on communication links; they could operate at
increased range for extended periods; fewer humans would be needed to support military

39 See e.g., Strickland, Eliza, 2017, In Flesh-Cutting Task, Autonomous Robot Surgeon Beats Human Surgeons, IEEE Spectrum,
October 13, 2017, available at: https://spectrum.ieee.org/the-human-os/biomedical/devices/in-fleshcutting-task-
autonomous-robot-surgeon-beats-human-surgeons (accessed on February 1, 2018).
40 The aim of this paper is not to provide examples or a list of existing weapons with autonomous capabilities. A continuously

updated list can be found through e.g. Roff, Heather, and Moyes, Richard, 2016, Dataset: Survey of Autonomous Weapons
Systems, Global Security Initiative: Autonomy, Robotics & Collective Systems, Arizona State University, available at:
https://globalsecurity.asu.edu/robotics-autonomy (accessed on February 15, 2018).
41 The US Department of Defense defines a weapons system as autonomous if it ‘… can select and engage targets without

further intervention by a human operator.’ Department of Defense, Directive 3000.09, November 21, 2012, 13-14; The UN
Special Rapporteur on extrajudicial, summary or arbitrary executions adds the element of choice: ‘The important element is
that the robot has an autonomous “choice” regarding selection of a target and the use of lethal force.’ Report of the Special
Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns, UN doc. A/HRC/23/47, § 38; Human Rights
Watch (HRW) distinguishes level of autonomy in weapons systems and contrasts the terms ‘human-out-of-the-loop’ and
‘human-on-the-loop’. A ‘human-out-of-the-loop’ weapon is ‘… capable of selecting targets and delivering force without any
human input or interaction ….’ In other words, a ‘human-out-of-the-loop’ weapon’s decision cannot be vetoed by a human
being. On the other hand, a ‘human-on-the-loop’ weapon can ‘… select targets and deliver force under the oversight
of a human operator who can override the robots’ actions …’. According to HRW, both types can be considered ‘fully
autonomous weapons’ when supervision is so limited that the weapon can be considered ‘out-of- the-loop.’ Docherty, B.,
2012, Losing Humanity: The Case Against Killer Robots, Human Rights Watch, November 2012, available at:
https://www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-robots (accessed on February 1, 2018); The
ICRC defines autonomous weapons systems as ‘… (a)ny weapon system with autonomy in its critical functions. That is, a
weapon system that can select (i.e. search for or detect, identify, track, select) and attack (i.e. use force against, neutralize,
damage or destroy) targets without human intervention.’ ICRC, 2016, Convention on Certain Conventional Weapons,
Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS), April 11 – 15, 2016, Geneva, Switzerland, 1.
42 Adams, T., 2002, Future Warfare and the Decline of Human Decision making, Parameters, U.S. Army War College

Quarterly, Winter 2001-02, 57-71.
43 Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A, Hubert, T., Baker, L., Lai, M., Bolton, A., Chen,

Y., Lillicrap, T., Hui, F., Sifre, L., van den Driessche, G., Graepel, T. and Hassabis, D., 2017, Mastering the game of Go without
human knowledge, Nature vol. 550, 354-359.
44 See e.g., Finn, Chelsea, 2017, Learning to Learn, Berkeley Artificial Intelligence Research, available at:

http://bair.berkeley.edu/blog/2017/07/18/learning-to-learn/ (accessed February 2, 2018).
45 Docherty, B., 2012.

operations; their higher processing speeds would suit the increasing pace of combat;46 by
replacing human soldiers, they would spare lives; and with the absence of emotions such as
self-interest, fear or vengeance, their ‘objective’ ‘decision-making’ could lead to overall
outcomes that are less harmful.47

However, the use of LAWS may also generate substantial threats. Generally, LAWS may change
how humans exercise control over the use of force and its consequences. Further, humans
may no longer be able to predict who or what is made the target of an attack, or even explain
why a particular target was chosen by a LAWS. This fact raises serious legal, ethical,
humanitarian and security concerns.48 From a humanitarian and ethical point of view, LAWS
could be regarded as diminishing the value of human life, since a machine and not a human
being ‘decides’ to kill.49 Also, the physical and emotional distance between the programmer or
engineer of a LAWS and the targeted person may generate an indifference or even a ‘Gameboy
Mentality’ on the side of the former.50 From a security perspective, LAWS could be dangerous
because they may also be imperfect and malfunction.51 Moreover, the further the technology
advances, the higher the level of autonomy of LAWS becomes. This, in turn, makes the
outcomes of LAWS more unpredictable and may enable the interaction of multiple LAWS, e.g.
as self-organizing swarms.52

Scholarly inquiry into the legality of LAWS has focused mainly on IHL,53 which presents
significant challenges for both the development and the use of LAWS, since the latter would
face problems meeting IHL’s requirements of distinction,54 proportionality55 and precaution.56
57 Moreover, the nature of autonomy in a weapons system means that the lines of

46 Thurnher, J., 2014, Examining Autonomous Weapons Systems from a Law of Armed Conflict Perspective, in: Nasu, H., and
McLaughlin, R. (eds.), New Technologies and the Law of Armed Conflict, T.M.C. Asser Press, 213-218.
47 ICRC, 2011, International Humanitarian Law and the Challenges of Contemporary Armed Conflicts, Official Working

Document of the 31st International Conference of the Red Cross and the Red Crescent, November 28 – December 1, 2011.
48 Geneva Academy, 2017, Autonomous Weapons Systems: Legality under International Humanitarian Law and Human

Rights, https://www.geneva-academy.ch/news/detail/48-autonomous-weapon-systems-legality-under-international-
humanitarian-law-and-human-rights (accessed on February 2, 2018).
49 UN Doc. A/HRC/23/47, § 109.
50 Sassòli, Marco, 2014, Autonomous Weapons and International Humanitarian Law: Advantages, Open Technical Questions

and Legal Issues to be Clarified, International Law Studies Vol. 90, 308-340, 317.
51 ICRC, 2014, Expert Meeting on ‘Autonomous weapons systems: technical, military, legal and humanitarian aspects’, March

26 – 28, 2014, Report of November 1, 2014, available at: https://www.icrc.org/en/document/report-icrc-meeting-
autonomous-weapon-systems-26-28-march-2014# (accessed on February 1, 2018).
52 ICRC, 2016, 3.
53 The reason for this legal focus on LAWS based almost exclusively on IHL is the fact that the UN CCW is underpinned by IHL,

see also 3.1.5. This fact appears in an even odder light when considering that the first international thematic reference on
autonomy in weapons systems was expressed by the UN Special Rapporteur on extrajudicial, summary or
arbitrary executions, Christof Heyns, in UN doc. A/HRC/23/47, § 38, for the Office of the High Commissioner for Human
Rights.
54 Art. 48, 49, 51 (2) and 52 (2) Protocol Additional to the Geneva Conventions of August 12, 1949, and relating to the

Protection of Victims of International Armed Conflicts (Protocol I), June 8, 1977.
55 Art. 51 (5) (b) and Art. 57 Protocol I.
56 Art. 57 (1) Protocol I.
57 Brehm, Maya, 2017, Defending the boundary: Constraints and requirements on the use of autonomous weapons systems

under international humanitarian and human rights law, Geneva Academy of International Humanitarian Law and Human
Rights, 22; see also Bolton, M. ‘From Minefields to Minespace: An Archeology of the Changing Architecture of Autonomous
Killing in US Army Field Manuals on Landmines, Booby Traps and IEDs’, 46 Political Geography (2015) 41–53.

responsibility for an attack by a LAWS may not always be clear. Therefore, LAWS also challenge
the legal concept of accountability.58

Recently, LAWS have also been discussed in the light of International Human Rights Law (IHRL),
whose benchmark for the lawful use of force is higher than under IHL.59 However, the emphasis
on IHRL lags behind the strong focus on IHL within the UN CCW forum.

5    The debate at the United Nations Convention on Certain Conventional Weapons (UN
     CCW)

LAWS were taken up as an issue by the international arms control community in the framework
of the UN CCW in 2014.60 After a series of annual informal discussions, a Group of
Governmental Experts (GGE) debated the subject matter for the first time as a formal
meeting during a five-day gathering in the CCW framework in Geneva in November 2017.

The main points of discussion of the GGE were the potential legality of LAWS under IHL,
questions of accountability and responsibility for the use of LAWS during armed conflict, and
potential (working) definitions of LAWS, as well as the need for emerging norms, since LAWS
pose a serious challenge both to existing IHL and to normative principles. However, this first
GGE on LAWS brought no agreement on a political declaration and no path toward a new
regulatory international treaty. The only common denominator was the general will of states
to continue conversations in 2018.61

The UN CCW’s debate highlights at least five severe challenges to a comprehensive
understanding of the risks of LAWS and AT.

     (1) To date, states have not agreed on a definition of LAWS or the concept of autonomy,
         or on whether increasingly autonomous weapons systems, or precursor technologies,
         already exist. Moreover, national as well as international policy debates on LAWS have
         lacked precise terminology.62

         Bearing in mind the above-described thoughts on the technological concept of
         ‘autonomy’, this is no surprise. However, it is claimed that definitions will most likely
         play a key role in the international deliberation on the issue of LAWS.63
         In order to comprehensively discuss, and reach agreement on, a topic, it is crucial to
         base the debate on a common understanding of the issue. It is also important in light

58 See e.g. Davison, Neil, 2017, A legal perspective: Autonomous weapon systems under international humanitarian law,
UNODA Occasional Papers No. 30, New York: United Nations, 12, 16.
59 Brehm, 2017; Heyns, Christof, 2016, Human Rights and the use of Autonomous Weapons Systems (AWS) During Domestic

Law Enforcement, Human Rights Quarterly 38, 350-378; Heyns, Christof, 2014, Autonomous Weapons Systems and Human
Rights Law, Presentation made at the informal expert meeting organized by the state parties to the Convention on Certain
Conventional Weapons, May 13-14, 2017, Geneva, Switzerland.
60 CCW/MSP/2014/3.
61 CCW/GGE.1/2017/CRP.1, 4, 5.
62 Ibid., 13. See above on Autonomous Technology.
63 Nakamitsu, Izumi, 2017, Foreword to the Perspectives on Lethal Autonomous Weapons Systems, UNODA Occasional

Papers No. 30, New York: United Nations, V.

of the fact that there exists a not negligible movement of some states and NGOs to ban
          LAWS.64

          However, since AT and the concept of ‘autonomy’ for technological artefacts may be a
          proxy term for an ongoing trend in human technological endeavours to give away
           control to technological agents, thereby relinquishing human responsibility for
          outcomes of autonomous systems, a premature agreement on a definition of
          ‘autonomy’ in weapons systems by the GGE on LAWS would probably hide this trend.
          Therefore, instead of pressing for a definition of LAWS and ‘autonomy’ within the GGE,
          it would be advisable to locate these challenges within a bigger picture of the general
          relationship between humans and technology, and focus on the question whether we
          want to continue to regard technology as a controllable tool. In this sense, the GGE
           framework could be deemed unfitting. Surely, principles for responsible AI research
          are both a first reflection of this underlying and ongoing paradigm change, as well as a
          first step in the direction of responsibly addressing the seriousness of this risk. A list of
          existing principles is found in the ANNEX of this paper.

     (2) States are generally unwilling to share information on their capacity to develop LAWS.
         However, in order to gain a better understanding of the lessons learned from already
         existing weapons with certain levels of autonomy, the sharing of information is vital.65

     (3) The GGE’s mandate comprises the discussion of ‘… emerging technologies in the area
         of lethal autonomous weapons systems (LAWS) in the context of the objectives and the
         purposes of the convention …’.66 However, the misuse of technology, e.g., by non-
         state actors, does not fall within the scope of this mandate.67 Certainly, though, a
         holistic analysis and discussion of the peace and security implications of AT and new
          technologies requires the international community also to address the use of such
          technologies by non-state actors.68

64 See the Campaign to Stop Killer Robots, https://www.stopkillerrobots.org/ (accessed on February 15, 2018). Currently, 22
states are backing this position: Algeria, Argentina, Bolivia, Brazil, Chile, Costa Rica, Cuba, Ecuador, Egypt, Ghana, Guatemala,
Holy See, Iraq, Mexico, Nicaragua, Pakistan, Panama, Peru, State of Palestine, Uganda, Venezuela, Zimbabwe, Campaign to
Stop Killer Robots, 2017, Country Views on Killer Robots, November 16, 2017.
65 ICRC, 2016, Autonomous weapons systems: Profound implications for future warfare, May 6, 2016, available at:

https://www.icrc.org/en/document/autonomous-weapons-systems-profound-implications-future-warfare (accessed on
February 4, 2018).
66 CCW/CONF.V/10,10.
67 Ambassador Amandeep Singh, 2017 GGE on LAWS, Geneva, November 13-17, 2017, Plenary Session of November 14,

2017.
68 See e.g., the attack on Russian military facilities by a swarm of more than a dozen autonomous drones. Russia accused

Turkish-backed rebel forces of being behind the attack. See e.g. Satherley, Dan, 2018, Wooden drone swarm attacks Russian
forces in Syria, Newshub.com, available at: http://www.newshub.co.nz/home/world/2018/01/wooden-drone-swarm-
attacks-russian-forces-in-syria.html (accessed on February 4, 2018); Embury-Dennis, Tom, 2018, Russia says mysterious
armed drones are attacking its military base in Syria – and they don’t know who’s sending them, January 10, 2018,
Independent.co.uk, available at: http://www.independent.co.uk/news/world/middle-east/russia-military-bases-drones-syria-
armed-attacks-tartus-uavs-latakia-a8151066.html (accessed on February 4, 2018); Focus, 2018, Mit schwer bewaffnetem
Drohnenschwarm: Terroristen greifen russischen Stützpunkt an, January 14, 2018, Focus.de, available at:
https://www.focus.de/politik/ausland/drohnenschwarm-is-griff-russischen-stuetzpunkt-an-nun-naehrt-sich-ein-
besorgniserregender-verdacht_id_8296804.html (accessed on January 15, 2018).

(4) LAWS represent a new category of weapons, in that their novelty lies in a formless
         technological capacity to recognize patterns from a continuous inflow of data. The
         difference between a currently existing remotely controlled drone and a ‘fully
         autonomous’ drone does not lie in the casing, but in the fact that the latter is controlled
          by software with autonomous capacities. The UN CCW, established in 1983, seeks to
         prohibit the use of certain conventional weapons. Its protocols currently prohibit the
         use of weapons whose primary effect is to injure by fragments that, once within the
         human body, escape X-Ray detection, as well as the use of mines, booby-traps and
         incendiary weapons against civilians.69 One may argue that the UN CCW’s GGE on LAWS
          is not capable of fully understanding the technological complexity of current (not to
         mention future) AT.

     (5) In addition, the CCW is a framework underpinned by IHL, which narrows the debate’s
          focus to weapons and their use during armed conflict.70 However, increasingly
         autonomous weapons systems can be and are used during peace time in law
          enforcement operations (e.g. crowd control, hostage situations),71 where IHRL
         represents the legal benchmark.

          Compared to IHL, IHRL is much more restrictive on the use of force. Military technology
          often finds its way into law enforcement. One may assume that once the advantages
          of increasingly autonomous systems have been proven in the military context, they
          might be considered for use during domestic law enforcement, although IHRL,
           regulating the latter, would prohibit their use.72 Therefore, the CCW’s/GGE’s approach
           could be criticized as not being legally comprehensive enough due to its limited focus
           on the use of weapons during times of war.

6     Further ways to weaponize AT

The CCW’s discussion on LAWS has focused on conventional (physical/robotic) systems which
interact in a 3D reality with other machines or humans.73 However, there also exist additional
ways to weaponize AT.

69 Protocol I to Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be
Deemed to Be Excessively Injurious or to Have Indiscriminate Effects as amended on 21 December 2001 (CCW) on Non-
Detectable Fragments, Protocol II to the CCW on Prohibitions or Restrictions on the Use of Mines, Booby Traps and Other
Devices, and Protocol III to the CCW on Prohibitions or Restrictions on the Use of Incendiary Weapons.
70 Art. 1 and 2 CCW.
71 See e.g. Opall-Rome, Barbara, 2016, Introducing: Israeli 12-Kilo Killer Robot, DefenseNews.com, May 8, 2016, available at:

https://www.defensenews.com/global/mideast-africa/2016/05/08/introducing-israeli-12-kilo-killer-robot/ (accessed on
February 4, 2018); Hurst, Luke, 2015, Indian Police Buy Pepper Spraying Drones To Control ‘Unruly Mobs’, Newsweek.com,
April 7, 2015, available at: http://www.newsweek.com/pepper-spraying-drones-control-unruly-mobs-say-police-india-
320189 (accessed on February 4, 2018). The ‘Mozzy Wildlife Darting Copter’ is promoted for wildlife capture, Desert Wolf:
Leaders in Technology and Innovation, available at: http://www.desert-wolf.com/dw/products/unmanned-aerial-
systems/mozzy-wildlife-darting-copter.html (accessed on February 4, 2018).
72 Heyns, Christof, 2016, Human Rights and the use of Autonomous Weapons Systems (AWS) During Domestic Law

Enforcement, Human Rights Quarterly 38, 350-378; Heyns, Christof, 2014, Autonomous Weapons Systems and Human
Rights Law, Presentation made at the informal expert meeting organized by the state parties to the Convention on Certain
Conventional Weapons, May 13-14, 2017, Geneva, Switzerland.
73 See also, UNIDIR, 2017, 1.
