CONFERENCE READER - 6th November 2020 - Rethinking Arms Control

5th – 6th November 2020

CONFERENCE READER
BERLIN, NOVEMBER 2020

2020. CAPTURING TECHNOLOGY. RETHINKING ARMS CONTROL.

Message from Heiko Maas,
Federal Minister for Foreign Affairs

© Auswärtiges Amt / photothek.net

New technologies are changing our lives. The Covid-19 pandemic has further illustrated their importance: Modern biotechnology provides the basis for a vaccine against the virus. Information and communication technologies help us trace infections. They also allow us to stay in touch and to work from home. In the fight against the pandemic, technological progress has certainly been a blessing for us humans.

But there is also a dark side to new technologies: Their military use in future conflicts could threaten strategic stability and lead to devastating consequences. On the one hand, militaries can make responsible use of new technologies – for example to increase the accuracy of weapon systems and to protect civilians. But they can also use new technologies in potentially destabilising ways, beyond human control and in breach of international legal standards. We therefore need to find ways to manage the risks emanating from new technologies, while at the same time harnessing their full potential to avoid human suffering.

Germany has launched a series of initiatives to strengthen arms control in our new technological age. The Missile Dialogue Initiative we started at last year's Conference has provided innovative answers to the challenges posed by new missile technology and proliferation trends in the post-INF world. Our inaugural conference "Capturing technology. Rethinking arms control" in March 2019 lifted the debate on such crucial questions to a political level. Since then, we have refined the results of this first conference through a series of workshops the German Foreign Office conducted with experts and stakeholders. In these discussions, we have emphasised that emerging technologies also bear great potential for existing arms control mechanisms, for example for verification or confidence and security building measures.

We have at this point captured many of the challenges posed by new technologies. It is now time to come up with ideas on how arms control can tackle them. We Europeans should spearhead the search for global standards on the military use of new technologies. In doing so, we will need to engage with a multitude of stakeholders and build an inclusive multilateral approach that takes on board the perspectives of all the players concerned.

As in 2019, this year's conference will apply a wide lens in analysing the promises and perils of new technologies for existing and future arms control regimes. We will again provide a forum for exchange between politicians, military, academia, civil society, and the private sector. The virtual format of the conference this year allows us to reach out to even more participants from all across the globe.

This conference reader contains cutting-edge analysis by renowned experts from leading research and policy institutes. I would like to thank the Carnegie Endowment for International Peace (CEIP), the Fondation pour la Recherche Stratégique (FRS), the Institute for Peace Research and Security Policy at Hamburg University (IFSH), the International Institute for Strategic Studies (IISS), the United Nations Institute for Disarmament Research (UNIDIR) and the Stockholm International Peace Research Institute (SIPRI) for sharing their analyses and recommendations.

I look forward to very productive discussions!

Yours,

Heiko Maas,
Federal Minister for Foreign Affairs


Content

Strategic Stability and the Global Race for Technological Leadership

The Military Use of AI: Artificial and Real Challenges for Arms Control

New Opportunities to Build Trust and Ensure Compliance: Using Emerging Technologies for Arms Control and Verification

How on Earth Can We Trust Each Other? Confidence and Security Building in New Domains

Multi-stakeholder Approaches to Arms Control Negotiations: Working with Science and Industry

Europe, Arms Control and Strategic Autonomy: Understanding the Equation for Effective Action

About the Authors

About the Institutions

Imprint


Strategic Stability and the Global Race for Technological Leadership
James M. Acton
Carnegie Endowment for International Peace

War is not always the result of a series of calculated and intentional decisions. While it only ever occurs when interstate relations are stretched almost to the point of breaking, the actual decision to use force can result from misinterpreting an adversary's intentions—in particular, from concluding that the adversary may be planning to act in a more aggressive way than it actually is. Technology can exacerbate this risk by increasing the danger of waiting for those intentions to clarify. Thus, once the major continental European powers had begun to mobilize in 1914, each worried that its adversaries intended to start a war and that delaying its own mobilization would leave it dangerously unprepared. The result was a process of competitive mobilizations that contributed to the outbreak of World War I.

[Photo: The USS Porter launches a Tomahawk Land Attack Missile against Iraq on 22 March 2003. Today, various nuclear-armed states fear that cruise missiles and other high-precision conventional weapons could be used to attack their nuclear forces.]

The advent of nuclear weapons increased the danger of a similar process between the Cold War superpowers producing civilization-ending consequences. The ill-defined term "strategic stability," which originated in that era, can be used to describe a situation in which the danger of inadvertent escalation—that is, escalation sparked by an action that is not intended to be escalatory—is minimized. The development of long-range nuclear weapons—intercontinental ballistic missiles (ICBMs), especially—increased the likelihood of one particular escalation mechanism, crisis instability, by leading each of the superpowers to fear that its nuclear forces were vulnerable to being preemptively destroyed in a nuclear attack. In a deep crisis or major conventional conflict, this fear could have created pressure on the Soviet Union or the United States to launch nuclear strikes on its adversary's nuclear forces while it still could.[i] Indeed, at times, both superpowers felt this pressure even if, thankfully, it was not strong enough to lead to nuclear war.[ii]

The end of the Cold War saw fears of inadvertent nuclear war ebb—but only because war itself seemed less likely. Improvements in military technology have created new potential threats to nuclear forces and their command, control, communications and intelligence (C3I) systems. Many of the most significant developments—in offensive and defensive weapons, and in information-gathering and data analysis capabilities—concern nonnuclear technologies. Nuclear-armed states are often at the forefront of these developments; indeed, there is frequently a strong element of competition between them (the development of hypersonic missiles is a case in point).


Now that war between nuclear-armed states no longer seems so unthinkable, increasing "entanglement" between the nuclear and nonnuclear domains is exacerbating the danger of inadvertent escalation once again. (Unlike during the Cold War, such escalation is unlikely to result in an immediate all-out nuclear exchange; more likely are escalation spirals, featuring increasingly aggressive conventional operations, nuclear threats, and limited nuclear use.[iii]) This risk is growing even though the extent to which nonnuclear technologies actually threaten nuclear forces and C3I capabilities is uncertain. Few of these technologies were specifically developed for so-called damage limitation and it is unclear whether nuclear-armed states—the United States, in particular—really plan to use them for that purpose. Moreover, technological change is a double-edged sword since it can also enhance the ability of nuclear-armed states to protect their nuclear forces. Yet, this skepticism is somewhat beside the point. Perceptions—whether states believe their nuclear forces and C3I capabilities are under threat—can drive escalation. China, Russia, and Pakistan all worry about nonnuclear threats to their nuclear forces today or in the near future. The same is most likely true for North Korea (though evidence is harder to come by). The United States, meanwhile, may not worry much about the survivability of its nuclear forces today—but it is concerned about the survivability of its nuclear C3I system and about whether its forces will remain survivable over the coming decades.

Old risks and new technologies

Four different kinds of nonnuclear technologies are behind the growing danger of crisis instability. First are weapons that could destroy or disable nuclear delivery systems preemptively. They include precise nonnuclear munitions (whether subsonic, supersonic or hypersonic) and cyber weapons. Second, nonnuclear attacks against nuclear C3I capabilities, such as early-warning or communication assets, could complement attacks against nuclear forces by interfering with the target state's ability to detect attacks on those forces or to use them while it still could. Precise nonnuclear munitions and cyberattack capabilities are again relevant here—as are anti-satellite weapons, given the widespread use of satellites in nuclear C3I systems. Third, if nuclear forces were launched, they could be intercepted prior to detonation by ballistic missile and air defenses. Fourth, information-gathering capabilities, especially when coupled to data analysis software, could help to locate nuclear weapons—including mobile delivery systems prior to launch—and thus bolster the effectiveness of both preemptive strikes and defensive operations.[iv] Remote sensing technologies, many of which are space-based, are important in this respect, as are cyber espionage capabilities.

Each of these technologies has somewhat different implications for crisis instability—but, in theory at least, cyber capabilities could prove to be uniquely problematic because they could be used both for offensive operations and for intelligence gathering.[v] ("In theory" because given how tightly cyber capabilities are held—must be held, in fact, to prevent adversaries from countering them—it is particularly challenging to assess their potential efficacy, though, once again, perceptions may be more important than reality.) The danger here stems from the time that can be required to determine the purpose of malware. A state that discovered an intrusion in its nuclear C3I system might be unsure, for some time, whether malign code had been implanted only for surveillance purposes or was designed to damage or disable that system. In fact, during a deep crisis or a conflict, the state might feel compelled, out of a sense of prudence, to assume that malware was offensive and hence an attack on its nuclear forces might be imminent. In this way, cyber espionage—even if conducted for purely defensive purposes—could be misinterpreted as a cyberattack and risk sparking escalation.

New risks and new technologies

Developments in nonnuclear technologies are also exacerbating dangers besides crisis instability. In fact, inadvertent escalation could result from nonnuclear strikes even if the target—like the United States today—were confident in the survivability of its nuclear forces.

Attacks on nuclear C3I systems are one particularly acute risk.[vi] The term "nuclear C3I system" is really something of a misnomer since many key C3I capabilities support both nuclear and nonnuclear operations. In a conventional conflict between two nuclear-armed states, this form of entanglement could catalyze escalation. Specifically, in such a conflict, one belligerent might attack its adversary's C3I assets for the purpose of undermining the target's ability to wage a conventional war. Such attacks, however, could have the unintended consequence of degrading the target's nuclear C3I capabilities, potentially giving it the (incorrect) perception that the conflict was about to turn nuclear.

[Diagram: Highly simplified schematic diagram showing potential nonnuclear threats to nuclear forces and their command, control, communications, and intelligence systems. Labeled elements include a communications satellite, earth observation satellite, cyber capabilities, hypersonic glider, sea-launched cruise missile, early-warning radar, anti-satellite weapon, attack submarine, mobile ICBM, underground leadership bunker, and ICBM silo.]

U.S. early-warning satellites, for example, can detect both nuclear-armed ICBMs and a variety of nonnuclear munitions, including short-range nonnuclear ballistic missiles. Such warning, which is used, for example, to cue missile defenses, can enhance the United States' ability to respond to an attack. As a result, in a conflict between NATO and Russia, say, Moscow might have an incentive to launch attacks on U.S. early-warning satellites if NATO's regional missile defenses were proving effective at intercepting regional Russian ballistic missiles fired against targets in Europe. Such attacks could be misinterpreted by the United States, however, as a Russian effort to disable its homeland missile defenses prior to using nuclear weapons in an attempt to terrify Washington into backing down. Such


misinterpreted warning could spark nuclear threats or even nuclear use. Indeed, in the 2018 Nuclear Posture Review, the United States explicitly threatened that it might use nuclear weapons to respond to nonnuclear attacks against its nuclear C3I system.[vii]

Cyber operations directed against nuclear C3I capabilities could prove particularly dangerous—especially if the target of the attack had multiple nuclear-armed adversaries.[viii] If the target discovered the intrusion, it would need to determine which of those adversaries was responsible. Attribution is a time-consuming process that is not guaranteed to yield a confident and correct conclusion. Ambiguity—or, worse still, an incorrect conclusion—could be dangerous. India, for example, has two nuclear-armed adversaries, China and Pakistan, which could have incentives to implant malware in its nuclear command-and-control system. If, in a conventional conflict against Pakistan, India discovered a Chinese cyber intrusion, it might end up treating Pakistan as the culprit. India might attribute the attack incorrectly, for example. Or, prior to the attribution process being completed, India might feel compelled to assume that Pakistan was behind the attack. Or, it might make the same assumption if the attribution process reached an indeterminate conclusion. In any case, if a Chinese cyber intrusion led India to conclude that Pakistan was planning to use nuclear weapons, inadvertent escalation could result.

New approaches to risk reduction

The range of nonnuclear threats facing nuclear forces and their C3I systems is much too diverse and complex to be managed through any one arms control agreement. Indeed, in a number of cases, cooperative approaches may not be possible at all; it is very difficult to imagine any kind of an agreement to manage, say, remote sensing technology. As a result, unilateral actions—including self-restraint—

of nuclear-armed SLCMs. This transparency arrangement is no longer in force, but could be revived and expanded to include nonnuclear and nuclear-armed SLCMs as well as nonnuclear sea-launched boost-glide missiles (nuclear-armed sea-launched boost-glide missiles should be accountable under a future treaty).[x]

Testing prohibitions could help manage the threat to dual-use C3I satellites. The most important of these satellites are generally located in geostationary orbit at an altitude of about 36,000 km. An agreement—involving China, Russia, the United States and, perhaps, other states too—to refrain from testing direct-ascent anti-satellite weapons above, say, 20,000 km could create a roadblock to enhancing such weapons to the point where they could threaten geostationary satellites. Such an agreement would not be a panacea (it would not cover directed energy or co-orbital weapons) and would carry risks (it might legitimize testing below 20,000 km). But these weaknesses must be balanced against the escalation risks associated with threats to—and especially attacks on—dual-use C3I satellites.

Managing cyber capabilities could prove particularly challenging and quite different approaches will be needed. Behavioral norms are a potentially promising possibility. Nuclear-armed states, for example, could commit to refrain from launching cyber operations against one another's nuclear C3I systems.[xi] It would not be possible to verify such a commitment in the traditional way. Each state, however, would scan its networks looking for any interference. If one of them discovered a cyber intrusion, then it would no longer be bound by the agreement and would be free to retaliate in kind against the intrusion's instigator or, indeed, to respond asymmetrically. In this way, deterrence may help enforce compliance. The challenges facing such an agreement are daunting. Perhaps the most significant is the dual-use nature of many C3I systems, which would complicate the task of reaching clear agreement on which capabilities were covered by a non-interference norm. Nonetheless, given the severity of the
                  have a critical role to play in risk reduction. In a number of instances, however, cooperative approaches       escalation dangers posed by cyber capabilities, this idea is worth exploring—perhaps initially on a track
                  are more feasible. In the short-term, bilateral U.S.-Russian agreements are the most plausible—though           2 basis—either in various bilateral fora or in trilateral Chinese-Russian-U.S. discussions.
                  far from easy—and may help to reassure third parties, most notably China. Over the longer-term, the
                  goal should be to develop either multilateral approaches or multiple parallel bilateral tracks.                 Finally, over the long term, more ambitious treaty-based approaches may be possible. For example,
                                                                                                                                  China, Russia, and the United States could seek to negotiate an agreement that capped launchers of
                  Some nonnuclear weapons could fit straightforwardly into a future U.S.-Russian strategic arms                   ground-launched ballistic missiles with ranges longer 500km, ground-launched cruise missiles with
                  control agreement.[ix] For example, all ground-launched intercontinental hypersonic gliders, nuclear-           ranges longer 500km, sea-launched ballistic missiles, and heavy bombers—that is, all launchers that
                  armed or otherwise, should be accountable under such a treaty (just as all ICBMs, whether or not                were limited by the 1987 Intermediate-range Nuclear Forces Treaty or are limited by New START.
                  they are nuclear armed, are accountable under the 2010 New Strategic Arms Reduction Treaty or                   [xii]
                                                                                                                                        Because this limit would apply regardless of whether a launcher was associated with a nuclear or
                  New START). This step would be technically simple and could help address Russian fears about the                conventionally armed delivery system, it would help mitigate nonnuclear threats to nuclear forces and
                  United States’ developing such weapons for the purpose of targeting Russia’s nuclear forces (while              their C3I capabilities as well as various other concerns.
                  also addressing U.S. concerns about Russian programs). For similar reasons, the treaty could also
                  prohibit the deployment of air-launched ballistic and boost-glide missiles on any aircraft other than
                  treaty-accountable bombers or short-range tactical fighters. This provision would help to manage
                  Russia’s concerns about “converted” U.S. heavy bombers, that is, aircraft that have been modified
                  so they cannot deliver nuclear weapons (it would also manage U.S. concerns that Russia may deploy
                  its new air-launched ballistic missile, Kinzhal, on the nonaccountable Backfire bomber, potentially
                  enabling it to reach the United States).

                  Some other types of high-precision conventional weapons are less amenable to treaty-imposed
                  limits—but could be subject to a politically binding transparency arrangement. For example, neither
                  the United States nor, in all likelihood, Russia has any interest in limiting sea-launched cruise missiles
                  (SLCMs). Moreover, even if they both did, verification would likely prove challenging because these
                  weapons are deployed on vessels—surface ships and attack submarines—that have never been
                  subject to inspections. Nonetheless, a non-legally binding approach may be possible. As part of the
                  negotiations over START I, Russia and the United States agreed to exchange data about deployments


                  Endnotes

[i]      Thomas C. Schelling, The Strategy of Conflict (Cambridge, MA: Harvard University Press, 1960), chap. 9.
[ii]     James M. Acton, “Reclaiming Strategic Stability,” in Elbridge A. Colby and Michael S. Gerson, eds., Strategic Stability: Contending Interpretations (Carlisle, PA: Strategic Studies Institute and U.S. Army War College, 2013), 55.
[iii]    Caitlin Talmadge, “Would China Go Nuclear? Assessing the Risk of Chinese Nuclear Escalation in a Conventional War With the United States,” International Security 41, no. 4 (Spring 2017): 50–92.
[iv]     Paul Bracken, “The Cyber Threat to Nuclear Stability,” Orbis 60, no. 2 (2016): 197–200.
[v]      James M. Acton, “Cyber Warfare & Inadvertent Escalation,” Dædalus 149, no. 2 (2020): 140–141.
[vi]     James M. Acton, “Escalation Through Entanglement: How the Vulnerability of Command-and-Control Systems Raises the Risks of an Inadvertent Nuclear War,” International Security 43, no. 1 (Summer 2018): 56–99.
[vii]    U.S. Department of Defense, Nuclear Posture Review, February 2018, 21, https://media.defense.gov/2018/Feb/02/2001872886/-1/-1/1/2018-NUCLEAR-POSTURE-REVIEW-FINAL-REPORT.PDF.
[viii]   Acton, “Cyber Warfare & Inadvertent Escalation,” 142–143.
[ix]     Pranay Vaddi and James M. Acton, A ReSTART for U.S.-Russian Nuclear Arms Control: Enhancing Security Through Cooperation, Working Paper (Washington, DC: Carnegie Endowment for International Peace, October 2020), 9–10 and 12–13, https://carnegieendowment.org/files/Acton_Vaddi_ReStart.pdf.
[x]      Jeffrey Lewis, “Russia and the United States Should Resume Data Exchanges on Nuclear-Armed Sea-Launched Cruise Missiles,” in James M. Acton, ed., Beyond Treaties: Immediate Steps to Reduce Nuclear Dangers, Policy Outlook (Washington, DC: Carnegie Endowment for International Peace, October 10, 2012), 4–5, https://carnegieendowment.org/files/beyond_treaties.pdf.
[xi]     Richard J. Danzig, Surviving on a Diet of Poisoned Fruit: Reducing the National Security Risks of America’s Cyber Dependencies (Washington, DC: Center for a New American Security, July 2014), 24–27, https://s3.amazonaws.com/ales.cnas.org/documents/CNAS_Poisoned Fruit_Danzig.pdf.
[xii]    Tong Zhao, “Opportunities for Nuclear Arms Control Engagement With China,” Arms Control Today (January/February 2020), https://www.armscontrol.org/act/2020-01/features/opportunities-nuclear-arms-control-engagement-china.


                                                                                                                   The Military Use of AI:
                                                                                                                   Artificial and Real Challenges for Arms Control
                                                                                                                   Vincent Boulanin, Kolja Brockmann, Netta Goussac, Luke Richards, Laura Bruun
                                                                                                                   Stockholm International Peace Research Institute (SIPRI)

Artificial intelligence (AI) is impacting every aspect of military affairs—much in the same way as its civilian applications are impacting people’s day-to-day lives. Militaries are seeking AI applications to strengthen their capabilities, from intelligence, surveillance and reconnaissance (ISR) to combat operations and logistics. AI could make future military systems ‘smarter’, faster and more autonomous and could enable military decision-makers to exploit ever-growing amounts of data on adversaries and the battlefield.[i]

As with all technological developments, there is a trade-off to speed and convenience. In the civilian world, we are grappling with issues such as the impact of AI technologies on data privacy and the ethical implications of algorithmic decision-making, for example in the context of criminal law enforcement and judicial decision-making.[ii] In the military world, concerns are focused not only on the humanitarian consequences of mistakes or misuse of AI, but also on the risk that the increasing military use of AI could be a destabilizing force for international peace and security.

These risks have, over the past five years, emerged as a matter of concern for the arms control community.[iii] A hotly debated question is whether and how traditional arms control could mitigate these risks. The military use of AI poses a range of challenges that make it difficult to apply the familiar tools of arms control effectively to the related humanitarian and strategic risks. It is therefore necessary to rethink how these tools can be used in creative ways, and how new and complementary processes could help overcome these difficulties and provide new ways to reduce the threats that the military use of AI poses to international peace and security.


Humanitarian and strategic risks posed by the military use of AI

There is a rapidly expanding body of literature on the potential impact of the use of AI in military systems on international peace and security.[iv] The conversation is still nascent, but it generally focuses on two categories of risks: humanitarian and strategic.

Humanitarian risks

From a humanitarian perspective, the concern is that AI could, by design or through the way it is employed, undermine the ability of the military to operate within the limits of international humanitarian law (IHL) and, thereby, expose civilians and civilian objects to greater risk of harm, death or destruction.[v]

This concern is already central to the deliberations on emerging technologies in the area of lethal autonomous weapons systems (LAWS) at the Convention on Certain Conventional Weapons (CCW). States parties to the CCW are discussing whether the use of AI to increase autonomy in weapons systems could break the connection between a commander and the consequences of the use of force in an attack, and thereby undermine their ability to properly exercise the context-specific evaluative judgements demanded by IHL, and potentially lead to IHL violations.[vi]

The military use of AI is not limited to its use for autonomy in weapons systems. Experts are concerned that, from a humanitarian standpoint, the use of AI in decision support systems could be just as problematic if adopted without proper safeguards in place. Known design flaws, such as data bias and algorithmic opacity, could induce users to make mistakes or misjudge situations, which could have dramatic humanitarian consequences.

A related concern discussed among ethicists is the prospect that the use of AI could undermine combatants’ ability to remain morally engaged in warfighting. This, in turn, could increase the risk that people, civilians and military personnel alike, would not be spared from harm.[vii]

Strategic risks

The adoption of AI by the military and the proliferation of AI-enabled military systems threaten to destabilize relations between states and increase the risk of armed conflict. Risk scenarios of particular concern include the adoption and use of AI by militaries (i) undermining states’ sense of security, (ii) fuelling the risk of crisis and conflict escalation, and (iii) leading to the proliferation of such capabilities and the underlying technologies to unauthorized or unreliable end-users.

The speed and way in which states will integrate AI into their military apparatus is bound to affect how other states see their relative power and level of security.[viii] For instance, the use of autonomous systems and machine learning for reconnaissance and data analysis could help states locate adversaries’ nuclear second-strike assets. This could further destabilize deterrence relationships.[ix]

Machine learning-powered AI is still an immature technology that is prone to unpredictable behaviour and failures.[x] A premature (i.e. not properly tested) adoption of AI in military systems could cause accidental or inadvertent escalation in a crisis or conflict.[xi]

The diffusion of AI technologies is difficult to control. AI systems that have been specially designed or are necessary for the development of a military item on the export control lists are captured by export controls. However, many civilian AI applications that are not subject to such controls could potentially be repurposed for military applications. This fuels the concern that it could be relatively easy for malevolent actors, be they states or non-state actors, to develop military applications of AI. While applications adopted by non-state actors would be unlikely to be at the high end of the technology, they could nevertheless be used in ways that asymmetrically threaten states’ militaries and populations. The proliferation of high-end military AI applications to other states could affect specific military capabilities and the balance of power between states, and more generally increase the risk of non-compliance with IHL in interstate conflicts.[xii]

[Figure: Foreseeable applications of the military use of AI, grouped by function: data processing; detection and sensing; making predictions; physical security; early warning and ISR; cyber security; command and control; defence and force protection; precision strike and force delivery; and efficient and resilient management.]

AI = artificial intelligence, ISR = intelligence, surveillance and reconnaissance, UCAV = unmanned combat aerial vehicle, UUV = unmanned underwater vehicle

Reference: Boulanin, V., Saalman, L., Topychkanov, P., Su, F. and Peldán Carlsson, M., Artificial Intelligence, Strategic Stability and Nuclear Risk (SIPRI: Stockholm, 2020)


                   Challenges to using arms control to govern the military use of AI

                   The humanitarian and strategic risks posed by the military use of AI require a response. The development
                   and adoption of military applications of AI are not inevitable but a choice; one that must be made with
                   due mitigation of risks. The arms control community is considering ways in which arms control can
                   be used to ensure that the risks posed by the military use of AI technologies are addressed. However,
                   there are three major challenges of a conceptual, sequencing and political nature.

                   Conceptual challenge: defining what military AI is and what problem it poses

                   Typically, arms control processes are ex-post processes; that is, they are developed in reaction to actual
                   events or at least to a well-identified problem. States agree on these controls and then incentivize
                   or enforce compliant behaviour by relevant actors (research, academia and the private sector). The
                   challenge with this model is that, as a baseline, states need to create a common understanding of the
                   nature and extent of the problem, both domestically and internationally, in order to agree on an arms
                   control instrument. This is difficult in the case of AI for three main reasons: the technology’s intangibility,
                   its multi-purpose nature and its technical complexity.

                   To date, there are no events or tangible consequences that can serve as a baseline for defining a
                   problem and building a consensus around it, as was the case with the prohibition of biological and
                   chemical weapons, as well as anti-personnel landmines and cluster munitions. The arms control
                   community has demonstrated in the past that it can be forward looking and take action before a
                   weapon or capability is developed and used. One example of this type of preventive arms control is the
                   CCW protocol on blinding laser weapons.[xiii] However, in the case of military AI, it is hard to formulate
                   one clearly identifiable overarching problem. AI is an enabling technology with not one but many
                   possible military uses, of which only some may generate the aforementioned humanitarian and strategic
                   challenges. The risks posed by AI—and hence the governance response—need to be considered in
                   relation to specific application cases.

                   Furthermore, ‘military use of AI’ is an abstract term that hides a complex reality, which can be difficult
                   to communicate in multi-disciplinary settings and multilateral diplomatic negotiations.[xiv] It naturally
                   takes time for states, especially those that might not have the relevant technical expertise readily
                   available, to understand and assess the technology and its implications at a more granular level. The
                   technical complexity and the fact that states might have different levels of understanding of the
                   technology are a major obstacle to consensus building.

                   The multi-purpose, dual-use nature of AI technology is a source of concern because arms control issues
                   have traditionally been discussed and addressed in institutional silos, such as specific UN conventions or
                   UN bodies like the Conference on Disarmament, which are limited by their mandates in terms of topic
                   and process. Technological advances in the field of AI could, in theory, be leveraged in all the areas
                   covered by arms control: conventional, nuclear, chemical and biological or cyber weapons and related
                   capabilities. The question of whether each institutional silo should deal with the challenges posed
                   by AI separately or whether the situation calls for a separate and dedicated process is still debated.
                   However, the conceptual reasons outlined above indicate that the creation of an overarching arms
                   control process dedicated to the whole range of AI applications would be difficult and at this point
                   appears highly unlikely.

                   Sequencing challenge: keeping up with the pace of advances in AI

                   AI, as with many other emerging technologies, is advancing at a rapid pace, often driven by huge
                   military appetite and significant private sector investment. In contrast, arms control processes and
                   negotiations usually move slowly. With the notable exception of the CCW protocol on blinding laser
                   weapons, it typically takes many years—in numerous instances decades—for arms control processes
                   to result in concrete outcomes. Two significant examples of relevant processes that touched upon the
                   issue of AI are the UN processes on cyber (since 1998) and on LAWS (since 2014). Progress in these
                   processes has been exceedingly slow both at the substantive level (e.g. determining what the problem
                   is, interpreting the applicability of international humanitarian law) and at the political level
                   (agreeing on what the political outcome should be). Therefore, there are legitimate concerns that
                   advances in AI could outpace any arms control process. If policymakers lag behind technological
                   developments, there is a risk that new applications may be adopted without appropriate safeguards
                   in place. Some technologies and their use might also be difficult to govern once they are adopted and
                   used by some militaries.

                   Political challenge: finding agreement between states

                   Arms control is always contingent on governments’ political will and geopolitical circumstances.
                   Finding agreement between states on AI governance is likely to be difficult in the current geopolitical
                   context.[xv] Major powers, including China, Russia and the USA, currently appear to have limited faith
                   in each other’s engagement in arms control processes.[xvi] These very states also have a vested interest
                   in not limiting the speed and trajectory of developments in AI technology. They are therefore likely to
                   object to any initiative that could cause them to lose their advantages or become disadvantaged in
                   their strategic competition. The current ‘arms control winter’, in combination with the great power
                   competition on AI, renders the chances of an arms control agreement on the military use of AI very
                   slim—at least for the time being.[xvii]

                   Addressing the challenges with the arms control toolbox

                   The challenges above should not discourage the pursuit of arms control objectives with regard to
                   the military use of AI, including risk reduction and transparency, and related objectives such as non-
                   proliferation and compliance with IHL. Rather, they invite creative uses of the familiar tools of arms
                   control, as well as the exploration of new or complementary processes that could help reach these
                   objectives.

                   Using the familiar tools of arms control in a creative way

                   Confidence-building measures aimed at raising awareness, developing a common language, and
                   shaping a shared framework for discussions are all familiar tools of arms control. These tools could
                   help solve the conceptual challenges and develop collaborative risk-reduction measures.

                   Despite the large number of publications and events on the topic, there are enduring misconceptions
                   about the possibilities and risks that AI could create in the military sphere. Ensuring that all sides share
                   a common vocabulary and an equal sense of how the technology is evolving and what challenges the
                   military use of AI poses is a prerequisite for the identification of risk-reduction measures. It is also
                   essential to reduce the risk of misperception and misunderstanding among states on AI-related issues.

                   The discussion around such confidence-building measures should be inclusive. Ideally, it should
                   involve all states, but also research, academia, the private sector and civil society. However, given the
                   current geopolitical climate, inclusiveness might be the enemy of efficiency, and it would be valuable if
                   some conversations also took place in less politicized and polarizing settings.

                   In light of the competitive aspects of AI development and adoption, discussions among like-minded
                   states take on greater prominence. Such discussions could happen in the framework of existing military
                   alliances like NATO or regional organisations such as the EU. Such groupings usually do not have a defined
                   role when it comes to arms control. However, they allow states that share a common set of strategic
                   interests and political values to discuss their views on the opportunities and risks of the military use of
                   AI in a relatively open and constructive way. NATO and the EU each have processes that allow member
                   states to engage both at highly political levels and at very technical levels. They provide opportunities
                   to share information and work towards common resolutions to challenges, such as issues related to AI
                   safety. These could enable member states to achieve better coordination and contribute to multilateral
                   and international forums with a more unified voice.

                   There are, however, significant shortcomings to limiting discussions to like-minded states. A plurality
                   of perspectives is essential. Track 1.5 and track 2 initiatives can be helpful tools to enable discussions
                   where significant obstacles exist at the intergovernmental level. Meetings involving technical experts
                   from rival countries are useful opportunities to increase mutual understanding and practice information
                   sharing. Expert involvement makes it possible to zoom in on specific, technically or thematically narrow
                   issues. Several track 2 dialogues on AI are currently ongoing, notably between the USA and China.
                   These are welcome and should be continued and expanded to include other partners. They can provide
                   national experts with an opportunity to have frank discussions and agree directly on specific technical
                   issues. Track 1.5 and track 2 dialogues can thus help states straighten out differences before entering
                   into formal negotiations on risk-reduction measures that could be mutually beneficial to their national
                   security.

                   Exploring complementary processes

                   As existing arms control tools might not hold all the answers, it is imperative to explore other,
                   complementary processes. Given the leadership of the civilian sector in AI innovation, and the state-
                   centric nature of multilateral arms control, there is a need for multi-stakeholder initiatives involving
                   research, academia, the private sector and civil society. One option, in that regard, would be to build
                   on the conversation on “Responsible AI” that is taking place in the civilian sphere and aims to promote
                   responsible research and innovation (RRI) in AI through the definition of ethical principles and
                   standards on safety. RRI, as an approach to self-governance, is valuable for the pursuit of arms control
                   objectives with regard to the military use of AI in several ways.

                   First, it is inclusive. It involves diverse groups of stakeholders from across academia, the private sector
                   and government. Such inclusiveness is essential to ensuring that the risks associated with the military
                   use of AI are accurately identified. The risks should neither be underestimated nor overestimated, and
                   issues need to be addressed even if only certain actors are concerned. Inclusiveness is also essential
                   to ensuring that risk management responses do not have an excessive negative political, economic or
                   societal impact.

                   Second, it is forward looking. RRI aims to identify and respond to problems before they actually occur—
                   be it through design choices or through self-restraint in terms of knowledge diffusion and trade.

                   Third, RRI is by nature iterative, as it seeks to monitor issues throughout the life-cycle of technologies—
                   from basic scientific research to product commercialization. It is also meant to continue over time and
                   react to new developments in technologies. RRI thus offers an opportunity to reflect on how multilateral
                   arms control processes could both be more reactive to technology developments and work more closely
                   with academia and the private sector in the process.

                   Finally, RRI processes provide an opportunity to develop common principles for responsible
                   development, diffusion and military use of AI, which may later become the basis for a treaty-based
                   arms control response on military AI.

                   In summary, the military use of AI poses a number of risks—humanitarian and strategic—that
                   demand a response, including from the arms control community. This multi-faceted technology
                   necessitates several processes that incorporate different forms, diverse actors and distinct objectives.
                   These processes should explore all available means to address the risks and challenges raised by the
                   development, adoption and use of military AI applications, and should not be limited to the multilateral
                   adoption of regulatory limits. The adoption of measures to mitigate risks posed by the military use of
                   AI is predicated on stakeholders seeking out and creating opportunities to align their views and work
                   collaboratively. Given the return of great power politics, it is important to recall that such processes can
                   support both national strategic interests and international peace and security.


                  Endnotes

                   [i]      Boulanin, V. et al., Artificial Intelligence, Strategic Stability and Nuclear Risk (SIPRI: Stockholm, 2020); Scharre, P.
                           and Horowitz, M. C., Artificial Intelligence: What Every Policymaker Needs to Know (Center for New American Security:
                           Washington DC, June 2018).
                   [ii]     Deeks, A., ‘Detaining by algorithm’, ICRC Humanitarian Law and Policy blog, 25 Mar. 2019,
                            <https://blogs.icrc.org/law-and-policy/2019/03/25/detaining-by-algorithm/>.
                  [iii]    Kaspersen, A. and King, C., ‘Mitigating the challenges of nuclear risk while ensuring the benefits of technology’, ed. V.
                           Boulanin, The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk, vol. I, Euro-Atlantic Perspectives
                            (SIPRI: Stockholm, May 2019).
                  [iv]     Horowitz, M. C. et al., Strategic Competition in the Era of Artificial Intelligence (Center for New American Security:
                           Washington DC, July 2018); Cummings, M. L., Artificial Intelligence and the Future of Warfare (Chatham House: London,
                           Jan. 2017); Roff, H. and Moyes, R., Meaningful Human Control, Artificial Intelligence and Autonomous Weapons, Briefing
                           Paper (Article 36: London, 2016).
                  [v]      Schmitt, M. N. and Thurnher, J. ‘“Out of the loop”: Autonomous weapon systems and the law of armed conflict’, Harvard
                           National Security Journal, vol. 4, no. 2 (2013); ICRC, Ethics and Autonomous Weapon Systems: An Ethical Basis for Human
                            Control? Report (ICRC: Geneva, Apr. 2018).
                   [vi]     Chertoff, P., ‘Perils of Lethal Autonomous Weapons Systems Proliferation: Preventing Non-State Acquisition’, 2018;
                           Boulanin, V. et al., Limits and Autonomy in Weapon Systems, Identifying Practical Elements of Human Control (SIPRI:
                           Stockholm, 2020).
                  [vii]    Asaro, P., ‘On banning autonomous weapon systems: Human rights, automation, and the dehumanization of lethal
                           decision-making’, International Review of the Red Cross, vol. 94, no. 886 (summer 2012); ICRC, Ethics and Autonomous
                            Weapon Systems: An Ethical Basis for Human Control? Report (ICRC: Geneva, Apr. 2018); Human Rights Council, Report of
                           the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Christof Heyns, A/HRC/23/47, 9 Apr. 2013.
                  [viii]   Geist, E. and Lohn, A. J., How Might Artificial Intelligence Affect the Risk of Nuclear War? (Rand Corporation: Santa Monica,
                           CA, 2018)
                   [ix]     Gates, J., ‘Is the SSBN deterrent vulnerable to autonomous drones?’, RUSI Journal, vol. 161, no. 6 (2016), pp. 28–35;
                           Hambling, D., ‘The inescapable net: Unmanned systems in anti-submarine warfare’, British– American Security Information
                           Council (BASIC) Parliamentary Briefings on Trident Renewal no. 1, Mar. 2016.
                  [x]      Hagström, M., ‘Military applications of machine learning and autonomous systems’, ed. V. Boulanin, The Impact of Artificial
                           Intelligence on Strategic Stability and Nuclear Risk, vol. I, Euro-Atlantic Perspectives (SIPRI: Stockholm, May 2019).
                   [xi]     Boulanin, V. et al., Artificial Intelligence, Strategic Stability and Nuclear Risk (SIPRI: Stockholm, 2020).
                  [xii]    Rickli, J-M., ‘The impact of autonomy and artificial intelligence on strategic stability’, UN Special, no. 781 (July–Aug. 2018).
                  [xiii]   Rosert E. and Sauer, F., ‘How (not) to stop killer robots: A comparative analysis of humanitarian campaign strategies’,
                           Contemporary Security Policy (May 2020).
                  [xiv]    Boulanin, V., Mapping the Debate on LAWS at the CCW: Taking Stock and Moving Forward, EU Non-proliferation Paper
                           no. 49 (SIPRI: Stockholm, Mar. 2016).
                   [xv]     Sauer, F., ‘Stepping back from the brink: Why multilateral regulation of autonomy in weapons systems is difficult, yet
                            imperative and feasible’, International Review of the Red Cross, special issue on ‘Digital Technologies and War’
                            (forthcoming).
                   [xvi]    Countryman, T., ‘Why nuclear arms control matters today’, Foreign Service Journal (May 2020); Asada, A., ‘“Winter phase”
                            for arms control and disarmament and the role for Japan’, Japan Review, vol. 3, no. 3–4 (spring 2020).
                   [xvii]   Sauer, F., ‘Stepping back from the brink: Why multilateral regulation of autonomy in weapons systems is difficult, yet
                            imperative and feasible’, International Review of the Red Cross, special issue on ‘Digital Technologies and War’
                            (forthcoming).


                                                                                                                   New Opportunities to Build Trust and Ensure
                                                                                                                   Compliance: Using Emerging Technologies for
                                                                                                                   Arms Control and Verification
                                                                                                                   Alexander Graef and Moritz Kütt
                                                                                                                   Institute for Peace Research and Security Policy at the University of Hamburg (IFSH)

                                                                                                                   Arms control faces tremendous challenges at the moment. Many of them are political in nature. In an
                                                                                                                   international environment characterized by growing mistrust, states tend to increase investment in
                                                                                                                   military technology in order to mitigate future risks and create comparative advantages. Technological
                                                                                                                   breakthroughs, however, not only translate into new military capabilities. They also present new
                                                                                                                   opportunities in support of arms control and verification.

                                                                                                                   Verification is vital to ensure compliance with arms control provisions. If successful, it builds trust
                                                                                                                   and confidence between states and strengthens stability. Most verification approaches rely on
                                                                                                                   technological means. Support for human inspectors’ work relies on, for example, measurement
                                                                                                                   devices, data analysis systems and global communication systems. Advances in emerging technologies
                                                                                                                   will provide additional benefits for arms control and verification, which both states and the public at
                                                                                                                   large can harness.

                                                                                                                    There are various ways of approaching this. Overall, emerging technologies can make existing
                                                                                                                    approaches more effective and efficient. They can replace or at least supplement intrusive measures
                                                                                                                    such as on-site inspections, which are increasingly unfeasible in the prevailing political climate
                                                                                                                    characterized by a lack of trust between states. The Covid-19 pandemic also illustrates how social
                                                                                                                    distancing requirements can inhibit verification tasks and transparency measures. For example, in
                                                                                                                    2020, inspections under the New START treaty stopped due to safety risks for inspectors and national
                                                                                                                    entry regulations. Here, remote techniques offer novel solutions without cutting back on the ability to
                                                                                                                    detect violations.

                                                                                                                    [Figure: Uses of emerging technologies for arms control and verification. Methods and tools (shown in blue) and exemplary applications are discussed in this chapter. Applications in solid orange boxes relate to arms control; those in light orange are general examples.]


                   Emerging technologies can also provide means to generate new agreements, because the technical
                   ability to facilitate the verification of state behavior might help create the political will for regulations
                   in the first place. Finally, incorporating emerging technologies into national and multilateral verification
                   systems can connect new communities, such as IT specialists, with the field of arms control, thereby
                   expanding and diversifying the available expertise.

                   Furthermore, arms control verification efforts can benefit from the combination of satellite imagery
                   with additional open source information. Conceptually, that shifts the focus from classified information
                   to publicly available data, and from national intelligence gathering to open source intelligence
                   collection. Multiple tools exist to assist with this data combination, for example the open source data
                   platform Datayo, founded by the One Earth Future Foundation, which allows governmental and non-
                   governmental experts to view and discuss data from different sources, improving our understanding
                   of complex emerging security threats.

                   In the following, three concrete examples of the use of emerging technologies in arms control
                   and verification are outlined: the use of satellite imagery to detect illicit activities, crowd-sourcing
                   as an instrument to overcome the limited resources of traditional data analysis, and the development
                   of open source software to build mutual trust and transparency.

                   Satellite Imagery: Remote Detection of Illicit Activities

                   Detecting treaty violations requires gathering information about the activities of a country of concern.
                   Such violations may include, for example, the construction of undeclared facilities, the diversion of fissile
                   material from a civilian fuel cycle to military ends, exceeding agreed numerical limits on weapon
                   stocks, or troop movements outside agreed areas. From early on, treaties provided for in-person visits to
                   foreign countries to detect violations, starting a century ago with the Inter-Allied Military Control
                   Commissions of the Versailles Treaty. Such on-site inspections are, in terms of achievable results, the
                   gold standard for information gathering. However, agreeing to on-site inspections requires a high level
                   of mutual trust, since the host country needs to allow access to sensitive sites. They are also relatively
                   resource intensive and may put the health of inspectors at risk.

                   [Figure: Satellite images of the U.S. Pentagon. The left image was taken by a U.S. Corona satellite in September 1967 and is in the public domain (cf. https://de.m.wikipedia.org/wiki/Datei:Corona_pentagon.jpg). The right image was taken on June 6, 2020 and is publicly available on Google Earth; reproduction in print is permitted with proper attribution under the Google Maps/Google Earth terms (https://www.google.com/help/terms_maps/).]

                   Hence, in parallel to direct access, states have long sought to establish means to detect violations from
                   afar. The information gathered has included optical images and data from long-range sensing
                   technologies, e.g. radar, radioactivity or seismic activity measurements. For imagery, starting with
                   aircraft overflights, nations turned to space-based options when they became technologically feasible.
                   The U.S. Corona Satellite Program successfully provided its first images in the late 1960s.[i] The military
                   system and the images were kept secret until 1995, however. Similar secret reconnaissance systems that were later also deployed
                  by other states are commonly referred to as National Technical Means (NTM).

                  Commercial satellite imagery became available with the Landsat satellites in the 1970s.[ii] Today,
                  satellite imagery has become a ubiquitous tool for various purposes. Since 2001, Google provides                       Crowd-sourced data and analysis: Overcoming limited resources
                  satellite imagery with nearly global coverage as part of its map services. Other information technology
                  companies have followed (e.g. Microsoft and Apple). In parallel to an increased access to satellite                    Modern information and communication technologies provide the necessary infrastructure for
                  imagery, production has soared – the last three decades have seen a range of new companies providing                   aggregating individuals and bringing together disparate, independent ideas. In arms control and non-
                  imaging services and launching their own satellites or even constellations. Today, it is possible for                  proliferation, the exponential growth of publicly available data and the internet of things, including
                  governmental and non-governmental actors alike to request even videos of regions of interest with                      the worldwide expansion of handheld devices, particularly smartphones and tablet computers, enable
                  unprecedented resolution, and at continuously decreasing cost.                                                         citizens to either directly complement official verification efforts or to serve as watchdogs, alerting
                                                                                                                                         state agencies to novel, undetected issues. Similar to the case of Malaysian Airlines flight MH-370
                  But data alone does not provide answers to treaty relevant questions. Expert image analysts are able                   mentioned above, such crowd-sourcing approaches draw upon large, web-based networks in order to
                  to identify essential objects in seemingly featureless images. Automated image processing helps                        collect, analyze and verify data or to find solutions to complex technological problems. As a method, it
                  them to deal with the ever-increasing amount of data, for example through feature change detection.                    allows administrators (governments or international organizations) to outsource specific tasks, reduce
                  [iii]
                        Alternatively, one can appeal to the public at large and draw on crowd-sourcing as a method to                   costs or to increase existing databases.
                  facilitate the analysis of large data sets of satellite imagery. The search for the lost Malaysian Airlines
                  MH-370 that disappeared on 8 March 2014 while flying from Kuala Lumpur International Airport to                        Crowd-sourcing can serve different purposes. On the one hand, administrators can use readily available
                  its planned destination in Beijing is a case in point. Although eventually unsuccessful, the participation             crowd-sourced content that has been generated for different purposes. This form of analysis – societal
                  of volunteers sifting through thousands of satellite imagery, helped to find remnants of the aircraft.[iv]             observation – makes use of publicly available texts, images, video and audio materials published on
                                                                                                                                         social media.[v] Put differently, it relies on open source intelligence that can widen the amount of existing
                                                                                                                                         information and thus complements the capabilities of NTM. On the other hand, it is also possible to
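The feature change detection mentioned in the satellite imagery discussion above can be illustrated with a minimal pixel-difference sketch. This is an assumption-laden toy, not an operational pipeline: it presumes two co-registered grayscale images of the same scene held as NumPy arrays, whereas real systems add orthorectification, radiometric normalization and learned feature extraction before flagging tiles for expert review.

```python
import numpy as np

def changed_fraction(before: np.ndarray, after: np.ndarray, threshold: float = 30.0) -> float:
    """Fraction of pixels whose brightness changed by more than `threshold`.

    Toy sketch: assumes co-registered grayscale images (0-255) of equal shape.
    """
    if before.shape != after.shape:
        raise ValueError("images must be co-registered to the same shape")
    diff = np.abs(after.astype(np.float64) - before.astype(np.float64))
    return float(np.mean(diff > threshold))

# Hypothetical 100x100 scene in which a bright 20x20 "new structure" appears.
before = np.full((100, 100), 50, dtype=np.uint8)
after = before.copy()
after[40:60, 40:60] = 200  # the new feature

frac = changed_fraction(before, after)
print(f"{frac:.2%} of pixels changed")  # prints "4.00% of pixels changed"
```

A monitoring workflow would compute such a score per image tile and route only tiles above a threshold to a human analyst, which is precisely how automation helps experts cope with the growing data volume.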

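Because individual volunteer judgments are noisy, crowd-sourcing platforms of the kind described above typically collect several independent labels per item and aggregate them. A minimal majority-vote sketch, using hypothetical tile IDs and labels (production systems additionally weight volunteers by their past accuracy):

```python
from collections import Counter

def aggregate_votes(votes: dict[str, list[str]], min_votes: int = 3) -> dict[str, str]:
    """Map each tile ID to its majority label; tiles with too few votes stay 'undecided'."""
    results = {}
    for tile, labels in votes.items():
        if len(labels) < min_votes:
            results[tile] = "undecided"
        else:
            results[tile] = Counter(labels).most_common(1)[0][0]
    return results

# Hypothetical volunteer labels for three satellite image tiles.
votes = {
    "tile-041": ["debris", "debris", "waves", "debris"],
    "tile-042": ["waves", "waves", "waves"],
    "tile-043": ["debris"],  # only one volunteer so far
}
print(aggregate_votes(votes))
# {'tile-041': 'debris', 'tile-042': 'waves', 'tile-043': 'undecided'}
```

The `min_votes` floor reflects a core design choice in crowd-sourced verification: a single report is a lead to be confirmed, not evidence, so administrators hold items back until enough independent observers agree.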