Great Power Rivalry and Macrosecuritization Failure: Why
States Fail to “Securitize” Existential Threats to Humanity
by
Nathan Alexander Sears
A thesis submitted in conformity with the requirements
for the degree of PhD in Political Science
Department of Political Science
University of Toronto
© Copyright by Nathan Alexander Sears, 2023
Abstract
Humanity lives under a growing spectrum of existential threats that have their origins in human
agency and could bring about the collapse of modern global civilization or even human extinction.
Why do states fail to mobilize the will and resources to neutralize the existential threats to
humankind? This dissertation develops a theory of “macrosecuritization failure,” based on
securitization theory and the concept of “macrosecuritization,” and studies the empirical
phenomenon of the recurrent failure of states to take extraordinary action for the security and
survival of humankind. It finds great power consensus or rivalry to be the central dynamic behind
the success or failure of macrosecuritization in international relations. In the absence of a world
political entity with the authority and capabilities to “speak” and “do security” on behalf of
humanity, the great powers collectively shape the fate of macrosecuritization and the security of
humankind. The argument is that conflicting security narratives of “humanity securitization” and
“national securitization”—or ways of framing the referent object, the threat, and necessary
measures for security—shape the thinking and action of the great powers towards
macrosecuritization. When a security narrative of humanity securitization prevails, this can open
space for great power consensus on macrosecuritization; but when national securitization triumphs,
great power rivalries lead them to prioritize national power and security over the security and
survival of humankind. The dissertation examines three historical case studies of
macrosecuritization (failure) in international relations: the international control over atomic energy
(1942-1946), the Biological Weapons Convention (1968-1972), and artificial intelligence (2014-present).
The theoretical framework posits three variables to explain the influence of conflicting
security narratives over the great powers: (1) the stability of the distribution of power in the
international system; (2) the power and interests of domestic securitizing actors over state
audiences; and (3) the beliefs and perceptions of political leaders about threats. Ultimately,
macrosecuritization fails when these conditions favour a security narrative of national
securitization amongst the great powers, whereby fear of “the Other” outweighs fear of an
existential threat. The argument has important implications in an age characterized by the
escalating great power rivalry between the United States and China under a growing spectrum of
existential threats to humanity.
Acknowledgments
This dissertation project has received support and inspiration from many people. I would like to
express my gratitude to my supervisor and mentor, Steven Bernstein, for his continuous support
and confidence. Not every professor would be willing to supervise a project like this one and I
appreciate the guidance he has given me throughout the dissertation project and PhD program. I
would also like to thank my committee members, Seva Gunitsky and Ron Deibert, for their
feedback on the proposal and the draft. The same goes for the internal reviewer, Janice Stein, and
external reviewer, Daniel Deudney.
I also had the benefit of a great group of peers and colleagues in the PhD program in Political
Science at the University of Toronto. I would especially like to thank my colleague and friend,
Ryder McKeown, for encouraging me to switch dissertation topics from the historical study of
“world empire” to the study of existential threats to humanity and for our frequent conversations
about international relations, PhD struggles, and life in general.
There are many scholars in both the International Relations and existential risk communities who
have contributed to my thinking about this project. In the existential risk community, I have had
the good fortune to meet and talk with Matthijs Maas, Phil Torres, Haydn Belfield, Luke Kemp,
and Balkan Devlen. In the International Relations community, I would like to thank the peers and
mentors who participated in the 2019 International Studies Association Junior Scholar Symposium
in Toronto, “Rethinking IR Theory and Security in the 21st Century,” including the discussants,
Virginie Grzelczyk and Jonathan Caverley, and participants, Hiroto Sawada, Michelle Small,
Leaza Jernberg, Oyvind Svendsen, Aphisith Eichinger, and Logan Stundal. I would also like to
thank Barry Buzan, Jairus Victor Grove, Michael Lawrence, Elizabeth Mendenhall, and Michael
J. Albert, amongst others, for participating in a series of workshops and panels on “World Orders
& Catastrophic and Existential Threats” during annual conventions of the International Studies
Association. I would especially like to thank Daniel Deudney for organizing these workshops and
panels, inviting me to participate in them, and providing much insightful—and tough—feedback
on my research and papers in this area.
I also appreciate the opportunities to submit and publish my research in journals like the Journal
of Global Security Studies, Global Policy, and World Futures. I would especially like to thank the
editors and anonymous reviewers for their comments on two published research articles,
“Existential Security: Towards a Security Framework for the Survival of Humanity” and
“International Politics in the Age of Existential Threats.” These experiences were invaluable to
my professional development as an International Relations scholar and to honing my theoretical
argument about international relations and existential threats.
During the course of my PhD studies, I have also benefited from several scholarships and
opportunities offered by the Government of Canada, including the Social Sciences and Humanities
Research Council of Canada, Global Affairs Canada for the International Policy Ideas Challenge
and Cadieux-Léger Fellowship, and the Department of National Defence for the Mobilizing
Insights in Defence and Security scholarship. As the 2019-2020 Cadieux-Léger Fellow at Global
Affairs Canada, I had the opportunity to work alongside a great group of professionals and
practitioners in the Foreign Policy Research and Foresight Division, especially Anna Bretzlaff and
Madeline Johnson.
I would like to thank the Department of Political Science and International Relations at the
Universidad de Las Américas (UDLA) in Quito, Ecuador, for giving me the opportunity to teach
International Relations between 2012 and 2016. Special thanks to Ian Keil for being such a great
colleague and friend during my time at UDLA.
And, of course, I would like to thank my family and friends, including my parents, Randy and
Elizabeth, my sister Brittany, and family-in-law, Gustavo, Sofía, Hugo, Nadya, and Andres. Most
of all, I would like to express my deep gratitude and appreciation to my spouse, Araceli, who has
always been there for me and who has listened to many iterations of the theoretical argument of
this dissertation on countless dog-walks.
Table of Contents
Chapter 1: Introduction
Chapter 2: The Age of Existential Threats
Chapter 3: Theory of Macrosecuritization Failure
Chapter 4: The International Control over Atomic Energy
Chapter 5: The Biological Weapons Convention
Chapter 6: The AI Revolution
Chapter 7: Conclusion
List of Tables
Table 2.1: Non-Fiction Books on Existential Threats to Humanity
Table 2.2: Typology: The Origin and Scale of Threats
Table 2.3: Catastrophic & Existential Threats to Humanity
Table 2.4: The Meanings of “Humanity” and “Existential Threats”
Table 2.5: Existential Threats and Scenarios
Table 3.1: Frequency Count of Rhetorical Features
Table 3.2: Cases of Macrosecuritization
Table 3.3: The Spectrum of Discourse
Table 3.4: The Spectrum of Action
Table 3.5: Outcomes of Macrosecuritization
Table 5.1: National Material Capabilities, 1968–1972
Table 6.1: National AI Strategies
List of Figures
Figure 1.1: Historical Instances of Macrosecuritization and Failure
Figure 1.2: The Basic Framework
Figure 2.1: Natural and Anthropogenic Threats
Figure 3.1: Macrosecuritization: Scope and Comprehensiveness
Figure 3.2: The Vocabulary of Macrosecuritization
Figure 3.3: Nuclear Non-Proliferation
Figure 3.4: The Climate Emergency
Figure 3.5: A Case of Macrosecuritization
Figure 3.6: Macrosecuritization Outcomes: Discourse and Action
Figure 3.7: The Great Powers in International Relations
Figure 3.8: Conflicting Security Narratives and Great Power Consensus
Figure 4.1: “Emergence,” August 1939–May 1945
Figure 4.2: “Evolution,” June 1945–June 1946
Figure 4.3: “Demise,” June–December 1946
Figure 5.1: Actors and Audiences behind the Biological Weapons Convention
Figure 6.1: AGI/ASI and Existential AI Risk
Figure 6.2: The International Distribution of AI Capabilities
List of Appendices
Appendix 1: Nuclear War
Appendix 2: Climate Change
Appendix 3: Bioengineered Pathogens
Appendix 4: Artificial Intelligence
Appendix 5: Discourse Analysis of Macrosecuritization
Appendix 6: Analysis of Macrosecuritization Cases
Appendix 7: Summary of Theoretical Hypotheses and Empirical Findings
Appendix 8: Views of Key Individuals on the International Control over Atomic Energy
Appendix 9: Timeline on the International Control over Atomic Energy, 1939–1946
Appendix 10: Artificial General Intelligence, Superintelligence, and Existential AI Risks
Chapter 1
This peculiar failure of response, in which hundreds of millions of people
acknowledge the presence of an immediate, unremitting threat to their existence
and to the existence of the world they live in but do nothing about it—a failure in
which both self-interest and fellow-feeling seem to have died—has itself been such
a striking phenomenon that it has to be regarded as an extremely important part of
the nuclear predicament.
— Jonathan Schell, 1982
Introduction
At a time when the world continues to struggle against the COVID-19 pandemic, humanity may
face the specter of a growing number of catastrophic and existential threats to modern global
civilization and even human survival (see Leslie 1996; Bostrom 2002; 2013; Rees 2003; 2018;
Posner 2004; Bostrom and Cirkovic eds. 2008; Torres 2017; Garrick ed. 2017; Hawking 2018; Ord
2020; Deudney 2020). British astrophysicist and Director of the Centre for the Study of Existential
Risk, Martin Rees, put the odds at “no better than fifty-fifty that our present civilization on Earth
will survive until the end of the present century” (2003, 8). According to Toby Ord, Senior
Research Fellow at the Future of Humanity Institute, “the chance of an existential catastrophe
striking humanity in the next hundred years is about one in six” (2020, 169). While humanity has
always lived under certain “natural” existential risks—such as asteroid impacts, supervolcanic
eruptions, and the glacial-interglacial cycle (Bostrom and Cirkovic eds. 2008)—the present era is
distinct from the past in that much of the contemporary existential predicament has its origins in
human agency (Sears 2021).
Today, the spectrum of human-driven existential threats is broad and growing, including
persistent threats to international peace and security that could end in violent omnicide (e.g.,
nuclear war or bioterrorism) (Schell 1982; Koblentz 2003; 2010), looming dangers from the large-
scale destruction of Earth’s natural systems that could lead to an inhospitable planet (e.g., climate
change, pollution, and biodiversity loss) (Kolbert 2014; Ripple et al. 2019; Wallace-Wells 2019),
and prospective risks from the loss of control over increasingly powerful technologies (e.g.,
biotechnology, nanotechnology, and artificial intelligence) (Doudna and Sternberg 2018; Drexler
2013; Bostrom 2014). Whether humanity will survive this “age of existential threats” is perhaps
the most pressing question of our times (Sears 2021). Given that there is no entity in world politics
that can “speak” and “do security” on behalf of all humankind, security and survival in the twenty-first
century will depend largely on the decisions and actions of states—particularly the “great
powers”—to mobilize the will and capabilities necessary to navigate this growing spectrum of
existential threats. Unfortunately, the history of international relations shows that states frequently
fail to do what is possible to neutralize the existential threats to humanity. Understanding why
states succeed or fail to take decisive action for the security and survival of humankind is a crucial
task for theory and policy in international relations.
1.1 The Puzzle
Humanity’s growing power and prospects for self-destruction pose an interesting puzzle for
international relations: one might reasonably expect the existence of one or more existential threats
to human survival to catalyze decisive global action to reduce and eliminate them, but the
responses by states around the world have for the most part fallen far short of this expectation. On
the contrary, the continuing failure of states to take effective action to neutralize existential threats
suggests their general willingness to accept the shadow of total annihilation as the normal
condition of international relations. This “normalization” of existential threats may be even more
pronounced today than it was in the past. During the Cold War, the danger of nuclear annihilation
was a matter of fierce debates over theory and policy (Morgenthau 1956; 1964; Brodie 1959; Herz
1957; 1959; Wohlstetter 1958; Kahn 1960; 1962; Niebuhr 1963; Waltz 1981; 1990; Jervis 1989),
which today has largely receded into a background condition. While moments of international
crisis may remind the world of the dangers of nuclear war—such as the nuclear saber-rattling
between North Korea and the United States in 2017, or Russian threats amidst the War in Ukraine
in 2022—the fear of nuclear war quickly recedes to the background of international relations once
a crisis is behind us. This situation, in which states recognize the existential threats to humanity
but fail to take decisive action to neutralize them, must be regarded as one of the most striking
dilemmas of world politics.
This puzzle is deepened by various “schools” of International Relations theory. The first is
“Realism.” Arguably the main tenet of Realism is that international politics is a perennial struggle
for power and security (Morgenthau 1948; Waltz 1979; Gilpin 1981; Mearsheimer 2001).
Neorealist theory in particular postulates that states are the principal actors in international politics
and assumes that their primary interest is security and survival (Waltz 1959; 1979; 1988; Jervis
1978; Grieco 1988; Mearsheimer 1994/95; 2001). Kenneth Waltz claimed that “security is the
highest end” in international politics: “Only if survival is assured can states safely seek such other
goals as tranquility, profit, and power” (Waltz 1979, 126). Similarly, John Mearsheimer asserts that
“the most basic motive driving states is survival” (Mearsheimer 1994/95, 10). Since there can be
no states or nations without the continuation of humanity, an existential threat to humankind is, by
logical extension, an existential threat to national survival (Sears 2020a, 259–260). One would
therefore expect security from the existential threats to humanity to be in the survival interests of
states; for only if humanity survives can states seek such other goals as national power, peace, and
prosperity. As Winston Churchill (1925, 58) wrote, “Surely if a sense of self-preservation still
exists among men [sic.], if the will to live resides not merely in individuals or nations but in
humanity as a whole, the prevention of the supreme catastrophe ought to be the paramount object
of all endeavors.” The failure of states to eliminate existential threats to humankind poses an
empirical challenge—or exposes a theoretical inconsistency—to this core premise of Realism.
The second is “rationalism” in International Relations (Fearon 1995; Fearon and Wendt
2004). The basic premise of rationalism—which encompasses many neorealist (Grieco 1988;
Glaser 1994/95; 2010; Jervis 1999; Mearsheimer 2009) and neoliberal institutionalist approaches
(Axelrod 1984; Axelrod and Keohane 1984; Legro and Moravcsik 1999)—is that states are rational
actors in international relations, which make foreign policy decisions based on calculations to
optimize their interests or preferences (i.e., “expected utility”). Rationalism is particularly
influential within the study of war or armed conflict (Schelling 1960; Fearon 1994/95) and the
theoretical models on deterrence and compellence that have dominated strategic studies and
nuclear deterrence (Brodie 1959; Schelling 1966; Levy 2008). Similarly, many IR theories assume
that states are rational actors, but then seek to explain state behavior that deviates from the rational
baseline (Ripsman et al. 2016)—including individual-level approaches that emphasize
psychological (Jervis 1976), cognitive (Mercer 2010), or behavioralist factors (Lake 2010) (i.e.,
the “first image”); state-level approaches that emphasize bureaucracy (Allison 1971), or interest
groups in domestic politics (Snyder 1991) (i.e., the “second image”); and system-level approaches
that emphasize the structural effects of the international system (i.e., the “third image”) (Waltz
1979; Jervis 1978; 1997). Yet the failure of states to neutralize the existential threats to humanity
poses a more fundamental challenge to the notion of state rationality, whether understood as an
ontological or a methodological assumption in international relations. While states may reasonably
perceive strategies to reduce existential threats as being in tension with other goals and objectives
of foreign policy, there are good a priori reasons to believe that any rational calculation of the
relative costs and benefits of different courses of action should lead states to prioritize strategies
to prevent or mitigate existential threats, since the expected utility from reducing the probability
of the unlimited—or at least extremely grave—consequences of existential catastrophe should
outweigh the possible gains or costs associated with alternative strategies (see Parfit 1984;
Bostrom 2002; Matheny 2007; Weitzman 2009; Ord 2020). The failures of states to prioritize
strategies to eliminate existential threats to humanity would seem to represent particularly
noteworthy examples of state irrationality.
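The expected-utility reasoning above can be made explicit. As a stylized illustration (the notation here is introduced for exposition and does not appear in the source), let $\Delta p$ be the reduction in the probability of existential catastrophe achieved by a mitigation strategy, $V$ the value attached to humanity’s continued survival, and $c$ the strategy’s cost:

```latex
% Stylized expected-utility comparison (illustrative; notation not from the source).
% A rational state should prefer mitigation whenever its expected benefit
% exceeds its cost:
\[
  \underbrace{\Delta p \cdot V}_{\text{expected benefit of mitigation}}
  \;>\;
  \underbrace{c}_{\text{cost of mitigation}}
\]
```

On this logic, if the consequences of existential catastrophe are unlimited, or at least extremely grave, then $V$ dwarfs any feasible $c$, and the inequality holds even for small values of $\Delta p$. This is the a priori case for prioritizing mitigation, and it is why persistent failures to do so read as instances of state irrationality.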
The third is the “Copenhagen School” of International Relations, which sees extraordinary
action for survival as the essence of “security” (Wæver 1995; Buzan et al. 1998; Huysmans 1998;
McDonald 2008; Balzacq ed. 2011; Balzacq et al. 2015). In securitization theory, security entails
the discursive move—or “speech act”—of framing a particular issue as an existential threat to a
valued referent object and taking extraordinary measures beyond the bounds of normal politics for
the purposes of survival. In the words of Barry Buzan, Ole Wæver, and Jaap de Wilde (1998, 21):
Security is about survival. It is when an issue is presented as an existential threat to
a designated referent object… The special nature of security justifies the use of
extraordinary measures to handle them. The invocation of security has been the key
to legitimizing the use of force, but more generally it has opened the way for the
state to mobilize, or to take special powers, to handle existential threats.
Traditionally, by saying ‘security,’ a state representative declares an emergency
condition, thus claiming a right to use whatever means are necessary to block a
threatening development.
More recently, the Copenhagen School has begun to pay closer attention to processes of
“macrosecuritization” (Buzan and Wæver 2009), which involve “higher level” referent objects
such as humankind (Vuori 2010; Dalaqua 2013; Methmann and Rothe 2012; Thomas and Yuk-
ping 2020). In theory, the concept of macrosecuritization leads to the expectation that states should
take extraordinary action to eliminate existential threats to humanity. In practice, the historical
record of macrosecuritization tells a different story.
Since the end of the Second World War, there have been multiple historically bounded
instances in which a (macro)securitizing actor with a reasonable claim to speak with legitimacy on
a particular subject frames the issue as an existential threat to humanity (i.e., a “macrosecuritizing
move”). In a few instances, states have accepted that an issue constitutes an existential threat to
humanity and taken action beyond the normal practices of international politics to reduce or
eliminate the danger (i.e., “macrosecuritization”). More frequently, however, states have contested
that an issue poses an existential threat to humanity or have simply failed to take effective action
to neutralize the danger (i.e., “macrosecuritization failure”) (see Figure 1.1). Thus, the historical
record reveals an empirical pattern of the recurrent failure of macrosecuritization. This leads to the
research question behind this dissertation: Why do states fail to take extraordinary action for the
security and survival of humankind? In other words, why does the macrosecuritization of humanity
fail?
Figure 1.1: Historical Instances of Macrosecuritization and Failure
1.2 The Argument
The recurrent failure of the securitization of humanity has its roots in the structure of the
international system and the dynamics of great power rivalry in international relations. Why this
is so requires an appreciation for the distinct structural conditions behind the phenomenon of
macrosecuritization. Unlike the “lower-level” processes of securitization that occur entirely within
nations, are matters of state-society relations, and are subject to the hierarchic political authority
of states, macrosecuritizations are necessarily “higher-level” processes in international relations:
they occur between nations, are matters of inter-state and inter-society relations, and are subject to
the conditions of securitization under anarchy. In macrosecuritization, no one nation has the final
say, no single state can act alone, and no individual society can decide for the whole world that an
issue poses an existential threat to humanity and take emergency action for human survival.
In the absence of a world political authority that can “speak” and “do security” on behalf
of humankind, the success and failure of macrosecuritization hinges on the actors with the most
powerful voices and greatest capabilities in the international system: the great powers.
International anarchy privileges the great powers in macrosecuritization because of their unequal
political status and superior material capabilities in international relations. First, the great powers
enjoy the higher political status and prestige of their membership in the “great power club” (Paul
et al. 2014), which gives them a more powerful “voice” in macrosecuritization. Great power status
makes these states the key “audience” in macrosecuritization since their acceptance (or
contestation) of the legitimacy of a macrosecuritizing move is critical to shaping the overall
narrative of how an issue is framed and understood in international relations. Secondly, the great
powers possess a larger share of the distribution of material capabilities, which gives them superior
capacity to mobilize extraordinary resources to reduce or eliminate an existential threat (Sears
2021b). The great powers are the principal functional actors in macrosecuritization because the
possibility of taking decisive action to neutralize an existential threat depends heavily on the
political will and national capabilities of the great powers. In short, the great powers are the
essential actors in macrosecuritization because their unequal political status gives them greater
influence over the legitimacy of a macrosecuritizing move, while their superior national
capabilities make them indispensable for effective action to neutralize an existential threat.
Yet the prospects of macrosecuritization are not determined by one but by all of the great
powers in the international system. When the great powers have been able to reach a consensus
that an issue constitutes an existential threat to humanity and on the necessity of extraordinary
action for security and survival—such as on nuclear proliferation (1961–1967), biological
weapons (1968–1972), or nuclear winter (1982–1991)—then states have been able to collectively
mobilize the political will and national capabilities to reduce or neutralize an existential threat to
humanity, at least to some degree (i.e., macrosecuritization). But when the great powers have been
unable to achieve consensus on the nature of an existential threat or on the necessity of
extraordinary measures—including the international control over atomic energy (1942–1946),
climate change (1979–1991; 2017–present), nuclear prohibition (2007–2017), artificial
intelligence (2014–present), or biodiversity loss (2018–present)—then states have failed to take
actions that would reduce or neutralize an existential threat to humanity (i.e., macrosecuritization
failure). Indeed, the historical record shows that resistance by even a single great power can
prevent macrosecuritization in international relations. Great power consensus is therefore essential
to macrosecuritization.
What, then, shapes the possibilities for great power consensus on macrosecuritization? The
argument here is that macrosecuritization fails because of conflicting securitization narratives that
lead the great powers to prioritize “national securitization” over “humanity securitization.”
Humanity securitization narratives frame an issue as an existential threat to humanity and call on
states to take cooperative action for the survival of humankind. In contrast, national securitization
narratives frame an issue as a threat to the nation and call on the state to take unilateral action for
national security. Humanity securitization and national securitization narratives are always in
tension with one another to some degree, since they are based on conflicting premises about the
referent object of security (Sears 2020a): humanity securitization prioritizes the survival of
humankind, while national securitization puts the nation first. This disjuncture in the ends of
security leads to divergences in means, since humanity securitization generally requires states to
sacrifice some degree of their national interests and/or acquiesce to new international powers that
erode national sovereignty, while national securitization often leads states to take measures that
can exacerbate—or at least fail to mitigate—the existential threats to humanity, such as by
developing or retaining large nuclear arsenals. Since international anarchy means there is no world
political authority that can speak and act on behalf of the security of humankind, or that can impose
its will on the great powers, macrosecuritization depends on consensus amongst the great powers,
who are predisposed to think and act on the basis of more parochial understandings of their national
interests. In general, the structure of international politics and the conflict between securitization
narratives explains the recurrent failure of macrosecuritization—or, put differently, the triumph of
national securitization over humanity securitization.
Yet the history of international relations shows that macrosecuritization is not doomed to
failure. Under the right conditions, the great powers may accept that an issue constitutes an
existential threat to humanity and agree to take extraordinary measures for security. The theoretical
framework developed here explains the success or failure of great power consensus on
macrosecuritization in terms of three variables at different levels of analysis, which shape the
relative influence of the competing narratives of humanity securitization and national
securitization over the great powers.
(1) The international system: the structure and stability of the distribution of power between the great powers;
(2) The state: the power and interests of securitizing actors vis-à-vis state audiences within the domestic security constellations of the great powers;
(3) The individual: the ideas and beliefs of the political leaders and policymakers of the great powers.
When these conditions favor a narrative of humanity securitization, then great power consensus is
possible and macrosecuritization should occur; but when they favor a narrative of national
securitization, then great power politics should lead to macrosecuritization failure (see Figure 1.2).
The first system-level variable concerns the structural forces of the distribution of power
in international relations. The theoretical logic here is that the structure and stability of the
distribution of power shapes the intensity of great power rivalries, which in turn influences the
prospects for great power consensus on macrosecuritization. The existence of a stable distribution
of power between the great powers reduces the intensity of great power rivalries, which can make
the great powers more amenable to humanity securitization narratives and strengthen the
opportunity for great power consensus. In contrast, an unstable distribution of power increases the
intensity of great power rivalries, leaving the great powers more susceptible to narratives of
national securitization and weakening the prospects for great power consensus on
macrosecuritization. Two principal factors affect the stability of the distribution of power: (i) the
number of great powers in the international system, with bipolar systems being relatively stable
by comparison to unipolar and multipolar systems; and (ii) the differential growth in the nature
and/or distribution of capabilities, which can be destabilizing when it fuels a structural transition
in the number of great powers. In essence, the system-level variable highlights the importance of
the distribution of power in international relations, which shapes the intensity of great power
rivalries and how the great powers respond to the narratives of humanity securitization and national
securitization. When there is an unstable distribution of power—especially during periods of
structural transition in the international system—then this should weaken the prospects for great
power consensus and increase the likelihood of macrosecuritization failure.
The second state-level variable involves the internal dynamics of security constellations
within the great powers. The theoretical logic here is that the adoption of a particular securitization
narrative by powerful domestic securitizing actors influences the state as an audience. When
powerful domestic actors/coalitions take on a narrative of humanity securitization then this should
amplify the influence of this narrative over a state audience, which can lay the domestic political
foundations for great power consensus on macrosecuritization. However, when powerful domestic
actors/coalitions take on a narrative of national securitization then this should weaken the influence
of humanity securitization over a state audience, making domestic political conditions antithetical
to great power consensus on macrosecuritization. There are two main factors that shape the power
and interests of securitizing actors: (i) their identity and/or position in domestic politics; and (ii)
their authority or legitimacy with respect to an issue area. This state-level variable calls attention
to the internal dynamics and relationships between securitizing actors and audiences within the
great powers, which shape who has the power and authority to “speak” and “do security” in
domestic politics and whether they will favor a narrative of humanity securitization or national
securitization. When powerful securitizing actors within the state—especially within the foreign
policy and/or national security bureaucracy—adopt a narrative of national securitization, then this
should undermine the possibilities for great power consensus and make macrosecuritization failure
more likely.
The third individual-level variable concerns the ideational factors behind the threat-related
beliefs and perceptions of the political leaders of the great powers. The theoretical logic here is
that the beliefs and perceptions of political leaders shape their receptiveness to the claims in
securitization narratives about the nature of the threat and measures for security, which in turn
influences their decision-making and the prospects for great power consensus on
macrosecuritization. While the ways in which individual political leaders understand and perceive
threats may be influenced by idiosyncratic factors (e.g., their ideological or moral convictions, or
skepticism towards science), there are two primary factors that shape their receptiveness to the
competing claims of securitization narratives: (i) their beliefs and perceptions about the
intentions/capabilities of other great and major powers; and (ii) their beliefs and perceptions about the
clarity/credibility of an existential threat to humanity. When political leaders find themselves more
susceptible to a narrative of national securitization—especially when they are suspicious of
another great power, or skeptical of an existential threat—then this should weaken the prospects
for great power consensus and increase the likelihood of macrosecuritization failure.
Figure 1.2: The Basic Framework
Note: This is a simplified version of the theoretical framework. For a detailed version, see Figure 3.7.
To summarize the argument, the structure of the international system means that
macrosecuritization is necessarily a process of “securitization under anarchy.” In the absence of a
world political authority to speak and do security on behalf of humankind, the great powers are
the principal actors with the power and capabilities to shape the fate of macrosecuritization. Great
power consensus is an essential condition for macrosecuritization, since resistance by even a single
great power has historically been enough for macrosecuritization to end in failure. The prospects
for great power consensus depend on the relative influence of the conflicting securitization
narratives of humanity securitization and national securitization over the great powers. Whether
the great powers are more receptive to a narrative of humanity securitization or national
securitization depends on the interplay between (1) the distribution of power in the international
system, (2) the power and interests of securitizing actors within the state, and (3) the beliefs and
perceptions of political leaders towards threats. When these conditions favor a narrative of
humanity securitization, then the great powers can reach consensus to mobilize their political will
and national capabilities to reduce or neutralize an existential threat; but when these conditions
favor a narrative of national securitization, then great power politics can prevent consensus from
emerging and lead to macrosecuritization failure, threatening the very survival of humankind.
1.3 Research Design, Epistemology, and Methodology
This dissertation aims to answer a causal question: why does the macrosecuritization of humanity
repeatedly fail? Parts of the dissertation also ask constitutive questions (Wendt 1998), such as what
is an “existential threat” or what constitutes “macrosecuritization”? To address these questions, I
develop a “mid-range” explanatory theory of macrosecuritization (failure) (Dunne et al. 2013;
Lake 2013). Yet the argument has broader implications that contribute to “grand theory” in
International Relations (Mearsheimer and Walt 2013), especially realism and securitization theory,
by expanding the empirical research agenda to new domains and producing “novel facts” for these
paradigms (Elman and Elman 2002). Specifically, it brings IR theory into conversation with the
growing interdisciplinary research agenda on “catastrophic” and “existential risk,” which has
received relatively little scholarly attention in contemporary International Relations (Harrington
2016; Mitchell 2017; Grove 2019; Deudney 2020; Pelopidas 2020; Sears 2020a; 2021a; Mitchell
and Chaudhury 2020; Kreienkamp and Pegram 2021).
The research design is primarily informed by the logic of causal inference through
comparative case study analysis (King et al. 1994), although it also relies on some interpretative
methods to identify what constitutes relevant cases. The research identifies a universe of 10 cases
of macrosecuritization—including cases of success and failure—and examines the relationships
between multiple independent variables and the dependent variable of macrosecuritization
outcomes. The empirical analysis of the patterns of success and failure across multiple cases of
macrosecuritization offers a first cut at what types of hypotheses and variables can be discarded
(or “falsified”) (Popper 1952), and which ones provide a promising avenue for an explanatory
theory of macrosecuritization failure. In doing so, it follows the advice of Carl Sagan: “In science
we may start with experimental results, data, observations, measurements, ‘facts.’ We invent, if we
can, a rich array of possible explanations and systematically confront each explanation with the
facts” (Quoted in Jackson 2011, 1).
Following this medium-N analysis, the research analyzes three cases in detail to uncover
the dynamics that explain the success and failure of macrosecuritization in international relations.
The case studies employ the qualitative research methods of discourse analysis (Milliken 1999)
and historical process-tracing (Bennett and Checkel eds. 2014). The discourse analysis serves to
show how securitizing actors frame a particular issue as an existential threat to humanity and
determine to what extent this discourse is accepted by a relevant audience. Historical process
tracing serves to illuminate the identity and relationships between the relevant actors, the critical
junctures for the boundaries and evolution of a case, and the context and factors that shape the
success and failure of macrosecuritization. Together, the medium-N analysis and the more detailed
comparative qualitative analyses of a subset of cases provide significant analytical leverage on the
question of why the macrosecuritization of humanity fails: they allow us to discard many hypotheses
and variables that are inconsistent with the empirical record and to zero in on the logic and
dynamics at play in specific cases. This empirical research provides a solid foundation for a mid-
range theory of macrosecuritization (failure)—that is, it tells us something about who can “speak”
and “do security” on behalf of humanity, on what existential threats, under what conditions, and
with what effects (Buzan et al. 1998).
The three case studies examined in this dissertation are the failure of the international
control over atomic energy to create an “Atomic Development Authority” and prevent the nuclear
arms race (1942–1946), the success of the Biological Weapons Convention in establishing an
international prohibition against biological weapons (1968–1972), and the failure of artificial
intelligence to receive widespread acceptance by states as an existential threat to humanity (2014–
present). I selected these case studies from the universe of cases based on several considerations.
First, the three case studies offer variation on the dependent variables (i.e., audience acceptance
and security action), which responds to concerns about the “selection bias” within the
securitization literature of studying only cases of successful securitizations (Geddes 1990; Ruzicka
2019). Second, the selection of the international control over atomic energy is based on the logic
of “paradigmatic cases” and “critical/crucial cases” (Flyvbjerg 2006; Levy 2008). It is a
paradigmatic case because it offers a useful metaphor or exemplar as the first historical case of
macrosecuritization (failure) (Flyvbjerg 2006). It is a crucial case (“most likely”) because it
contradicts many plausible hypotheses about macrosecuritization—that is, for many reasons the
international control over atomic energy should have succeeded but actually failed. This
maximizes the explanatory leverage gained from a single case study.
Third, the case studies on biological weapons and artificial intelligence are based on John
Stuart Mill’s logic of most similar/dissimilar cases. They are “most similar” cases because all three
raise the prospect of a new dual-use technology that could threaten both national security and
human survival. They are “most dissimilar” cases because they have different outcomes: the
biological weapons case offers an example of successful macrosecuritization, while the artificial
intelligence case is an example of macrosecuritization failure where states are unconvinced by the
securitization narrative of AI as an existential threat to humanity. Fourth, the case studies allow for
variation on issue area and historical context: they involve different issues and occur at different
moments in history. Finally, these case studies are interesting and under-appreciated in
International Relations: the international control over atomic energy is a fascinating historical case,
but not one that is well known in contemporary International Relations; the Biological Weapons
Convention is an important success of disarmament diplomacy, but far less studied than other
examples like the Nuclear Nonproliferation Treaty. Artificial intelligence is a “new” issue for
international relations that has yet to garner significant attention from IR scholars. By comparison,
the prohibition of nuclear weapons (Vuori 2010; Dalaqua 2013), climate change (Trombetta 2008;
Methmann and Rothe 2012; Von Lucke et al. 2014; Allan 2017; Paglia 2018), and public health
and disease (McInnes and Rushton 2011; Thomas and Yuk-ping 2018) have already received
substantial attention within the securitization literature, which reduces the opportunities for
making a novel contribution to securitization theory.
Admittedly, there are also good reasons to study other cases of macrosecuritization. For
instance, if differentiation of cases by “sector” or issue area were the most important consideration,
then one of the environmental cases—such as “global warming” (1979–1992), the “climate
emergency” (2017–present), or “biodiversity loss” (2018–present)—would make a good fit.
However, as the analysis of the universe of cases will show, the sector of an issue is not a
significant determinant of macrosecuritization outcomes. An emphasis on cases of “successful”
macrosecuritization would favour the cases of “nuclear proliferation” (1961–1967) or “nuclear
winter” (1982–1991). If the aim were to unpack an anomaly—or “deviant case” (Flyvbjerg 2006;
Levy 2008)—for securitization theory, then the “ozone hole” (1985–1987) would be a good
choice, since the outcome of extraordinary action despite weak acceptance of the issue as an
existential threat contradicts the expectation of securitization theory. One final case of successful
macrosecuritization poses a potential challenge to the theory developed here: nuclear winter. In
this case, the two Cold War rivals successfully negotiated the Strategic Arms Reduction Treaties
(START I and II), making dramatic reductions to their nuclear arsenals despite the relative decline
of the Soviet Union and the importance of nuclear weapons to their national security interests.
Nevertheless, since these nuclear reductions neither eliminated the existential threat of nuclear
war, nor exposed the United States and the Soviet Union to the danger of a “first-strike” advantage,
the case does not appear to falsify my theory, which would require a narrative of humanity
securitization to clearly triumph over a narrative of national securitization, despite the existence of
conditions that my theory suggests should generally favour national securitization. Ultimately, the
decisions about case selection were based on an effort to balance many different considerations,
including the choice not to study two cases on nuclear weapons. That there are good reasons to
study other cases of macrosecuritization points to promising directions for future research.
1.4 Plan for the Dissertation
The structure of this dissertation is as follows. Chapter 2 offers a partly conceptual, partly empirical
discussion of the existential threats to humanity. This chapter is intended primarily for readers who
are unfamiliar with the growing literature on “existential risk,” and makes an original contribution
to this literature for those who are familiar with it. It begins with a brief intellectual history of the
rise and evolution of scholarly concern about humanity’s survival, from the middle of the twentieth
century to the first decades of the twenty-first, when that concern consolidated into an established
interdisciplinary research agenda on existential risk. It develops an original conceptual framework
to distinguish and define “existential threats” on the basis of two criteria: the origins and scale of
a threat. An existential threat refers to any danger that has its origins in human agency and can
threaten, at the minimum, the destruction of modern global civilization, and, at the maximum, the
elimination of human beings. The chapter then examines the growing spectrum of existential
threats in the twenty-first century, which includes persistent threats to international peace and
security (e.g., nuclear war and bioterrorism), looming dangers from the large-scale degradation of
Earth’s natural environment (e.g., climate change, biodiversity loss, and pollution), and
prospective risks from increasingly powerful emerging technologies (e.g., biotechnology,
nanotechnology, and artificial intelligence).
Chapter 3 is the theory chapter. It develops the empirical puzzle and theoretical argument
behind the dissertation. Based on securitization theory and the concept of macrosecuritization, it
develops a constitutive theory of a particular type of macrosecuritization, which follows a
generalizable rhetorical structure: X poses an existential threat to humanity and therefore Y is
essential for survival! It goes on to define an instance or case of macrosecuritization as a
historically-bounded process that begins with a “macrosecuritization move,” when a securitizing
actor with a reasonable claim to speak with legitimacy on a particular subject frames the issue as
an existential threat to humanity, and ends once the interactions between the relevant securitizing
actors, functional actors, and audiences coalesce in a “security constellation,” which structures and
organizes the discourse and actions of states towards an issue in international relations. The chapter
identifies a universe of 10 historical cases of macrosecuritization and establishes the empirical
pattern of the recurrent failure of macrosecuritization. It develops the theoretical logic and
empirically tests multiple plausible hypotheses for macrosecuritization (failure), discarding those
which fail to account for the broader empirical patterns of macrosecuritization and showing those
which are most promising. This empirical analysis shows that only one variable reflects a
relationship of law-like consistency with the success and failure of macrosecuritization: great
power consensus. When the great powers have accepted that an issue constitutes an existential
threat to humanity and agreed to take extraordinary measures for survival, then macrosecuritization
has succeeded. But when one or more of the great powers has contested this understanding or
rejected the need for extraordinary action, macrosecuritization has failed. Thus, the chapter
develops a theory of macrosecuritization (failure) that emphasizes the dynamics of great power
politics. Specifically, it aims to demonstrate why in some circumstances the great powers can come
to a consensus on macrosecuritization, while in others consensus is unreachable and
macrosecuritization fails.
Chapters 4–6 examine three case studies of macrosecuritization in international relations.
Chapter 4 explores the macrosecuritization failure of the international control over atomic energy
in the aftermath of the Second World War. This case is particularly puzzling, for despite strong
audience acceptance of the macrosecuritization discourse on the atomic bomb and American
leadership in seeking a system of international control (an “Atomic Development Authority”), the
diplomatic process in the United Nations Atomic Energy Commission broke down, leading to the
nuclear arms race between the United States and the Soviet Union. Chapter 5 analyzes the
successful macrosecuritization of biological weapons during the period between 1968 and 1972.
Despite the Cold War rivalry between the United States and the Soviet Union, the great powers
accepted the macrosecuritization discourse framing biological weapons as a particularly dangerous
and indiscriminate threat to humanity and agreed to a Biological Weapons Convention, an
unprecedented international treaty to prohibit and eliminate an entire category of “weapons of mass
destruction.” Chapter 6 examines the contemporary case of the macrosecuritization failure of
artificial intelligence from 2014 to the present. Despite growing concern amongst AI experts that
artificial intelligence—specifically, “artificial general intelligence” or “superintelligence”—could
pose an existential threat to humanity, dozens of states have produced “national AI strategies” in
recent years that either downplay or ignore such concerns. In each of these case studies, this
dissertation shows how the dynamics of great power politics and conflicting securitization
narratives have been crucial in shaping the possibilities of great power consensus and the success
and failure of macrosecuritization.
The conclusion considers the implications of the argument for international politics in the
age of existential threats. It contends that humanity faces an extremely dangerous moment of
history: the resurgence of great power rivalry under a growing specter of existential threats. The
escalating great power rivalry between the United States and China—driven by the decline of
American hegemony and the rise of China—could jeopardize the possibilities of achieving great
power consensus to neutralize a widening spectrum of existential threats, including nuclear war,
bioengineered pathogens, climate change, biodiversity loss, and artificial intelligence. During the
Cold War, the great power rivalry between the United States and the Soviet Union meant that
humanity lived for 45 years under a “nuclear sword of Damocles,” whose thread was nearly cut on
several occasions by accident, miscalculation, or madness. In the twenty-first century, the resurgence of
great power rivalry between the United States and China could pose an even greater threat to
human survival, not only because a great power conflict could escalate to nuclear war, but also
because great power rivalry could exacerbate the risks from competition over emerging
technologies, like artificial intelligence, or undermine great power cooperation on global
challenges, like climate change (Sears 2020d). In an era characterized by humanity’s growing
power and prospects for self-destruction, the return of great power politics could end in tragedy
for humankind.
Chapter 2
The Age of Existential Threats

I think the odds are no better than fifty-fifty that our present civilization on Earth
will survive to the end of the present century.
— Martin Rees, 2002

Humankind now faces the triple crisis of nuclear war, climate change and
technological disruption. Unless humans realise their common predicament and
make common cause, they are unlikely to survive this crisis.
— Yuval Noah Harari, 2018

Our power has grown so great that for the first time in humanity’s long history we
have the capacity to destroy ourselves—severing our entire future and everything
we could become.
— Toby Ord, 2020
The search for a theoretical explanation for the recurrent pattern of the macrosecuritization failure
of existential threats to humanity first requires an answer to the constitutive question: “what is an
existential threat to humanity?” This chapter aims to develop the concept of existential threats to
humanity as a class of phenomena and demonstrate that the emergence and proliferation of
existential threats in the twentieth and twenty-first centuries constitutes a revolutionary era of
history—an age of existential threats—defined by humanity’s growing power and prospects for
self-destruction (Sears 2021a). The chapter begins with a conceptual analysis of existential threats.
It makes a distinction between “catastrophic” and “existential” harm, which hinges on whether the
scale of potential destruction constitutes a theoretically plausible threat of the collapse of modern
global civilization or the elimination of human beings. It also distinguishes between “natural” and
“anthropogenic” dangers, which turns on whether the cause of a threat has its origins in human
agency. It proposes the definition of an “existential threat” as any danger that has its origins in
human agency which could bring about, at the minimum, the collapse of modern global
civilization, and, at the maximum, the elimination of human beings or their capacity to shape their
future on Earth.
The chapter then examines the growing spectrum of anthropogenic existential threats to
humankind, including persistent threats to international peace and security that could lead to
violent omnicide (e.g., nuclear war or bioterrorism); looming dangers from humanity’s large-scale
destruction of Earth’s natural environment that could bring about an inhospitable planet (e.g.,
climate change, pollution, and biodiversity loss); and prospective risks from a range of emerging
capabilities that could end in humanity’s loss-of-control over a singularly powerful technology
(e.g., biotechnology, nanotechnology, and artificial intelligence). It briefly examines five of the
leading existential threats to humanity in the twenty-first century—nuclear war, climate change,
biodiversity loss, bioengineered pathogens, and artificial intelligence—and concludes that
humanity’s growing capacity for self-destruction constitutes a revolutionary transformation of the
human condition, in which any one of an increasing number of existential threats could threaten
the survival of humankind.
2.1 “The End of the World”: Old Fears, New Thinking
The search for meaning in life and death is perhaps inherent to the human capacity for self-
reflection (Frankl 1946). This capacity for existential reflection is not merely a subjective quality
of individual human beings but also an intersubjective quality of human societies; for much of
religion, philosophy, and science is essentially a process of collective reflection about the nature
of existence. Within these distinct “ways of knowing,” questions about the end of human existence
loom large in our collective imaginations, from “armageddon” or the “apocalypse” in religious
eschatology to modern secular concerns about “existential risk” (Torres 2017; Vox 2017). While
fear and anxiety about “the end of the world” are not unique to modern times (Moynihan 2020), there
is growing concern about humanity’s prospects for survival in the twenty-first century and beyond
in the face of a broad and growing spectrum of dangers, such as nuclear war, climate change,
and artificial intelligence.
This concern has fueled the rise of popular anxiety about humanity’s existential perils.
There is now a well-established “dystopia” genre in popular culture, with science fiction movies
and television series that depict totalitarian or post-apocalyptic futures (e.g., Mad Max, Blade
Runner, The Hunger Games, The Walking Dead, Black Mirror, The Handmaid’s Tale, or Don’t
Look Up). This genre may derive creative inspiration from literary works of the past—like H. G.
Wells’s The Time Machine or The World Set Free, Aldous Huxley’s Brave New World, and
George Orwell’s 1984—but tends to reflect contemporary concerns about technological or
environmental catastrophes. While one interpretation of this burgeoning industry of dystopian
fiction is that it is mere storytelling to entertain a human population that has never lived better
(Pinker 2018), it also reflects deeper societal anxieties about the future of humanity. According to
the American Psychological Association (2017, 27), “climate anxiety” is now a widespread mental
health issue.
Existential anxiety about the survival of humanity is not only a popular phenomenon, but
also an elite phenomenon amongst activists, politicians, and scientists. Some of the leading
scholars of the twentieth and twenty-first centuries expressed their fears about humanity’s
prospects for survival, including Albert Einstein, Niels Bohr, Robert Oppenheimer, Enrico Fermi,
Sigmund Freud, Margaret Mead, Carl Sagan, Edward O. Wilson, and Stephen Hawking.
Transnational civil society movements—such as the Union of Concerned Scientists, the International
Campaign to Abolish Nuclear Weapons, and Extinction Rebellion—frequently employ existential language
and images in their activism (Vuori 2010). Although still a minority, some politicians and
governments invoke both negative concepts, like “nuclear annihilation” and “climate emergency,”
and positive concepts, like a “world without nuclear weapons” or “green economy.” Even
scientists are adopting the practice of signing and publishing open letters to “warn” the public
about existential threats, such as the over 15,000 scientists who signed the “World Scientists’
Warning to Humanity” on climate change (Ripple et al. 2017; 2020), the call for a moratorium on
“heritable gene editing” by leading biological scientists (Lander et al. 2019), and the “open letter”
by AI experts on the risks of artificial intelligence (Future of Life Institute 2019).
Furthermore, the publication of non-fiction books written by scholars and experts about
existential threats to humanity has ballooned in recent years (see Table 2.1). Until recently, the
study of existential threats proceeded primarily along disciplinary lines. The first serious secular
study of an existential threat was on nuclear war. International Relations—particularly its subfield
of International Security or Security Studies (see Walt 1991)—developed as an academic field
during the Cold War, when many of its leading scholars grappled with the significance of nuclear
weapons for international politics, and vice-versa, including the first (Morgenthau 1956; 1961;
1964; Herz 1957; 1959; Bull 1961; Niebuhr 1963), second (Jervis 1989; Mueller 1989; Waltz
1990), and third generation scholarship on the “nuclear revolution” (Craig 2003; Deudney 2007;
Tannenwald 2008; Lieber and Press 2020). In the late 1950s and early 1960s—in parallel to this
first-generation scholarship on the nuclear revolution—the hybrid academic and policy-oriented
literature on Strategic Studies developed on the problem of “nuclear deterrence,” spearheaded by
the RAND Corporation (Brodie 1959; Wohlstetter 1959; Kahn 1960; Schelling 1960). In the
1980s, a group of natural scientists—including biologists, climatologists, and geologists—studied
the potential environmental consequences of nuclear war and concluded that a large-scale nuclear
war could produce a dramatic drop in solar radiation, precipitation, and global average
temperatures, or “nuclear winter” (Schell 1982; Sagan 1983; Ehrlich et al. 1984; Toon et al. 2008;
Robock and Toon 2012). Finally, a literature on “nuclear safety” emphasized how the increasing
complexity of the human and technological systems behind nuclear “command-and-control”
created significant risks of human error and/or technical failure that could lead to nuclear war
by accident or miscalculation (Blair 1985; Sagan 1995; Schlosser 2013).
Table 2.1: Non-Fiction Books on Existential Threats to Humanity
Bernard Brodie (1946), The Absolute Weapon
Dexter Masters & Katherine Way (eds.) (1946), One World, Or None
Herman Kahn (1960), On Thermonuclear War
Rachel Carson (1962), Silent Spring
Paul Ehrlich & Anne Ehrlich (1968), The Population Bomb
Donella Meadows (1972), The Limits to Growth
Isaac Asimov (1979), A Choice of Catastrophes
Jonathan Schell (1982), The Fate of the Earth
Paul Ehrlich and Carl Sagan (1984), The Cold and the Dark
Eric Drexler (1986), Engines of Creation
J. Tainter (1987), The Collapse of Complex Societies
World Commission on Environment and Development (1987), Our Common Future
Scott Sagan (1995), The Limits of Safety
John Leslie (1996), The End of the World
Ray Kurzweil (1999), The Age of Spiritual Machines
Martin Rees (2003), Our Final Century
Francis Fukuyama (2003), Our Posthuman Future
Richard Posner (2004), Catastrophe
Richard Falk (2004), This Endangered Planet
Peter Ward & Donald Brownlee (2004), The Life and Death of Planet Earth
Jared Diamond (2005), Collapse
Ray Kurzweil (2006), The Singularity Is Near
Tilman Ruff (ed.) (2007), Securing Our Survival (SOS)
Nick Bostrom & Milan Cirkovic (eds.) (2008), Global Catastrophic Risk
Peter Ward (2008), Under a Green Sky
Emily Shuckburgh (2008), Survival
Fred Guterl (2012), The Fate of the Species
Nick Bostrom (2013), Superintelligence
James Barrat (2013), Our Final Invention
Craig Childs (2013), Apocalyptic Planet
Elizabeth Kolbert (2014), The Sixth Extinction
Yuval Noah Harari (2015), Homo Deus
Murray Shanahan (2015), The Technological Singularity
Lewis Dartnell (2015), How to Rebuild Civilization in the Aftermath of a Cataclysm
Roy Scranton (2015), Learning to Die in the Anthropocene
Gerd Leonhard (2016), Technology vs. Humanity
Phil Torres (2017), Morality, Foresight & Human Flourishing
Peter Brannen (2017), The Ends of the World
Lisa Vox (2017), Existential Threats
Max Tegmark (2017), Life 3.0
Daniel Ellsberg (2018), The Doomsday Machine
Martin Rees (2018), On the Future
Steven Pinker (2018), Enlightenment Now
David Wallace-Wells (2019), The Uninhabitable Earth
Nathaniel Rich (2019), Losing Earth
Stuart Russell (2019), Human Compatible
Toby Ord (2020), The Precipice
Thomas Moynihan (2020), X-Risk
Daniel Deudney (2020), Dark Skies
Note: This is an ongoing and selective list of non-fiction books on the subject of existential threats compiled
by the author. It does not include fiction books or non-book-length manuscripts.
Nuclear war is not the only danger to receive serious attention as an existential threat. A
wide range of dangers have been cast in existential terms, including demographic and resource
pressures (Ehrlich and Ehrlich 1968; Meadows et al. 1972), climate change (Childs 2013; Scranton
2015; Wallace-Wells 2019), biodiversity loss (Ward 2008; Kolbert 2014; Brannen 2017),
biological weapons and bioengineered pathogens (Carlson 2003; Millett and Snyder-Beattie
2017), gene-editing (Doudna and Sternberg 2017), nanotechnology (Drexler 1986; 2006), and
artificial intelligence (Bostrom 2014; Shanahan 2015; Tegmark 2018; Russell 2019). While these
studies share a common concern for humanity’s survival, they do not “speak” to one another, nor
do they produce generalizable conclusions about humanity’s existential perils. Instead, they
represent warnings by experts in particular domains about specific threats to humankind.
This siloed approach to the study of existential threats has changed with the emergence of
an interdisciplinary research agenda on “catastrophic and existential risk” (CAER) (see especially
Leslie 1996; Bostrom 2002, 2013; Rees 2003, 2018; Posner 2004; Bostrom and Cirkovic eds.
2008; Garrick ed. 2017; Torres 2017; Avin et al. 2018; Baum et al. 2019; Ord 2020). The first book
to examine the broad spectrum of existential threats to human survival was Canadian philosopher John Leslie's (1996) The End of the World, which combined a philosophical exposition of the probabilistic "Doomsday Argument"—which suggests that humanity may be nearing its final generation—with a science-informed survey of various existential risks.
The Doomsday Argument, first proposed by the cosmologist Brandon Carter, begins from the "anthropic" assumption that there is nothing special about our temporal location in human history, and reasons from the rapid expansion of the human population that one should expect humanity to be nearing its final generation ("doom soon") rather than embarked on a longer trajectory ("doom later"), since the latter would place us improbably amongst the earliest human beings ever to exist (Leslie 1996). Leslie then looks for "empirical" evidence that humanity may be approaching the "end of the world" through a survey of various existential risks, including "war, pollution, and disease."
In 2002, Nick Bostrom, an Oxford philosopher and Director of the Future of Humanity
Institute who was very much impressed by the Carter/Leslie Doomsday Argument (Bostrom
2001), defined an “existential risk” as “one that threatens the premature extinction of Earth-
originating intelligent life or the permanent and drastic destruction of its potential for desirable
future development” (also Bostrom 2013). Bostrom proposed a typology with four categories of
existential risks: “bangs,” or “sudden disasters” leading to extinction; “crunches,” where human
life continues but its prospects for achieving “post humanity” are “permanently thwarted”;
“shrieks,” where some form of “posthumanity” is reached but is only an “extremely narrow band
of what is possible and desirable”; and “whimpers,” whereby a “posthuman civilization” emerges
but “evolves in a direction that leads gradually but irrevocably to… the complete disappearance of
the things we value" (Bostrom 2002, 5). While Bostrom fails to explicate what he means by "Earth-originating intelligent life," "permanent and drastic destruction," or "potential for desirable future development," his work has been foundational and widely cited in the literature on existential risk (Bostrom 2002; 2013).
The study of existential risk now constitutes a recognizable interdisciplinary research
agenda (Beard and Torres 2020), with core works including Martin Rees’s (2003) Our Final Hour,
Richard Posner’s (2004) Catastrophe, Nick Bostrom and Milan Cirkovic’s (2008) Global
Catastrophic Risk, Phil Torres’ (2017) Morality, Foresight & Human Flourishing, and Toby Ord’s
(2020) The Precipice. There are also several research institutes that have sprung up to produce
scientific research—and policy advocacy—on existential risk, such as the Future of Humanity
Institute at Oxford University (FHI), the Centre for the Study of Existential Risk at Cambridge
University (CSER), the Stanford Existential Risks Initiative (SERI), the Future of Life Institute
(FLI), the Global Catastrophic Risk Institute (GCRI), the Berkeley Existential Risk Initiative
(BERI), the Global Priorities Project, and the Global Challenges Foundation. What they share is
an interest in the full spectrum of catastrophic and existential risks to humanity. While the CAER
research agenda is relatively new (see Beard and Torres 2020), it has made important contributions
to knowledge and understanding of catastrophic and existential risks. Arguably the main
contributions of this research agenda are the efforts to identify plausible threats that could be
catastrophic or existential (i.e., “horizon scanning”), and to elevate awareness and concern—
amongst policymakers and the general public—about threats to the survival of humanity.
Nevertheless, the research agenda has important limitations. First, the main contributors to this research agenda have come predominantly from the natural sciences, technology, and future studies. As a result, research from the social sciences—such as psychology, sociology, economics, and political science—on the social processes behind existential threats is comparatively underdeveloped. Second, the extant research tends to emphasize technological risks (e.g.,
nanotechnology, artificial intelligence, or physics accidents) and even “natural” risks (e.g.,
asteroids, comets, and supervolcanoes) over military threats (e.g., nuclear and biological warfare)
and environmental dangers (e.g., climate change and biodiversity loss). It also appears biased
towards weird or “sexy” risks, like “grey goo” (Drexler 1986; 2006) or “superintelligence”
(Bostrom 2014), rather than “unsexy risks” (Kuhlemann 2018) or “boring apocalypses” (Liu et al.
2018), like climate change. Third, the research agenda has thus far neglected theory—both
constitutive and causal (Wendt 1998)—in favor of (descriptive) empirical research of catastrophic
and existential risks (for an exception, see Torres 2017). For example, while scholars tend to agree
that humanity faces a growing panorama of catastrophic and existential risks (Rees 2003; Posner
2004; Bostrom and Cirkovic 2008; Torres 2017; Ord 2020), none of the major works seek to
explain why—that is, what explains the proliferation of existential threats and/or why humanity
fails to neutralize them. Even foundational concepts like “humanity” and “survival” remain under-
theorized. Fourth, while the CAER research agenda is expanding, this scholarly community
remains overwhelmingly white, male, and English-speaking—this author included—which raises
critical questions about the nature and purpose of existing knowledge and understandings of
catastrophic and existential risks (Mitchell and Chaudhury 2020). In the words of Robert Cox
(1981, 128), “theory is always for someone and for some purpose.” Indeed, the CAER research
community shows a strong normative inclination towards utilitarian moral philosophy, especially
in the form of “transhumanist values” and the ethical arguments of the Effective Altruism
movement about the value of “future lives” (Beard and Torres 2020). Despite the new and
interdisciplinary nature of this research program, there is already a strong tendency towards
conceptual and normative homogeneity within the literature on existential risk.
Finally, the study of catastrophic and existential risk has made little impression on the field of International Relations, or vice-versa (for exceptions, see Harrington 2016; Mitchell 2017; Grove 2019; Mitchell and Chaudhury 2020; Deudney 2020; Pelopidas 2020; Sears 2020a; 2021a;
Kreienkamp and Pegram 2020; Nathan and Hyams 2021). The purpose of the rest of this chapter
is to build a bridge between the study of existential risk and International Relations. First, it seeks
to bring some of the core insights and findings of the CAER research agenda into the study of
International Relations. Second, it aims to make a novel contribution to the CAER research agenda
by developing an original framework and definition of existential threats.
2.2 What Is an “Existential Threat”?
The term “global catastrophic risk” (GCR) is increasingly used to refer to a category of threats that
are global in scope, catastrophic in intensity, and nonzero in probability (Bostrom and Cirkovic,
2008). Richard Posner (2004, 6) defines “catastrophe” as “an event that is believed to have a very
low probability of materializing but that if it does materialize will produce a harm so great and
sudden as to seem discontinuous with the flow of events that preceded it.” In general, the GCR
framework is concerned with what are believed to be low-probability, high-consequence scenarios
that threaten humankind as a whole (Kuhlemann 2018; Liu 2018; Avin et al. 2018). However, the
GCR framework is a broad category that does not account for salient differences in the origins and
scale of threats.
The typology developed here introduces two dichotomous variables, natural/anthropogenic
and catastrophic/existential, which are sensitive to the origins and scale of a threat. The first
variable distinguishes between “natural” and “anthropogenic” dangers, which turns on whether the
“cause” (or source) of a threat has its origins in human agency. For example, an asteroid (or comet)
impact represents a natural danger that is exogenous to human agency but could nonetheless be
catastrophic (or existential) for humanity, such as the Chicxulub asteroid that is one of the leading
explanations for the extinction of the dinosaurs (Kolbert 2014; Brannen 2017). Conversely, the
danger of nuclear weapons represents an anthropogenic threat, since their development and use
are endogenous to human agency, such as a terrorist organization’s acquisition and use of a nuclear
warhead against a major city (Ferguson and Potter 2005). This distinction between natural and
anthropogenic is not only a conceptual issue for clarity about the origins of the dangers facing
humanity, but also a practical issue of efficiently mobilizing humanity's scarce attention and resources towards what could most contribute to threat reduction, since the dangers from anthropogenic threats typically operate on timescales of years, generations, or centuries, while many
natural risks occur on timescales of millennia or even millions of years (Tegmark and Bostrom
2005; Ord 2020).
The second variable distinguishes between “catastrophic” and “existential” dangers, which
hinges on whether a threat poses a theoretically plausible danger of bringing about civilizational
collapse or human extinction. Whereas catastrophic risks are often amenable to quantifiable
measures (e.g., fatality figures or economic losses) (Bostrom and Cirkovic 2008, 2–3), existential
threats imply some sort of qualitative break from the past (e.g., the “collapse” of human societies
or civilizations) (Diamond 2005; Kemp 2019). For instance, pandemics represent a catastrophic
threat that can kill—and frequently has killed—millions of people but falls short of posing an
existential threat to humankind (Ord 2020). Alternatively, if humans succeed in producing
artificial intelligence that far exceeds human-level “general” intelligence (i.e.,
“superintelligence”), then human extinction could be the “default outcome” (Bostrom 2014).
Again, the distinction between catastrophic and existential threats is not only a theoretical question
of distinguishing threats to humanity’s survival from lesser catastrophes, but also a practical matter
for understanding the changing circumstances of humanity’s threat environment, since there may
be a growing number of dangers that cross the threshold of constituting an existential threat to
humanity.
Together, these variables on origin and scale yield a simple typology (or 2x2 table) of four
ideal types of threats: natural-catastrophic, natural-existential, anthropogenic-catastrophic, and
anthropogenic-existential (see Table 2.2). It is this final type of anthropogenic existential threats
that represents the category of interest. However, a satisfactory definition of the concept requires
a deeper understanding of the constituent elements of “anthropogenic” and “existential.” The
following sections provide a more rigorous theoretical discussion of what it means for a threat to
have its origins in human agency and be on a scale that threatens humanity’s survival.
Table 2.2: Typology: The Origin and Scale of Threats

              | Catastrophic                    | Existential
Natural       | Pandemic ("Spanish Flu")        | Asteroid impact ("Chicxulub")
Anthropogenic | Nuclear terrorism (city centre) | Artificial intelligence ("superintelligence")

Source: Sears (2020a; 2021b)
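The two dichotomous variables of Table 2.2 can be encoded directly. A minimal sketch, in which the `Threat` class and its field names are my own illustrative labels rather than the author's:

```python
# Encoding the 2x2 typology of threats (origin x scale).
from dataclasses import dataclass

@dataclass(frozen=True)
class Threat:
    name: str
    anthropogenic: bool  # origin endogenous to human agency?
    existential: bool    # scale threatens civilizational collapse or extinction?

    @property
    def ideal_type(self) -> str:
        """Return the threat's cell in the typology, e.g. 'natural-existential'."""
        origin = "anthropogenic" if self.anthropogenic else "natural"
        scale = "existential" if self.existential else "catastrophic"
        return f"{origin}-{scale}"

# The chapter's own four examples, one per cell:
threats = [
    Threat("Pandemic ('Spanish Flu')", anthropogenic=False, existential=False),
    Threat("Asteroid impact ('Chicxulub')", anthropogenic=False, existential=True),
    Threat("Nuclear terrorism (city centre)", anthropogenic=True, existential=False),
    Threat("Superintelligence", anthropogenic=True, existential=True),
]
```

Because the two variables are dichotomous, the four ideal types exhaust the classification space; the category of interest is the cell where both flags are true.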
2.2.1 Natural vs. Anthropogenic
The natural/anthropogenic variable turns on whether the origins of a threat are exogenous or
endogenous to human agency. The key question is: would this danger exist if humans had no
impact on the world? Natural risks would exist even if humans had no impact on their external
environment, while anthropogenic threats would not exist if humans had no effect on the world.
For as long as humans have existed as a biological species (roughly 160,000 years as Homo sapiens sapiens), or as the sociocultural entity of civilization (approximately 10,000 years with the adoption of written language, agriculture, and urban settlements), humanity has been vulnerable to certain natural phenomena that could bring about catastrophic (or existential) physical
destruction. The spectrum of natural catastrophic and existential risks includes the danger of
impacts from celestial objects (e.g., asteroids or comets), radiation from stars (e.g., solar flares or
gamma ray bursts), volcanic activity (e.g., "supervolcanoes"), and fluctuations in Earth's climate
(e.g., the glacial-interglacial cycle) (Leslie 1996; Rees 2003; Bostrom and Cirkovic eds. 2008).
These natural catastrophic and existential risks are exogenous to human agency, meaning that their
source or cause has no meaningful relationship to human (in)action—which is not the same as
saying that humans are powerless to prevent or mitigate the dangers. While these natural risks
cannot be discounted entirely, there are good reasons to believe that the probability of such events
occurring on timescales relevant for contemporary human action is extremely low (Bostrom and
Cirkovic eds. 2008; Ord 2020).
One natural catastrophic or existential risk that has captured human imagination and
catalyzed government action is the prospect of a large celestial object—like an asteroid or comet—
crashing into the Earth. In the 1980s, Luis and Walter Alvarez advanced the hypothesis that an
asteroid had collided with the Earth 66 million years ago, bringing about the Cretaceous-Tertiary (K-T) mass extinction event that extinguished roughly 75% of all plant and animal species, including the dinosaurs (Alvarez et al. 1980; Kolbert 2014; Brannen 2017). Since the 1990s, NASA and
other space agencies have mapped the existence and trajectories of thousands of “near Earth
objects” (NEOs). Fortunately, they have found an inverse relationship between the size and number
of objects, which means that NEOs that could have catastrophic or existential consequences for
humanity are extremely rare. For example, NEOs with a diameter of approximately 1.5 kilometers or greater that could produce catastrophic consequences occur on timescales that exceed the entire existence of Homo sapiens (Posner 2004, 27). There are only four known NEOs greater than 10 kilometers in size—a scale comparable to the K-T asteroid—with approximately a 1 in 150 million chance of a collision occurring within this century (Ord 2020, 71). Thus, the
catastrophic or existential risk to humanity from asteroid or comet impacts appears to be
vanishingly small.
Another natural catastrophic or existential risk is "supervolcanoes." The geological record
reveals that life on Earth has been ravaged by no less than five “mass extinction” events over the
past 500 million years: the End-Ordovician, Late-Devonian, End-Permian, Late-Triassic, and End-
Cretaceous (Ward 2008; Kolbert 2014; Erwin 2015; Wignall 2017; Brannen 2017). Some of the
“big five” extinctions have been linked to volcanic activity, including the Permian Extinction (or
the “Great Dying”) roughly 252 million years ago, when 96% of all marine species and 70% of
terrestrial vertebrate species became extinct (Brannen 2017). A supervolcano is defined as a
volcano that has produced an eruption scored at 8 or more on the Volcanic Explosivity Index.
Supervolcanoes may pose a greater threat to humanity than asteroids due to their greater probability, with an estimated frequency of once in 50,000 years; the last known event was the Oruanui eruption in New Zealand approximately 27,000 years ago (Dosseto et al. 2011). The eruption of a
supervolcano near Toba, Indonesia approximately 75,000 years ago is believed to have caused a
severe drop in global temperatures and a “bottleneck” in the human population, which may have
declined to mere thousands of humans for thousands of years—perhaps the human species’ closest
brush with extinction (Bostrom and Cirkovic 2008, 13). Nevertheless, the probability of a
catastrophic or existential eruption from a supervolcano over the course of the next century remains
extremely low (Ord 2020, 75).
The classification of threats as “natural” or “anthropogenic” implies a clear-cut distinction
in the source of a threat being exogenous or endogenous to human agency. Asteroids and
supervolcanoes are clear examples of natural risks that are entirely exogenous to human action.
Yet there are other dangers, like pandemics, that are natural in origin but are influenced by human
agency (see Koblentz 2010, 109). A pandemic generally refers to the outbreak of an infectious
disease that spreads across a large geographic area and affects a substantial number of people. The
World Health Organization (2018, 2) defines a pandemic as “the worldwide spread of a new
disease.” Pandemics are natural risks because they originate in the naturally occurring biological
processes of pathogens that infect—and are transmitted between—humans. Nevertheless, human
agency can influence the scope, intensity, and probability of pandemics in myriad ways (Ord 2020,
126), such as human actions that affect, inter alia, public health and sanitation, vaccines and
antibiotics, and agricultural practices and human interaction with wildlife. For instance, there have
been serious warnings about how poor practices with antibiotics may contribute to antimicrobial
resistance with potentially catastrophic consequences (O’Neill, 2014; Yuk-ping and Thomas
2018), or how the combination of animal agriculture and human encroachment into wild spaces
increases the risk of viruses jumping between species. Indeed, one of the leading explanations for
the origins of SARS-CoV-2 (or COVID-19) is that the virus originated in a wild animal species
but eventually made the “jump” to humans through an intermediary species, possibly in a seafood
market that sold live animals in Wuhan, China, between October and December 2019. Pandemics
are therefore something of a middle case: natural in origin but shaped by human action.
Humanity also faces threats that have their origins in human agency. The most obvious
threat is violence—although some argue that violence has its origins in the biological imperatives
of evolution and therefore might be considered a “natural” threat (Somit 1990; Thayer 2000; and
Gat 2009). Arguably, the threat of large-scale violence has been the principal threat to human
societies for centuries if not millennia (Keeley 1996; Gat 2006). In international relations, this
generally takes the form of war between nation-states. Some scholars argue that the threat of war
is on the decline (Mueller 1989; Pinker 2011; Roser 2016), pointing to the decrease in the frequency
and severity of wars—especially wars between the great and major powers—in the twentieth and
twenty-first centuries. Scholars have proposed a variety of explanations for the “Long Peace”
(Gaddis 1986; Jervis 2002), including, inter alia, the structural effects of “bipolarity” and
American “hegemony” during and after the Cold War (Waltz 1964; 1967; Ikenberry 2000),
deterrence from nuclear weapons (Jervis 1989; Waltz 1990), the spread of liberal democracy and
“democratic peace” (Doyle 1983), the growth of economic and “complex” interdependence
(Keohane and Nye 1977), the creation of international institutions (Axelrod and Keohane 1985;
Oneal and Russett 1999), or the emergence of “pluralistic security communities” and the normative
rejection of war as a continuation of international politics by other means (Deutsch 1957; Adler
and Barnett 1998; Wendt 1999).
No single explanation can fully account for peace between the great and major powers
(Jervis 2002). For example, nuclear deterrence does not account for peace between non-nuclear
states or between nuclear and non-nuclear states, while democratic peace or pluralistic security
communities cannot account for peace with or between non-democracies or states outside the
community. Unfortunately, many of the potential sources of peace between the great and major
powers appear to be eroding—such as democracy, economic interdependence, and American
hegemony—which could have important consequences for international peace and security in the
future. More generally, the history of “major” wars between the great powers suggests that these
conflicts tend to occur roughly once per century, driven by the uneven growth in relative power
and structural conflicts between rising and declining great powers (Organski 1958; Modelski 1978;
Gilpin 1981; Levy 1985), which means that it may be too soon to draw conclusions about the
“obsolescence” of great power wars based on the empirical record of peace since 1945. The
deterioration of many sources of peace, the rise and fall of the great powers, the occurrence of
proxy wars, the increase in military competition, and the preparation by states for “future” wars
all point to the continuing threat of war and large-scale violence.
Humanity faces other threats that have their origins in human agency, especially those that
emanate from humanity’s relationships with the environment and technology. First, humanity’s
large-scale destruction of Earth’s natural systems is producing a wide array of environmental
problems—such as biodiversity loss, climate change, deforestation, desertification, fresh-water
scarcity, ocean acidification, ozone depletion, and various forms of pollution—which threaten to
reduce the habitability of the planet (Rockstrom et al. 2009; Wallace-Wells 2019; Ripple et al.
2020). Humanity has become one of the driving forces of environmental change, which is why the
term “the Anthropocene” is increasingly used to distinguish the present geological epoch from the
“Holocene,” the unusually temperate period since the end of the last Ice Age roughly 11,700 years
ago when human civilization—perhaps not coincidentally—began and flourished (Crutzen 2002;
Steffen et al. 2007). The danger is that humanity’s impacts on Earth’s natural environment could
exceed the “planetary boundaries” that constitute a “safe operating space for humanity”
(Rockstrom et al. 2009, 472).
Second, humanity’s rapid scientific and technical development of several emerging
technologies—such as artificial intelligence, biotechnology, cybernetics, nanotechnology,
robotics, and quantum computing—raises the prospect of humanity losing control over its
increasingly complex and powerful technology (Danzig 2018; Deudney 2018). Ray Kurzweil
(2005) proposed a "law of accelerating returns," whereby technology develops at an accelerating rate: whenever it approaches some barrier to growth, a new technology is invented to cross that barrier, allowing the process of technological growth to
resume. For example, “Moore’s Law” describes the exponential growth in computing power,
whereby the number of transistors in an integrated circuit approximately doubles every two years.
Similarly, the cost of DNA sequencing has continued to drop from around $100 million per genome
in 2001 to roughly $1,000 in 2019. The danger posed by technology is that humanity’s power for
destruction could grow beyond the capacity of our systems of control to prevent destruction (Sears
2021b). Bostrom (2013, 25) succinctly describes the risk: “If we continually sample from the urn
of possible technological discoveries… then we risk eventually drawing a black ball: an easy-to-
make intervention that causes extremely widespread harm and against which effective defense is
infeasible.”
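The compound-growth arithmetic behind these two examples is easy to check. A brief sketch, assuming the conventional two-year doubling period for Moore's Law and using the sequencing-cost endpoints cited above:

```python
import math

# Compound-growth arithmetic for Moore's Law and DNA sequencing costs.

def growth_factor(years, doubling_period=2.0):
    """Growth factor after `years` of doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Two decades of Moore's Law: ten doublings, i.e. a 1024-fold
# increase in transistor counts.
transistors = growth_factor(20)

# DNA sequencing: ~$100 million per genome (2001) to ~$1,000 (2019)
# is a 100,000-fold cost drop over 18 years, implying a cost-halving
# time of 18 / log2(100,000), roughly 1.08 years.
halving_time = 18 / math.log2(100_000_000 / 1_000)
```

On these figures, sequencing costs halved almost twice as fast as transistor counts doubled, which is why the genomics trajectory is often cited as outpacing Moore's Law.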
Figure 2.1: Natural and Anthropogenic Threats
In summary, humanity faces both natural and anthropogenic dangers, a distinction that turns on whether the origin of a threat is exogenous or endogenous to human agency. Natural dangers are exogenous to human agency, including cosmological and terrestrial phenomena over which humans exercise no influence (e.g., asteroid impacts), as well as other naturally occurring phenomena over whose scope, intensity, or probability humans may exercise some degree of influence (e.g., pandemics). Anthropogenic threats are endogenous to human
agency, which includes the direct agency of intentional harm by humans (e.g., war), as well as the
indirect agency of the unintentional consequences of human actions (e.g., environmental
degradation) (see Figure 2.1). While some scholars see a crucial difference between “threats”
(“security”) and “risks” (“safety”) by making a distinction between (in)direct or (un)intentional
harm (Aradau and Van Munster 2007; Corry 2012; Jore 2017; Dijkstra et al. 2018; Kirk 2020),
this framework sees the natural or anthropogenic origins of the threat as the more salient
distinction. Ultimately, it is anthropogenic threats that pose the far greater danger of existential
consequences for humankind in the twenty-first century.
2.2.2 Catastrophic vs. Existential
The catastrophic/existential variable concerns the scale of a threat—that is, whether the scope and
severity of potential destruction threatens the survival of humanity. The core question is: does it
pose a threat of either the collapse of modern global civilization or the extinction of human beings?
The scale of a threat can be conceptualized in terms of varying degrees of scope (i.e., how
widescale is the threat?), and intensity (i.e., how severe is the threat?). A threat may range in its
scope from local to global, and in its severity from negligible to annihilation (see Table 2.3).
Variation in scope and severity do not represent rigid differences in scale but are simply meant to
illustrate that at some point “a quantitative change becomes a qualitative change” (Jervis 1989,
13). Most of the dangers that humans face are relatively limited in their scope and/or severity; that
is, humans typically confront threats at the individual to national-levels, and/or levels of damage
that are negligible to disruptive. Threats that are only faced by individuals (e.g., homicide), or are
negligible in consequences (e.g., 1% inflation) rarely warrant public attention (i.e., the area in
white). Conversely, threats that are subnational in scope and ranging from the tolerable to severe
(e.g., earthquakes), or are tolerable in severity but ranging from the subnational to regional (e.g.,
drug trafficking) may receive public attention and generate policy responses (i.e., in light grey).
Next there are threats that are greater in either severity (e.g., state failure) and/or scope (e.g., global
recession), which receive significant public attention and are priorities of public policy and global
governance (i.e., grey).
Catastrophes are extreme events at the upper end of the scale in both scope and severity (in
dark grey), which “produce a harm so great and sudden as to seem discontinuous with the flow
of events that preceded it” (Posner 2004, 6). Historically, great power wars have been catastrophic,
such as the First and Second World Wars that killed up to 20 million and 85 million people,
respectively. Pandemics are another example of a catastrophic threat, especially extreme cases like the Bubonic Plague (the "Black Death") that killed between 25 and 200 million people in fourteenth-century Europe, and H1N1 influenza (the "Spanish Flu") that killed between 17 and 100
million globally between 1918 and 1920. There are also many examples of catastrophic scenarios
that could happen in the future, such as an increase in global average temperatures by several
degrees Celsius (~3.0°C or more), which could leave entire regions of the globe uninhabitable for
human beings (Sherwood et al. 2010; Wallace-Wells 2019). Yet these catastrophes may
nevertheless fall short of threatening the continuation of modern global civilization or the survival
of human beings. That is, they do not pose an existential threat to humanity.
Table 2.3: Catastrophic & Existential Threats to Humanity
Note: The white area represents threats below the threshold of public attention; the light-grey area
represents threats that receive public and policy attention; the grey area represents serious threats that
receive significant public attention and policy prioritization; the dark-grey area represents catastrophic
threats; and the near-black area represents existential threats.
The word “catastrophe” conveys the idea of human suffering on a large scale (Bostrom and
Cirkovic eds. 2008), but does not necessarily imply a sense of finality, like the ideas of
“apocalypse” or “Armageddon” in the monotheistic religions. Whereas catastrophic threats can, in
principle, be operationalized in quantitative terms—such as millions or tens of millions of
casualties—an existential threat implies the qualitative state-change of an “ending.” For
individuals this finality is death, the end of life. Death is an inevitability for every living
organism—bacteria, plant, animal, or human. Yet the certainty of death does not negate the
existential threat of death, which concerns the uncertainty surrounding the timing (when?) and
circumstances (how?) of death. The same is true for collective entities, like species. All biological
species face inevitable extinction, the end of a species. While the outcome is assured, the timing
and circumstances of extinction are radically uncertain. Scientists estimate that there are roughly
8.7 million species on Earth (+/- 1.3 million) (Sweetlove 2011). Some of these species will be
extinguished quickly, while others could endure for millions of years. The estimated lifespan for
the average mammalian species is roughly one million years. Since homo sapiens sapiens emerged
in Africa roughly 160,000 years ago, the natural rate of extinction would suggest a natural lifespan
of another 840,000 years. Will the human species survive to reach this natural lifespan?
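The arithmetic behind this back-of-the-envelope estimate (taking the one-million-year average mammalian species lifespan as a rough benchmark rather than a prediction) is simply:

```latex
\underbrace{1{,}000{,}000}_{\substack{\text{average mammalian}\\ \text{species lifespan (yrs)}}}
\;-\;
\underbrace{160{,}000}_{\substack{\text{age of}\\ \textit{homo sapiens sapiens}\text{ (yrs)}}}
\;=\; 840{,}000 \text{ remaining years}
```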
Similarly, social entities face existential threats to their survival—or the termination of the
social and material conditions for the reproduction of their mode of existence. Throughout history,
most of the human societies—clans, tribes, city-states, dynasties, empires, nations, and
civilizations (Carneiro 1970; Spruyt 1994; Tilly 1990; Herbst 2000; Phillips and Sharman 2015)—
that have ever existed have ceased to exist. Indeed, what Carl Sagan said of biological species may
be equally true of human societies: “Extinction is the rule. Survival is the exception.” Many of the
historic cases of societal collapse were geographically isolated and of comparatively low-levels of
social and technical complexity, such as the insular societies of the Greenland Norse and Easter
Islanders (Diamond 2005; Kemp 2019). Yet many of history’s “great civilizations” ultimately
collapsed, such as the Akkadian, Mayan, and Roman civilizations. Modern nations that claim
historical lineage to ancient civilizations—such as China, Egypt, Iran, and India—tend to obscure
the histories of societal collapse. China, for example, maintains the historical origins of Chinese
“civilization” in the Shang Dynasty (1570–1045 BCE), despite the repeated collapse of dynasties
and states, and the fact that today’s People’s Republic of China shares little beyond the
“territoriality” of a common geography with the Shang (Ruggie 1993)—a society that practiced
ritualized human sacrifices (Tanner 2009, 42–43).
What does it mean for human societies to “collapse”? According to Joseph Tainter (1988,
4), societal collapse implies the “rapid and significant loss of an established level of socio-political
complexity.” For Michael Lawrence and Thomas Homer-Dixon (2021, 2), it is “a process of social
transformation involving the acute decline of social complexity relative to its earlier growth”
(original emphasis). The timing and circumstances of societal collapses vary widely, from abrupt
shocks to slow decays (Liu et al. 2018; Cassio 2019). Luke Kemp (2019) finds the average lifespan
of 87 ancient civilizations to be 336 years, with some lasting less than 50 years (e.g., the Third
Dynasty of Ur, Phrygia, the Nanda Empire, the Qin Dynasty, and the Kanva Dynasty), and a few
surpassing a millennium (e.g., the Vedic, Olmec, Kushite, and Aksumite civilizations). Historically,
there have been many causes or drivers of societal collapse. For Jared Diamond (2005), the main
driver of societal collapse is overpopulation relative to the environmental carrying capacity
(“overshoot”), combined with socio-cultural intransigence to change course. Lawrence and
Homer-Dixon (2021) identify three “causal pathways” towards societal collapse: diminishing
marginal returns of growing social complexity; endogenous problem growth, or the tendency for
common problems facing societies to escalate; and adaptive lag, or the obstacles that prevent a
society from adapting to changing circumstances. Kemp (2019) identifies several factors behind
the collapse of ancient civilizations, including climatic change (e.g., droughts), environmental
degradation (e.g., resource depletion), wealth inequality (e.g., oligarchy), political centralization
(e.g., tyranny), growing complexity (e.g., institutions and bureaucracies), external shocks (e.g.,
war and disease), or randomness/bad luck. Many of the sources and drivers of societal collapse are
endogenous to human societies. In the words of the historian Arnold Toynbee, “Great civilizations
are not murdered. They commit suicide” (quoted in Kemp 2019). Societal collapse may also
involve multiple causes. Roman civilization, for example, experienced a drawn-out process of
decline, driven by political corruption, imperial over-expansion, and environmental degradation,
followed by external aggression, with Rome being sacked by the Visigoths in 410 and the Vandals
in 455 (Beard 2015). Following the collapse of Roman civilization, Europe experienced a roughly
1000-year period of relative decline and stagnation, including the “Dark Ages” (500–1000 CE) and
the “Middle Ages” (1000–1500 CE).
The distinction between catastrophic and existential threats hinges on finality. But what is
implied by the end of humanity? The definition of an existential threat to humanity can take on
different meanings, depending on one’s understanding of what constitutes “humanity” or
“humankind”—a point which has thus far eluded CAER scholars. As Jonathan Schell (1982, 6–7)
wrote about the ambiguity of American and Soviet understandings of the threat of nuclear war:
[T]hey have not been precise about what level of catastrophe they were speaking
of, and a variety of different outcomes, including the annihilation of the belligerent
nations, the destruction of “human civilization,” the extinction of mankind [sic],
and the extinction of life on Earth, have been mentioned, in loose rhetorical fashion,
more or less interchangeably… The annihilation of the belligerent nations would
be a catastrophe beyond anything in history, but it would not be the end of the world.
The destruction of human civilization, even without the biological destruction of
the human species, may perhaps be called the end of the world, since it would be
the end of that sum of cultural achievements and human relationships which
constitutes what many people mean when they speak of “the world.” The biological
destruction of mankind [sic] would, of course, be the end of the world in a stricter
sense. As for the destruction of all life on the planet, it would be not merely a human
but a planetary end—the death of the earth (Schell 1982, 6–7).
Today, there is significant confusion and debate about the meaning of “existential risk,”
which stems from the ambiguity about the referent object of “humanity.” There are at least four
possible interpretations of the meaning of humanity—the Darwinian, Hegelian, Cartesian, and
Indigenous interpretations—which lead to distinct conceptions of existential threats.
Humanity as species, based on the biological processes of evolution that produced the
genetic make-up of anatomically modern homo sapiens sapiens;
Humanity as civilization, based on the historical processes of globalization that gave rise
to the modern global civilization of interconnected human societies across the world;
Humanity as intelligence, based on the cognitive processes of reflexivity and learning that
led to a being that is both self-aware and capable of shaping itself and the world;
Humanity as Earthly, based on the natural processes that made Earth a hospitable planet
for human beings and upon which human—and non-human—life depends.
The Darwinian interpretation of humanity emphasizes the biological nature of the human
species. This yields a seemingly straightforward conception of existential threats to humanity: an
existential threat is anything that threatens the extinction of homo sapiens sapiens as a biological
species. Yet progress in biotechnology, neuroscience, and/or artificial intelligence could lead
human beings to radically alter their biological makeup in ways that could signify the extinction
of the human species—or, rather, its replacement by a “post” or “transhuman” species (Fukuyama
2003; Kurzweil 2006; Tegmark 2017). As Jennifer Doudna, one of the discoverers of the
CRISPR-Cas9 system of “gene editing,” writes:
For billions of years, life progressed according to Darwin’s theory of evolution…
Today, things could not be more different. Scientists have succeeded in bringing
this primordial process fully under human control. Using powerful biotechnology
tools to tinker with DNA inside living cells, scientists can now manipulate and
rationally modify the genetic code that defines every species on the planet,
including our own… Practically overnight, we have found ourselves on the cusp of
a new age in genetic engineering and biological mastery—a revolutionary era in
which the possibilities are limited only by our collective imagination (Doudna and
Sternberg 2017, xiii).
Similarly, Elon Musk’s company, Neuralink, seeks to produce brain-computer interfaces that could
eventually make humans into cyborgs, and it appears to be making progress towards this end
(Wakefield 2021). Even if humans did not intentionally alter their biology but merely survived for
millions of years, natural selection would change the genetic makeup of humans. From the
Darwinian interpretation, humanity’s possible “transhumanist” future—or simply genetic drift—
constitutes an existential threat to humankind as the biological species of homo sapiens sapiens.
The Hegelian interpretation of humanity emphasizes the history of human civilization,
especially the emergence of contemporary global civilization. As a sociocultural entity, human
civilization has its ancient origins in Mesopotamia between 5,000 and 10,000 years ago, with
certain revolutionary changes in the human condition, such as the adoption of agriculture, written
language, and urban settlements, making humans what Aristotle called a “political animal.” For
several millennia, multiple civilizations coexisted in relative isolation (e.g., Mesopotamia, Egypt,
Indus Valley, Shang China, and Mesoamerica) (Kennedy 1987; Harari 2014). However, the processes
of globalization and development over the past several centuries have led interconnectedness and
interdependence to gradually replace the isolation and independence of human societies (Buzan et
al. 1993; Mann 2012). This has made it possible to speak of humanity as a common global
civilization—that is, as a single, interconnected “global village” (Deudney 2007). Indeed, this
interpretation of human civilization is formally recognized in the United Nations Educational,
Scientific and Cultural Organization’s (UNESCO) concept of “world heritage.”
From this Hegelian interpretation, an existential threat to humanity is anything that
threatens the ability to sustain and reproduce modern global civilization—essentially, a rapid and
significant decline of societal complexity, including the ideational (political, economic, and
cultural) and material (demographic, technological, and geographic) conditions behind the
contemporary mode of human existence. This conception of existential threats is implicit in Martin
Rees’s (2003, 8) gloomy “fifty-fifty” estimate that “our present civilization on Earth will survive
to the end of the present century.” Of course, the Hegelian interpretation raises its own questions:
what ideational and material elements constitute contemporary global civilization, and what
constitutes a collapse? What about the significant ideational differences and material disparities
between nations around the world, or societies that do not participate in the “global village”? What
is the relationship between “societal multiplicity” and global civilization (Rosenberg 2016)?
The Cartesian interpretation of humanity emphasizes human intelligence, which takes its
cue from the Enlightenment, especially Rene Descartes’ aphorism, “I think, therefore I am.” It
emphasizes human beings as self-aware and problem-solving agents that are free to determine their
own future. This interpretation animates Nick Bostrom’s widely cited definition of an existential
risk (see also Baum et al. 2019; Ord 2020): “An existential risk is one that threatens the premature
extinction of Earth-originating intelligent life or the permanent and drastic destruction of its
potential for desirable future development” (Bostrom 2002; 2013). There are many problems with
Bostrom’s Cartesian definition of existential risk. Curiously, Bostrom’s definition makes no
mention of humans (or humanity), but only “Earth-originating intelligent life,” which may or may
not be human. One could imagine a scenario leading to human extinction (e.g., total nuclear war),
but that eventually opens an evolutionary niche for intelligent life on Earth that is biologically
unrelated to homo sapiens sapiens; or another scenario in which humans create a superior form of
artificial intelligence (“superintelligence”) that eliminates humans but goes on to spread across the
cosmos. Neither of these scenarios necessarily constitutes an existential risk by Bostrom’s
definition, and yet few would consider this to be the survival of humanity. Owen Cotton-Barratt
and Toby Ord (2015, 2) seem to notice this problem when they re-articulate Bostrom’s definition
as “the permanent and drastic destruction of [humanity’s] potential for desirable future
development” (insertion in the original).
Secondly, the restriction of existential threats to “permanent and drastic destruction” is
problematic, especially when taken outside the context of the human species. After all, the recovery
of life on Earth from the “big five” mass extinctions suggests that destruction may be “drastic” but
probably not “permanent.” On each occasion, life on Earth eventually recovered and flourished,
including the evolution of homo sapiens sapiens (Wignall 2017). More generally, a definition of
existential risk that insists on permanence poses the epistemological problem of being unable to
distinguish ex ante between threats of a temporary and of an unrecoverable collapse of human
civilization—in other words, between a “dark age” and an “existential catastrophe.” This may lead
scholars to underestimate serious threats to humanity for which the causal pathway to either
permanent collapse or human extinction is unclear (e.g., nuclear war or climate change), and to
overestimate more speculative threats that could lead to human extinction (e.g., “gray goo” or
(“superintelligence”) (Drexler 1986; 2007; Bostrom 2014). Toby Ord (2020, 167), for example,
recently assessed the existential risk from nuclear war and from climate change each at just 1 in
1,000, compared to the 1 in 10 risk from “unaligned artificial intelligence.”
Thirdly, the notion of “premature extinction” presents similar problems: if the lifespans of
species or the duration of civilizations are radically uncertain, then “premature” loses its meaning.
Since mammal species tend to endure for one million years, one might consider the extinction of
homo sapiens sapiens at between 160,000 and 200,000 years to be “premature.” However, it may
be the fate of intelligent species or civilizations to “live fast and die young”; or there may be
essentially no limits to their longevity. Scholars who contemplate the “Fermi Paradox,” for
instance, do not know whether the absence of observed intelligent alien civilizations is because
humans are alone in the universe, or if alien civilizations have emerged but suffered—endogenous
or exogenous—destruction, or if they exist but have not—for lack of capability or will—made
contact with us (Webb 2015; Sears 2020b). Uncertainty about humanity’s future is not merely an
epistemological condition of limited knowledge, but rather an ontological principle that rejects
determinism about humanity’s survival (Katzenstein and Seybert eds. 2017; Adler 2020).
Finally, Bostrom’s definition of existential risk emphasizes “the destruction of [the]
potential for desirable future development.” Toby Ord (2020, 37) also sees “the destruction of
humanity’s longterm potential” as the essence of existential risk. Yet this conception of existential
risk poses a significant dilemma for moral philosophy: what is the meaning of “humanity’s
longterm potential” or “desirable future development”? The search for an answer to this question
is as old as philosophy: that is, the meaning of “the good life.” It presumes that humanity’s
“potential” is either an “objective” or “absolute” moral standard, or at least amenable to
“subjective” or “relative” moral understandings that are universally and permanently acceptable.
For instance, in Ord’s discussion of what he calls the “Long Reflection,” he claims that the
“ultimate aim of the Long Reflection would be to achieve a final answer to the question of which
is the best kind of future for humanity” (Ord 2020, 191; emphasis added). This is unlikely to be the
case. Consider the following: should humanity seek to colonize the galaxy, or to live in ecological
balance with the Earth? Normative questions about humanity’s “desirable” future(s) are likely
subject to irresolvable debates. Ultimately, what one considers “existential success” may be
another’s “existential tragedy.” There are, for example, radical “environmentalists” who believe
that human extinction would represent a boon for the recovery and flourishing of biodiversity on
Earth and would therefore like to bring it about (Torres 2017). Conversely, Bostrom (2005) sees
great potential in “transhumanist values,” in which “human nature” represents a “work-in-
progress” and “need not be the endpoint of evolution.” Ord’s (2020) discussion of “Our Potential”
focuses on the possibility of humanity—presumably, a post-biological humanity—continuing for
millions (or billions) of years and expanding across millions (or billions) of stars (or galaxies).
These scholars are heavily influenced by utilitarian moral philosophy and ethical arguments that
place value on essentially limitless “future lives.” In a recent critique, Audra Mitchell and Aadita
Chaudhury (2020) argue:
It is often claimed that the ‘end of the world’ is approaching—but whose world,
exactly, is expected to end?… [D]espite their claims to universality, we argue that
these ‘end of the world’ discourses are more specifically concerned about
protecting the future of whiteness (original emphasis).
Their critique points to the limitations of any definition of existential risk that hinges on the notion
of “humanity’s potential.”
The Indigenous interpretation of humanity emphasizes the integral relationship between
humans and Earth, the planet on which human life emerged and ultimately depends. This
perspective rejects the anthropocentrism of the human-nature “dualism,” which sees human beings
as separate from—and superior to—all other forms of life on the planet, as well as the instrumental
use of Earth by and for humans (Aoki Inoue and Franco Moreira 2016). In this interpretation, the
interdependence of life on Earth is not merely a product of the emergent complexity of independent
organisms and species interacting and influencing one another in their efforts to survive and thrive,
but rather the essence of Earth as an organic whole—a living organism. While there are some
elements of this line of thinking in Western philosophy—from Jean Jacques Rousseau’s Discourse
on Inequality to Carl Sagan’s Pale Blue Dot—the idea of an integral relationship between human
beings and Earth is more intimately associated with indigenous knowledge and philosophy, such
as the Andean conception of “Pacha Mama” (“World Mother” or “Mother Earth”).
From the Indigenous interpretation, anything that threatens the habitability of Earth for
(human) life would constitute an existential threat to humanity. This could entail the rapid and
dramatic deterioration or collapse of the planet’s geological and ecological “life support” systems,
which could render Earth an inhospitable planet. Earth’s mass extinction events are examples of
this danger, since a “sixth mass extinction” could threaten the habitability of Earth for human and
non-human life (Kolbert 2014; Brannen 2017). Another existential threat from the indigenous
interpretation is the prospect of permanent extraterrestrial colonization, since this would threaten
to terminate the relationship between humanity and Earth and therefore a constituent element of
what it means to be human. The main limitation of the Indigenous interpretation is that it is unclear
whether this represents a unique conception of “humanity,” or rather its negation; for it may be
possible for humans to survive—and even flourish—during a mass extinction event, or to survive
and thrive in some form by colonizing other worlds.
Table 2.4: The Meanings of “Humanity” and “Existential Threats”

Interpretation: Darwinian
End of Humanity: The extinction of homo sapiens sapiens
Existential Threat: “Bioengineered pathogen”: The design of a highly virulent virus wipes out the majority of the human species and leads to humanity’s eventual extinction

Interpretation: Hegelian
End of Humanity: The collapse of modern global civilization
Existential Threat: “Nuclear war”: A total nuclear war and winter reduces humanity to a pre-civilizational state

Interpretation: Cartesian
End of Humanity: The elimination of the human capacity to determine its future
Existential Threat: “Superintelligence”: The development of artificial superintelligence is misaligned with humanity’s freedom or survival

Interpretation: Indigenous
End of Humanity: The termination of Earth’s habitability
Existential Threat: “Mass extinction event”: The rapid and dramatic deterioration of biodiversity leaves Earth inhospitable for humanity
Since an “existential threat” is only meaningful in relation to that which is “existentially
threatened” (Buzan et al. 1998), the absence of a consensus understanding of what constitutes
humanity raises serious problems for a definition of existential threats to humanity (see Table 2.4).
One approach to this problem would be to come up with a definition that is based on a particular
interpretation of humanity and move on. Another approach is to develop a general definition that
seeks to encompass multiple possible interpretations of humanity. The framework here follows the
latter approach.
2.2.3 Definition of “Existential Threats”
The conceptual demarcations between “natural” and “anthropogenic,” and between “catastrophic”
and “existential” dangers allow us to distinguish anthropogenic existential threats as a class of
phenomena from other categories of potential harm (natural/catastrophic, natural/existential, and
anthropogenic/catastrophic) (see Table 2.2). Therefore, an anthropogenic existential threat
(hereafter “existential threat”) refers to any danger that has its origins in human agency and
threatens, at the minimum, the collapse of modern global civilization, and, at the maximum, the
elimination of human beings or their capacity to shape their future on Earth.
A final clarification should be made here about the difference between an “existential
threat” and an “existential tragedy.” Whereas an existential tragedy refers to the actual occurrence
of the destruction of humanity, an existential threat is about its possibility (Ord 2020, 37). As
Ulrich Beck (2006, 332) said about risk,
Risk does not mean catastrophe. Risk means the anticipation of catastrophe…
Risks are not ‘real’, they are ‘becoming real’. At the moment at which risks become
real—for example, in the shape of a terrorist attack—they cease to be risks and
become catastrophes… Risks are always events that are threatening (original
emphasis).
The same is true of existential threats, which are about the possibility of the destruction of
humankind. If an existential threat transpires, then it becomes an existential tragedy.
Existential threats therefore “exist” in the realm of uncertainty. Uncertainties abound in the
study of existential threats—about the possibilities, timing, and circumstances of dangers. Two
types of uncertainty warrant special mention. The first is epistemological uncertainty, which
concerns the capacity to “know” or acquire knowledge about existential threats. Much of the
CAER research agenda aims to reduce epistemological uncertainty: that is, to identify or
“discover” existential threats—to turn “unknown unknowns” into “known unknowns”—and to
reveal causes and effects and ideally quantify potential harm—to make “known knowns” from
“known unknowns.” For example, geologists have only recently discovered the existence and
eruption of supervolcanoes, and astronomers have dedicated substantial time and resources to
mapping the trajectories of near-Earth objects. Similarly, much of the research on nuclear winter
and climate change seeks a better understanding of the “Earth system,” while AI experts are
engaged in an ongoing debate about the technical feasibility of artificial “general” intelligence
(Everitt et al. 2018; Grace et al. 2018). However, there may be fundamental limits to human
knowledge and understanding—whether from the limitations of scientific empiricism, or from the
sheer complexity of the world (Bernstein et al. 2000). The epistemological challenges posed by
existential threats are particularly severe, since humanity cannot by definition “observe” its own
extinction—a special type of observational selection bias, known as “anthropic reasoning” or
“survivor bias,” which necessarily implies that “N = 0” (Leslie 1996; Bostrom 2001). To
paraphrase Descartes, “I think, therefore I am not extinct.”
The second is ontological uncertainty, which concerns the “reality” of existential threats.
The worldview of classical physics is one of an objective and deterministic world: the arrow of
time points forward in a linear fashion, and effects follow mechanistically from causes.
Conversely, ontological uncertainty implies an unsettled and indeterminate world. It is the world
of quantum physics, where Heisenberg’s uncertainty principle is not a consequence of empirical
imprecision but a fundamental property of quantum systems. The CAER research agenda
frequently makes (or disputes) claims about existential threats in the form of cause-and-effect
relationships: e.g., if total nuclear war occurs, then modern global civilization will be destroyed.
This approach to the study of existential threats presumes an objective or deterministic world. It
responds to uncertainties about the immaterial with techniques designed to make the unknown
knowable (“risk”) (Knight 1921), such as “Bayesian” probabilities (Ord 2020, 167). It seeks to
turn the study of existential threats into a problem of Newtonian physics and—against Einstein—
to quantify the unquantifiable. This is the wrong way to think about existential threats. In doing
so, scholars reify humanity’s capacity for survival as if it were some objective or deterministic—
or at least probabilistic—category rather than a contingent and indeterminate outcome that is
shaped by human agency. As Benoit Pelopidas (2020, 6) argues, there are two “unknowables”
about nuclear war: “we do not know, cannot know and will never know in advance when exactly
nuclear war will happen and whether humankind will survive it” (original emphasis). Yet scholars
rarely explicitly recognize such ontological uncertainties behind existential threats. For example,
much of the debate about whether nuclear war or climate change constitute an existential threat is
not actually about the accuracy of scientific models, but rather subjective judgments about whether
experts expect humanity to survive or succumb to the threat in question should it materialize.
Simply put, humanity’s “survivability” is not a “knowable entity” (Pelopidas 2020, 6). Thus, the
study of existential threats must inhabit a world of radical uncertainty, where human knowledge
of existential threats is imperfect and human survivability is indeterminate.
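The “Bayesian” probabilities criticized here can be made concrete. The sketch below is a minimal illustration of the mechanics of such an estimate, of the kind employed in the CAER literature (e.g., Ord 2020, 167); the function name and all numbers are hypothetical, chosen purely for illustration. The output remains a subjective credence rather than an objective frequency, which is the crux of the critique.

```python
# Illustrative sketch: a "Bayesian" existential-risk estimate is a subjective
# prior credence updated by conditioning on evidence. All values hypothetical.

def bayes_update(prior: float, p_e_given_h: float, p_e: float) -> float:
    """Posterior P(H|E) = P(E|H) * P(H) / P(E)."""
    return p_e_given_h * prior / p_e

# Hypothetical prior credence in the hypothesis H ("the threat is real")
prior = 0.10
# Hypothetical probability of observing evidence E (a warning sign) if H is true
p_e_given_h = 0.80
# Hypothetical probability of observing E if H is false
p_e_given_not_h = 0.20
# Total probability of observing E (law of total probability)
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

posterior = bayes_update(prior, p_e_given_h, p_e)
print(round(posterior, 3))  # prints 0.308: still a credence, not a frequency
```

The point of the sketch is that every input to the update is itself a judgment, so the precision of the posterior can mask the subjectivity of the priors.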
2.3 The Spectrum of Existential Threats
Humanity in the twenty-first century faces the proliferation of existential threats to modern global
civilization or even human extinction. The spectrum of existential threats is broad and growing,
including persistent threats to international peace and security that could bring about violent
omnicide (e.g., nuclear war or bioterrorism) (Schell 1982; Koblentz 2003; 2010), looming dangers
from the large-scale destruction of the natural environment that could lead to an inhospitable planet
(e.g., climate change, pollution, and biodiversity loss) (Kolbert 2014; Wallace-Wells 2019; Ripple
et al. 2020), and prospective risks of humans losing control over increasingly powerful emerging
technologies (e.g., biotechnology, nanotechnology, or artificial intelligence) (Drexler 2013;
Bostrom 2014; Doudna and Sternberg 2018). The situation is exacerbated by the increasing
complexity of the relationships between social, environmental, and technological systems
(Lawrence and Homer-Dixon 2020; Kreienkamp and Pegram 2020), which could produce, inter
alia, lethal combinations (e.g., nuclear war and winter) (Sagan 1983; Toon, Robock, and Turco
2007; Robock and Toon 2012), feedback loops (e.g., runaway climate change) (Kump et al. 2003;
Steffen et al. 2018), small triggers (e.g., cyber attack and nuclear escalation) (Gartzke and Lindsay
2017; Acton 2018), human accidents (e.g., viruses escaping from laboratories) (Rees 2003; Merler
et al. 2013; Furmanski 2014), technical errors (e.g., false alarms in nuclear early-warning systems)
(Blair 1985; Sagan 1995; Schlosser 2013), epistemic uncertainties (e.g., about safe planetary
boundaries and climate tipping points) (Rockstrom et al. 2009; Steffen et al. 2018), technological
disruption (e.g., artificial intelligence destabilizing nuclear deterrence) (Geist and Lohn 2018;
Scharre 2018; Boulanin ed. 2019), technological diffusion (e.g., terrorists employing
bioengineered pathogens) (Rees 2003; Posner 2004; Torres 2017), control problems (e.g., an
“intelligence explosion” leading to misaligned superintelligence) (Bostrom 2014; Shanahan 2015;
Tegmark 2018; Russell 2019), or the cascading collapse of coupled systems (e.g., the
“synchronous failure” of ecological and social systems) (Diamond 2005; Homer-Dixon et al.
2015).
Although there may be others, the leading existential threats to humanity in the twenty-
first century are arguably nuclear war, climate change, biodiversity loss, bioengineered pathogens,
and artificial intelligence (see Leslie 1996; Rees 2003; Posner 2004; Torres 2017; Ord 2020;
Deudney 2020). The existential threat from nuclear weapons is the prospect of a “total war”
between the leading nuclear powers—the United States and Russia—leading to hundreds of
millions of deaths immediately in the belligerent nations (OTA 1979; Minson 2020), followed by
billions more from the inhospitable environmental conditions of “nuclear winter,” which could
ultimately threaten civilizational collapse or human extinction (Schell 1982; Sagan 1983; Ehrlich
et al. 1984; Robock and Toon 2012; Helfand 2013). The existential threat from climate change is
if Earth’s climate system passes certain “tipping points” (IPCC 2018), leading to an unstoppable
“runaway” warming process that ultimately stabilizes in a “Hothouse Earth” climate that is
inhospitable for modern global civilization (Steffen et al. 2018; Ripple et al. 2017; 2020; 2021;
UNDP 2020), and perhaps even too hot for human habitation (Sherwood et al. 2010; Wallace-
Wells 2019). The existential threat of biodiversity loss is that a dramatic and rapid reduction in the
number and variety of plant, insect, and animal species—i.e., a mass extinction event (Ward 2008;
Payne and Clapham 2012; Kolbert 2014; Ceballos 2015; Brannen 2017)—could undermine the
natural ecosystems upon which human prosperity and survival ultimately depend, including the
critical “life support systems” of air, food, and water (UNEP 2000; Rockstrom et al. 2009; IPES
2019; WWF 2020; Center for Biological Diversity 2020). The existential threat of
biotechnology/synthetic biology is that rapid advances in these fields could be used to synthesize
and release—intentionally or unintentionally—a virulent pathogen with higher lethality and
communicability than natural pathogens—e.g., so-called “gain-of-function” capabilities—leading
to a “bioengineered pandemic” that spreads around the world, culminating in civilizational
collapse or human extinction (Carlson 2003; Koblentz 2003; 2010; Rees 2003; 2018; Bostrom and
Cirkovic eds. 2008; Torres 2017; Bostrom 2019). Finally, the existential threat from artificial
intelligence is the possible creation of an AI system that far exceeds human-level general
intelligence—or “superintelligence”—and that is not properly “aligned” with human values or
survival and is beyond human control (Bostrom 2014; Shanahan 2015; Tegmark 2017; Russell
2019) (see Appendices 1-3 for a more detailed discussion of select existential threats).
Table 2.5: Existential Threats and Scenarios

Nuclear weapons
Nuclear war/winter: A total nuclear war between the nuclear powers kills hundreds of millions from nuclear explosions and billions more from nuclear winter, leading to civilizational collapse or human extinction

Climate change
Hothouse Earth: Climate change passes certain tipping points, unleashing a process of runaway global warming leading to an inhospitable planet for modern civilization or human beings

Biodiversity loss
Mass extinction: The loss of biodiversity leads to the collapse of Earth’s life support systems, including food and clean water, making the planet inhospitable for modern civilization or human beings

Biotechnology/Synthetic biology
Bioengineered pandemic: The design and release of a highly virulent pathogen with optimum lethality and communicability spreads across the globe, leading to civilizational collapse or human extinction

Artificial intelligence
Superintelligence: The creation of an AI system that far exceeds human-level general intelligence becomes misaligned with human values or survival and beyond human control
Overall, this widening spectrum of existential threats (see Table 2.5)—from war, the
environment, and technology—suggests that humanity has entered a particularly dangerous
moment in its history—an age of existential threats (Sears 2021b)—characterized by our
increasing power and prospects for self-destruction. We now turn to the question of why states
have failed to take decisive action against these existential threats for the security and survival of
humankind.
Chapter 3
In our present predicament… it is tempting to apply to scholarly enterprise the Latin
saying “primo vivere deinde philosophari” by freely translating it into, “Let us think
first of all about how to survive, thereafter about everything else.” But thinking
about how to survive means thinking about international politics. While a past is
not remote in which other matters, such as economic affairs or domestic politics,
loomed larger in the daily concerns of people than those which then were perhaps
fittingly called “foreign” affairs, today the latter’s impact on the well-being and,
indeed, survival of every one of us has become only too patent in an age in which the
black threat of nuclear annihilation hovers over all. In the field of international
politics, where events now govern the very basis of everything else—namely, the
survival of mankind [sic] in the literal, physical sense of the term—studying facts,
explanations, and solutions is no longer mere “philosophizing” but an essential part
of that “life” which is at stake.
— John Herz, 1959
Theory of Macrosecuritization Failure
Humanity faces a growing spectrum of existential threats in the twenty-first century that endanger
modern global civilization and even human survival. Why have states repeatedly failed to do what
is possible to eliminate the existential threats to humankind? This chapter develops a mid-range
theory of “macrosecuritization failure” in international relations, or the process whereby an actor
with a reasonable claim to speak with legitimacy on an issue frames it as an existential threat to
humanity and offers hope for survival by taking extraordinary action, but fails to catalyze a
response by states that is sufficient to reduce or neutralize the danger and ensure the security of
humankind.
The chapter is organized as follows. The first section builds on the literature on
securitization theory in International Relations (Buzan et al. 1998), especially the concept of
“macrosecuritization” (Buzan and Wæver 2009). Macrosecuritization refers to securitization
processes involving higher-level referent objects than the conventional focus on the security of
“nations” or “states.” The second section looks more closely at a particular type of
macrosecuritization discourse: the macrosecuritization of humanity. It is constituted by a distinct
and generalizable rhetorical structure: “X poses an existential threat to humanity and therefore Y is
essential for survival!” The third section answers the question of what is an “instance” of
macrosecuritization, identifies the universe of cases of macrosecuritization, and establishes the
empirical puzzle of the recurrent failure of macrosecuritization. The fourth section postulates
several theoretical hypotheses and explanatory variables that could plausibly explain
macrosecuritization (failure) and then tests them against the historical record. It discards those that
fail to account for the empirical patterns of macrosecuritization (failure) and retains those which
are most promising. The final section develops a theory of macrosecuritization (failure) that
emphasizes the dynamics of great power politics. Specifically, it addresses why in certain
circumstances the great powers are able to come to a consensus on macrosecuritization, while in
others great power consensus is not possible, leading to macrosecuritization failure.
The central argument is that the structure of the international system implies that
macrosecuritization is by necessity a process of “securitization under anarchy.” In the absence of
a world political authority that can “speak” and “do security” on behalf of humankind,
macrosecuritization depends on the power and capabilities of the great powers. Great power
consensus is therefore an essential condition for macrosecuritization. The possibility of consensus
between the great powers, however, is shaped by conflicting securitization narratives about
“humanity securitization” and “national securitization.” Whether the great powers are more
receptive to a narrative of humanity securitization or national securitization depends on three
conditions: (1) the stability of the distribution of power between the great powers in the
international system; (2) the power and interests of securitizing actors vis-à-vis state audiences
within the domestic security constellations of the great powers; and (3) the threat-related beliefs
and perceptions of the political leaders of the great powers. When these conditions favour a
security narrative of humanity securitization, then great power consensus can mobilize
extraordinary action in international relations to reduce or neutralize an existential threat. But when
these conditions favour a security narrative of national securitization, then great power politics
will make consensus impossible and end in macrosecuritization failure.
3.1 Securitization Theory
In International Relations, the term “existential threats” is most closely associated with the
Copenhagen School (CS) of securitization theory (especially Wæver 1995; Buzan et al. 1998;
Balzacq ed. 2010; Balzacq et al. 2015). In securitization theory, to call something a matter of
“security” is to frame the issue as an existential threat to some referent object requiring
extraordinary measures for survival. In the words of Barry Buzan, Ole Wæver and Jaap de Wilde
(1998, 21), “Security is about survival. It is when an issue is presented as posing an existential
threat to a designated referent object… The special nature of security threats justifies the use of
extraordinary measures to handle them.”
Securitization has three core features: survival discourse, exceptional measures, and
political consequences (Buzan et al. 1998, 24–26). First, securitization involves a special type of
discourse (or “speech act”), whereby a “securitizing actor” invokes the rhetorical structure (or
“grammar”) of “survival in the face of existential threats” (Buzan et al. 1998, 27). The
distinguishing rhetorical features of the “securitizing move” are survival, urgency, and necessity,
“because if the problem is not handled now it will be too late, and we will not exist to remedy our
failure” (Buzan et al. 1998, 26). Secondly, securitization requires relevant security actors (or
“functional actors”) to take extraordinary measures for security. The key here is action—
specifically, the mobilization of special powers and capabilities to neutralize a threat “by any
means necessary.” Thirdly, securitization implies consequences for the political relations in or
between societies. The political essence of securitization is to go beyond the rules, norms, and
institutions that characterize “normal politics”; that is, “the breaking of rules” (Buzan et al. 1998,
24–26). In short,
Security is the move that takes politics beyond the established rules of the game
and frames the issue either as a special kind of politics or as above politics… The
issue is presented as an existential threat, requiring emergency measures and
justifying actions outside the normal bounds of political procedure (Buzan et al.
1998, 23–24).
Securitization is both a constitutive and a causal theory of international security (Wendt
1998; Guzzini 2011; Balzacq et al. 2014). On the constitutive side, securitization theory provides
a novel understanding of what security is as a sociopolitical practice in international relations. It
also aims to understand how security issues are constituted in terms of the properties and relations
that characterize the units and structures of a system (a “security constellation”) (Buzan et al. 1998,
42–45; Buzan and Wæver 2009). On the causal side, securitization theory seeks to explain why or
how an issue emerges—or fails to emerge—as a “security issue.” Securitization theory offers an
explanatory framework of “causal mechanisms,” “dynamics,” and “facilitating conditions” or
“constraints” behind the process of securitization, as well as the social and political effects on
international relations (Buzan et al. 1998, 31–33; Guzzini 2011). This entails analysis of the identity
and relations between actors, since not all actors possess equal powers of securitization. As Buzan,
Wæver, and de Wilde (1998, 31–32) write:
[The] relationship among subjects is not equal or symmetrical, and the possibility
for successful securitization will vary dramatically with the position held by the
actor. Security is thus very much a structured field in which some actors are placed
in positions of power by virtue of being generally accepted voices of security, by
having the power to define security. This power, however, is never absolute… No
one conclusively ‘holds’ the power of securitization… To study securitization is to
study the power politics of a concept.
Put simply, securitization theory seeks to understand and explain, “Who can ‘do’ or ‘speak’
security successfully, on what issues, under what conditions, and with what effects?” (Buzan et al
1998, 27, 31).
3.1.1 The Theoretical Framework of Securitization
The theoretical framework of securitization can be broken down into several constituent elements
(Buzan et al. 1998; McDonald 2008; Balzacq ed. 2011). They include:
Referent object: The entity to be protected, whether human beings, societies, or something
they value;
Existential threat: Some danger to the survival of the referent object;
Extraordinary measures: The actions taken for security that go beyond the constraints of
“normal politics”;
Securitizing actor: The actor(s) who performs the speech act of framing a particular issue
as an existential threat;
Functional actor: The actor(s) whose power and interests shape decisions, behaviors,
interactions, and outcomes in the issue area that is being securitized;
Audience: The actor(s) who is the target of the securitizing move and accepts or rejects the
legitimacy of exceptional measures for security.
While these elements of securitization theory are functionally distinct, in practice they may refer
to the same entity or actor. For example, “the state” is frequently the referent object, securitizing
actor, and functional actor for “national security” (Buzan et al. 1998, 36). Importantly, the referent
object is ontologically prior to the existential threat,[1] which contrasts with other theoretical
approaches that would suggest that “security is defined and valorized by the threats which
challenge it” (Ullman 1983, 133). Finally, securitization theory is based on a social ontology, in
which security is “intersubjective and socially constructed” (Buzan et al. 1998, 31; also
McDonald 2008; Balzacq ed. 2010). Securitization theory explicitly rejects a materialist ontology
or the endeavor to distinguish between “objective” and “subjective” threats (Wolfers 1952).
[1] Buzan, Wæver and de Wilde (1998, 21) write, “[an] existential threat can only be understood in relation to the particular character of the referent object in question.”
Securitization theory matured as an approach to the study of international security within
the context of the “War on Terrorism” in the aftermath of the terrorist attacks of September 11,
2001, when many of the actions of the United States Government appeared to confirm the
theoretical logic of a dominant actor invoking a discourse of security and taking extraordinary
measures—including torture, surveillance, and war—for “national” and “international security.”
Much of the empirical research employing securitization theory has focused on the (“successful”)
securitization of terrorism (Buzan 2006). Yet securitization theory is, in principle, generalizable to
any “security issue,” and scholars have expanded the empirical research agenda to explore other
issue areas, such as migration (Huysmans 2006), organized crime (Stritzel 2012), public health
(McInnes and Rushton 2011; Bengtsson and Rhinard 2019; Thomas and Yuk 2020), and climate
change (Trombetta 2008; Brzoska 2009; Methmann and Rothe 2012; Lucke et al. 2014; Allan
2017; Paglia 2018).
Scholars have also begun to study cases of “securitization failure,” in order to address the
methodological problem of “selection bias” in studying only cases of “successful” securitization
(Geddes 1990; Salter 2011; Balzacq et al. 2015; Ruzicka 2019). They have also attempted to refine
securitization theory to better understand and explain the nature and roles of the audience (Cote
2016), non-state securitizing actors (Dalaqua 2013), science (Berling 2011; Paglia 2018),
legitimacy (Olesker 2018), identity (Cardoso 2018), images and metaphors (Williams 2003; Vuori
2010), and ethics and normativity (Wæver 1995; Floyd 2011; Roe 2012). Finally, scholars have
engaged in meta-theoretical debates about the nature of securitization theory (Guzzini 2011;
Balzacq et al. 2014). Overall, the growing theoretical and empirical literature on securitization
theory has made significant progress towards gaining “an increasingly precise understanding of
who securitizes, on what issues (threats), for whom (referent object), why, with what results, and,
not least, under what conditions (i.e., what explains when securitization is successful)” (Buzan et
al. 1998, 32).
3.1.2 The Concept of Macrosecuritization
In securitization theory, the referent object of security can range from individual human beings to
the whole of humankind. According to Buzan, Wæver and de Wilde, “In principle, securitizing
actors can attempt to construct anything as a referent object” (1998, 36). In practice, however,
securitization typically focuses on the “mid-level” of nation-states (“national security”) (Buzan et
al. 1998; Balzacq ed. 2011), and, to a lesser extent, the “lower-level” of individuals or subnational
units (“human security”) (Watson 2011; Stritzel 2012), or “higher-level” of regional organizations
(“security communities”) (Buzan and Wæver 2003; Huysmans 2006; Bueger 2013). More
recently, Barry Buzan and Ole Wæver (2009) introduced the concept of macrosecuritization to
unpack “higher order securitizations” in international relations (see Figure 3.1). The main question
for macrosecuritization is “at what level is the referent object?” (Buzan and Wæver 2009, 258).
Macrosecuritisations are defined by the same rules that apply to other
securitisations: identification of an existential threat to a valued referent object and
the call for exceptional measures. The key difference is that they are on a larger
scale than the mainstream collectivities at the middle level (states, nations) and seek
to package together securitisations from that level into a ‘higher’ and larger order
(Buzan and Wæver 2009, 257).
Macrosecuritizations can produce security constellations, or “larger scale patterns where a
set of interlinked securitisations become a significant part of the social structure of international
society” (Buzan and Wæver 2009, 256). Security constellations may vary in terms of the scope
and intensity of their political consequences in international relations. The relevant question is:
“what is the degree of comprehensiveness?” At the lower end, security constellations may remain
“niche securitizations” that are of interest only to a narrow range of actors and audiences in
international relations, like the “War on Drugs” or human trafficking (Buzan and Wæver 2009,
258). At the higher end, security constellations can become “overarching securitizations” in
international relations, such as the Cold War or War on Terrorism, which “impose a hierarchy” on
lower-level securitizations and shape the identities, interests, and relations between states (Buzan
and Wæver 2009, 256–257).
Figure 3.1: Macrosecuritization: Scope and Comprehensiveness
Buzan and Wæver (2009) raise the important question of the origins, nature, and dynamics
of macrosecuritizations. They argue that “universalist beliefs and claims” are one of the principal
sources of macrosecuritizations, and identify four types of “universalisms” (Buzan and Wæver
2009, 260–261):
Inclusive universalisms: Ideological beliefs about the best way to improve the human
condition for all humankind (e.g., Christianity, Islam, Communism, Liberalism);
Exclusive universalisms: Ideological beliefs about the superiority of a particular group and
its right to rule over all humankind (e.g., Chinese Tianxia, European imperialism, Nazism);
Existing order universalisms: Political claims about the threats to the existing international
order (e.g., Westphalian sovereignty, the “rules-based international order”);
Physical threat universalisms: Claims about dangers that threaten the survival of all
humankind (e.g., nuclear weapons, climate change, artificial intelligence).
While Buzan and Wæver (2009) are correct that the “universalist” logics behind these
macrosecuritization discourses lead easily to expansionist conceptions of their referent object,
which, at the maximum, can encompass all of humanity, arguably only the final type of
macrosecuritization invokes a discourse about the survival of humankind. It constitutes a unique
kind of macrosecuritization because it seeks to “securitize humanity.”
Special attention to the macrosecuritization of humanity as a class of phenomena that is
distinct from other types of (macro)securitization raises new questions and opportunities for
progress in securitization theory (Elman and Elman 2002): What are the rhetorical features that
constitute macrosecuritization discourses about the survival of humanity? Who are the securitizing
actors, functional actors, and audiences that make up security constellations? What factors make
macrosecuritization more or less possible? In short, who can “speak” and “do security” on behalf
of humankind, on what existential threats, under what conditions, and with what effects?
3.2 The Macrosecuritization of Humanity
This section explores the rhetorical structures and security constellations that constitute the
macrosecuritization of humanity as a distinct class of securitization phenomena in international
relations. It thereby provides a constitutive theory of the macrosecuritization of humanity as a
precursor to a causal theory of what explains the success and failure of macrosecuritization in
international relations.
3.2.1 The Discourse of the Macrosecuritization of Humanity
The macrosecuritization of humanity follows a simple, generalizable rhetorical structure: “X”
poses an existential threat to humanity and therefore “Y” is essential for survival! This rhetorical
structure is demonstrated in the table, “Discourse Analysis on Macrosecuritization” (see Appendix
5), which analyzes excerpts taken from a dozen illustrative examples of macrosecuritization
discourses on various issues, at different times, and by distinct actors, but all reflecting the general
discursive pattern of framing a particular issue as an existential threat to humanity and calling for
extraordinary measures for survival. What it reveals is that the discourse on the
macrosecuritization of humanity is constituted by three core rhetorical features.
The first core feature is that these macrosecuritization discourses frame “humanity” as the
referent object of security. In practice, the referent object of humanity may take on different
names—including “humanity,” “humankind,” “humans,” “human beings,” “the human race,”
“homo sapiens,” “human society,” “global society,” “mankind,” “man,” “civilization,” “modern
civilization,” “life on Earth,” “the planet,” and “the world” (see Appendix 5). Since
macrosecuritization discourses cannot be separated from the worldviews of securitizing actors
(Buzan and Wæver 2009), the vocabulary for the referent object inevitably reflects beliefs and
assumptions about what constitutes “humanity.” This language may reflect a biological conception
of humanity (e.g., “human beings” or “homo sapiens”), a social one (e.g., “civilization” or “human
society”), or a hybrid one (e.g., “humankind” or “future generations”). It also means that the
language may reflect the prejudices of the securitizing actor—such as the sexism of “mankind,”
the racism behind “civilization,” or the anthropocentrism of “the world” (Howell and Richter-
Montpetit 2020; Mitchell and Chaudhury 2020).
While this vocabulary may vary—and be problematic—the securitization of humanity
always invokes the idea of a universal humanity. This idea is captured by phrases like “common
humanity” (see example no. 2 in Appendix 5), “all of humanity” (no. 9), the “fate of humanity”
(no. 1, 9), the “security of all humanity” (no. 4), the “principles of humanity” (no. 4), “extinction
of humanity” (no. 3), “self-interest of mankind” (no. 2), “the good of mankind” (no. 5), “for the
sake of all mankind” (no. 6), “the conscience of mankind” (no. 6), “future generations” (no. 2, 4,
7, 11), and by general references made to “human life” (no. 5), “human health” (no. 10–11),
“human welfare” (no. 1), “human well-being” (no. 9), “human resources” (no. 1), “human
invention” (no. 8), “human activities” (no. 7), “human values” (no. 12), and “human survival” (no.
4, 7). Grammatically, it is reflected in the tendency of macrosecuritization discourses to speak in
the first-person plural—an all-encompassing “we,” “us,” and “our” (no. 1–2, 5, 7–12). Arguably,
one of the best illustrations of the idea of a universal humanity is found in the 1988 UN General
Assembly Resolution 43/53 (“Protection of Global Climate for Present and Future Generations of
Mankind”), which states:
Welcoming with appreciation the initiative…entitled “Conservation of climate as
part of the common heritage of mankind”… Convinced that climate change affects
humanity as a whole and should be confronted within a global framework so as to
take into account the vital interests of all mankind [sic.]. 1. Recognizes that climate
change is a common concern of mankind [sic.], since climate is an essential
condition that sustains life on earth. 2. Determines that necessary and timely action
should be taken to deal with climate change within a global framework (UNGA
1988, 133).
In short, this type of macrosecuritization discourse frames a universal humanity as the referent
object of security.
The second core feature of the securitization of humanity is that it invokes the existence of
an existential threat to humankind. This type of macrosecuritization discourse is unique because
it frames the stakes of an issue in terms of humanity’s survival, whether understood in terms of the
societal destruction of “civilization” (e.g., no. 1), or the biological extinction of the “human
species” (e.g., no. 3). Macrosecuritization discourses are sometimes explicit in calling an issue an
existential threat, such as Carl Sagan’s (1983, 291) warning that nuclear winter poses a “real
danger of the extinction of humanity” (no. 3). In other instances, macrosecuritization discourses
may be vague or ambiguous—intentionally or unintentionally—in describing an existential threat,
such as a UN report which asserted that biological weapons “endanger man’s future” [sic.] (UNSG
1969, 3), the “World Scientists’ Warning” claiming that climate change threatens the “fate of
humanity” (Ripple et al. 2020, 9), or the World Wildlife Fund report on biodiversity loss which
claims that “the future of nearly 8 billion people are at stake” (WWF 2020, 4). More generally,
macrosecuritization discourses often frame the severity of an issue in terms of a spectrum ranging
from the catastrophic to the existential. The Treaty on the Prohibition of Nuclear Weapons (no. 4),
for example, claims that “the catastrophic consequences of nuclear weapons cannot be adequately
addressed, transcend national borders, [and] pose grave implications for human survival” (UNGA
2017). The key point is not that macrosecuritizations make an assertion that humanity will be
destroyed, but that the stakes of an issue call into question humanity’s prospects for survival. As
one group of prominent scientists concluded about the danger of “nuclear winter”: “[t]he question
of the survival of the human species is now at issue” (Ehrlich et al. 1984, 75).
Macrosecuritization discourses invariably describe a situation that poses an existential
threat to humanity—such as the nuclear “arms race” and the existence of large nuclear arsenals
(e.g., no. 1–4), the destabilization of the climate or decline of biodiversity leading to an
inhospitable planet (e.g., no. 7–11), or the development of a potentially dangerous and
uncontrollable technology (e.g., no. 5–6, 12). The situation that poses an existential threat may
already exist, like the “continued existence of nuclear weapons” (e.g., no. 4), or is expected to exist
in the future, such as the “impending crisis” of climate change (e.g., no. 7). Macrosecuritization
discourses frequently describe the existential threat as constituting a novel situation for
humankind, such as a revolutionary technology (e.g., 1, 5, 12), or unprecedented environmental
conditions (e.g., no. 7–11). For example, Niels Bohr called atomic energy a “veritable revolution
in human resources” that posed “unprecedented dangers” (in Masters and Way eds. 1946, ix–x;
see also Brodie 1946), while the World Wildlife Fund (2020, 1–2) contends that “Nature is
declining globally at rates unprecedented in millions of years.”
Revolutionary and unprecedented circumstances often imply uncertain and unpredictable
consequences. For instance, a UN report on biological weapons states that if biological warfare
were ever to occur, “no one could predict how enduring the effects would be and how they would
affect the structure of society and the environment in which we live” (UNSG 1969, 87–88).
Macrosecuritization discourses often emphasize the uncertainty surrounding an issue, which could
make it an existential threat to humanity. Grammatically, macrosecuritization discourses are filled
with tentative language, such as the qualifiers and conditionals “could,” “might,” “may,”
“possible(ly)” or “potential(ly)” (e.g., no. 1, 3, 5, 7, 8, 9, 12). The relationship between
unprecedented circumstances and uncertain dangers is illustrated in the summary document for the
1988 Toronto Conference on Climate Change:
Humanity is conducting an unintended, uncontrolled, globally pervasive
experiment whose ultimate consequences could be second only to a global nuclear
war. The Earth’s atmosphere is being changed at an unprecedented rate… Far-
reaching impacts will be caused by global warming… The best predictions
available indicate potentially severe economic and social dislocation for present
and future generations (UNEP 1988, 292).
Thus, macrosecuritization discourses point to a situation—existing or potential—that poses
revolutionary or unprecedented dangers that could threaten the very survival of humankind.
The third core feature of the securitization of humanity is the call for extraordinary
measures for the security of humankind. Macrosecuritization discourses highlight the inadequacy
of existing efforts—i.e., “normal politics”—for responding to an issue (e.g., no. 1, 11). This is
exemplified by the World Scientists’ Warning, which clearly points to the failure of normal politics
to adequately respond to climate change: “Despite 40 years of global climate negotiations, with
few exceptions, we have generally conducted business as usual and have largely failed to address
this predicament” (Ripple et al. 2019, 1). Macrosecuritization discourses seek to mobilize decisive
action to neutralize an existential threat. The security measures may seek to prevent a danger from
emerging (e.g., superintelligence), or to mitigate—and ultimately eliminate—a danger that already
exists (e.g., nuclear disarmament). They generally emphasize the need for international
cooperation and global solutions, since national measures alone are insufficient: “No country can
tackle this problem in isolation” (UNEP 1988, 292).
The ends of macrosecuritization are no less than the transformation of human societies—
international politics, the global economy, cultural values—while the stakes of the issue—human
survival—justify the demand for extraordinary action (e.g., no. 1, 10, 11). Niels Bohr called for
the “abolition of barriers hitherto considered necessary to safeguard national interests but now
standing [sic.] in the way of common security against… a deadly challenge to civilization itself”
(in Masters and Way eds. 1946, ix–x). The World Scientists’ Warning declared that “[t]o secure
a sustainable future, we must change how we live” (Ripple et al. 2019, 3). The World Wildlife
Fund (2020, 4–5) urged a “deep cultural and systemic shift” and “a transition to a society and
economic system that values nature.” And world leaders at the 2020 UN Summit on Biodiversity
recognized that, “We are in a state of planetary emergency… A transformative change is needed:
we cannot simply carry on as before” (Leaders’ Pledge for Nature 2020).
Macrosecuritization discourses thus offer hope for survival by taking extraordinary action
to neutralize an existential threat to humanity. In doing so, they emphasize the role of human
agency and choice in determining the outcome of an issue. Yet the scope of human agency in
macrosecuritization discourses is often reduced to a choice between action and inaction—life and
death. For instance, Bernard Baruch, the American representative to the United Nations Atomic
Energy Commission, declared in a 1946 speech: “My Fellow Citizens of the World… We must
elect World Peace or World Destruction” (Department of State 1960, 7–8). Albert Einstein said
about the existential threat of the atomic bomb, “There is, in my opinion, only one way out” (in
Masters ed. 1946, 76). Sometimes macrosecuritization discourses take an optimistic view of
human agency, framing the issue as both an existential threat and an opportunity to improve
humanity. For William J. Ripple et al. (2020, 11), “[t]he good news [about climate change] is that
such transformative change, with social and economic justice for all, promises far greater human
well-being than does business as usual.” Other times, however, macrosecuritization discourses
reflect a cynical or pessimistic view, offering only a fleeting hope for survival. For example, David
Wallace-Wells (2017) writes that, “absent a significant adjustment to how billions of humans
conduct their lives, parts of the Earth will likely become close to uninhabitable, and other parts
horrifically inhospitable, as soon as the end of this century.” The general message behind
macrosecuritization discourses is that “we still have a chance to put things right” (WWF 2020, 4),
but extraordinary action is necessary and time is running out (Vuori 2010; Levin et al. 2012): “It
is imperative to act now” (UNEP 1988, 292). The securitization of humanity therefore demands
that urgent, decisive, and transformative action be taken for the security and survival of humankind
(e.g., no. 4, 6, 7, 10, 11). In the words of the atomic scientists, “Time is short. And survival is at
stake” (in Masters ed. 1946, 79).
In summary, the securitization of humanity is a type of macrosecuritization discourse that
follows a generalizable rhetorical structure with three core elements: (1) the referent object of a
universal humanity; (2) a situation that poses an existential threat to humanity; (3) and hope for
survival through extraordinary action to transform humankind. These elements are present—and
appear frequently (see Table 3.1)—in all examples of macrosecuritization discourses. They even
use similar language and vocabulary to describe the referent object, existential threats, and
extraordinary measures (see Figure 3.2).
Table 3.1: Frequency Count of Rhetorical Features

Rhetorical Feature          Total Count    Relative Frequency (%)
Universal Humanity              135                 42.6
Existential Threat              110                 34.7
Extraordinary Measures           72                 22.7
Figure 3.2: The Vocabulary of Macrosecuritization
There is a final noteworthy feature of macrosecuritization discourses: the use of dramatic
language, metaphors, and imagery (Williams 2003)—like the religious imagery of “Armageddon”
(e.g., no. 2) and “doomsday” (e.g., no. 8), or military metaphors of being “at war”—in order to
evoke a response—fear, hope, and action—from their audience. Carl Sagan (1983, 291) described
the “cold, dark, radioactivity, pyrotoxins and ultraviolet light” of a “nuclear winter,” which could
“imperil every survivor on the planet.” Some climate scientists have taken to describing the worst-
case scenario of climate change as a “Hothouse Earth,” which could make the planet uninhabitable
for humanity (Steffen et al. 2018; Ripple et al. 2019). Some environmentalists claim that “nature
is unravelling and that our planet is flashing red warning signs of vital natural systems failure”
(WWF 2020, 4). And some scholars of artificial intelligence suggest that the possible creation of
“superintelligence” could mark the dawn of a “postbiological” (Moravec 1988) or “Post-Human
Era” (Vinge 1993). Indeed, some macrosecuritization discourses go so far as to claim that the issue
poses the single greatest challenge ever confronted by humanity (e.g., no. 1, 12). As Nick Bostrom
(2014, vii) writes about artificial intelligence: “This is quite possibly the most important and most
daunting challenge humanity has ever faced. And—whether we succeed or fail—it is probably the
last challenge we will ever face.”
3.2.2 The Actors and Audiences of Macrosecuritization
Who are the actors and audiences of macrosecuritization? Macrosecuritization, like all
securitizations, is a process in which actors are differentiated by role or function, including
securitizing actors, de-securitizing actors, functional security actors, and audiences—although one
actor may have multiple functions. Unlike other forms of “low” or “mid-level” securitizations,
which may be entirely domestic affairs, macrosecuritization is an international phenomenon
(Buzan and Wæver 2009). Macrosecuritizations are therefore likely to produce more complex
security constellations, in which actors with different identities and interests around the
world seek to influence the global narrative around a particular issue. It also means that the actors
and audiences of macrosecuritization shape, and are shaped by, the broader structure and dynamics
of international relations, which privileges the power and interests of some actors over others.
Macrosecuritizations vary in terms of the relative homogeneity and heterogeneity of the
actors and audiences that constitute security constellations. In some cases, macrosecuritizations
are matters of high-level diplomacy in which the only important actors are states and
intergovernmental organizations (see Figure 3.3). This is demonstrated by the “long, complex and
intensive” disarmament and arms control diplomacy between 1961 and 1967 that culminated in
the Nuclear Nonproliferation Treaty (Documents on Disarmament 1968, 36). Not only were states
the principal securitizing actors (most notably Ireland, Canada, Great Britain, the Soviet Union,
and the United States) and de-securitizing actors (particularly France and the People’s Republic of
China), but they were also the primary audience and functional actors as the nuclear weapons states
and non-nuclear weapons states that would decide the issue of nuclear nonproliferation.
Intergovernmental organizations also played an important role in achieving the NPT, particularly
the Eighteen-Nation Committee on Disarmament for providing a diplomatic forum for multilateral
treaty negotiations, the UN General Assembly for promoting ratification and adherence, and the
International Atomic Energy Agency for the monitoring and verification of nuclear safeguards.
In other cases, macrosecuritization is a heterogeneous process involving multiple types of
actors. The contemporary “climate emergency” illustrates this more complicated picture, with
national and local governments, intergovernmental and nongovernmental organizations, fossil fuel
producers and renewable energy companies, investors and insurance companies, climate scientists
and indigenous communities, traditional and social media, transnational civil society and even
individuals influencing the discourse and action on climate change (see Figure 3.4). The principal
securitizing actors on climate change have been NGOs spearheading a transnational movement of
climate change activism, as well as a transnational epistemic community of climate scientists
working either independently as academic researchers (see Steffen et al. 2018; Ripple et al. 2017;
2019), or in a more official capacity within the Intergovernmental Panel on Climate Change
(IPCC) or the World Meteorological Organization (WMO) (see especially IPCC 2018; WMO
2020). Dozens of states have also assumed the role of securitizing actors in declaring a “climate
emergency”—including Canada, the European Union, Japan, South Korea, the United Kingdom,
and the United States (Cedemia 2021)—and some key political leaders are now calling climate
change an “existential threat,” such as US President Joe Biden and UN Secretary-General António
Guterres. Transnational movements like “Extinction Rebellion” and even vocal individuals like
Greta Thunberg have also emerged as securitizing actors on climate change. Together, this diverse
set of actors has contributed to a macrosecuritization discourse that frames climate change as a
catastrophic or existential threat to humanity (Trombetta 2018; Von Lucke et al. 2014; Allan 2017;
Paglia 2018).
Climate change, however, has a long history as a contested issue, with de-securitizing
actors pushing a counter-narrative to obscure the causes and/or downplay the consequences of
climate change (Jamieson 2014; Rich 2019). The fossil fuel industry has been instrumental in
sowing doubt and division on the issue, including whether climate change is real, human-driven,
catastrophic, or requires urgent action. Conservative news media, like Fox News, have taken on a
central role in conveying this counter-narrative to the public. The U.S. Republican Party has even
become an institutionalized de-securitizing actor in American domestic politics. Individuals, too,
are important actors for climate change denialism, from media personalities like Tucker Carlson
to political leaders like US President Donald Trump. Despite this more complicated picture, states
remain critical as the audience and functional actors on climate change (especially “developed”
and “emerging economies”), since they are responsible for negotiating treaties—such as the United
Nations Framework Convention on Climate Change, the Kyoto Protocol, and the Paris
Agreement—and drafting legislation or developing policies, regulations or programs on climate
change, including developing emissions reductions targets for their respective countries. Sub-state
and local governments—especially cities—are increasingly picking up the slack by pursuing
more aggressive emissions reductions. Intergovernmental organizations also provide critical
forums for international discussion, such as the G7, G20, and the World Economic Forum. Major
companies in the fossil fuel and renewable energy industries and large investors and financial
institutions are also among the functional actors that influence the speed and scale of a global
energy transition away from fossil fuels.
Figure 3.3: Nuclear Non-Proliferation
Figure 3.4: The Climate Emergency
Notes: These diagrams are intended only to illustrate the comparative complexity of the
security constellations behind the cases of nuclear non-proliferation and the climate emergency. Shapes
represent the functional differences of actors: macrosecuritizing actors are circles, de-securitizing actors are
triangles, functional actors are diamonds, and audiences are squares. Colours represent the differences in
the identities of actors: states are in blue, intergovernmental organizations in red, nongovernmental
organizations in yellow, political parties in orange, epistemic communities in green, news media in light
grey, corporations in dark grey, civil society in purple, and individuals in pink.
Despite significant variation in actors and audiences, there are also some general patterns
that emerge in the securitization of humanity. The first is the importance of individuals as
securitizing actors. The role of individuals in macrosecuritization is surprising, since securitization
theory—and, more broadly, “grand theory” (Morgenthau 1948; Bull 1977; Keohane and Nye
1977; Waltz 1979; Gilpin 1981; Ruggie 1982; Wendt 1999; Mearsheimer 2001)—typically
downplays the importance of individuals (Buzan et al. 1998; Balzacq ed. 2011; Balzacq 2019).
Yet individuals frequently become the “voice” and “face” of an issue. On nuclear weapons,
individuals like Niels Bohr, Albert Einstein, and Robert Oppenheimer emerged as early champions
for the international control over atomic energy, while several American presidents—including
Harry S. Truman, John F. Kennedy, Lyndon B. Johnson, Richard Nixon, Ronald Reagan, and
Barack Obama—publicly recognized nuclear weapons as an existential threat. On biodiversity
loss, one sees the figure of David Suzuki and hears the voice of David Attenborough calling to
preserve the world’s pristine spaces and protect species from extinction. On artificial intelligence,
Nick Bostrom catalyzed contemporary concern for the existential risk of “superintelligence,” while
Stuart Russell gave it credibility as one of the world’s leading AI experts. On climate change,
individuals like James Hansen warned the public about the consequences of a warming planet, Al
Gore elevated the concern in American politics, and Greta Thunberg unleashed the outrage of an
entire generation of young people by speaking truth to power. Against the expectation that a single
person is powerless to change the world, individuals frequently play a critical role in
macrosecuritization.
The second is that non-state actors often play a leading role in the international dynamics
of framing an issue as an existential threat to humanity. This finding is consistent with earlier
research on non-state actors—including NGOs and the media—in the emergence and diffusion of
international norms, such as the prohibition of anti-personnel landmines in the 1990s (Nadelmann
1990; Finnemore and Sikkink 1998; Price 1998). In particular, epistemic communities—i.e.,
communities of practice that form among scientists and/or technical experts who possess a shared
interest in and privileged knowledge about a particular issue (Adler 1992; 2008)—typically find
themselves in the role of securitizing actors. For example, atomic scientists and climate scientists
have—intentionally or unintentionally—taken on the roles of securitizing actors on nuclear
weapons and climate change, respectively. In 1947, Einstein wrote in a letter: “We scientists
recognize our inescapable responsibility to carry to our fellow citizens an understanding of the
simple facts of atomic energy and its implications for society. In this lies our only security and our
only hope” (Einstein 1947). In 2019, this sentiment was reaffirmed in a letter published in
BioScience and signed by thousands of scientists around the world:
Scientists have a moral obligation to clearly warn humanity of any catastrophic
threat and to “tell it like it is.” On the basis of this obligation… we declare, with
more than 11,000 scientist signatories from around the world, clearly and
unequivocally that planet Earth is facing a climate emergency (Ripple et al. 2019,
1).
Whether or not it represents a “moral obligation,” scientists have a privileged position as
securitizing actors because their specialized knowledge allows them to speak with legitimacy on
an issue (Berling 2011). Securitization requires a securitizing actor to persuade its audience about
the legitimacy of an existential threat. Scientists possess an elite social status derived from a form
of knowledge-based authority that gives them significant structural and productive power in
macrosecuritization (Barnett and Duvall 2005). In this way, the legitimacy of a speech act may
derive from the status and authority of the speaker rather than the rhetorical or substantive content
of the speech act (Buzan et al. 1998). Two qualifications are immediately necessary about the
power and position of scientists in macrosecuritization. One is that scientists can also be de-
securitizing actors. When there is debate—or perceived debate—between scientists about an issue
(Risse 2000), then this contestation can erode the structural and productive power of scientists,
since the knowledge-based-authority of one scientist can be cancelled out by that of another
scientist. Another is that some audiences may not be susceptible—and may even be opposed
(Drolet and Williams 2019; Mede and Schafer 2020)—to the scientific knowledge/reasoning, or
the elite status of scientists. In particular, scientists may reduce the effectiveness of a speech act
when they employ scientific jargon or point out caveats in their research, which may be appropriate
for an audience of other scientists but may be lost on policymakers and the general public. In other
words, the influence of scientists as securitizing actors may be weakened either by scientific
contestation from de-securitizing actors, or by speech acts that are poorly constructed for the
relevant audience.
Finally, in addition to taking on the role of securitizing and de-securitizing actors, states
are always the primary audiences and functional actors in macrosecuritization. This is not by
accident. Macrosecuritizations are “higher order securitizations,” but there is no higher political
authority above sovereign states in international relations (Waltz 1979). States are therefore
privileged actors in macrosecuritization, with the dual function of being the principal audience that
accepts or rejects an issue as an existential threat to humanity, and the main functional actors that
determine what actions to take for security. Non-state actors may also be audiences or functional
actors in macrosecuritization, but their importance is secondary to states. For instance, securitizing
actors may seek to influence how civil society understands an issue, especially in democracies
where elected governments are accountable to the voting public. When they do so, their purpose
is generally strategic, such as trying to influence public policy or put public pressure on
governments to act (e.g., by “naming and shaming”) (Nadelmann 1990; Price 1998). Non-state
actors can also be functional actors, such as the IAEA—an IGO—or Climate Action Tracker—an
NGO—which are involved in the monitoring and verification of treaty compliance.
Intergovernmental organizations may even gain some degree of agency to set the agenda and
develop policy in international relations, such as the UN Secretary-General using their “good
offices” to foreground nuclear disarmament, or the IPCC interpreting the consequences of climate
change in stark terms. Yet their authority and capabilities are unavoidably constrained by the states
that are their principals (Nielson and Tierney 2003). Ultimately, the buck stops at the sovereign
state.
3.3 The Puzzle of Macrosecuritization Failure
What is an instance of macrosecuritization? Nuclear weapons and climate change may pose
existential threats to humanity, but this does not make them instances of macrosecuritization.
Macrosecuritization requires, like all forms of securitization, the constituent elements of discourse,
action, and consequences. Macrosecuritization is a historically bounded phenomenon in which one
or more securitizing actors frame a particular issue as an existential threat to humanity and call for
extraordinary measures for survival. For macrosecuritization to occur, the audience must accept
this framing of the issue and functional security actors must take extraordinary action to reduce or
neutralize the existential threat to humanity’s survival. When macrosecuritization is successful,
the political consequences are a new security constellation that structures and organizes the
discourse and actions of states towards the issue in international relations. In essence, humanity’s
survival becomes the supreme concern.
The efforts to frame an issue as an existential threat to humanity do not necessarily end in
macrosecuritization. In practice, when securitizing actors invoke such a discourse, it may be of
little or no consequence. Words do not necessarily provoke action. Macrosecuritization should be
understood as a process that connects survival discourses, extraordinary measures, and political
consequences in international relations. This process is dynamic and intersubjective, since
different actors may have different ideas or perceptions about the issue, which interact to influence
the broader narrative and patterns of action. Nor does macrosecuritization occur in a political
vacuum. This is a space of—often intense—contestation and competition, populated by multiple
actors each with their own identity, interests, and power to shape the outcome. Since
macrosecuritizations are “higher order securitizations,” they are almost by definition high-stakes
issues in international relations. As Buzan and Wæver (2009, 259) write, “Macrosecuritisations
are necessarily launched as candidates for top-rank threats (though they may not make it).” In
short, macrosecuritization aims to break free from “normal politics” in order to structure and
organize how we think about and respond to an issue “for the survival of humankind,” but
macrosecuritization can fail.
3.3.1 The Universe of Cases
A case of macrosecuritization (failure) begins with a macrosecuritization move, when a
securitizing actor with a reasonable claim to speak with legitimacy on a particular subject frames
the issue as an existential threat to humanity. A case ends once the interactions between the
relevant securitizing actors, audiences, and functional actors coalesce in a (new) security
constellation, which structures and organizes the patterns of discourse and action on an issue in
international relations (see Figure 3.5). There are also instances in which a would-be securitizing
actor calls an issue an existential threat to humanity that do not constitute cases of
macrosecuritization, either because the actor lacks a reasonable claim to speak with legitimacy
on the issue (e.g., a layperson or a conspiracy theorist spreading apocalyptic narratives on social
media), or because the securitizing speech act occurs in relative isolation and fails to reach its
audience
(e.g., a book or article that nobody reads). There may be instances of securitizing speech acts that
generate discussion and concern within a particular epistemic community, but that fail to emerge
on the international agenda—such as expert concerns about “designer viruses” from synthetic
biology (Rees 2003; Posner 2004), or “grey goo” from nanotechnology (Drexler 1986; 2006). The
question of distinguishing cases of macrosecuritization (failure) from instances of issue non-
emergence is an empirical matter that ultimately requires interpretation and judgement.
Figure 3.5: A Case of Macrosecuritization
When the phenomenon of macrosecuritization is conceptualized in this way, it is possible
to identify multiple historical cases of macrosecuritization. In surveying the historical record from
the middle of the twentieth century until today, this project identifies a total of 10 cases of
macrosecuritization (see Table 3.2). As with all historical events, the timelines of cases of
macrosecuritization are inherently fuzzy and require interpretation of what salient events and
critical junctures constitute the temporal boundaries for the case. This universe of cases provides
an empirical basis to describe the rhetorical elements and political relations that constitute cases
of macrosecuritization in the practice of international relations, and to explain variation in the
outcomes of security constellations.
Table 3.2: Cases of Macrosecuritization

Issue                                     Start Year    End Year
International control of atomic energy       1942         1946
Proliferation of nuclear weapons             1961         1967
Biological weapons                           1968         1972
Ozone hole                                   1985         1987
Nuclear winter                               1982         1991
Global warming                               1979         1992
Prohibition of nuclear weapons               2007         2017
Artificial intelligence                      2014         Present
Climate emergency                            2017         Present
Biodiversity loss                            2018         Present
The outcomes of macrosecuritization can be evaluated both in terms of discourse and action
(the “dependent variables”). The first measure asks: to what extent does the audience of a
macrosecuritization move accept and adopt a macrosecuritization discourse in framing the issue
as an existential threat to humanity? In practice, the cases of macrosecuritization can be arranged
along a “spectrum of discourse,” with either “weak,” “moderate,” or “strong acceptance” by the
audience of a macrosecuritization discourse (see Table 3.3). Audience acceptance can be arranged
along an ordinal scale between zero and five. On one side of the spectrum, the audience ignores or
minimally recognizes that an issue may pose an existential threat to humanity (“weak
acceptance”). In the middle of the spectrum, the audience may accept the legitimacy of the claim
that the issue constitutes an existential threat to humanity, but this frame is either contested or
overshadowed by other ways of framing the issue (“moderate acceptance”). On the other end of
the spectrum, the audience accepts that the survival of humanity is a serious or even the dominant
concern about the issue (“strong acceptance”). Strong acceptance of a macrosecuritization
discourse is evidence of successful macrosecuritization, while weak or moderate acceptance is
evidence of complete or partial macrosecuritization failure.
Table 3.3: The Spectrum of Discourse
The second measure asks: how exceptional are the actions taken by functional actors for
the security and survival of humanity? Macrosecuritization cases can be organized on a “spectrum
of action,” with functional actors taking either “weak,” “moderate,” or “strong action” to neutralize
an existential threat (see Table 3.4). Security action can also be categorized on an ordinal scale
between zero and five. On the one end, functional actors may dismiss the need for security
measures, or they may contemplate new security measures that fail to materialize in international
relations (“weak action”). In the middle of the spectrum, functional actors may introduce new
68
security measures, but which either fail to reduce or have limited effectiveness in reducing the
existential threat (“moderate action”). On the other side of the spectrum, functional actors may
take strong or even extraordinary measures to significantly reduce or neutralize the existential
threat to humanity. Strong action to reduce or eliminate an existential threat is evidence of
successful macrosecuritization, while weak or moderate action that does not diminish an existential
threat is evidence of complete or partial macrosecuritization failure.
Table 3.4: The Spectrum of Action
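The label bands applied to both ordinal scales can be expressed as a simple coding rule. The sketch below is hypothetical: the cut-points (1 = weak, 2–3 = moderate, 4–5 = strong) are inferred from the labels that later appear in Table 3.5, and the treatment of a score of 0 as "weak" is an assumption, since no case in the table receives that score.

```python
# Hypothetical coding rule for the 0-5 ordinal scales of discourse and
# action; cut-points inferred from the labels shown in Table 3.5.
def label(score: int) -> str:
    if not 0 <= score <= 5:
        raise ValueError("scores are ordinal, 0-5")
    if score <= 1:
        return "Weak"      # 0-1: ignored or minimally recognized
    if score <= 3:
        return "Moderate"  # 2-3: contested or partial
    return "Strong"        # 4-5: serious or dominant concern

print(label(1), label(3), label(5))  # Weak Moderate Strong
```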
When the cases of macrosecuritization are arranged along these two dimensions of discourse
and action, they reveal significant variation in the outcomes of macrosecuritization (see Figure
3.6 and Table 3.5). In quantitative terms, the mean values for both audience acceptance and
security action are 2.8 (i.e., “moderate acceptance”) and 2.9 (“moderate action”), respectively.
These mean values imply two things. First, macrosecuritization moves generally lead only to
moderate levels of audience acceptance and security action: that is, partial macrosecuritization.
Second, there appears to be a positive relationship between audience acceptance and security
action, whereby higher levels of audience acceptance are correlated with higher-levels of security
action. This relationship is most clearly demonstrated by the cases of nuclear proliferation (4, 4),
nuclear prohibition (3, 3), global warming (2, 2), and artificial intelligence (1, 1), where the
discourse and action scores are identical. Since governments are generally both the
relevant audience of macrosecuritization discourses and the functional actors of security action, it
stands to reason that higher levels of audience acceptance would yield stronger levels of security
action.
Figure 3.6: Macrosecuritization Outcomes: Discourse and Action
Table 3.5: Outcomes of Macrosecuritization

Issue                      Discourse (X)     Action (Y)
Atomic energy              4 (Strong)        1 (Weak)
Nuclear proliferation      4 (Strong)        4 (Strong)
Biological weapons         3 (Moderate)      4 (Strong)
Global warming             2 (Moderate)      2 (Moderate)
Nuclear winter             3 (Moderate)      4 (Strong)
Ozone layer                1 (Weak)          5 (Strong)
Nuclear prohibition        3 (Moderate)      3 (Moderate)
Artificial intelligence    1 (Weak)          1 (Weak)
Climate crisis             4 (Strong)        3 (Moderate)
Biodiversity loss          3 (Moderate)      2 (Moderate)
Average                    2.8 (Moderate)    2.9 (Moderate)
However, these conclusions are skewed by an outlier case: the “Ozone hole.” The Ozone
case is an outlier not only because it is the only case where functional actors took extraordinary
measures to neutralize a threat, but also because the audience only weakly identified the Ozone
hole as an existential threat to humanity. In other words, it is highly unlikely in this case that
audience acceptance of a macrosecuritization discourse explains the extraordinary measures taken
to address the issue. Without the outlier case of the Ozone hole, the mean value for audience
acceptance becomes 3.0 (“moderate acceptance”) and security action becomes 2.7 (“moderate
action”). While there is still a generally strong and positive correlation between audience
acceptance and security action, it is a weaker correlation and one that suggests a tendency for
discourse to exceed action. More generally, this universe of cases shows a broad pattern of
macrosecuritization failure. Indeed, only one case—nuclear proliferation—achieves the threshold
of both strong audience acceptance and strong security action, making it a relatively clear case of
successful macrosecuritization. No case reflects the ideal type of macrosecuritization, whereby the
survival of humanity becomes the paramount concern on the issue and catalyzes extraordinary
action by states to neutralize the existential threat to humanity, as imagined by scenarios such as a
“world without nuclear weapons” or a “carbon net zero” global economy.
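The summary statistics reported in this section follow directly from Table 3.5 and can be checked with a short script. This is a minimal sketch: the scores are transcribed from the table, and rounding to one decimal place, as in the text, is assumed.

```python
from statistics import mean

# (discourse, action) scores for each case, transcribed from Table 3.5.
scores = {
    "Atomic energy": (4, 1),
    "Nuclear proliferation": (4, 4),
    "Biological weapons": (3, 4),
    "Global warming": (2, 2),
    "Nuclear winter": (3, 4),
    "Ozone layer": (1, 5),
    "Nuclear prohibition": (3, 3),
    "Artificial intelligence": (1, 1),
    "Climate crisis": (4, 3),
    "Biodiversity loss": (3, 2),
}

def means(cases):
    """Mean discourse and action scores, rounded to one decimal place."""
    xs = [x for x, _ in cases.values()]
    ys = [y for _, y in cases.values()]
    return round(mean(xs), 1), round(mean(ys), 1)

# All ten cases: (2.8, 2.9), i.e. moderate acceptance and moderate action.
print(means(scores))

# Dropping the outlier ozone case yields (3.0, 2.7): discourse now
# slightly exceeds action, as noted above.
without_outlier = {k: v for k, v in scores.items() if k != "Ozone layer"}
print(means(without_outlier))
```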
3.3.2 The Recurrent Failure of Macrosecuritization
Macrosecuritization moves have repeatedly failed to catalyze extraordinary measures for the
survival of humankind. Rather, when securitizing actors frame an issue as an existential threat to
humanity, their exhortations are sometimes contested or dismissed by their audience, and other
times lead functional actors to take action that is insufficient to neutralize the existential threat.
Given that the survival of humanity may be at stake, one might reasonably expect
macrosecuritization to be the norm. Yet the empirical record of macrosecuritization reveals a
pattern of recurrent failure.
Why the macrosecuritization of humanity repeatedly fails calls for investigation. The
securitization literature lacks a theoretical explanation of macrosecuritization failure. The research
agenda on macrosecuritization is in its early stages with only a few empirical studies, such as Juha
Vuori’s (2010) analysis of the Atomic Scientists’ longstanding use of the “Doomsday Clock” as a
visual symbol or image of macrosecuritization and de-securitization; Renata Dalaqua’s (2013, 9)
study of how the transnational “anti-nuclear movement” in the post-Cold War era framed a Nuclear
Weapons Convention as the “only way to protect humankind from the threat posed by the existence
of nuclear weapons”; and Catherine Yuk-ping Lo and Nicholas Thomas’s (2018; 2020) examination
of the macrosecuritization of the threat of antimicrobial resistance to public health in China and
East Asia. While these studies offer some illustrative case studies of the nature and dynamics of
macrosecuritization in international relations, their limited empirical scope means that they do not
provide anything close to a theoretical explanation for the broader pattern of macrosecuritization
(failure) of the existential threats to humanity.
In Security: A New Framework for Analysis, Buzan, Wæver and de Wilde suggest that the
size or scale of the referent object may be a “crucial variable in determining what constitutes a
successful referent object of security.” They point out that the “microsecuritizations” of
individuals or small groups “seldom establish a wider security legitimacy,” and that
macrosecuritizations to “construct all of humankind as a security referent” have been attempted,
but have “rarely been able to compete” with the securitization of “middle-level” referent objects,
such as nations and states.
In practice, the middle-scale limited collectivities have proved the most amenable
to securitisation as durable referent objects. One explanation for this success is that
such limited collectivities (states, nations, and as anticipated by Huntington,
civilisations) engage in self-reinforcing rivalries with other limited collectivities
and that such interaction strengthens their “we” feeling… If rivalry is a facilitating
condition for successful securitisation, then middle-level collectivities will always
have an advantage in this respect over the system-level. Somehow the system level
candidates are still too subtle and too indirect to trigger the levels of mass identity
necessary for securitisation. Lacking the dynamic underpinning of rivalry, their
attempt at universalist political allegiance confronts the middle level collectivities
and loses… Security is an area of competing actors, but it is a biased one in which
the state is still generally privileged as the actor historically endowed with security
tasks and most adequately structured for the purpose (Buzan et al. 1998, 36–37).
While the authors articulate the core deductive logic for a theoretical explanation of
macrosecuritization failure, they do not fully develop it into a theoretical framework of
macrosecuritization, nor do they provide an empirical assessment of this hypothesis.
Similarly, Buzan and Wæver’s (2009, 259) later discussion of the concept provides some
useful insights into the nature and dynamics of macrosecuritization. The success or failure of macrosecuritization
may depend on many factors, including the identity and power of the securitizing actors, the
persuasiveness of the macrosecuritization discourse, and the receptiveness of the audience.
In thinking about the sources of macrosecuritisations and constellations, one has
also to ask who generates them, how, and for what purpose? Macrosecuritisations,
like all securitisations, require securitising actors, appropriate speech acts, and
responsive audiences. In addition they require some expansive dynamic capable of
subsuming other securitisations. Only if they acquire a supportive audience on an
appropriate scale and a possibility to operate as the interpretive framework for other
securitisations do they have the possibility of becoming more than niche
securitisations, however macro their logic and however ambitious the aspiration of
their promoters (Buzan and Wæver 2009, 265).
However, since Buzan and Wæver were concerned with different types of macrosecuritization—
i.e., inclusive universalisms, exclusive universalisms, existing order universalisms, and physical
threat universalisms—they do not explore what factors and dynamics help explain variation in
outcomes of the securitization of humanity. Indeed, Buzan and Wæver’s discussion makes this
type of macrosecuritization failure all the more puzzling, since one might reasonably expect there
to be an “expansive dynamic” and a “supportive audience” on a global scale when humanity’s
survival is at stake. Overall, the securitization literature does not provide a satisfactory solution to
the empirical puzzle of the recurrent failure in international relations of the securitization of
humanity.
3.4 What Does (Not) Explain Macrosecuritization?
There is a scholarly debate over the meta-theory of securitization, or “what kind of theory—if
any—is securitization?” (Balzacq et al. 2014). While the original formulation of securitization
theory emphasized the constitutive question of “what is security?” (Buzan et al. 1998, 21–23),
there is also interest in explanatory theory, which asks the causal question: “what explains when
securitization is [un]successful?” (Buzan et al. 1998, 32). Securitization theory, implicitly or
explicitly (Guzzini 2011), postulates a causal relationship between the discourse, action, and
consequences of securitization. In its original formulation, the interest in an explanatory theory of
securitization is evident in the conceptual distinction between “securitization” and “securitizing
moves” (Buzan et al. 1998, 25), and the question of what factors and conditions can “trigger” and
“facilitate” securitization (Buzan et al. 1998, 32–35). It is also evident in the formulations of
securitization theory that see securitization as a process (Balzacq ed. 2011, 1–3), and emphasize
the “context” of securitization and the “power relations” between actors and audiences (Balzacq
et al. 2016). In empirical research, the interest in causal explanation is apparent in the appreciation
for cases of both successful and failed securitization (Salter 2011; Ruzicka 2019).
An explanatory theory of macrosecuritization should answer the causal question: why does
the macrosecuritization of humanity succeed or fail? This question is motivated by the desire to
shed light on the puzzle of why the macrosecuritization efforts to frame a particular issue as an
existential threat to humanity sometimes succeed, but more frequently fail to catalyze
extraordinary action for the survival of humankind. To do so, a theory of macrosecuritization
should distill a picture that is simple and intelligible from a world that is otherwise complex and
messy (Waltz 1979; Bernstein et al. 2000; Wæver 2009; Gunitsky 2019). It should identify which
factors and forces are most important, how they relate to one another, and how it all hangs together.
As Kenneth Waltz (1979, 8) wrote: “A theory is a picture, mentally formed, of a bounded realm or
domain of activity. A theory is a depiction of the organization of a domain and of the connections
among its parts. The infinite materials of any realm can be organized in endlessly different ways.
A theory indicates that some factors are more important than others and specifies the relations
among them.”
Therefore, a theory of macrosecuritization should describe the “structure” of
macrosecuritization, which constitutes and organizes the relations between the actors and
audiences in international relations, and shapes and constrains their power and interests to affect
the process and outcomes of macrosecuritization. It should tell us something about the rhetorical
content that makes some macrosecuritization discourses compelling and others unpersuasive to
their audiences, about which types of actors are most influential in determining the outcomes of
macrosecuritization, and about what sorts of concerns animate their decisions. It should be sufficiently
generalizable to account for recurrent patterns that exist across cases of macrosecuritization,
including the occasional success and—especially—the repeated failures, while appreciating the
idiosyncrasies of history that make each case unique, or the limitations involved in making
comparisons across issue areas. Ultimately, a theory of macrosecuritization should provide a clear
and coherent statement of its core explanatory logic—i.e., a causal story (Sears 2017, 26)—for the
successes and failures of the macrosecuritization of humanity. Only by doing so can a theory of
macrosecuritization contribute to our knowledge of who can “speak” and “do security” for
humankind, on what issues, under what conditions, and with what consequences in international
relations.
One useful exercise for developing a theory of macrosecuritization is to begin by deducing
hypotheses and variables from existing theories, evaluating these hypotheses and variables against
the empirical record, and discarding those that fail to account for the general patterns of
macrosecuritization. This method takes its cue from the scientific principle of “falsification”
(Popper 1959), the logic of causal inference (King et al. 1994), and simple empirical tests of
theoretical hypotheses (Mearsheimer and Walt 2013). While there are several theoretical
hypotheses that provide plausible explanations for the success and failure of macrosecuritization,
the following analysis shows that most of them do not stand up against empirical testing as a
theoretical explanation for macrosecuritization (failure) (see Appendix 6 and Appendix 7).
3.4.1 The Nature of the Threat
The first type of explanation is that the nature of an issue can explain macrosecuritization. The
first hypothesis, derived from securitization theory, is that the sector of an issue may explain the
outcomes of macrosecuritization. In Security: A New Framework for Analysis, Buzan et al. (1998)
see the logic of security as applicable, in principle, to a wide range of issues—or sectors—but that
in some sectors—e.g., the military sector—securitization may become “institutionalized,” while
in other sectors—e.g., the environmental sector—the legitimacy of a security frame may be
contested (Deudney 1991; Von Lucke et al. 2014; Paglia 2018).
Securitization can be either ad hoc or institutionalized. If a given type of threat is
persistent or recurrent, it is no surprise to find that the response and sense of urgency
become institutionalized… The need for drama in establishing securitization falls
away, because it is implicitly assumed that… this issue has to take precedence,
that it is a security issue (Buzan et al. 1998, 27–28).
The sector of an issue can be conceptualized as an independent variable to test the hypothesis that
the institutionalized securitization of certain sectors—particularly the military sector—makes
some issues more amenable than others to macrosecuritization.
The evidence does not support this hypothesis. The universe of cases contains four cases
from the military sector, four cases from the environmental sector, one hybrid case from the
military and environmental sectors (nuclear winter), and one case from the science and technology
sector (artificial intelligence). The cases show substantial variation in the dependent variables of
audience acceptance and security action in both the military and environmental sectors. Moreover,
neither the military nor the environmental sector shows a general tendency towards successful
macrosecuritization: the average values of audience acceptance and security action are 3.4 and 3.0
in the military sector, and 2.6 and 3.2 in the environmental sector (i.e., “moderate” outcomes).
Important cases of failure in the military sector—like the international control over atomic energy
in the 1940s—and success in the environmental sector—like the Ozone hole in the 1980s—
demonstrate that the sector of an issue is an insufficient explanation for macrosecuritization
outcomes.
The second hypothesis is that the direct/indirect nature of harm can explain
macrosecuritization. A growing number of IR scholars—informed by the concepts of “risk”
(Knight 1921; Beck 2006; 2008) and/or “governmentality”—draw a distinction between the
conventional understanding of “security” from “threats,” which emphasizes direct or intentional
harm like violence and war (Walt 1991), and an alternative conception of “safety” from “risks,”
which emphasizes indirect or unintentional harm from uncontrollable circumstances (Aradau and
Van Munster 2007; Corry 2012; Methmann and Rothe 2012; Jore 2019). In these risk-based
approaches, an issue is framed as a “risk” not to justify exceptional measures outside the bounds
of normal politics (i.e., “securitization”), but rather to legitimize novel practices or technologies
of governance that become the “new normal” of everyday politics (Aradau and Van Munster 2007;
Dillon 2007; Corry 2012; Methmann and Rothe 2012). They lead to the hypothesis that issues that
involve indirect or unintentional—but still existential—harm are unlikely to lead to
macrosecuritization (i.e., extraordinary measures), but rather to macro-riskification (i.e., new
forms of “risk-management” or “governmentality”) (Corry 2012; Methmann and Rothe 2012).
The empirical record does not lend strong support to the riskification hypothesis. There
are four cases that involve direct forms of harm (“threats”), two of which end in
macrosecuritization (nuclear proliferation and biological weapons), and two of which end in
macrosecuritization failure (international control over atomic energy and the prohibition of nuclear
weapons). There are also six cases that involve indirect types of danger (“risks”), with two leading
to macrosecuritization (the Ozone layer and nuclear winter), and the other four ending in
macrosecuritization failure (global warming, artificial intelligence, the climate emergency, and
biodiversity loss). Arguably, these four failed cases do not reflect the expectation of riskification, since
they generally failed to legitimize new large-scale practices of everyday “risk management” or
“governmentality”—with the possible exception of the climate emergency (Corry 2012). Whether
an issue poses the danger of “direct” or “indirect” harm is therefore a poor explanation for the
outcomes of macrosecuritization.
The third hypothesis is that uncertainty surrounding an issue can explain
macrosecuritization. Many IR theories emphasize the importance of uncertainty in shaping state
behavior and international outcomes (see Rathbun 2007). Whether understood in the
epistemological sense of limited knowledge or the ontological sense of inherent indeterminacy
(Seybert and Katzenstein 2017; Adler 2020), human uncertainty over how to understand an issue
can be expected to shape human responses. For securitization theory, this leads to the hypothesis
that audience (un)certainty about whether an issue constitutes an existential threat may influence
the prospects of macrosecuritization (Berling 2011; Von Lucke et al. 2014; Allan 2017; Paglia 2018).
Uncertainty can be operationalized in terms of the degree of scientific or epistemic consensus
about the nature of an issue: a strong consensus amongst scientists—and the existence of scientific
“facts”—that an issue constitutes an existential threat is conducive to macrosecuritization, while
contestation and debate between scientists—about the plausibility, timing, and/or consequences—
undermine macrosecuritization.
Again, the cases of macrosecuritization do not reveal a strong relationship between the
degree of scientific (un)certainty and the outcomes of macrosecuritization. Amongst the 10 cases
of macrosecuritization, there are six cases in which there is a general consensus that the issue could
pose an existential threat to humanity, and four cases in which there exists substantial scientific
contestation and debate. Amongst the cases of scientific consensus, two culminate in
macrosecuritization (nuclear proliferation and nuclear winter), while amongst the cases of
scientific contestation, two end in macrosecuritization (biological weapons and the Ozone hole), a
result at odds with the hypothesized relationship. Interestingly, the presence of scientific
uncertainty about an issue is sometimes invoked to justify macrosecuritization based on the logic
of the “precautionary principle” (e.g., biological weapons and the Ozone hole), while in other cases
scientific uncertainty and debate generate skepticism or doubt that undermines macrosecuritization
(e.g., global warming and artificial intelligence). Thus, while scientific (un)certainty about an issue
may provide insights into the idiosyncrasies of specific cases, it does not provide a generalizable
explanation for either the success or failure of macrosecuritization.
The fourth hypothesis is that the temporality of an issue can explain macrosecuritization.
Although there is growing interest in time in International Relations (Hutchings 2008; McIntosh
2015; Hom 2018; Drezner 2021), it is the study of existential risk that informs the hypothesis that
human perceptions of time—or, more specifically, temporal distance—shape our collective
responses to threats (see Bostrom 2013; Liu et al. 2018; Baum et al. 2019; and Ord 2020). It is also
based on the concept of “discounting” in microeconomics, which implies a decrease in present
value (or concern) for future goods (or catastrophes) (Weitzman 2009). Scholars of existential risk
typically criticize human political, economic, and security institutions for their “short-termism”
and see a greater emphasis on “long-termism” as a solution to existential risks (Fisher 2019; Ord
2020). This leads to the hypothesis that the perceived time horizon of a threat can explain
macrosecuritization: namely, the greater the temporal distance of a threat, the less likely
macrosecuritization is to occur. The independent variable of time horizons is operationalized in
terms of “existing” (zero years), “short-term” (one to 20 years), “medium-term” (20 to 50 years),
and “long-term” (greater than 50 years).
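The four-category operationalization above can be expressed as a minimal coding rule. This is an illustrative sketch only, not drawn from the dissertation’s materials; the function name and the handling of boundary values (a threat perceived at exactly 20 years is coded “short-term”) are my own assumptions.

```python
# Illustrative sketch: coding the perceived time horizon of a threat into the
# four ordinal categories defined in the text. Function name and boundary
# handling are assumptions, not the author's own coding scheme.

def classify_time_horizon(years_until_threat: float) -> str:
    """Map a perceived temporal distance (in years) onto the four bins."""
    if years_until_threat <= 0:
        return "existing"      # zero years: the threat is already present
    elif years_until_threat <= 20:
        return "short-term"    # one to 20 years
    elif years_until_threat <= 50:
        return "medium-term"   # 20 to 50 years
    else:
        return "long-term"     # greater than 50 years

# e.g., a threat perceived as roughly 40 years away is coded "medium-term"
print(classify_time_horizon(40))  # medium-term
```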
The empirical record shows that time horizons do not provide a good explanation for
macrosecuritization. Although there may be a relationship between the underwhelming nature of
security action on climate change and the long-term nature of the threat—a “boring apocalypse”
(Liu et al. 2018)—the empirical record clearly shows that the macrosecuritization of existing and
short-term threats can fail (e.g., the international control of atomic energy and the Treaty on the
Prohibition of Nuclear Weapons), while long-term threats can succeed (e.g., the Ozone hole).
Although human beings and institutions may indeed discount concerns about the wellbeing and
survival of “future generations,” time horizons alone do not explain the failure of humanity’s
responses to existential threats.
3.4.2 Domestic Politics
The second type of explanation is that domestic politics can explain macrosecuritization. The idea
that domestic politics may drive macrosecuritization outcomes derives its inspiration from theories
of “Innenpolitik” (Zakaria 1992), or the “second image” (Waltz 1959) in International Relations
(Allison 1971; Snyder 1991; Ripsman et al. 2016). It also finds empirical support in the
securitization literature from Lo and Thomas (2020), who emphasize the role of
bureaucratic competition for scarce budgetary resources in the macrosecuritization of
antimicrobial resistance in China. Arguably, the central premise of “Liberal” IR theory is that
“state-society relations…have a fundamental impact on state behavior in world politics”
(Moravcsik 1997, 513). The potential impacts of domestic politics on macrosecuritization are
examined in two variables: regime type and political party. The regime type variable classifies the
political regime—either “democratic” or “authoritarian”—of the existing great/major powers in
the international system. Drawing on democratic peace theory (see Doyle 1983; 1986), this
variable yields the fifth hypothesis that democracies are more likely to support macrosecuritization because of the
political pressure that public opinion can exert on democratic state audiences to take an issue
seriously through mechanisms of electoral accountability. The political party variable classifies
the party that holds power in the United States—either “Democrat” or “Republican.” It yields the
sixth hypothesis that a Democrat-controlled White House is more likely to favor
macrosecuritization because of the Democratic Party’s stronger preferences for “liberal
internationalism” (Deudney and Ikenberry 2021).
The empirical record does not support the hypotheses that regime type or political party
provides a generalizable explanation for macrosecuritization outcomes—although this does not
imply that domestic politics is unimportant for explaining state behavior in specific cases. Regime
type is a poor predictor of the behavior of the great/major powers, since the leading democracies
(e.g., the United States and the European Union) and authoritarian states (e.g., the Soviet
Union/Russian Federation and the People’s Republic of China) have come out as both champions
and opponents of macrosecuritization. Similarly, the political party in power in Washington says
little about the directions of American foreign policy or the prospects of macrosecuritization, as
both Democratic and Republican leaderships have at times supported and at other times resisted
macrosecuritization. While domestic politics may be important to idiosyncratic explanations of
specific cases of macrosecuritization—such as the resistance of the Republican Party to the
macrosecuritization of climate change (Jamieson 2014; Rich 2019)—neither regime type nor
political parties provide a generalizable explanation for the success and failure of
macrosecuritization.
3.4.3 International System
The third type of explanation is that the structure of the international system can explain
macrosecuritization. In neorealist theory, the central structural variable is the number of great
powers—or “poles”—in international politics (Waltz 1979). The “polarity” of the international
system is determined by measuring the distribution of capabilities amongst states, selecting which
states possess the relative national capabilities to be considered great powers, and counting the
number of great powers in the international system: a system of one great power is “unipolar,” a
system of two great powers is “bipolar,” and a system of three or more great powers is “multipolar”
(Waltz 1979; Thompson 1986; Wohlforth 1999; Mearsheimer 2001). The potential impacts of
structure can be explored by classifying the polarity of the international system for each case of
macrosecuritization. However, there are plausible reasons to suspect, a priori, that a system
of either more or fewer great powers could be conducive to macrosecuritization: a system of more great
powers could mean that there are multiple great powers to pressure each other into taking certain
security actions for humanity, while a system of fewer great powers could imply that there are
weaker pressures on great powers to emphasize relative over absolute gains. Kenneth Waltz was
inconsistent in his views on polarity in international relations. On the one hand, he argued that
“smaller is better” (Waltz 1979, 134–136), believed in the stability of bipolar systems (Waltz
1967), and held the balance-of-power to be a source of stability in international politics (Waltz
1979; 1988). On the other hand, Waltz believed that unipolar systems are particularly unstable
(Waltz 1993; 2000) and argued that “more may be better” when it comes to nuclear proliferation
and international security (Waltz 1981).
Which hypothesis—more or fewer great powers—is consistent with neorealist theory? The
seventh hypothesis is that bipolar systems are most conducive to macrosecuritization. Despite
Waltz’s assertion that the “smallest number possible…is best of all” (Waltz 1979, 136), which
would suggest that unipolar systems are preferable, Waltz (1993; 2000) and other neorealists have
generally seen unipolarity as particularly dangerous and unstable because of the impossibility of a
balance-of-power (Layne 1993; Jervis 2009; Monteiro 2011/12; for an exception see Wohlforth
1999). The theoretical logic here is that bipolar systems should be most conducive to “great power
management” of macrosecuritization (Waltz 1979, 194; also Bull 1977; Cui and Buzan 2016),
since each great power should possess some degree of power and capabilities to influence the
other, while the relatively limited number of great powers should also mitigate the problems of
coordination and collective action between multiple great powers. In essence, bipolarity represents
the “Goldilocks” number of not too many, not too few great powers in macrosecuritization.
The empirical record shows mixed results for a theoretical explanation of
macrosecuritization that emphasizes the polarity of the international system. The historical record
of macrosecuritization contains four cases of unipolarity, six cases of bipolarity, and zero cases of
multipolarity. The first conclusion is that unipolarity is seemingly not conducive to
macrosecuritization: the United States has resisted every effort towards macrosecuritization during
the “unipolar moment” (Krauthammer 1990/91). This seems to support the structural perspective
that unipolar systems are particularly dangerous and unstable in international relations (Layne
1993; Waltz 1993; 2000), since few constraints exist on the sole great power and international
outcomes depend significantly on its foreign policy (Jervis 2009; Monteiro 2011/12). The second
is that bipolarity is comparatively favorable to macrosecuritization, with four cases ending in
“strong” security action and two—atomic energy and global warming—ending in failure (i.e.,
“weak” and “moderate” action, respectively). This lends some empirical support to the hypothesis
that bipolar systems are relatively conducive to macrosecuritization (Waltz 1979, 204; see also
Bull 1977; Cui and Buzan 2016). The third is that there is no empirical evidence to either support
or deny that multipolar systems are conducive to macrosecuritization; for there simply has never
been a multipolar system during the age of existential threats (Sears 2021a).
There is another, more nuanced hypothesis about polarity that requires some additional
discussion: structural transitions. Although the cases have been classified as either unipolar or bipolar
systems, there are actually six cases in which the process of macrosecuritization occurred during
a period of structural transition—i.e., a shift in the polarity of the international system. There is a
rich tradition on power transition theory (Organski 1958; Modelski 1978; Gilpin 1981; DiCicco
and Levy 1999), which generally emphasizes how the decline in the power of the preeminent state
in the international system—a “hegemon”—relative to another state—a “rising power”—can lead
to instability and war (Gilpin 1981; 1988), and/or the weak provision of international public goods
(Keohane 1984; Gilpin 1987; Ikenberry and Nexon 2019). However, this literature has largely
ignored the implications of different types of structural transitions in the polarity of the
international system, whether an increase or decrease in the number of great powers (Sears 2018).
The eighth hypothesis is that historical periods characterized by the rise and fall of great powers—
or structural transitions—are not conducive to macrosecuritization, since the great and major
powers are more likely to emphasize national interests in the increase/decline of their relative
power and capabilities than concerns about the security and survival of humankind. Structural
transitions are operationalized as a change in the polarity of the system within the temporal
boundaries of a case (i.e., a change occurs between the start date and end date), while a “stable”
system is one in which the polarity of the system remains the same.
The historical record of macrosecuritization lends empirical support to the hypothesis that
structural transitions are not conducive to macrosecuritization, with five out of six cases of
macrosecuritization during a structural transition ending in failure. Furthermore, the
disaggregation of structural transitions leaves only three cases of “stable” bipolar systems (nuclear
proliferation, biological weapons, and the Ozone layer), all of which end in macrosecuritization.
This lends empirical support to the hypothesis that (stable) bipolar systems are conducive to
macrosecuritization, while periods of structural transition are not. Yet there is one case that poses
a strong empirical challenge to this hypothesis about structural transitions: nuclear winter. The fact
that the United States and the Soviet Union were able to agree to dramatic reductions in their
nuclear arsenals with the START I and II treaties at a time when the United States was investing
heavily in its “Strategic Defense Initiative” (“Star Wars”) and the Soviet Union faced relative
decline and eventual collapse cautions against drawing strong, generalizable conclusions. This
example of cooperation between the United States and the Soviet Union at the end of the Cold War
shows that structural transitions in international relations may make macrosecuritization more
difficult, but they do not doom it to failure.
3.4.4 Great Power Politics
The fourth type of explanation is that the dynamics of great power politics can explain
macrosecuritization (Goddard and Nexon 2016). One of the important great power dynamics in
international relations is “hegemony” (Ikenberry and Nexon 2019), especially the idea that
hegemonic international orders are characterized by the existence of a single hegemonic state that
possesses preponderant power and capabilities to shape the values, norms, and institutions of
international relations (Ikenberry and Nexon 2019; Quinn and Kitchen 2019). The hegemonic state
has an interest in preserving the status quo and therefore takes an active role in setting the
international agenda and providing international public goods—including the stability of the global
economy and the maintenance of international peace and security—to uphold the hegemonic
international order (Gilpin 1981; Keohane 1984). The hegemonic state also seeks to suppress
challenges to the status quo, such as the relative growth in the capabilities of major powers (Gilpin
1988; Wohlforth 1999), or resistance to its preferred values and norms from “recalcitrant” states
(Monteiro 2011/12). There is no guarantee that the hegemonic state will support or oppose
macrosecuritization, which could either reinforce or undermine the hegemonic international order,
but the preponderant power and capabilities of the hegemonic state mean that its position
determines the fate of an issue. Therefore, the ninth hypothesis is that the existence of a hegemonic
“patron” or “spoiler” can explain the success and failure of macrosecuritization.
The historical record provides substantial empirical support for the hypothesis that a
hegemonic patron or spoiler shapes the outcomes of macrosecuritization. Despite there being two
great powers—or “superpowers”—during the Cold War, the United States was the most powerful
state in the international system and can therefore be classified as a hegemonic state (see Modelski
1978; Gilpin 1981; Thompson 1986; Kennedy 1987; Wohlforth 1999; Gaddis 2005). During and
after the Cold War, the United States has exercised an incomparable degree of influence over the
outcomes of macrosecuritization: when the United States has acted as a hegemonic patron, the
result has typically been successful macrosecuritization; but when the United States has acted as a
hegemonic spoiler, macrosecuritization has invariably ended in failure. American foreign policy is
therefore a strong—but not perfect—predictor of the prospects of macrosecuritization. There is
one exception to the pattern of American hegemony: the international control over atomic energy.
Despite the United States assuming the position of a hegemonic patron in the aftermath of the
Second World War, the American plan for the international control over atomic energy ended in
failure. The United States as a hegemonic power has been preeminent but not omnipotent in
deciding the fate of macrosecuritization.
The other important dynamic is great power rivalries in international relations, especially
how competition and cooperation amongst the great powers affect the prospects of
macrosecuritization. Arguably, the core premise of “realist” theories of international relations is
that the struggle for power and security between states—particularly the great powers—is the
central dynamic of international politics (Morgenthau 1948; Waltz 1979; Gilpin 1981;
Mearsheimer 2001). The concern for national power and security shapes not only how the great
powers perceive an issue, but also whether competition or cooperation will dominate their behavior
and interactions. On the one hand, when the great powers perceive an issue as central to their
national power/security interests, then they are likely to focus primarily on concerns about relative
gains/losses in their national capabilities (Grieco 1988; Mearsheimer 1994/95), and great power
rivalries will prevent macrosecuritization. On the other hand, when the great powers perceive an
issue as peripheral to their national power/security interests, then they can make absolute or
“shared” interests their main concern, and great power cooperation and “management” can enable
macrosecuritization (Bull 1977; Cui and Buzan 2016). Thus, the tenth—and final—hypothesis is
that great power consensus can explain the success and failure of macrosecuritization.
The historical record appears to confirm the hypothesis that great power consensus explains
the outcome of macrosecuritization. In all four cases of successful macrosecuritization, both the
United States and the Soviet Union/Russian Federation came out in support of macrosecuritization,
while in all six cases of macrosecuritization failure, at least one of the United States, the Soviet
Union/Russian Federation, or the People's Republic of China has stood in opposition to
macrosecuritization. The existence or absence of great power consensus is thus perfectly correlated
with the success and failure of macrosecuritization (see Table 3.8). No other explanatory variable
reflects the same law-like relationship with the outcomes of macrosecuritization. Therefore, the
theory of macrosecuritization developed below will seek to explain why great power consensus is
so important to the success and failure of macrosecuritization, and what conditions shape the
prospects for great power consensus on macrosecuritization.
3.5 Theory of Macrosecuritization Failure
The empirical record strongly suggests that great power consensus is a necessary condition for
macrosecuritization (see Appendix 6 and Appendix 7). When the great powers can come to a
shared understanding that an issue poses an existential threat to humanity and agree to take
extraordinary measures for survival, then great power consensus has led to macrosecuritization in
international relations. But when one or more of the great powers contests this understanding or
rejects the call for extraordinary measures, then the result has been macrosecuritization failure. In
short, great power consensus is an essential condition for macrosecuritization.
The reason is that the dynamics of macrosecuritization are deeply embedded in the political
structure of international relations. The distinction between the political structures behind "normal"
securitizations and those behind macrosecuritizations is important but under-appreciated in
securitization theory (Buzan et al. 1998; Buzan and Wæver 2009; Balzacq ed. 2010). Unlike the
“lower-level” processes of securitization, which occur within nations and are subject to the
hierarchic political authority of states, the “higher-level” macrosecuritizations are necessarily
international processes of securitization under anarchy. International anarchy means that there is
no world political authority that can “speak” and “do security” on behalf of humankind. Instead,
macrosecuritizations occur within a heterogeneous international system, one not only constituted
by “societal multiplicity” (Rosenberg 2016), but also a multiplicity of actors that are differentiated
by their character (e.g., states, firms, nongovernmental organizations), and functions (e.g.,
securitizing actors, audiences, and functional security actors) (Ruggie 1984; Buzan et al. 1993;
Buzan and Albert 2010). In a world characterized by the absence of a central political authority
and the existence of a multiplicity of actors, macrosecuritization is shaped primarily by the most
powerful actors in the international system: the great powers.
What is a great power? International Relations theory lacks a consensus understanding of
what it means to be a “great power,” which is not surprising since there continues to be scholarly
debate about the meaning of “power” in international relations (Hart 1976; Barnett & Duvall 2005;
Nye 2011; Guzzini 2013; Baldwin 2013). Alternative definitions of power can lead to differing
conceptions about what a great power is and who the great powers are (Quinn & Kitchen 2019).
For instance, a state may be considered a great power because it possesses superior national
material capabilities relative to other states (Waltz 1979; Gilpin 1981; Kennedy 1987; Wohlforth
1999), because it exercises greater influence over the behavior of other states or the outcomes of
international relations (Morgenthau 1945; Aron 1966), or because of its hegemony in shaping the
rules, norms, and institutions of the international order (Keohane 1984; Ikenberry 2001; 2014).
Still others understand a great power as a form of political status in international relations (Paul,
Larson, and Wohlforth eds. 2014), whereby the position and ranking of states within the “great
power club” is based on shared perceptions of status amongst nations (Zala 2017). The English
School, for example, understands great power status as based on the recognition of both their
“special rights” and “special responsibilities” in international relations (Bull 1977, 196; Clark
2009; Cui & Buzan 2016). Yet differences in understandings do not necessarily negate the
theoretical utility of the concept or the practical task of identifying the great powers. As Waltz
(1979, 131) wrote, the question of which states are great powers in an era is “an empirical one, and
common sense can answer it.”
For the purposes of this dissertation, the “great powers” refer to only the most powerful
states in the international system, based on (1) their national capabilities as a relative share of the
international distribution of capabilities, and (2) their degree of political influence over other states
in international politics. For example, the United States has been a great power since the end of
the Second World War, and so to call any other state a great power implies placing it in the
same rank as the United States. Arguably, only two other states have possessed a large enough
relative share of the international distribution of capabilities or exercised a comparable degree of
political influence over international politics to put them within the rank of the great powers
alongside the United States: the Soviet Union from the end of the Second World War until its
collapse, and the People’s Republic of China today. While there may be other states whose national
capabilities and political influence places them above other states, such as the United Kingdom
after the Second World War or the Russian Federation after the Cold War, these states are
considered “major powers” because their capabilities and influence do not reach a comparable
level to that of the great powers (see Figure 3.7).
Figure 3.7: The Great Powers in International Relations
The great powers hold a privileged position in the political structure of macrosecuritization
both for social and material reasons. First, the great powers enjoy unequal status in international
relations. Although states are in principle equal in their political authority (i.e., “sovereign
equality”), they are in practice unequal in political power. Only a select few states enjoy great
power status. The unequal status and prestige of the great powers comes with both “special rights”
and “special responsibilities” (Bull 1977), such as their privileged position and influence in
international institutions (e.g., the permanent membership and “veto” rights of China, France,
Russia, the United Kingdom, and the United States in the UN Security Council), and the
expectation that they bear greater responsibility for the “management” of global challenges and
crises (e.g., the “responsibility to protect” civilian populations around the world from serious
violations of international humanitarian or human rights law) (Cui and Buzan 2016; Bernstein
2020). Great power status introduces a degree of de facto hierarchy under de jure anarchy in
international relations, including on matters of the global economy and international peace and
security (Lake 1996; 2007; Ikenberry and Nexon 2019). The unequal status of the great powers
makes them the key audience for macrosecuritization, since their acceptance (or rejection) of a
macrosecuritizing move is critical to shaping the overall narrative of how an issue is understood
in international relations.
Second, the great powers possess a disproportionate share of the distribution of capabilities
in international relations—i.e., relative power. The great powers are typically ranked amongst the
most powerful states in international relations due to their quantitative and qualitative advantages
in national capabilities, including, inter alia, their military prowess, economic wealth, advanced
technology, abundant natural resources, vast territories, and large populations (Morgenthau 1948;
Waltz 1979; Wohlforth 1999; Mearsheimer 2001; Monteiro 2014; Beckley 2018). Some of the
national capabilities of the great powers—such as nuclear weapons and energy consumption—
have material implications for the existential threats to humanity, which means that the great
powers have an unequal capacity to affect the security and survival of humankind (Sears 2021b).
For example, there are nine known nuclear weapons states, but probably only two of them, the
United States and Russia, possess sufficiently large nuclear arsenals to threaten human survival
(Sears 2021), although China is actively increasing its nuclear arsenal to an extent that could pose
an existential threat in the future (Cooper 2021). Similarly, while all nations contribute to climate
change, two nations, China and the United States, account for a far greater share of greenhouse
gas emissions than the rest in terms of their current and historic emissions, respectively, although
the European Union, India, and Russia are also important contributors to global emissions.
In short, the great powers are the principal functional security actors in macrosecuritization
because their national capabilities make them indispensable for taking effective action to neutralize
existential threats. While other major and middle powers may also play an important political and
material role, their impact on the outcomes of macrosecuritization is second to that of the great
powers. Ultimately, the great powers are the essential actors in macrosecuritization because they
are both the main sources of existential threats and the principal actors needed to eliminate them
(Sears 2020b; 2021).
3.5.1 Conflicting Securitization Narratives
The great powers possess unequal power and capabilities to affect the outcomes of
macrosecuritization, which explains why great power consensus is critical but not why it occurs
or fails. The argument here is that macrosecuritization fails because of conflicting securitization
narratives that lead the great powers to prioritize their national interests over human survival. A
securitization narrative is a story told by a securitizing actor to a relevant audience, which frames
an issue in terms of security—i.e., as a matter of survival—and describes the nature of the referent
object, the existential threat, and the extraordinary measures for security. Typically, a
securitization narrative is repeated over and over by the securitizing actor with the purpose of
shaping how the audience—states—thinks about and acts towards the issue. But securitization
narratives do not exist in isolation; rather, multiple securitization narratives can coexist and
compete to influence the thinking and decisions of states on an issue. On the one hand,
securitization narratives of “humanity securitization” call on states—especially the great powers—
to cooperate in taking urgent action beyond the normal patterns of international relations for the
security of humankind. On the other hand, securitization narratives of “national securitization”
frame the same issue as a threat to the nation and call on the state to take unilateral action to protect
its vital national interests. These securitization narratives are always in tension to some degree,
since they are based on conflicting premises about the referent object of security: national
securitization calls on states to put the nation first, while humanity securitization urges states to
prioritize the security of humankind.
The contradictions between the security interests of nations and humanity manifest themselves
in the different measures deemed necessary for survival. Humanity securitization often requires states
to sacrifice some national interests and/or acquiesce to new international powers that erode
national sovereignty, while national securitization typically requires states to take steps that
exacerbate—or at least fail to ameliorate—the existential threats to humanity, such as the pursuit
of a technology that is inherently dangerous, but which could provide a strategic advantage relative
to other states. Since international anarchy implies that there is no world political authority to speak
and do security on behalf of humankind, macrosecuritization depends on states—particularly the
great powers—which are predisposed to think and act on the basis of their respective national
interests. Thus, it is the political structure of international relations and the contradictions between
national security and human survival that explains the recurrent failure of macrosecuritization—
or, put differently, the triumph of national securitization.
Despite this disjuncture between humanity and the nation-state, macrosecuritization is not
doomed to failure. Under the right conditions, the great powers may accept that an issue constitutes
an existential threat to humanity and take extraordinary measures for security—i.e., great power
consensus on macrosecuritization. The theoretical framework developed here is based on the
analytical insights from three case studies of macrosecuritization (failure): the international control
over atomic energy, the Biological Weapons Convention, and the AI revolution in international
relations. The framework takes an eclectic approach to the puzzle of macrosecuritization failure in
international relations (Sil and Katzenstein 2010; Lake 2013), which cuts across various theoretical
traditions—including realism, liberalism, and constructivism—and different levels-of-analysis
(Waltz 1959; Taliaferro et al. 2016). The framework postulates three core variables to explain the
relative influence of the competing security narratives of humanity securitization and national
securitization over the great powers.
(1) The international system: the structure and stability of the distribution of power and
capabilities between the great powers;
(2) The state: the power and interests of securitizing actors vis-à-vis state audiences within the
domestic security constellations of the great powers;
(3) The individual: the ideas and beliefs of the political leaders and policymakers of the great
powers.
These three variables are not always of equal importance and may vary in significance across
cases. When they favor a narrative of humanity securitization, then great power consensus is
possible and macrosecuritization should occur; but when they favor a narrative of national
securitization, then great power politics should lead to macrosecuritization failure.
The first system-level variable concerns the structural forces of the international system,
especially the stability of the distribution of power and capabilities between the great and
major powers. The basic premise is that this stability shapes the intensity of great power
rivalries, which in turn influences
the susceptibility of the great powers to conflicting narratives of humanity securitization and
national securitization. When there is a stable distribution of power, then this can ameliorate
concerns about relative gains/losses in national power and security and make the great powers
more amenable to a narrative of humanity securitization, which calls on them to take extraordinary
action for the survival of humanity. In contrast, when there is an unstable distribution of power,
then this can exacerbate concerns about relative power and security and make them more
susceptible to a narrative of national securitization, which demands that the great powers put their
national interests first. Thus, the stability of the distribution of power is the key system-level driver
that shapes the intensity of great power rivalries, which in turn influences the prospects for great
power consensus on macrosecuritization.
What is meant by a “stable” distribution of power? A stable distribution of power exists
when there is a rough equilibrium in the distribution of capabilities between two or more great
powers (Morgenthau 1948; Waltz 1979). Unipolarity is almost by definition unstable, since the
distribution of capabilities is heavily skewed towards a single great power (Layne 1993; Waltz
1993; 2000; Wohlforth 1999; Jervis 2009; Monteiro 2011/12; 2014). While unipolarity may in
certain circumstances be conducive to macrosecuritization—for instance, if a hegemonic great
power is willing to assume differential responsibility for neutralizing an existential threat as an
“international public good” (Gilpin 1981; Keohane 1985)—the preponderance of a single great
power means that there is little that other states can do to compel an intransigent great power to
take action to neutralize an existential threat, such as on nuclear disarmament or climate action.
Since a multipolar system has never existed in the age of existential threats (Sears 2021a), there is
little that can be said empirically about the prospects for great power consensus on
macrosecuritization under multipolarity. However, there are good a priori reasons to believe that multipolar systems
are not particularly conducive to macrosecuritization, since systems of three great powers are
prone to imbalances (Schweller 1998) and coordination challenges are likely to increase with
higher numbers of great powers (Waltz 1979; Axelrod and Keohane 1985), making great power
cooperation on macrosecuritization less likely. Put simply, more great powers means more
potential veto points on macrosecuritization.
Another requirement for a stable distribution of power is that changes in the nature and
distribution of capabilities are not destabilizing for the structure of the international system. The
distribution of power in international relations is dynamic, due to the structural forces of
differential or uneven growth: i.e., the changing nature and distribution of capabilities between
states (Morgenthau 1948; Gilpin 1981; Layne 1993). When the evolution of technology produces
revolutionary changes in the nature or sources of power, such as the “nuclear revolution” (Brodie
1945; Jervis 1989) or the “AI revolution” (Horowitz 2018; Scharre 2019), or when the shifting
distribution of capabilities brings about the “rise and fall of great powers” (Kennedy 1987), then
these changes can destabilize the distribution of power and bring about a structural transition—or
“power transition” (DiCicco and Levy 1999)—in the international system (Gilpin 1981). The
existence of an unstable distribution of power increases the intensity of great power rivalries,
which leads the great powers to prioritize the interests of relative gains/losses in national power
rather than cooperating on shared security interests, like mitigating existential threats to humanity.
Under the structural condition of an unstable distribution of power, the great powers are more
likely to treat existential threats as distributional issues—e.g., who is more/less threatened, or who
should assume more/less responsibility to neutralize the threat—rather than as absolute dangers,
which require them to take action for the survival of humankind no matter the consequences for
their national interests. In sum, a stable bipolar system is the most conducive to great power
consensus on macrosecuritization, since great power rivalries should be of lower intensity, making
them more amenable to security narratives of humanity securitization and creating opportunities
for great power cooperation and “management” of existential threats (Bull 1977; Waltz 1979, 204–
209; Cui and Buzan 2016).
The second state-level variable concerns the internal structures and political dynamics that
constitute the domestic security constellations of the great powers, especially the power and
interests of securitizing actors vis-à-vis state audiences. The basic premise is that powerful
securitizing actors within the state—especially within the national security and/or foreign policy
establishments—have an outsized influence over a state audience’s receptiveness to conflicting
securitization narratives (Snyder 1991). When powerful securitizing actors within the state adopt
a narrative of humanity securitization, then this should amplify the political influence of this
narrative over the state audience, shaping the discourse and action of the great power to be more
conducive to macrosecuritization. However, when powerful securitizing actors take on a narrative
of national securitization, then this should weaken the influence of humanity securitization over the
state audience, making the domestic political conditions of the great powers antithetical to
macrosecuritization. The key is to pay attention to the structure of the domestic security
constellations of the great powers, especially who are the principal securitizing actors, what
securitization narratives they adopt, and what their relationships are to state audiences.
What makes an actor “powerful” within the domestic security constellations of the great
powers? Powerful securitizing actors possess the authority and legitimacy to “speak” or “do
security” on an issue (Buzan et al. 1998). The authority and legitimacy of an actor within domestic
security constellations may derive from different sources, such as political authority (e.g., public
office), institutional authority (e.g., government departments/agencies), epistemic authority (e.g.,
scientific/technical expertise), sectoral authority (e.g., relevant industries or companies), or
charismatic authority (e.g., cultural popularity). In general, the relevant departments and agencies
within the state bureaucracy are the most powerful securitizing actors in domestic security
constellations, since they possess the institutionalized authority of the state to develop and
implement national policy (Allison 1971; Thomas and Yuk-ping 2020), such as the Department of
Defense (DoD) on issues of the military and defense, the National Science and Technology
Council (NSTC) on issues of science and technology, the Centers for Disease Control and
Prevention (CDC) on issues of public health, or the Department of State (DoS) on matters of
foreign policy in the U.S. Government. In other instances, the scientific community may become
powerful securitizing actors if an issue is highly esoteric or technical, such as climate science or
artificial intelligence. Private companies or industry lobbies can be important actors when an issue
implicates their industry, such as the fossil fuel industry on climate change or the mining and
agricultural industries on biodiversity loss. Sometimes even popular individuals can become
powerful securitizing actors, such as Robert Oppenheimer on the atomic bomb or Greta Thunberg on
climate change. Of course, these powerful actors have their own interests and agendas, and so may
favor a narrative of either humanity securitization or national securitization. Ultimately, the choice
of security narrative by the most powerful securitizing actors—especially those within national
security and foreign policy establishments of the state—should have significant political influence
over how state audiences respond to the securitization narratives of humanity securitization and
national securitization, which in turn shapes the prospects for great power consensus on
macrosecuritization.
The third individual-level variable concerns the beliefs and perceptions of the political
leaders of the great powers, especially those ideational factors that shape how political leaders
interpret and perceive security narratives (Jervis 1976). The core premise here is that the beliefs
and perceptions of political leaders shape the persuasiveness of the claims made by securitization
narratives about the nature of a threat and the appropriate measures for security, which in turn
influences the decision-making of the great powers and the prospects for consensus on
macrosecuritization. When the beliefs and perceptions of political leaders make them amenable to
a security narrative of humanity securitization, then this should make great power consensus on
macrosecuritization more likely. But when the beliefs and perceptions of political leaders make
them susceptible to a narrative of national securitization, then this should undermine the prospects
for great power consensus on macrosecuritization.
What makes a security narrative persuasive with respect to the beliefs and perceptions of
political leaders? Obviously, the ideas and beliefs of political leaders are shaped by idiosyncratic
factors, such as emotional beliefs or ideological “worldviews” (Mercer 2010). For example, U.S.
President Ronald Reagan and Soviet leader Mikhail Gorbachev shared strong convictions
about the danger of nuclear war, while U.S. President Donald Trump is extremely skeptical about
the dangers of climate change. Such idiosyncratic factors behind leadership beliefs and perceptions
are inescapably an empirical question. Nevertheless, there may also be certain generalizable
ideational factors about the beliefs and perceptions of political leaders that make particular security
narratives more or less persuasive. One central factor is a political leader’s ideas and beliefs about
great power relations, especially how they perceive the intentions and capabilities of other great
and major powers. When political leaders perceive the intentions and/or capabilities of another
great power as hostile and/or offensive, then this should make them more susceptible to a narrative
of national securitization and weaken the prospects for great power consensus on
macrosecuritization. Alternatively, when political leaders perceive the intentions and/or
capabilities of another great power as benign and/or defensive, then this should make them more
amenable to a narrative of humanity securitization and strengthen the prospects for great power
consensus.
Another ideational factor concerns the collective ideas and beliefs about an issue, since
how political leaders understand and interpret threats is not wholly subjective but rather
intersubjective (Wendt 1992; Huysmans 1998; Buzan et al. 1998). The views of political leaders
are embedded within broader societal beliefs about the credibility of an existential threat and the
feasibility of security measures to diminish it. One societal-level factor that shapes the
persuasiveness of securitization narratives is the existence or absence of a consensus understanding
amongst the scientific and technical experts who possess specialized knowledge on an issue. When
there is epistemic consensus amongst scientific/technical experts that an issue constitutes an
existential threat to humanity, then this should strengthen the persuasiveness of a narrative of
humanity securitization; but when there is scientific uncertainty and debate—e.g., about the
plausibility, timing, or consequences—then this should weaken the persuasiveness of humanity
securitization. Ultimately, the existence (or absence) of epistemic consensus can either contribute
to (or detract from) the persuasiveness of a security narrative of humanity securitization towards
political leaders, since how political leaders interpret and perceive threats is embedded within
collective ideas and beliefs.
To summarize the argument, the structure of the international system means that
macrosecuritization is necessarily a process of “securitization under anarchy.” In the absence of a
world political authority, the great powers are the principal actors with the power and capabilities
to “speak” and “do security” for humankind. Great power consensus is therefore an essential
condition for macrosecuritization, but the prospects for consensus are shaped by conflicting
securitization narratives about the middle-level referent object of the nation-state and the higher-
level referent object of humanity. Macrosecuritization fails when narratives of national
securitization fueled by great power rivalries triumph over narratives of humanity securitization.
In essence, fear of “the other” great power overshadows fear of the existential threat to humanity.
Whether the great powers are more receptive to a narrative of humanity securitization or national
securitization depends on (1) the stability of the distribution of power and capabilities in the
international system; (2) the power and interests of securitizing actors vis-à-vis the state in
domestic security constellations; and (3) the beliefs and perceptions of political leaders (see Figure
3.8). When these conditions favor a narrative of humanity securitization, then great power
consensus can lead to extraordinary action by states for the security and survival of humankind;
but when the conditions favor national securitization, then great power rivalries can prevent
consensus from emerging and lead to the failure of macrosecuritization. The next three chapters
explore these dynamics in case studies of the international control over atomic energy, the
Biological Weapons Convention, and the AI revolution in international relations.
Figure 3.8: Conflicting Security Narratives and Great Power Consensus
Chapter 4
My Fellow Citizens of the World. We are here to make a choice between the quick
and the dead. That is our business. Behind the black portent of the new atomic age
lies a hope which, seized upon with faith, can work our salvation. If we fail, then
we have damned every man [sic] to be the slave of Fear. Let us not deceive
ourselves: We must elect World Peace or World Destruction.
— Bernard Baruch, 1946
The International Control over Atomic Energy
Thus began the U.S. Representative to the United Nations Atomic Energy Commission (UNAEC),
Bernard Baruch, in presenting the American plan for the international control over atomic energy.
Behind the apocalyptic rhetoric of Baruch's speech on the morning of June 14, 1946, lay a genuine
anxiety—shared by the atomic scientists and American policymakers—that the atomic bomb
represented not merely a threat to the national security of the United States, but to the very survival
of humankind. Baruch expressed what by the summer of 1946 had become a compelling
securitization narrative about the atomic bomb in the United States: that the only way to avoid a
nuclear arms race and atomic warfare between states that could threaten the destruction of modern
civilization was to establish the international control over atomic energy.
The “Baruch Plan” envisioned the sacrifice of an unprecedented degree of national
sovereignty to an international authority, the "Atomic Development Authority" (ADA), which
would possess extraordinary new powers and authority over the development and uses of atomic
energy in all countries. The justification for the international control over atomic energy was the
revolutionary destructive power of the atomic bomb and the fear that, in the absence of an effective
system to prevent states from acquiring the atomic bomb, states would seek security through
possession of the atomic bomb, leading inevitably to a nuclear arms race and atomic warfare. The
atomic bomb posed an existential threat but the international control over atomic energy
represented, in Baruch’s words, “the last, best hope” for humankind (Lieberman 1970, 300).
Therefore, the Baruch Plan proposed the creation of an Atomic Development Authority, which
would be “entrusted [with] all phases of the development and use of atomic energy,” followed by
the abolition of the atomic bomb, including the cessation of the production and destruction of
existing stockpiles of atomic weapons by the United States. This grand bargain—a quid pro quo
between the United States and other countries—represented the crux of the American proposal: if
the world would accept the international control over atomic energy, then the United States would
surrender its “atomic monopoly.” Unfortunately, the great powers were unable to reach consensus
and the international control over atomic energy failed, leading to the nuclear arms race between
the United States and the Soviet Union and the emerging “Cold War” (Gaddis 1972; Bernstein
1974; Herken 1980; Kearn 2010).
Why did the international control over atomic energy fail? This chapter examines the
international control over atomic energy as a historical case of macrosecuritization failure. It
argues that the failure of macrosecuritization had its origins in the conflict between two
securitization narratives about the atomic bomb. On the one hand, the narrative of humanity
securitization framed the atomic bomb as an existential threat to humanity, whereby the only hope
for security from a nuclear arms race and atomic warfare between states was the extraordinary
measures of a system of international control over atomic energy. On the other hand, the narrative
of national securitization framed the rival great power—the United States or the Soviet Union—
as an existential threat to the nation-state and possession of the atomic bomb as a vital source of
military power and national security. The argument here is that the failure of the international
control over atomic energy has its origins in the triumph of national securitization over humanity
securitization as the dominant security narrative on the atomic bomb. The theoretical framework
points to three variables to explain the primacy of national securitization over the thinking and
decisions of the great powers: (1) the instability of the distribution of power between the great
powers in the postwar international system; (2) the adoption of a narrative of national securitization
by powerful securitizing actors within the national security and foreign policy establishments of
the great powers; and (3) the ideas and beliefs that shaped the American and Soviet political
leadership’s threat perceptions of the atomic bomb and the intentions and capabilities of the other
great power. Ultimately, the interests of the great powers in national security through military
power came into conflict with the idea of protecting humanity from the existential threat of the
atomic bomb through the international control over atomic energy. Neither the United States nor
the Soviet Union was willing to entrust their national security to the good intentions of the other
when it came to the atomic bomb, nor would they compromise their own demands for security to
do what the other side required, making great power consensus on the international control over
atomic energy impossible. In essence, macrosecuritization failed because fear of “the Other” came
to subsume fear of “the Bomb.”
The rest of the chapter explores the sources and dynamics of the macrosecuritization failure
on the international control over atomic energy through a combination of historical and discourse
analysis. The analysis is organized into three historical periods: “Emergence” (July 1944–August
1945), “Evolution” (August 1945–June 1946), and “Demise” (June 1946–December 1946) (see
Appendix 9 for a timeline). In each section, the analysis examines: (1) the rhetorical content of
security narratives about the atomic bomb (e.g., in policy memoranda, technical reports, public
speeches, and diplomatic agreements), to show how they framed the referent object, existential
threat, and extraordinary measures of security; (2) the identity of and relations between the
securitizing actors and audiences (e.g., scientists, bureaucrats, diplomats, heads of state,
governments, and the general public), in order to demonstrate who held the power and authority to
“speak” and “do security” on the atomic bomb; and (3) the historical context and critical junctures
behind the major policy decisions on the atomic bomb, in order to uncover what factors and
dynamics shaped the historical process which ended in the failure of the great powers to achieve
consensus on the international control over atomic energy.
The historical and discourse analysis shows that in each stage of the emergence, evolution, and
demise of the international control over atomic energy, the atomic bomb was subject to the
conflicting securitization narratives of humanity securitization and national securitization, until
diplomacy within the UNAEC finally forced a decision between them. While the security narrative
of humanity securitization was clearly recognized as a serious concern by policymakers and
diplomats of the great powers—especially in the United States—it could not overtake national
securitization as the dominant narrative on the atomic bomb, leading ultimately to the failure of
great power consensus on the international control over atomic energy and the initiation of the
nuclear arms race between the United States and the Soviet Union in the emerging Cold War.
4.1 Emergence of the Atomic Threat to Humanity
4.1.1 The Decision to Make the Bomb, 1939–1942
The atomic bomb became the subject of a narrative of national securitization from the moment of
the scientific discovery of nuclear fission. The scientists Leo Szilárd and Enrico Fermi realized
immediately that a nuclear chain reaction in uranium unlocked the potential for making bombs of
enormous destructive force. They feared what such an atomic bomb would mean for world peace
and security in the hands of Nazi Germany (Cirincione 2007, 1-2). Szilárd shared his concerns
with Albert Einstein, who, in August 1939, prepared a letter for President Franklin Delano
Roosevelt with a clear warning about the atomic bomb:
In the course of the last four months it has been made probable… that it may
become possible to set up a nuclear chain reaction in a large mass of uranium, by
which vast amounts of power and large quantities of new radium-like elements
would be generated. Now it appears almost certain that this could be achieved in
the immediate future. This new phenomenon would also lead to the construction of
bombs, and it is conceivable—though much less certain—that extremely powerful
bombs of a new type may thus be constructed (Einstein 1939).
The “Einstein-Szilárd Letter” suggested that the Nazis were already carrying out work on the
atomic bomb and recommended a “permanent contact” between the government and scientists and
the acceleration of “experimental work” on nuclear chain reactions in the United States. However,
the United States took little concrete action until the spring of 1941.
By this time, British scientists were ahead in research on the atomic bomb. In March 1940,
two German refugee scientists living in England, Otto Frisch and Rudolf Peierls, produced a
memorandum on the feasibility of the atomic bomb. The “Frisch-Peierls Memorandum” asserted
that a “super-bomb” utilizing 5 kilograms of uranium could yield an explosion equal to 1,000 tons
of dynamite and produce “a temperature comparable to that in the interior of the sun.”
The blast from such an explosion would destroy life in a wide area. The size of this
area is difficult to estimate, but it will probably cover the center of a big city… As
a weapon, the super-bomb would be practically irresistible. There is no material or
structure that could be expected to resist the force of the explosion.
The Memorandum went on to claim, “if one works on the assumption that Germany is, or will be,
in the possession of this weapon… The most effective reply would be a counter-threat with a
similar bomb.” Therefore, the British Government should not “wait until the Germans are in
possession of an atomic bomb,” but rather “start production as soon and as rapidly as possible…the
matter seems, therefore, very urgent.”
The British Government founded a scientific committee—the “MAUD Committee”—to
study the feasibility of the atomic bomb. In March 1941, the committee produced a scientific report
(“Use of Uranium for a Bomb”), which confirmed that “a uranium bomb is practicable and likely
to lead to decisive results in the war.” It recommended that the development of the atomic bomb
should be given “highest priority” and to expand scientific collaboration with the United States in
order to “obtain the weapon in the shortest possible time.” The British Government decided to
launch a scientific effort—codenamed “Tube Alloys”—to produce the atomic bomb and Prime
Minister Winston Churchill shared these scientific reports with President Roosevelt. The United
States responded by restructuring its own atomic research to produce an atomic
bomb—codenamed “S-1.” By August 1942, the U.S. Government had established the Manhattan
Engineer District (the “Manhattan Project”), under the command of General Leslie Groves of the
U.S. Army, and a weapons research laboratory at Los Alamos under the direction of Robert
Oppenheimer (Hewlett and Anderson 1962; Rhodes 1986; Gosling 1999). Thus, the decision to
make the atomic bomb in the United States and the United Kingdom was driven by the view
amongst scientists that it could be a powerful new destructive force and the fear that this potentially
decisive weapon could fall into the hands of Nazi Germany.
4.1.2 The Contours of Early American Foreign Policy on the Atomic Bomb,
August 1942–August 1943
During this early period, President Roosevelt made important decisions about the atomic bomb
that would shape American policy even after his death, and which ultimately weakened the
prospects for the international control of atomic energy (Lieberman 1970; Sherwin 1973; Bernstein
1975). Roosevelt’s legacy on the atomic bomb was threefold. Firstly, Roosevelt viewed the atomic
bomb as a legitimate weapon in the war against Germany and Japan. It is also likely that Roosevelt
saw the atomic bomb as a potential instrument of military power to achieve American political
objectives in a postwar peace—the beginnings of American “atomic diplomacy” (Alperovitz 1965;
Sherwin 1973; Bernstein 1975). Secondly, Roosevelt endeavored to maintain the secrecy of the
Manhattan Project, both domestically and internationally (Bernstein 1975). Thirdly, Roosevelt
consistently pursued a policy of Anglo-American cooperation on the atomic bomb—even allowing
Churchill to set the terms of collaboration (Bernstein 1974; 1975)—while excluding the Soviet
Union, despite the “Grand Alliance” during the Second World War.
These pillars of early-American foreign policy on the atomic bomb had become firmly
established by the summer of 1943. The “Quebec Agreement,” signed by President Roosevelt and
Prime Minister Churchill on August 19, 1943, affirmed that the development of the atomic bomb
was “vital” to the “common safety” of the United States and the United Kingdom and required
effective collaboration of “all available British and American brains and resources.” The Quebec
Agreement established the basic principles of Anglo-American atomic cooperation:
First, that we will never use this agency against each other. Secondly, that we will
not use it against third parties without each other’s consent. Thirdly, that we will
not either of us communicate any information about Tube Alloys to third parties
except by mutual consent…
While Anglo-American cooperation was framed in terms of wartime necessity, Churchill was
already thinking about the postwar implications of the atomic bomb. Churchill believed that the
Soviet Union would emerge from the war as the preeminent great power in Europe and that Great
Britain’s national security would depend on its possession of the atomic bomb as a counterbalance
to Soviet military power (Lieberman 1970, 34; Bernstein 1975). The Quebec Agreement was not
without its opponents in the United States. Two of the most influential scientists and bureaucrats
in the Manhattan Project, Vannevar Bush and James Conant, advised Roosevelt strongly against
an Anglo-American pact on atomic cooperation—partly out of concern that the British would derive
commercial benefits from American ingenuity, but mostly out of concern about alienating the
Soviet Union. Despite their protests, Churchill prevailed in influencing Roosevelt’s thinking and
the British and American governments agreed to a policy of Anglo-American cooperation
(Lieberman 1970, 26–27; Bernstein 1975, 27).
Thus, by the middle of 1943, President Roosevelt had established some of the main
contours of American foreign policy on the atomic bomb, which would complicate later American
diplomacy to achieve the international control of atomic energy: the use or threat of the atomic
bomb as an instrument of military power for political objectives (i.e., “atomic diplomacy”), the
nondisclosure of scientific research and technical knowledge relevant for the development of
atomic energy (i.e., “atomic secrecy”), and the exclusion of the Soviet Union from Anglo-
American cooperation on the atomic bomb (i.e., “atomic monopoly”). As the historian Barton
Bernstein (1975, 23–24) writes:
Acting on the assumption that the bomb was a legitimate weapon, Roosevelt
initially defined the relationship of American diplomacy and the atomic bomb. He
decided to build the bomb, to establish a partnership on atomic energy with Britain,
to bar the Soviet Union from knowledge of the project, and to block any effort at
international control of atomic energy. These policies constituted Truman’s
inheritance—one he neither wished to abandon nor could easily escape.
4.1.3 Niels Bohr: The First Failed Diplomat for the International Control
over Atomic Energy, November 1943–September 1944
One implication of atomic secrecy was that relatively few people could shape American thinking
and policy before the atomic bomb became a reality. Even fewer possessed the knowledge and
foresight to look beyond immediate wartime necessity to the broader implications of the atomic
bomb for humankind. One of these individuals was Niels Bohr. Despite being one of the
preeminent physicists of his time, Bohr’s principal contribution regarding nuclear weapons was
not as an atomic scientist but as a de facto diplomat for the idea of international control (Lieberman
1970; Bernstein 1974). Bohr believed that the United States and the Soviet Union would be the
preeminent great powers in the postwar world and feared that their ideological differences and
mutual suspicions could catalyze a nuclear arms race. Bohr was the first to grasp that nuclear war
could pose an existential threat to humanity and to glimpse the possibility of the international
control over atomic energy (Lieberman 1970).
In November 1943, Bohr travelled to the United States as part of the British team working
on the Manhattan Project. By early 1944, he began speaking with members of the British and
American governments about the need to engage the Russians on a political solution to the threat
of the atomic bomb. Bohr met with Sir John Anderson, Lord Halifax, and Sir Ronald Campbell of
the British Government, who agreed with his reasoning. He also met with U.S. Supreme Court
Justice Felix Frankfurter, who mentioned a “secret scientific project,” prompting Bohr to discuss
his fears about the atomic bomb. All were persuaded of the need to raise the question of the postwar
control of the atomic bomb with Churchill and Roosevelt. Bohr also received correspondence at
this time from the preeminent Soviet physicist, Peter Kapitza, who extended an invitation to the
Soviet Union, which Bohr saw as an opening for the international control of atomic energy
(Lieberman 1970, 36).
Anderson, stirred by Bohr, produced two memoranda for Churchill on the atomic bomb
and international control: “Plans for world security which do not take account of Tube Alloys must
be quite unreal. When the work on Tube Alloys comes to fruition the future of the world will in
fact depend on whether it is used for the benefit or destruction of mankind” [sic] (Lieberman 1970,
33). Churchill wrote plainly, “I do not agree.” But the pressure on Churchill did not cease there.
Bohr’s thinking had also managed to persuade Prime Minister Mackenzie King of Canada and
Prime Minister Jan Smuts of South Africa. On May 16, 1944, Churchill reluctantly met with Bohr.
However, Bohr was unable to turn Churchill’s attention to the atomic bomb and so Bohr asked if
he could prepare a memorandum on the subject, to which Churchill responded: “It will be an honor
for me to receive a letter from you… but not on politics” (Lieberman 1970, 34–45).
Disheartened but not deterred, Bohr returned to the United States convinced that he must
persuade Roosevelt. On July 3, 1944, Bohr presented a memorandum to President Roosevelt,
which embodied the core rhetorical elements of macrosecuritization—and the beginnings of a
narrative of humanity securitization on the atomic bomb.
The fact of immediate preponderance is… that a weapon of an unparalleled power
is being created which will completely change all future conditions of warfare.
Quite apart from the question of how soon the weapon will be ready for use and
what role it may play in the present war, this situation raises a number of problems
which call for the most urgent attention. Unless… some agreement about the
control of the use of the new active materials can be obtained in due time, any
temporary advantage, however great, may be outweighed by a perpetual menace to
human security (emphasis added).
On August 26, 1944, Roosevelt and Bohr met at the White House. Bohr reiterated the main themes
of his memorandum and the importance of early engagement with the Soviet Union: if the
Americans and the British continued to exclude the Soviet Union from knowledge about the atomic
bomb, then the result would be a serious breach in trust that could make a system of international
control impossible and doom the world to a nuclear arms race. Unlike Churchill, Roosevelt
appeared receptive to Bohr’s arguments—although this may have reflected Roosevelt’s tendency
to tell visitors what they wanted to hear rather than genuine agreement (Bernstein 1974, 1007).
Roosevelt agreed to broach the matter at his upcoming meeting with Churchill.
Rather than producing a diplomatic breakthrough for the international control of atomic
energy, the summit between Churchill and Roosevelt reaffirmed the policy of Anglo-American
cooperation and rejected the idea of international control. On September 18, 1944, Churchill and
Roosevelt signed an “Aide-Memoire” with three points:
1. The suggestion that the world should be informed regarding Tube Alloys, with a
view to an international agreement regarding its control and use, is not accepted.
The matter should continue to be regarded as of the utmost secrecy; but when a
“bomb” is finally available, it might perhaps, after mature consideration, be used
against the Japanese, who should be warned that this bombardment will be repeated
until they surrender. 2. Full collaboration between the United States and the British
Government in developing Tube Alloys for military and commercial purposes
should continue after the defeat of Japan unless and until terminated by joint
agreement. 3. Enquiries should be made regarding the activities of Professor Bohr
and steps taken to ensure that he is responsible for no leakage of information,
particularly to the Russians.
The Aide-Memoire sealed Bohr’s fate. Not only did it reject Bohr’s appeal for international control
but it even brought his loyalty into question. Niels Bohr can be regarded as the first macrosecuritizing
actor on the atomic bomb and his personal failure foreshadowed the future failure of the
international control of atomic energy against the logic of national securitization.
4.1.4 The Atomic Scientists, September 1944–March 1945
There were other important voices behind macrosecuritization within the Manhattan Project. As
the atomic scientists were nearing the end of their work, they became increasingly restless about
the future of the Manhattan Project (Hewlett and Anderson 1962, 324). Bush and Conant stand out
as two of the most influential individuals for advancing the international control over atomic
energy within the U.S. Government. As both scientists and bureaucrats, Bush and Conant were
well-placed to understand both the scientific and political realities of the atomic bomb (Hewlett
and Anderson, 1962, 322; Sherwin 1973). Bush had shared some of his early thoughts on the
postwar domestic and international control of atomic energy with Conant. Conant agreed with
Bush: “I’m inclined to think the only hope for humanity is an international commission on atomic
energy with free access to all information and rights of inspection” (Lieberman 1970, 44).
By the late summer of 1944, Bush and Conant judged that it was time to bring the problem
of the postwar control of the atomic bomb to the government’s attention. On September 19, Bush and
Conant presented a five-page memorandum to the Secretary of War, Henry Stimson, entitled
“Salient Points Concerning Future International Handling of Subject Atomic Bombs.” Three days
later, President Roosevelt summoned Bush for a meeting at the White House, with Lord Cherwell
present to represent the British Government. The meeting ended, much to Bush’s dismay, with a
reaffirmation of Anglo-American cooperation on the atomic bomb (Lieberman 1970, 49–50).
Following the meeting, Bush expressed his frustration to Stimson about the directions of American
policy and suggested that he and Conant present a comprehensive memorandum on the subject
(Hewlett and Anderson 1962, 328–329). Stimson agreed and Bush and Conant spent the next five
days consolidating their thoughts into a new document. On September 30, Bush and Conant
presented their memorandum—now a one-page cover letter, followed by two longer technical
memoranda—to Secretary of War Stimson.
The “Bush-Conant Memorandum” represents one of the best illustrations of the narrative
of humanity securitization on the atomic bomb. In the memorandum, Bush and Conant argue six
points, which are “of great importance to the future of the world.” First, the atomic bomb would
become a “matter of great military importance” by the summer of 1945. Secondly, atomic
development would “expand rapidly” after the war and its military implications would be
“overwhelming.” Thirdly, the military advantage that the United States enjoyed could only be
“temporary” and would soon “disappear, or even reverse.” Fourthly, the basic scientific knowledge
was “widespread” and therefore security through secrecy was illusory. Fifthly, controlling the
supplies of materials “could not be depended upon,” especially in light of potential future scientific
and technical developments. Sixthly, an arms race could be prevented and the “peace of the world”
furthered through the “complete international scientific and technical interchange,” reinforced by
an “international commission acting under an association of nations and having the authority to
inspect.”
The Bush-Conant Memorandum made a compelling case for the international control over
atomic energy based on a securitization narrative of humanity securitization. Bush and Conant
went beyond thinking only of the United States and its allies by framing a universal humanity as
the referent object of security. The memorandum frequently employs language, like the “future
peace of the world” and a “new challenge to the world” (Bush and Conant 1944, 1–2), which
describes the implications of the atomic bomb for humanity as a whole. Furthermore, the Bush-
Conant Memorandum describes the “present” and “future military potentialities” of atomic
weapons (the “super bomb”) and thermonuclear weapons (the “super-super bomb”) as an
existential threat to humanity, or “dangers to civilization” (Bush and Conant 1944, 11).
There is every reason to believe that before August 1, 1945, atomic bombs will
have been demonstrated and that the type in production would be the equivalent of
1,000 to 10,000 tons of high explosive in so far as general blast damage is
concerned. This means that one B-29 bomber could accomplish with such a bomb
the same damage against weak industrial and civilian targets as 100 to 1,000 B-29
bombers (Bush and Conant 1944, 2).
After the war, the scientific and technological development of nuclear fusion would most likely
make possible a thermonuclear bomb, which would be “on a different order of magnitude in its
destructive power from an atomic bomb,” producing temperatures that had “never taken place on
this earth,” but were “closely analogous to the sources of energy on the sun.” Their estimates
suggest that the effects of a “super-super bomb” would be equivalent to “1,000 raids of 1,000 B-
29 Fortresses delivering their load of high explosives on one target” (Bush and Conant 1944, 2).
We see how vulnerable would be centers of population in a future war. Unless one
proposed to put all one’s cities and industrial factories under ground, or one
believes that the antiaircraft defenses could guarantee literally that no enemy plane
or flying bomb could be over a vulnerable area, every center of the population in
the world in the future is at the mercy of the enemy that strikes first (Bush and
Conant 1944, 7).
The Bush-Conant Memorandum asserted that the American policy of relying on its
monopoly of the atomic bomb for security was untenable. Bush and Conant rejected the notion
that the United States and Great Britain could maintain their advantage indefinitely through atomic
secrecy as the “height of folly.” The science and technology behind the atomic bomb were “just
over the horizon” before the war and would be “essentially rediscoverable” by other states once
the war was over—perhaps even more quickly, cheaply, and easily. Nor could the United States
rely on the control over raw materials, since they existed around the world and because the supply
of heavy hydrogen for a thermonuclear bomb was “essentially unlimited.” Therefore, the “present
advantage” of the American atomic monopoly was “very temporary indeed. We cannot
overemphasize this point” (Bush and Conant 1944, 8). If the United States and Great Britain
maintained the policy of atomic secrecy after the war, then the Soviet Union “would undoubtedly
proceed along the same lines and so too might certain other countries, including our defeated
enemies.” The United States might then find itself “living in a most dangerous world,” in which
“several powerful countries” race to develop nuclear weapons in secret (Bush and Conant 1944,
9–10).
Thus, the Bush-Conant Memorandum claimed that the best hope for security against the
existential threat of nuclear war lay in extraordinary measures for the international control over
atomic energy. Bush and Conant’s main proposal was for some combination of free “international
exchange” of scientific information and the creation of an “international office” with powers of
inspection in all countries and responsible to an “association of nations.”
In order to meet the unique situation created by the development of this new art we
would propose that free interchange of all scientific information on this subject be
established under the auspices of an international office deriving its power from
whatever association of nations is developed at the close of the present war. We
would propose further that as soon as practical the technical staff of this office be
given free access in all countries not only to the scientific laboratories where such
work is contained, but to the military establishments as well. We recognize that
there will be great resistance to this measure, but believe the hazards to the future
of the world are sufficiently great to warrant this attempt (Bush and Conant 1944,
3–4; emphasis added).
Although the proposals for the international control of atomic energy went far beyond the normal
boundaries of international politics—and would presumably be “violently opposed” in both the
United States and Russia—Bush and Conant were convinced that the “dangers to civilization”
posed by the atomic bomb were sufficiently great to justify extraordinary action to ensure “the
safety of the United States and the prospects for world peace” (Bush and Conant 1944, 10–11).
“We have been unable to devise any other plan which holds greater possibilities that these new
developments can be utilized to promote peace rather than to insure devastating destruction in
another war” (Bush and Conant 1944, 11).
While Stimson was impressed, it took several months for Bush and Conant’s reasoning to
become the dominant influence over the Secretary of War’s thinking about the atomic bomb
(Hewlett and Anderson 1962). By December 13, when Stimson and Bush were able to discuss the
issue, Stimson admitted that he had “very evidently and quite appropriately not yet made up his
mind as to what our position should be” (Lieberman 1970, 53). Stimson’s ambivalence was partly
because he was too preoccupied with the war to give much thought to the peace, and partly
because of the difficult relationship between the two biggest postwar problems for U.S. national
security: the atomic bomb and the Soviet Union (Bernstein 1974, 1009). Stimson was highly
suspicious of Soviet totalitarianism and even hoped—naively—that the United States might be
able to offer cooperation on “S-1” as a quid pro quo for “liberalization” of the Soviet Union
(Lieberman 1970; Bernstein 1974, 1009). In essence, Stimson was torn between the conflicting
securitization narratives of humanity securitization and national securitization and their
implications for the atomic bomb. During the first months of 1945, Bush and Conant—as well as
Harvey Bundy, the personal assistant to the Secretary of War—continued to press Stimson on the
postwar control of atomic energy. On February 13, Stimson agreed to establish a committee to
study the issue (Hewlett and Anderson 1962, 337). Over the next month, the atomic bomb and the
question of its domestic and international control came to dominate Stimson’s thinking about
postwar problems. For Stimson, the implications of the atomic bomb “touched the basic facts of
human nature, morality, and government” (Hewlett and Anderson 1962, 339; Bernstein 1974,
1009).
By March 1945, Stimson had decided it was time to bring the issue of postwar control of
the atomic bomb to the President. On March 15, Roosevelt and Stimson met in the White House
to discuss the atomic bomb. Stimson presented the policy dilemma to the President in terms of two
“schools of thought” about the future of the atomic bomb, reflecting the conflicting securitization
narratives of humanity securitization and national securitization. In the first, the United States
should continue to pursue its policy of secrecy and Anglo-American cooperation that excluded the
Soviet Union. In the second, the United States should pursue a policy of the international control
over atomic energy through the freedom of science and powers of inspection (Lieberman 1970,
59). In Stimson’s opinion, American policy on this question should be settled before the use of the
atomic bomb. President Roosevelt agreed on the need for a decision but did not say which “school”
he favored—although his decisions had consistently favored the former (Sherwin 1973; Bernstein
1974; 1975). This was the last meeting between President Roosevelt and Secretary of War
Stimson. Roosevelt died on April 12, 1945.
4.1.5 A Shift in Thinking, April–May 1945
On April 25, 1945, Secretary of War Stimson and General Groves went to the White House to brief
President Harry S. Truman on “S-1.” Stimson had drafted a memorandum on the significance of
the atomic bomb for international politics. Stimson’s memorandum is one of the clearest examples
of the securitization narrative of humanity securitization on the atomic bomb. The fact that it was
written by the Secretary of War demonstrates that a shift in thinking was taking place in the United
States, with humanity securitization emerging as a legitimate narrative on the atomic bomb.
Stimson’s memorandum frames humanity as the referent object and the atomic bomb as an
existential threat:
Within four months we shall in all probability have completed the most terrible
weapon ever known in human history, one bomb of which could destroy a whole
city… The world in its present state of moral advancement compared with its
technical development would be eventually at the mercy of such a weapon. In other
words, modern civilization might be completely destroyed (Stimson 1945, 1–2;
emphasis added).
This existential threat of the atomic bomb justified the introduction of extraordinary measures of
world political organization, since the normal methods of international relations were insufficient
to ensure humanity’s security and survival:
To approach any world peace organization of any pattern now likely to be
considered, without an appreciation by the leaders of our country of the power of
this new weapon, would seem to be unrealistic. No system of control heretofore
considered would be adequate to control the menace. Both inside any particular
country and between the nations of the world, the control of this weapon will
undoubtedly be a matter of the greatest difficulty and would involve such thorough-
going rights of inspection and internal controls as we have never heretofore
contemplated (Stimson 1945, 2; emphasis added).
Also characteristic of the rhetorical style of macrosecuritization was Stimson’s juxtaposition of
existential hope and despair. “On the other hand,” Stimson wrote, “if the problem of the proper
use of this weapon can be solved, we would have the opportunity to bring the world into a pattern
in which the peace of the world and our civilization can be saved.” Stimson invoked an ethical
appeal to the “moral responsibility” of the United States for “any disaster to civilization”
originating in its development of the atomic bomb.
However, Stimson’s memorandum was not free from the tensions between the
securitization narratives of humanity securitization and national securitization. After describing
the likely danger of nuclear proliferation after the war, Stimson suggested that “the future may see
a time when such a weapon may be constructed in secret and used suddenly and effectively with
devastating power by a willful nation or group against an unsuspecting nation or group of much
greater size and material power.” The concern that even a “powerful unsuspecting nation” could
be “conquered within a very few days” illustrates the continuing tendency to perceive the atomic
bomb through the conventional lens of national security. Not surprisingly, the concern for national
security quickly leads to an interpretation of the threat in terms of great power rivalries: “the only
nation which could enter into production within the next few years is Russia.”
The meeting between Truman and Stimson went well. They discussed Stimson’s
memorandum and a military report on the probable timing and scale of the atomic bomb, with an
implosion device ready for testing in July and a “gun-type” bomb ready by August 1. For nearly
an hour, President Truman listened to Stimson and Groves. Stimson left the White House feeling
that he had “accomplished much” (Hewlett and Anderson 1962, 343). Behind the scenes, Bush
met with Bohr and lobbied individuals around Stimson to establish an advisory committee on the
international implications of atomic energy. On May 1, Bundy and Harrison met with Stimson to
advocate such an advisory committee: “If properly controlled by the peace loving nations of the
world this energy should insure the peace of the world for generations. If misused it may lead to
the complete destruction of civilization” (Hewlett and Anderson 1962, 344). On May 2, Stimson
met again with President Truman, who decided to establish an “Interim Committee” to consider
the question of the international control of atomic energy. The Interim Committee was to be chaired
by Stimson and included Bush, Conant, Karl T. Compton, Ralph A. Bard, William L. Clayton, and
the soon-to-be Secretary of State, James Byrnes, as a personal representative of the President. The
Interim Committee would also include a scientific panel made up of Arthur Compton, Ernest
Lawrence, Robert Oppenheimer, and Enrico Fermi.
4.1.6 Emergence
In this first historical stage (August 1939–May 1945), the security constellation on the atomic
bomb was essentially an elite phenomenon within the United States and the United Kingdom. The
secrecy of the Manhattan Project meant that the principal securitizing actors were a relatively small
group of atomic scientists, military staff, government bureaucrats, and political leaders with
privileged knowledge and influence—such as Bohr, Groves, Bush, Conant, Stimson, Churchill,
Roosevelt, and Truman (for the views of key individuals, see Appendix 8)—while the main
audiences were the American and British governments. Initially, the sole securitization narrative
on the atomic bomb was one of national securitization, which warned of the extraordinary
destructive power of the atomic bomb (a “super-bomb”) and the danger that Nazi Germany could
acquire this weapon first within the context of the Second World War (e.g., the Einstein-Szilárd
Letter and Frisch-Peierls Memorandum). Therefore, the United States and the United Kingdom
must work together to produce the atomic bomb as quickly as possible to defeat their common
enemies in the Second World War and to ensure favorable peace in the postwar world (e.g., the
Quebec Agreement). This narrative of national securitization shaped two critical policy decisions
on the atomic bomb: (1) launching the Manhattan Project to quickly and secretly produce the
bomb; and (2) Anglo-American cooperation and exclusion of the Soviet Union.
Eventually, humanity securitization emerged as an alternative securitization narrative on
the atomic bomb. The atomic scientists—notably, Bohr, Bush, and Conant—became increasingly
concerned about the revolutionary destructive power of the atomic bomb, the impossibility of
keeping secret the scientific and technical knowledge behind its development, and the possibility
that great power rivalries could lead to a nuclear arms race and atomic warfare in the aftermath of
the Second World War (e.g., the Bohr Memorandum and the Bush-Conant Memorandum). The
atomic bomb was not merely a threat to national security, but an existential threat to humanity. In
the words of Secretary of War Stimson (1945, 2), “modern civilization might be completely
destroyed.” The best hope for humanity’s security and survival lay in an “international
arrangement” on atomic energy—although a clear plan for the international control of atomic
energy did not yet exist. This securitization narrative took on the core rhetorical features of a
macrosecuritization discourse, with the referent object of a universal humanity (or “mankind,”
“civilization,” “modern civilization,” or “the world”), the existential threat of a nuclear arms race
and atomic war (“dangers to civilization”), and extraordinary measures for international
cooperation and control over atomic energy.
Figure 4.1: “Emergence,” August 1939–May 1945
Description: Figure 4.1 depicts the first stage of the security constellation on the atomic bomb. The atomic
scientists are the primary securitizing actor, with a security narrative that incorporates the elements of both
humanity securitization and national securitization. The British Government represents a secondary
securitizing actor, with a discourse that almost exclusively reflects national securitization. The U.S.
Government is the principal audience for both security narratives.
Thus, the atomic bomb became the subject of conflicting securitization narratives even
before the bomb had become a reality. The contestation between these two narratives played out
amongst the atomic scientists and policymakers of the Manhattan Project within the American and
British governments, which constituted the early security constellation on the atomic bomb (see
Figure 4.1). One of the most interesting characters in this drama was Stimson, who was one of the
leading members of the U.S. Government during the Second World War and had the dual role of
both audience and securitizing actor. Stimson was personally torn between the conflicting
narratives of humanity securitization and national securitization; his eventual conversion to the
cause of international control marked a turning point in establishing the legitimacy of the narrative
of humanity securitization on the atomic bomb (e.g., the Stimson Memorandum). Although this
securitization narrative had not yet produced a fundamental shift in American policy, after several
years of quiet and patient pressure by some of the leading scientists and bureaucrats of the
Manhattan Project, the American government’s Interim Committee turned its attention in May
1945 towards the task of crafting an American policy for the domestic and international control
over atomic energy.
4.2 Evolution of the American Plan for the International Control
of Atomic Energy
4.2.1 The Atomic Bomb Becomes Reality, July 1945
On July 16, 1945, at 5:29 in the morning, the United States detonated the first atomic bomb in the
desert of Alamogordo, New Mexico. The “Trinity Test” produced a blast equivalent to 20,000 tons
of TNT. From a safe distance, the members of the Manhattan Project watched the mushroom cloud
reach 41,000 feet into the air: “My God, it worked” (Hewlett and Anderson 1962, 379; Lieberman
1970, 99). The atomic bomb had become reality. It soon became a decisive influence on
international politics, fundamentally shaping the course of events at the end of the Second World
War and how the great powers came to understand and pursue security in the postwar world.
During the summer of 1945, the U.S. Government made the decision to use the atomic
bomb against Japan. “At no time, from 1941 to 1945,” Stimson (1947, 37) wrote, “did I ever hear
it suggested by the President, or by any other responsible member of the government, that atomic
energy should not be used in the war.” Churchill’s (1953, 552–553) account is similar: “there never
was a moment’s discussion as to whether the atomic bomb should be used or not.” These accounts
are misleading. The decision to drop the bomb was based on the belief that the atomic bomb was
a legitimate weapon of war (Bernstein 1975), but its use was still a matter of debate. On June 1,
1945, the Interim Committee unanimously determined that (1) the atomic bomb should be used
against Japan as soon as possible; (2) it should be used against a “dual” military and civilian target;
and (3) it should be used without prior warning (Stimson 1947, 39). The Committee also decided
against raising the issue of the atomic bomb with the Soviet Union until after its use against Japan.
Then, on June 11, the atomic scientists of the University of Chicago—increasingly restless
about the atomic bomb and left out of the major policy discussions—submitted a report to the
Interim Committee. The “Franck Report” adopted the securitization narrative of humanity
securitization, invoking the referent object of humanity (or “mankind”), the existential threat of a
“nuclear armament race” that could lead to “sudden” and “total mutual destruction,” and the need
for extraordinary measures for security whereby “protection can only come from the political
organization of the world” (Franck et al. 1945, 2). The Franck Report made a connection between
the use of the atomic bomb and the prospects for international control over atomic energy: “the
way in which nuclear weapons, now secretly developed in this country, will first be revealed to the
world appears of great, perhaps fateful importance” (Franck et al. 1945, 7). Its authors warned that
the “horror and repulsion” from the use of the atomic bomb could lead to a “loss of confidence”
in the United States, which might not only “destroy all our chances of success” in a system of
international control but could also trigger “a flying start of an unlimited arms race” (Franck et al.
1945, 9–10). The Franck Report recommended a noncombat demonstration of the atomic bomb
“before the eyes of representatives of all [the] United Nations.”
On June 16, the Interim Committee revisited the question of the use of the atomic bomb.
The Committee considered the possibility of a noncombat demonstration and an ultimatum to
Japan but decided that this course was unlikely to compel Japan’s surrender. As Stimson wrote:
The opinions of our scientific colleagues on the initial use of these weapons are not
unanimous: they range from the proposal of a purely technical demonstration to
that of the military application best designed to induce surrender. Those who
advocate a purely technical demonstration would wish to outlaw the use of atomic
weapons, and have feared that if we use the weapons now our position in future
negotiations will be prejudiced. Others emphasize the opportunity of saving
American lives by immediate military use, and believe that such use will improve
the international prospects, in that they are more concerned with the prevention of
war than with the elimination of this special weapon. We find ourselves closer to
these latter views (Stimson 1947, 39).
On the question of the wartime use of the atomic bomb, the narrative of humanity securitization
came up against the narrative of national securitization and lost. Nevertheless, the atomic scientists
succeeded in reversing the Interim Committee’s position towards the Soviet Union (Bernstein
1975). On June 21, the Interim Committee concluded that informing the Soviet Union about the
atomic bomb would be in the interest of “securing effective future control” over atomic energy.
Stimson was convinced that failure to approach the Soviet Union could strain US-USSR postwar
relations and advised President Truman that, if he “thought that Stalin was on good terms with
him,” then he should inform Stalin at the Potsdam Conference about their possession of the atomic
bomb, intentions to use it against Japan, and hopes to discuss the postwar international control
over atomic energy with the Soviet Union (Bernstein 1975, 40).
From July 17 to August 2, 1945, the heads-of-state of the three great powers of the Grand
Alliance—Prime Minister Churchill, Premier Stalin, and President Truman—met in Potsdam,
Germany. On July 18, Stimson conveyed to President Truman news of the successful Trinity Test,
which “noticeably reinforced” the President’s confidence (Hewlett and Anderson 1962, 386).
Truman shared the news about the Trinity Test with Churchill in private and asked his views on
how to approach the Russians; Churchill suggested that Truman inform Stalin about the atomic bomb
at the end of the summit. On July 24, Truman approached Stalin and casually mentioned that the United States had
developed a “new weapon of unusual destructive force.” Stalin responded only that he was “glad
to hear it” and hoped that the Americans would make “good use of it against the Japanese” (Hewlett
and Anderson 1962, 394). Churchill, who watched the “momentous talk” from a short distance
away, interpreted Stalin’s muted response as demonstrating his lack of “special knowledge” about
the Manhattan Project and concluded that Stalin “had no idea of the significance of what he was
being told” (Churchill 1953, 580). The American disclosure of the atomic bomb to the Soviets—
which Bohr, Bush, Conant, and many others had advocated for several years—was unceremonious.
Truman did not say anything about the nature of the atomic bomb, nor did he express the hope that
it could become a force for peace through a system of international control over atomic energy.
According to Churchill (1953, 580), “This was the end of the story so far as the Potsdam
Conference was concerned. No further reference to the matter was made by or to the Soviet
delegation.”
4.2.2 Hiroshima and Nagasaki, August 1945
On the morning of August 6, the U.S. Army Air Forces dropped a uranium gun-type bomb (“Little Boy”)
on Hiroshima. Then, on August 9, it dropped a plutonium implosion bomb (“Fat Man”) on
Nagasaki. Together, the two atomic bombs killed between 120,000 and 226,000 people—mostly
civilians. On August 15, 1945, Japan announced its unconditional surrender to the Allies. The
Second World War was over. Against the triumph and tragedy of the atomic bomb, President
Truman delivered two speeches on August 6 and 9, 1945, which provide a window into American
thinking about the atomic bomb.
With grandiose rhetoric, President Truman invoked the core elements of a securitization
narrative of humanity securitization, calling the atomic bomb “the greatest destructive force in
history” and a “new era in man’s [sic] understanding of nature’s forces.” Truman made frequent
references to the referent object of a universal humanity (e.g., “man,” “mankind,” “civilization,”
“the world”), and framed the atomic bomb as an existential threat (e.g., the “danger of total
destruction”). He compared the new dangers of the atomic bomb against the experiences of the
Second World War: “No one can foresee what another war would mean to our own cities and our
own people. What we are doing to Japan now—even with the new atomic bomb—is only a small
fraction of what would happen to the world in a third World War.” This new situation required
extraordinary measures for security to ensure the “future control of this bomb” and that “its power
be made an overwhelming influence towards world peace.”
It has never been the habit of the scientists of this country or the policy of this
Government to withhold from the world scientific knowledge… But under present
circumstances it is not intended to divulge the technical processes of production or
all the military applications, pending further examination of possible methods of
protecting us and the rest of the world from the danger of sudden destruction. I
shall recommend that the Congress of the United States consider promptly the
establishment of an appropriate commission to control the production and use of
atomic power within the United States. I shall give further consideration and make
further recommendations to the Congress as to how atomic power can become a
powerful and forceful influence towards the maintenance of world peace (Truman
1945a; emphasis added).
However, Truman’s speeches also reflected the conflicting narrative of national
securitization. Truman claimed that the decision by the United States to pursue the atomic bomb
was not taken “lightly,” but they knew their “enemies” were developing it and understood “the
disaster which would come to this Nation, and to all peace-loving nations, to all civilization, if
they had found it first.” Fortunately, the United States and its allies prevailed in the “battle of the
laboratories” and “won the race of discovery against the Germans.” Truman’s language shows the
dubious relationship between “might” and “right” in international politics. The President not only
justified the use of the atomic bomb through the utilitarian logic of saving (American) lives, but
also implicitly represented it as justice—or revenge—for Pearl Harbor, which was “repaid many
fold” but only after providing “adequate warning.” Truman moved from extolling the “guiding
spirit” of peace in the United Nations Charter to threatening a “rain of ruin” against Japan unless
they “save themselves from destruction” through immediate and unconditional surrender.
We are now prepared to obliterate more rapidly and completely every productive
enterprise the Japanese have above ground in any city. We shall destroy their docks,
their factories, and their communications. Let there be no mistake; we shall
completely destroy Japan’s power to make war (Truman 1945a).
Truman’s speeches illustrate the tendency in international politics for states to regard their
own power and intentions as benign but the power and intentions of other states as threatening.
The atomic bomb is too dangerous to be loose in a lawless world. That is why Great
Britain, Canada, and the United States, who have the secret of its production, do
not intend to reveal that secret until means have been found to control the bomb so
as to protect ourselves and the rest of the world from the danger of total destruction
(Truman 1945b; emphasis added).
In the ultimate expression of morality and power, Truman declared the United States “trustees” of
the atomic bomb, a responsibility granted by God and in the service of humankind.
We must constitute ourselves trustees of this new force—to prevent its misuse, and
to turn it into the channels of service to mankind [sic]. It is an awful responsibility
which has come to us. We thank God that it has come to us, instead of to our
enemies; and we pray that He may guide us to use it in His ways and for His
purposes (Truman 1945b; emphasis added).
This belief, that American possession of the atomic bomb did not pose a threat to other nations but
was a force for world peace, would profoundly shape American policy and the failure of the
international control of atomic energy.
4.2.3 The Soviet Decision to Make the Bomb, August 1945
The Soviet Union responded to the events of Hiroshima and Nagasaki with the decision to make
the atomic bomb. On either August 17 or 18, only days after Japan’s surrender, Stalin summoned
Boris L. Vannikov, the People’s Commissar of Munitions, and Igor Kurchatov, the Director of
Soviet atomic research at the Ioffe Institute:
A single demand of you, comrades. Provide us with atomic weapons in the shortest
possible time! You know that Hiroshima has shaken the whole world. The balance
has been destroyed! Provide the bomb—it will remove a great danger from us
(Cochran et al. 1995, 23).
This is the only account of Stalin’s decision. Clearly, Stalin did not feel reassured by his American
allies as “trustees” of the atomic bomb. Nor did Stalin fail to grasp the significance of the atomic
bomb for international politics, as Churchill had believed. Stalin perceived American possession
of the atomic bomb as a serious threat to the national security of the Soviet Union and saw Soviet
acquisition of the atomic bomb as necessary to restore the balance-of-power (Holloway 1981;
1994).
The Soviet leadership was aware of scientific developments in atomic energy and the
possibility of an atomic bomb before the summer of 1945. Soviet scientists were well aware of the
discovery of nuclear fission in 1938 and engaged in basic scientific research on atomic energy,
albeit with limited progress due to the frequent purges of Soviet scientists. The German invasion
of the Soviet Union brought nuclear research to a halt (Holloway 1981, 169). On several occasions,
Kurchatov tried to convince Soviet officials of the military and economic significance of atomic
energy and proposed research on the “uranium problem.” However, Kurchatov’s proposal was
rejected by skeptical scientists and Soviet administrators, who perceived more urgent needs for
scarce military and industrial resources while the Soviet Union was fighting a war for survival
against Nazi Germany (Holloway 1981).
The Soviet decision to recommence scientific research came at the end of 1942, when the
State Defence Committee issued a decree to set up a laboratory and named Kurchatov as its
scientific director. A lieutenant in the Soviet Army, G. N. Flyorov, had noticed that the leading
atomic scientists in the United States and the United Kingdom were no longer publishing: “the
names of Fermi, Szilard, Teller, Andersen, Wheeler, Wigner and others had disappeared from
print” (Holloway 1981, 174). From their silence, Flyorov deduced that the Americans and British
were conducting secret research on atomic energy, which he brought to Stalin’s attention in a letter.
“It is essential,” Flyorov argued, “not to lose any time in building the uranium bomb” (Holloway
1981, 174). The realization that the Americans and British were working in secret on atomic energy
led the Soviet Union to restart its own project by the end of 1942, although with limited progress
until the end of the war (Holloway 1981; 1994).
The major turning point came in August 1945. While Stalin was aware of the atomic bomb,
it seems that he did not fully appreciate the political and military significance of the atomic bomb
before Hiroshima and Nagasaki. The shift in Stalin’s thinking was probably not only because of
the physical demonstration of the destructive power of the atomic bomb, or its military significance
in compelling Japan’s surrender, but also the political demonstration of American will to use this
weapon when victory was already all but assured (Holloway 1981, 184). In this, the atomic
scientists behind the Franck Report reached an accurate assessment of the probable effects of the
American use of the atomic bomb on Soviet thinking—although a noncombat demonstration of
the atomic bomb may not have fundamentally altered the Soviet decision. According to David
Holloway (1994, 133), “the atomic bomb was not only a powerful weapon; it was also a powerful
symbol of American power.” The atomic bomb represented a revolutionary source of national
power and security. Thus, “Stalin would still have wanted a bomb of his own.”
4.2.4 “Atomic Diplomacy” from London to Moscow, September–December
1945
In the aftermath of the Second World War, the United States and the Soviet Union pursued
conflicting visions and interests for the postwar world. As John Lewis Gaddis (1972, 362)
describes, “the first postwar confrontation with Russia came in September 1945, when the foreign
ministers of the United States, the USSR, Great Britain, France, and China met in London to draw
up peace treaties.” While formally a diplomatic meeting to discuss the future of the “liberated”
countries of Eastern Europe, the deeper issue at London was the nature of the postwar relationship
between the United States and the Soviet Union: were they allies or adversaries? The specter of
the atomic bomb hung over the London Conference. Far from being an “overwhelming influence
towards world peace,” the atomic bomb was a source of tension between the great powers.
The essence of the conflict that the atomic bomb created in Soviet-American relations was
the problem of “atomic diplomacy” (Alperovitz 1965): the United States sought to wield the
atomic bomb as a de facto bargaining chip to coerce the Soviet Union into making political
concessions in peace negotiations, while the Soviet Union was determined not to cede an inch to
the United States to nullify the political power of the American atomic monopoly (Gaddis 1972).
This conflict of interests was reflected in the attitudes of U.S. Secretary of State James F. Byrnes
and Soviet Foreign Minister Vyacheslav M. Molotov. By September 1945, Stimson was
advocating that the U.S. Government promptly approach the Soviet Union on the issue of the
international control of atomic energy, but Byrnes rejected this approach in order to keep “the
bomb in his pocket” at the peace negotiations in London (Gaddis 1972, 264; Bernstein 1975, 62).
Byrnes argued that “before any international discussion of the future of the bomb can take place,
we must first see whether we can work out a decent peace” (Lieberman 1970, 155). However,
Byrnes had no clear strategy for using the military power of the atomic bomb to achieve his
political objectives (Bernstein 1975). Molotov’s intransigence at London demonstrated that the
American atomic monopoly did not automatically translate into political power—what Gaddis
(1972) called the “impotence of omnipotence.” The Conference broke up in early October without
political agreement on any of the postwar problems and with mutual mistrust between the great
powers. “At London,” Lieberman writes (1970, 154), “the disintegration of the wartime alliance
with Russia had become a fact.”
In October, President Truman delivered a speech to Congress on the subject of atomic
energy, which contained the core elements of the narrative of humanity securitization:
The discovery of the means of releasing atomic energy began a new era in the
history of civilization… Never in history has society been confronted with a power
so full of potential danger and at the same time so full of promise for the future of
man [sic] and for the peace of the world. I think I can express the faith of the
American people when I say that we can use the knowledge we have won, not for
the devastation of war, but for the future welfare of humanity (Truman 1945c;
emphasis added).
President Truman went on to outline the need to establish both domestic and international policy
on the atomic bomb. The “most urgent step” was to settle a national policy for the domestic control
of atomic energy. Towards this end, Truman called for the creation of an Atomic Energy Commission
for “effective control and security” on nuclear activities within the United States, including
controlling raw materials, licensing and regulating private companies, and conducting scientific
research.
Truman’s call for domestic control foreshadowed the problem and policy of the
international control over atomic energy. “In international relations,” Truman declared, “the
release of atomic energy constitutes a new force too revolutionary to consider in the framework of
old ideas.” The world could no longer rely on the “slow progress” of international cooperation.
Instead, “civilization demands that we shall reach at the earliest possible date a satisfactory
arrangement for the [international] control of this discovery in order that it may become a powerful
and forceful influence towards the maintenance of world peace instead of an instrument of
destruction.” Truman drew stark alternatives for the atomic bomb: world peace through
international control or total destruction from nuclear war.
The hope of civilization lies in international arrangements looking, if possible, to
the renunciation of the use and development of the atomic bomb, and directing and
encouraging the use of atomic energy and all future scientific information toward
peaceful and humanitarian ends. The difficulties in working out such arrangements
are great. The alternative to overcoming these difficulties, however, may be a
desperate armament race which might well end in disaster (Truman 1945c;
emphasis added).
President Truman assumed the role of a securitizing actor by framing the atomic bomb as an
existential threat and calling for the extraordinary measures of international control for the survival
of humanity. However, Truman also hedged against the threat that the atomic bomb posed to
American national security. The President assured Congress and the American public that “these
discussions [for international control] will not be concerned with disclosures relating to the
manufacturing processes leading to the production of the atomic bomb itself.” Truman claimed
that it was “equally necessary to direct future research and to establish control of the basic raw
materials essential to the development of this power whether it is to be used for purposes of peace
or war” (emphasis added). Although President Truman’s speech to Congress embraced the
narrative of humanity securitization, it also contained elements of the conflicting narrative of
national securitization on the atomic bomb.
During the Fall of 1945, American diplomacy on atomic energy continued to exclude the
Soviet Union in favor of Anglo-American cooperation. In November, President Truman, British
Prime Minister Clement Attlee, and Canadian Prime Minister Mackenzie King met in Washington
to discuss the future of the atomic bomb and the international control of atomic energy. On
November 15, they produced a joint declaration:
We recognize that the application of recent scientific discoveries to the methods
and practice of war has placed at the disposal of mankind [sic] means of destruction
hitherto unknown, against which there can be no adequate military defence, and in
the employment of which no single nation can in fact have a monopoly. We desire
to emphasize that the responsibility for devising means to ensure that the new
discoveries shall be used for the benefit of mankind [sic], instead of as a means of
destruction, rests not on our nations alone, but upon the whole civilized world [sic]
(Department of State 1960, 1; emphasis added).
The Truman-Attlee-King Declaration describes the atomic bomb as a “threat to civilization” and
outlines the goals of preventing the use of atomic energy “for destructive purposes” and promoting
the use of atomic energy for “peaceful and humanitarian ends.” It also declared the aim of
establishing a Commission under the United Nations, which would make specific
recommendations on (1) the “exchange” of “basic scientific information”; (2) the “control of
atomic energy” for “peaceful purposes”; (3) the “elimination from national armaments” of the
atomic bomb; and (4) “effective safeguards” to “protect complying states against the hazards of
violations and evasions” (Department of State 1960, 2). The Truman-Attlee-King Declaration was
a significant step towards the macrosecuritization of the atomic bomb in international relations,
but it had one glaring omission: a signature from Stalin.
In December, the representatives of the “Big Three” met again in Moscow. By this time,
Secretary Byrnes recognized that atomic diplomacy had failed to achieve American foreign policy
objectives and was ready to discuss the international control of atomic energy with the Soviet
leadership (Lieberman 1970, 184). Foreign Minister Molotov agreed to discuss the atomic bomb,
but only as the final item on their agenda (Hewlett and Anderson 1962, 475). At the Moscow
Conference, the representatives of the Grand Alliance discussed the issue of atomic energy without
much difficulty. The Soviets agreed to sponsor the American proposal to establish a commission
in the United Nations, which would make specific recommendations on the international control
over atomic energy. The only substantive disagreements between the Americans and the Soviets
were over the issue of the “veto” and principle of “stages” (Lieberman 1970). The Moscow
Communiqué reflected a quid pro quo, with the Americans acquiescing to the Soviet position that
the Commission be established under the direction of the UN Security Council—ensuring the
Soviet Union its veto power—and the Soviets accepting the American principle that the
Commission would carry out its work on the basis of stages, which would allow the United States
to maintain its atomic monopoly until it was satisfied with a system of international control. One
episode was revealing of the Soviet leadership’s deeper thinking and fears about the atomic bomb.
During a state dinner, Molotov proposed a toast to Conant, suggesting that, if he had an atomic
bomb in his pocket, he “bring it out.” Stalin, apparently annoyed with Molotov, broke in with his own
toast: “Here’s to science and American scientists and what they have accomplished… We must
now work together to see that this great invention is used for peaceful ends” (Holloway 1994, 158–159).
The Moscow Conference represented a crucial step towards great power consensus on the
international control of atomic energy. Yet neither side made any real concessions, since they had
only agreed to talk (Holloway 1994). On January 24, 1946, the newly established United Nations General
Assembly (UNGA) passed its very first resolution, establishing the United Nations Atomic Energy
Commission (UNAEC) with a mandate to make specific proposals:
(a) for extending between all nations the exchange of basic scientific information
for peaceful ends;
(b) for control of atomic energy to the extent necessary to ensure its use only for
peaceful purposes;
(c) for the elimination from national armaments of atomic weapons and of all other
major weapons adaptable to mass destruction;
(d) for effective safeguards by way of inspection and other means to protect
complying States against the hazards of violations and evasions (U.S. Department
of State 1960, 5).
4.2.5 The American Plan, January–June 1946
By January 1946, the atomic bomb had become a matter of public and political debate in the United
States. The Senate’s Special Committee on Atomic Energy, chaired by Senator Brien McMahon, was
in the process of fashioning legislation for the domestic control of atomic energy, leading to the
1946 Atomic Energy Act and the creation of the U.S. Atomic Energy Commission (Hewlett and
Anderson 1962). But the American plan for the international control over atomic energy was not
yet settled.
The atomic scientists were eager to shape the public discourse and national policy on
atomic energy. On December 7, on the anniversary of Pearl Harbor, the atomic scientists at the
University of Chicago published the first issue of the Bulletin of the Atomic Scientists, appealing to
the American public to work “unceasingly” for the international control of atomic energy as a “first
step towards permanent peace” (The Bulletin 1945, 1). On January 6, they founded the Federation
of Atomic Scientists with the mission of contributing to an informed public discussion about the
existential threat of the atomic bomb and the hope for security through the international control of
atomic energy. The atomic scientists made clear that the problem of the atomic bomb was political
not technical in nature: “No insuperable technical difficulties stand in the way of an efficient
international control of atomic power.” The responsibility fell on politicians and diplomats—
especially in the United States, the United Kingdom, and the Soviet Union—to create the
“necessary political understanding” for the international control of atomic energy. The
securitization narrative of the atomic scientists was one of the purest illustrations of humanity
securitization:
The problem of atomic power requires a fresh approach and should not be
encumbered by the burden of past and present conflicts and misunderstandings. We
fervently hope that all three big nations are aware that an agreement on atomic
power controls must be reached if our civilization is to survive… We can afford
compromises, disagreements, or delays in other fields — but not in this one, where
our very survival is at stake (The Bulletin 1945, 1; emphasis added).
Meanwhile, an effort was underway in Washington to craft a plan for the international
control over atomic energy. Secretary Byrnes appointed Undersecretary of State Dean Acheson
to chair the committee responsible for designing the American plan, which included Acheson,
Bush, Conant, General Groves, and John McCloy, the former Assistant Secretary of War. Acheson
also enlisted the help of David Lilienthal, director of the Tennessee Valley Authority, to put
together a board of consultants with the necessary scientific and technical knowledge on atomic
energy, which included Chester Barnard, Harry A. Winne, Charles Thomas, and Robert
Oppenheimer (Hewlett and Anderson 1962; Lieberman 1970). As Lilienthal saw it, their purpose
was to answer the following question: “Can a way be found, a feasible, workable way, to safeguard
the world against the atomic bomb?” (Lieberman 1970, 240).
Over several weeks, the Acheson-Lilienthal Committee worked to develop a blueprint for
a system of international control. The “Acheson-Lilienthal Plan” was a sixty-page report
examining the nature of atomic energy, the problem of great power rivalries, the feasibility of a
system of effective safeguards, the functions and composition of an international organization, and
the “transition” to the international control over atomic energy (Acheson et al. 1946). The
Acheson-Lilienthal Plan provided an affirmative response to Lilienthal’s question: “It is our
conviction that a satisfactory plan can be developed, and that what we here recommend can form
the foundation of such a plan” (Acheson et al. 1946, 5). It began with a three-part statement of the
basic security problem. First, atomic energy had placed at the “disposal of mankind [sic] means of
destruction hitherto unknown” in the form of the atomic bomb. Secondly, there can be “no
adequate military defence” against atomic warfare. Thirdly, no single state can maintain a
“monopoly” over the atomic bomb (Acheson et al. 1946, 8–9).
The Acheson-Lilienthal Plan took the position that an effective system of international
control could be established only if it went beyond traditional concepts of national security and
international cooperation (Gerber 1982). The authors rejected the notion that any agreement to
“outlaw” the use of the atomic bomb or “suppress” the development of atomic energy could
provide the “promise of adequate security.” Furthermore, a system that relied solely on
international inspections could not offer sufficient assurances against the clandestine development
of the atomic bomb (Acheson et al. 1946, 14, 22, 26, 34).
We have concluded unanimously that there is no prospect of security against atomic
warfare in a system of international agreements to outlaw such weapons controlled
only by a system which relies on inspection and similar police-like methods…
National rivalries in the development of atomic energy readily convertible to
destructive purposes are the heart of the difficulty. So long as intrinsically
dangerous activities may be carried on by nations, rivalries are inevitable and fears
are engendered that place so great a pressure upon a system of international
enforcement by police methods that no degree of ingenuity or technical competence
could possibly hope to cope with them (Acheson et al. 1946, 11).
The Acheson-Lilienthal Plan proposed a novel system for an international organization—
an “Atomic Development Authority” (ADA)—which would possess the powers and authority to
exercise effective control over atomic energy, including (Acheson et al. 1946, 37–45):
- Exclusive ownership or control over all raw materials—such as uranium and thorium—around the world;
- Managerial control over all “intrinsically dangerous” activities and infrastructure in the field of atomic energy;
- Authority to license, regulate, and inspect all other “safe” activities in atomic energy;
- Scientific research and technological development to expand the field of knowledge on atomic energy;
- Responsibility to promote the peaceful uses of atomic energy for the benefit of all nations (see also Hewlett and Anderson 1962; Lieberman 1970; Gerber 1982).
One of the Plan’s main premises was that it was possible to distinguish between inherently “safe”
and “dangerous” activities in the development and use of atomic energy. A system of international
control could mitigate the dangers of national rivalries and clandestine development of an atomic
bomb by assigning all “intrinsically dangerous” activities to an international authority, while
permitting states to freely engage in “safe” activities (Acheson et al 1946, 26, 34). This system
would provide a clear warning to states without relying on discerning their intentions; for the “mere
fact” of a state engaging in inherently dangerous activities would constitute an “unambiguous
danger signal of warlike intentions” (Acheson et al. 1946, 34). The system could also afford some
security to states in the event of the breakdown of international control—for example, by
constructing nuclear facilities so that it would take “a year or more” to produce an atomic bomb
or distributing them geographically to maintain a strategic balance between states (Acheson et al.
1946, 49). States would then be in a position to respond to noncompliance through the traditional
means of international relations, including the ultima ratio of war.
The Acheson-Lilienthal Plan represented the most comprehensive examination of a system
for the international control over atomic energy. Its authors embraced the securitization narrative
of humanity securitization, invoking rhetoric such as “mankind,” “mass destruction,” “catastrophic
weapons,” “world security,” “security of the world,” “protection of the peoples of the world,” and
“the security of the world against atomic warfare” (Acheson et al. 1946). They offered the hope
for peace and security against the existential threat of the atomic bomb through an international
organization with unprecedented powers and authority in international relations.
It is clear, too, that in the solution of this relatively concrete and most urgent
problem of protecting mankind [sic] from the evils of atomic warfare, there has been
created an opportunity for a collaborative approach to a problem which could not
otherwise be solved, and the successful international solution of which would
contribute immeasurably to the prevention of war and to the strengthening of the
United Nations Organization (Acheson et al. 1946, 9–10; emphasis added).
The Acheson-Lilienthal Committee presented its report to the State Department on March
16. Two days later, President Truman appointed Bernard Baruch as the U.S. representative to the
UNAEC. Baruch had a long, illustrious career as a businessperson and public servant, and held a
strong belief in Wilsonian internationalism (Gerber 1982). His appointment was a political choice:
his prestige in Washington ensured Congressional support for Truman’s policy of the international
control over atomic energy (Lieberman 1970, 263–264). The Senate did not even
require Baruch to appear before a Senate hearing, only to sign a statement affirming that he would
not disclose American atomic secrets before adequate safeguards were in place (Hewlett and
Anderson 1962, 555). The atomic scientists, however, were unimpressed by the choice of Baruch
and his colleagues: “What did these men know about atomic energy?” (Hewlett and Anderson
1962, 555). On June 7, President Truman approved the American policy on the international
control over atomic energy. The “Baruch Plan” adopted the main tenets of the Acheson-Lilienthal
Plan, but with a few substantive changes that Baruch and his staff insisted on. First, any treaty
should establish sanctions and penalties for noncompliance. Secondly, any violation of a treaty
should not be subject to the veto power of the UN Security Council. Thirdly, any international
authority should exercise “dominion” rather than “ownership” over raw materials. Finally, there
should be a requirement for a preliminary survey of raw materials around the world (Lieberman
1970, 286; Gerber 1982). At long last, the United States had decided on a policy for the
international control over atomic energy.
4.2.6 Evolution
In the second historical stage (June 1945–June 1946), the security constellation on the atomic
bomb became a complex, two-level phenomenon of both domestic and international politics. The
events of Hiroshima and Nagasaki made the atomic bomb one of the most pressing issues facing
the great powers in the aftermath of the Second World War, with multiple securitizing actors and
audiences both within and between states—including the atomic scientists, key political leaders,
national governments, and the public (see Figure 4.2). At first, the dominant securitization
narrative shaping the thinking and decisions of the great powers on the atomic bomb was national
securitization, which framed rival great powers as a threat and the atomic bomb as a potentially
decisive source of military power and national security. During the Second World War, this
securitization narrative shaped the American decision to use the atomic bomb through the
essentially Clausewitzian logic of the bomb as a legitimate instrument of war—the “absolute
weapon” (Brodie 1945)—which would break the enemy’s will to continue military resistance and
force them to accept the political terms of “unconditional surrender” (Stimson 1947). In the
aftermath of the Second World War, it took the form of “atomic diplomacy” towards the Soviet
Union, with some of the key figures in the Truman administration—particularly Secretary
Byrnes—hoping to use the implicit threat of the atomic bomb to extract political concessions from
the Soviet Union in the postwar peace (Alperovitz 1965; Lieberman 1970; Bernstein 1974; 1975).
In the Soviet Union, national securitization was the sole security narrative on the atomic bomb,
with Stalin perceiving the American atomic monopoly as overthrowing the balance-of-power and
posing a serious threat to Soviet national security. The Soviet leadership therefore took the
consequential decision in August 1945 to launch a secret program to develop the atomic bomb as
quickly as possible (Holloway 1981; 1994; Cochran et al. 1995).
Figure 4.2: “Evolution,” June 1945–June 1946
Notes: Figure 4.2 depicts the second stage of the security constellation on the atomic bomb. The atomic
scientists, state department officials, and military staff represent the main securitizing actors in the United
States, with competing security narratives of humanity securitization and national securitization. The U.S.
Government is their principal domestic audience, while the general public is a secondary domestic audience
that can influence national policy. The U.S. Government also becomes an international securitizing actor,
with the Soviet Union as its principal audience. The U.S. Government adopts a narrative of humanity
securitization, but the Soviet Union interprets its actions in terms of national securitization.
Over time, the narrative of humanity securitization became a stronger influence over the
thinking and policy decisions of the United States. The London Conference revealed the limits of
atomic diplomacy, and the American government increasingly turned its diplomacy towards the
international control over atomic energy. By the Fall of 1945, President Truman had made a series
of speeches that adopted the core elements of the narrative of humanity securitization and
supported a policy of international control. Initially, the United States pursued diplomacy on
international control through the framework of Anglo-American cooperation (e.g., the Truman-
Attlee-King Declaration), but eventually approached the Soviet Union and obtained an agreement
to pursue multilateral diplomacy through the establishment of the UN Atomic Energy Commission
(i.e., the Moscow Communiqué). The narrative of humanity securitization reached its peak
influence over American thinking and policy during the Spring and Summer of 1946, with the
development of the Acheson-Lilienthal Plan and the adoption of the Baruch Plan.
Overall, this contestation between the securitization narratives of humanity securitization
and national securitization continued to shape the evolution of the security constellation on the
atomic bomb in international relations. Both narratives influenced the key decisions of American
policy on the atomic bomb: the United States would seek the international control over atomic
energy through multilateral diplomacy in the UNAEC, while maintaining the American atomic
monopoly through the nondisclosure of scientific and technical information (i.e., atomic secrecy)
and continued production of atomic bombs. While these securitization narratives of humanity
securitization and national securitization coexisted uneasily in the Baruch Plan, they would
ultimately prove irreconcilable in the diplomacy on the international control over atomic energy.
4.3 Demise of the United Nations Atomic Energy Commission
4.3.1 The “Baruch Plan,” June 1946
On June 14, Bernard Baruch addressed the first session of the United Nations Atomic Energy
Commission to present the American plan for the international control of atomic energy.
My Fellow Members of the United Nations Atomic Energy Commission, and My
Fellow Citizens of the World. We are here to make a choice between the quick and
the dead. That is our business. Behind the black portent of the new atomic age lies
a hope which, seized upon with faith, can work our salvation. If we fail, then we
have damned every man [sic] to be the slave of Fear. Let us not deceive ourselves:
We must elect World Peace or World Destruction (U.S. Department of State 1960, 7–8; emphasis added).
In grandiose style, Baruch’s speech embraced the securitization narrative of humanity
securitization on the atomic bomb. Baruch made repeated references to “humanity” (or “mankind”)
and exhorted the other diplomats to represent the “will of mankind” [sic] and answer the “world’s
longing for peace and security.” “In this crisis,” Baruch proclaimed, “we represent not only our
governments but, in a larger way, we represent the peoples of the world.” In doing so, Baruch
framed humanity not only as the referent object of security (i.e., an entity under threat), but also
as a political subject—a “we,” constituted by the common “will,” “spirit,” and “aspirations of
mankind” [sic].
Baruch framed the atomic bomb as a matter of “life” and “death” for humanity: a choice
between the threat of “World Destruction” and the security of “World Peace.” The “peril” of the
atomic bomb has its roots in science, which had “torn from nature a secret so vast in its
potentialities that our minds cower from the terror it creates.” The scientific search for “the
absolute weapon” had succeeded in the United States, but now science could offer no “adequate
defence” against the “dread power” of the atomic bomb. Against the existential threat of the atomic
bomb, there was only one hope or “safeguard” for the security of humanity.
In our success lies the promise of a new life, freed from the heart-stopping fears
that now beset the world. The beginning of victory for the great ideals for which
millions have bled and died lies in building a workable plan. Now we approach
fulfilment of the aspirations of mankind [sic]. At the end of the road lies the fairer,
better, surer life we crave and mean to have (Department of State 1960, 8).
Thus, Baruch laid out the American plan for the international control of atomic energy:
“The United States proposes the creation of an International Atomic Development Authority, to
which should be entrusted all phases of the development and use of atomic energy.” This
international organization would possess special powers and authority, including:
1. Managerial control or ownership of all atomic-energy activities potentially dangerous to
world security;
2. Power to control, inspect, and license all other atomic activities;
3. The duty of fostering the beneficial uses of atomic energy;
4. Research and development responsibilities of an affirmative character intended to put
the Authority in the forefront of atomic knowledge and thus to enable it to comprehend,
and therefore to detect, misuse of atomic energy (Department of State 1960, 10–11).
It would conduct a global survey of the raw materials of uranium and thorium as one of its “earliest
objectives,” and exercise “complete managerial control of the production of fissionable materials.” It
would have the authority to “determine the line between intrinsically dangerous and non-
dangerous activities,” making any dangerous activities undertaken by states “an unambiguous
danger signal.” It would be made up of competent personnel on an “international basis” and ensure
a “strategic distribution” of nuclear infrastructure and materials around the world. The system of
international control over atomic energy would come into being in “successive stages,” with the
transitions from one stage to another established by treaty. The United States would make available
“essential” scientific and technical information for a “reasonable understanding” of atomic energy,
but “further disclosures” would be contingent on the ratification of a treaty and progress towards
a system of international control.
These were the fundamental features of the American plan for the international control
over atomic energy. Baruch also rejected a treaty based on the “pious thoughts” of the
“renunciation” of the atomic bomb or “outlawing” its use in war as insufficient. He insisted that a
treaty must establish “enforceable sanctions” and “penalties” against violations—an “international
law with teeth” (Department of State 1960, 11). He also challenged the special rights and privileges
that the United Nations Charter had granted to the great powers; for “there must be no veto to
protect those who violate their solemn agreements not to develop or use atomic energy for
destructive purposes” (Department of State 1960, 12; emphasis added). “We of this nation,” stated
Baruch, “are prepared to make our full contribution toward effective control of atomic energy.”
Once an “adequate system” of international control over atomic energy had been agreed upon and
come into effect, the United States would undertake to cease production of the atomic bomb,
destroy its existing stockpile of weapons, and share its scientific and technical information on
atomic energy (Department of State 1960, 11).
While Baruch framed the American plan for international control “as a basis for beginning
our discussion,” it was actually a non-negotiable American policy. In particular, the United States
was unwilling to consider any proposal that would erode the American atomic monopoly—
whether by placing limits on its arsenal of atomic bombs, or by sharing scientific information on
atomic energy—before an effective system of international control was put in place. The
justification for the American policy of “stages” was national security. As Baruch put it, “before a
country is ready to relinquish any winning weapons it must have more than words to reassure it. It
must have a guarantee of safety” (Department of State 1960, 13).
Thus, the Baruch Plan was shaped by two fundamentally distinct, and ultimately
conflicting, securitization narratives on the atomic bomb. On the one hand, it reflected the security
narrative of humanity securitization by framing the atomic bomb as an existential threat to
humanity, which required an international organization with unprecedented powers and authority
to ensure the security of humankind. On the other hand, it reflected the national securitization of
the atomic bomb, which required the continuation of the American atomic monopoly to protect
the United States from the military power and secret development of atomic energy by other states,
especially the Soviet Union. While Baruch’s speech embraced the core elements of the narrative
of humanity securitization, behind this rhetoric—however genuine—the American plan was
shaped and constrained by the international realities of asymmetric power and political mistrust
between the great powers. Ultimately, this led the United States to see the atomic bomb as a
source—albeit transient—of national security and to demand a greater degree of trust from other
states than it was willing to show in return. The United States would renounce the bomb once a
system of international control was in existence, but in the meantime the rest of the world must
accept the United States as a “trustee” of the atomic bomb and believe that it would use this power
only for humanity’s interest in world peace and security. For the United States, these unequal terms
offered “the last, best hope of earth” (Department of State 1960, 13).
4.3.2 The “Gromyko Plan,” June 1946
The first responses to the Baruch Plan in the UNAEC came on June 19, 1946. The first to speak
was the Canadian representative, A. G. L. McNaughton, who welcomed the American proposal
and expressed support for the principles behind it. Next spoke four other delegates—Sir Alexander
Cadogan of the United Kingdom, Dr. Quo Tai-chi of the Republic of China, Captain Alvaro-
Alberto da Motte e Silva of Brazil, and Dr. Sandoval Vallarta of Mexico—who expressed their
general approval of the American proposal, as well as some minor reservations. The British and
Canadian delegates suggested the importance of promoting an atmosphere of mutual trust through
the immediate exchange of scientific information. The Chinese delegate raised the question of the
composition of the Atomic Development Authority. The Brazilian delegate flagged the difficulty
of managerial control over the deposits of raw materials. On the issue of the veto, the Chinese and
Mexican delegates supported the American stance, the British delegate avoided the veto issue, and
the Canadian delegate said that he did not like the veto but recommended that the Commission
focus its attention on the more substantive elements of the American proposal (Hewlett and
Anderson 1962, 583).
Then, the Soviet permanent representative to the United Nations, Andrei A. Gromyko,
addressed the Commission. Gromyko’s speech adopted some of the core elements of the security
narrative of humanity securitization. “One of the greatest discoveries of mankind [sic],” Gromyko
began, “found its material application in the form of a particular weapon—the atomic bomb”
(Department of State 1962, 17).
[I]t is the general opinion that humanity stands at the threshold of a wide
application of atomic energy for peaceful purposes for the benefit of the peoples,
for promoting their welfare and raising their standard of living and for the
development of science and culture. There are thus two possible ways in which this
discovery can be used. One way is to use it for the purpose of producing the means
of mass destruction. The other way is to use it for the benefit of mankind [sic]
(Department of State 1962, 17; emphasis added).
Gromyko emphasized the relationship between atomic energy and world peace, claiming that
“there can be no active and effective system of peace if the discovery of the means of using atomic
energy is not placed in the service of humanity and is not applied to peaceful purposes only.”
Gromyko described two possible futures. In one, the use of atomic energy only for “peaceful
purposes” could “strengthen confidence between the countries and friendly relations between
them.” In the other, the continued use of atomic energy “for the production of weapons of mass
destruction is likely to intensify mistrust between States and to keep the peoples of the world in a
continual anxiety and uncertainty” (Department of State 1962, 18).
Despite some of the similarities in their rhetoric, Gromyko did not respond to Baruch’s
proposal. Instead, the Soviet representative proposed an alternative plan, which called for the
immediate negotiation of a treaty to prohibit the atomic bomb, while leaving open the possibility
of a system of international control of atomic energy in the future (Department of State 1962, 19).
The Soviet delegation proposes that consideration be given to the question of
concluding an international convention prohibiting the production and employment
of weapons based on the use of atomic energy for the purpose of mass destruction.
The object of such a convention should be the prohibition of the production and
employment of atomic weapons, the destruction of existing stocks of atomic
weapons and the condemnation of all activities undertaken in violation of this
convention (Department of State 1962, 18).
Gromyko stated that a treaty would be “only one of the primary measures to be taken to prevent
the use of atomic energy to the detriment of mankind” [sic], which could be “followed” by “other
measures” to ensure “strict observance” of the treaty, including a “system of control” and sanctions
for the “unlawful use of atomic energy” (Department of State 1962, 18). Gromyko also asserted
the necessity of the “exchange of scientific information” and “joint scientific efforts” between
states in order to broaden the possibilities for the peaceful uses of atomic energy. International
scientific cooperation was essential, since “atomic energy cannot remain for an indefinite time the
property of only one country or small group of countries. It is bound to become the property of a
number of countries” (Department of State 1962, 19). This would represent, in the opinion of the
Soviet Union, a “serious step” towards achieving the “aspirations and conscience of the whole of
progressive humanity” (Department of State 1962, 19).
The Soviet representative proceeded to make “two concrete proposals” to the Commission.
The first was a draft treaty (“International Convention to Prohibit the Production and Employment
of Weapons Based on the Use of Atomic Energy for the Purpose of Mass Destruction”). The treaty
took on the rhetoric of a narrative of humanity securitization, invoking positive articulations of the
referent object of humanity—including “the peoples of the world,” “the benefit of mankind” [sic],
“the progress of human culture,” “the public opinion of the civilized world” [sic], “the interests of
mankind” [sic], and “the aspirations and the conscience of the peoples of the whole world”—and
negative invocations of the threat of the atomic bomb—such as the “great danger…for peaceful
towns and the civilian population,” “the purpose of mass destruction,” and the “mass destruction
of human beings” (Department of State 1962, 20–21). The key substantive provision of the draft
treaty was Article 1, which stated that “The high contracting parties solemnly declare that they are
unanimously resolved to prohibit the production and employment of weapons based on the use of
atomic energy,” and therefore undertake (a) not to use the atomic bomb under any circumstances;
(b) to prohibit the production and stockpiling of atomic bombs; and (c) to destroy all existing
stockpiles of atomic bombs within three months of the treaty’s entry-into-force. Article 2 added
that any violation of these terms would be considered a “most serious international crime against
humanity” (Department of State 1962, 21).
The second proposal was a program of work for the Commission organized into two
committees. The first committee would address the exchange of scientific information for
“obtaining and using atomic energy,” including (a) the contents of scientific discoveries; (b)
technology and technological processes; (c) the organization and methods of industrial production;
and (d) the forms, sources and locations of raw materials (Department of State 1962, 23). The
second committee would address “the prevention of the use of atomic energy to the detriment of
mankind” [sic]. While this committee would consider the “measures, systems and organization of
control,” as well as a “system of sanctions,” Gromyko’s consideration of these points was vague
and ambivalent. The final important aspect of Gromyko’s speech concerned the veto, whereby any
“attempts to undermine the principles” enshrined in the UN Charter, including “unanimity”
amongst the members of the Security Council, “must be rejected” (Department of State 1962, 24).
Overall, the Soviet proposal for a treaty to prohibit the atomic bomb was exactly the sort
of international agreement that the Americans had rejected as insufficient—i.e., a renunciation of
the atomic bomb based on “pious thoughts.” While Gromyko had left open the possibility of the
future development of a system of international control over atomic energy, the Soviet proposal
was too abstract and noncommittal in this regard to assuage the American demand for effective
safeguards. Crucially, the Soviet proposal implied the reversal of the “stages” in the American
plan: first, the elimination of the atomic bomb; second, the international control over atomic
energy. The Soviet proposal cut to the heart of the Baruch Plan, which envisioned the United States
maintaining its atomic monopoly until the international control of atomic energy was firmly
established. Not surprisingly, the historiography has tended to interpret Soviet policy as essentially
hostile to an effective system of international control over atomic energy (Sutherland 1961;
Holloway 1981; 1994; Nogee 2012). Yet one does not need to assume malign intentions to explain
Soviet policy. The Soviet position in the UNAEC reflects its understanding of how the atomic
bomb had affected Soviet national security and what was necessary to restore the balance-of-
power. For the Soviet Union, the principal threat was the American possession of the atomic bomb
(Holloway 1994). Therefore, a treaty for the prohibition of the atomic bomb would provide a
diplomatic solution to an intolerable military situation. The international control of atomic energy
was secondary to the primary problem of the American possession of the atomic bomb, especially
if such a system would prevent the Soviets from acquiring the atomic bomb while preserving the
American atomic monopoly.
Importantly, the Soviet Union framed the threat of the atomic bomb in terms of the
established category of “weapons of mass destruction.” It is unlikely that the Soviet leadership
understood the atomic bomb as posing an existential threat to humanity. Nowhere in Gromyko’s
speech, nor in the diplomatic correspondence between Gromyko and the Kremlin (Holloway
2020), is there any mention of the possible “destruction of civilization.” The atomic bomb was a
matter of the “conscience of humanity,” not the survival of humankind. By placing the atomic
bomb alongside other “weapons of mass destruction,” like biological and chemical weapons, the
Soviet diplomats downplayed the exceptional nature of the atomic bomb. The Soviet Union
therefore recommended the normal practice in international relations of a treaty to renounce a
weapon of war and ignored the American demand for extraordinary measures to neutralize an
existential threat to humanity.
4.3.3 Bikini Atoll and American Dependence on the Bomb, July 1946
On July 1, the U.S. armed forces began a series of military tests of the atomic bomb at the Bikini
Atoll lagoon in the Marshall Islands. Unlike the Trinity Test, the “Bikini Tests” were displayed to
the world—politicians, scientists, soldiers, journalists, and foreign dignitaries all watched. In the
year since Hiroshima, the image of the atomic bomb had become almost biblical. Measured against
this popular imagination, the Bikini Tests were something of a disappointment. In the first above-the-surface
test, only three vessels were destroyed. In the second below-the-surface test, the column of water
was more visibly impressive. However, the Bikini Tests left the general impression that the atomic
bomb was a “terrible but a finite weapon” (Hewlett and Anderson 1962, 581).
The Bikini Tests occurred just two weeks after Baruch’s speech at the Atomic Energy
Commission. The contradiction between American diplomacy and nuclear tests was not lost on
other nations. The Soviet reaction was predictable. Pravda claimed that the Americans were
seeking not to restrict but to perfect the atomic bomb and charged the United States with
“international blackmail” (Hewlett and Anderson 1962, 581; Holloway 1994, 162). The timing of
the Bikini Tests alongside the discussions in the UNAEC appears to have been a matter of coincidence rather
than design. The Bikini Tests were pushed by the armed forces, not the Department of State.
The Navy wanted to determine the effects of an atomic bomb on a fleet of warships. The staff
planning for Bikini Atoll began in September 1945 and was approved by President Truman in
January 1946. The date of July 1 was scheduled to accommodate both the atomic scientists and
members of Congress. However, American decision-makers once again miscalculated the effect of the
atomic bomb on the Soviet Union. The Bikini Tests would harden Soviet diplomacy in the Atomic
Energy Commission, just as Hiroshima had done in the postwar peace negotiations. According
to Joseph Baratta (1985, 609), “The second Bikini test on 25 July provoked a complete Russian
rejection of the Baruch Plan.”
Yet the Bikini Tests were demonstrative of a more fundamental shift in American national
security policy over the course of 1946: the increasing reliance of the American military on the
atomic bomb. After the defeat of Germany and Japan, there was an overwhelming demand for
demobilization in the United States. Winston Churchill’s “Iron Curtain” speech on March 5, 1946,
in Fulton, Missouri may have influenced American public opinion about the reality of Soviet
political domination of Eastern Europe, but it did not reverse the course of American
demobilization. This reality of domestic politics made it impossible for the U.S. Government to
prevent the rapid reduction of American conventional military power, which constituted the most
serious check on Soviet power in Europe. The effect of demobilization was an increasing
dependence on the atomic bomb for national security (Lieberman 1970; Gaddis 1972; Gerber
1982).
Indeed, the link between the military and diplomatic dimensions of the American policy of
international control was spelled out to Baruch even before his speech at the UNAEC. In May,
Baruch had requested the opinions of the Joint Chiefs of Staff on the policy of international control.
Admiral Chester W. Nimitz believed that an international agreement on international control
should only follow a European peace treaty. General Dwight D. Eisenhower warned that, “If we
enter too hurriedly into an international agreement to abolish all atomic weapons, we may find
ourselves in the position of having no restraining means in the world capable of effective action.”
Therefore, “To control atomic weapons, in which field we are pre-eminent, without provision for
equally adequate control of other weapons of mass destruction can seriously endanger national
security” (Bernstein 1974, 1036). General Carl Spaatz made explicit the relationship between
demobilization and the atomic bomb:
The conventional military strength of the United States has been reduced drastically
by the hysterical pace of demobilization. The atomic bomb because of its decisive
nature is now an essential part of our military strength. Our monopoly of the bomb,
even though it is transitory, may well prove to be a critical factor in our efforts to
achieve first a stabilized condition and eventually a lasting peace. Any step in the
near future to prohibit atomic explosives would have a grave and adverse effect on
the United States since the result is reduction in our advantage without a
proportionate reduction in the strength of other powers. Such action in my opinion
would threaten the security of the United States and the peace and security of the
entire world (Bernstein 1974, 1036; emphasis added).
The consensus opinion of the Joint Chiefs of Staff was that the
“demobilization of conventional forces… made America dependent on the bomb” (Bernstein
1974, 1036).
American foreign policy was also moving from mere suspicion to outright hostility towards
the Soviet Union (Gaddis 1972). On February 22, George Kennan’s famous “Long Telegram”
was received in Washington. Kennan argued that “ideology and circumstances” were behind the
“basic unfriendliness” of Soviet foreign policy—namely, the Marxist-Leninist ideology that
perceived an “innate antagonism” between capitalism and communism, and the historical
circumstances of the repeated invasions of Russia by foreign powers that contributed to the “semi-
myth of implacable foreign hostility” (Kennan 1947, 570–571). The twin determinants of ideology
and circumstances shaped the expansionist nature of Soviet power, both the consolidation of the
monopoly-of-power at home and its expansion into every “nook and cranny” available to it around
the world. Kennan articulated the strategic principle of “containment,” whereby the United States
should apply steady and long-term counter-pressure to contain the expansionist tendencies of the
Soviet Union.
[I]t will be clearly seen that the Soviet pressure against the free institutions of the
western world is something that can be contained by the adroit and vigilant
application of counter-force at a series of constantly shifting geographical and
political points, corresponding to the shifts and manoeuvres of Soviet policy, but
which cannot be charmed or talked out of existence (Kennan 1947, 576).
While Kennan made no mention of the atomic bomb nor the international control over atomic
energy, the implications of his analysis were clear: the Soviet Union was an implacable great power
adversary and the United States must rely on the resolute demonstration of American power.
4.3.4 Stalemate in the Atomic Energy Commission, July–September 1946
The UNAEC convened for the third time on June 25, giving the delegates an opportunity to signal
where they stood with respect to the American and Soviet proposals. The Polish delegates endorsed
the Soviet proposal, while the Egyptian and French delegates supported the American proposal.
The Australian delegate and chairman, Herbert V. Evatt, expressed his support for the American
position and proposed the creation of a subcommittee to prepare a general plan for an international
authority on atomic energy. Once Gromyko was satisfied that the subcommittee would consider
all proposals—not simply the American one—the Commission approved the subcommittee
(Hewlett and Anderson 1962, 585). Subcommittee No. 1 held its first meeting on July 1 and
Chairman Evatt proposed six items for general discussion, which reflected the American plan for
an international authority. Gromyko immediately rejected the proposal, arguing that the first step
must be to outlaw the atomic bomb—international control could be considered later. He also
argued that the Baruch Plan was incompatible with the authority and functions of the UN Security
Council.
The American representative to the subcommittee, Ferdinand Eberstadt, shared two
additional memoranda. The first memorandum described a draft treaty to establish an international
authority, including (i) the basic principles to be stated in the preamble; (ii) the purposes and
functions of the authority; (iii) the composition, organization, and location of the authority; (iv)
provisions for verification and enforcement of the treaty; and (v) the relations between the
authority and the Security Council, General Assembly, and International Court of Justice,
including “any necessary amendment of the charter of the United Nations” (Department of State
1960, 25–29). The second memorandum elaborated in greater detail the problem of international
control over atomic energy and the American plan for an Atomic Development Authority,
following the substantive reasoning of the Acheson-Lilienthal Report. The memorandum
concluded that the “core” of a system of international control over atomic energy rested in the
authority’s capacity to exercise “effective dominion” over all uranium and thorium and their
derivatives, as well as “complete control” over all activities that could facilitate the production of
atomic weapons (i.e., “intrinsically dangerous activities”) (Department of State 1960, 29–36).
Gromyko posed many questions to the American representative. How would inspections
work? How would atomic facilities be distributed around the world? Why was the United States
unwilling to sign a convention to outlaw the atomic bomb? To this last question, Eberstadt
explained that such a treaty would not fulfill the mandate set out by the General Assembly’s
resolution and would be unacceptable to the United States in the absence of a system of
international control. The Soviet Union was asking the United States to eliminate the atomic bomb
and share scientific and technical information on atomic energy in return for little more than a
promise by other nations not to produce the atomic bomb. The Kellogg-Briand Pact to outlaw war
had demonstrated the ineffectiveness of such treaties (Hewlett and Anderson 1962, 587). On July
24, Gromyko attacked the entire American plan for an atomic development authority as being
inconsistent with the principles of national sovereignty and the United Nations system. The Soviet
Union could not accept the American proposals “either as a whole or in their parts.”
Two days later, Gromyko made another statement advocating the Soviet plan for an
international convention to outlaw the atomic bomb. This time the other delegates posed
questions to the Soviet representative. How could nations be sure of the effectiveness of an
international convention to outlaw the bomb? To this, Gromyko responded that the only real
assurance that existed between nations was the genuine desire of all to cooperate. There was always
the UN Security Council to ensure compliance (Hewlett and Anderson 1962, 590–591). On August
6, the American representative reiterated the goal of a “workable plan” to prevent the use of atomic
energy for war and to provide safeguards for complying states. He pushed Gromyko to give
further details about the Russian proposal, particularly how the Security Council might learn about
and respond to an aggressive nation that pursued the clandestine development of the atomic bomb.
The Russian representative responded that the United States was concentrating on the future, while
from the Russian perspective the problem was the present situation in which the atomic bomb
could be produced and used without limit. The subcommittee ended in political stalemate, unable
to reconcile the basic conflict between the American and Soviet positions.
The delegates decided to postpone further discussion until receiving a report from the
Scientific and Technical Committee on the technical feasibility of a system of international control
over atomic energy. This subcommittee organized itself as an informal conference amongst
scientists rather than as official representatives of their respective countries. On September 3, the
Scientific and Technical Committee presented its report: “We do not find any basis in the available
scientific facts for concluding that effective control is not technologically feasible” (Hewlett and
Anderson 1962, 594; Lieberman 1970, 341). The Russian scientist had collaborated fully with the
other scientists and announced that he had no objections to the report but lacked the political
authorization to make any decisions.
4.3.5 The Vote in the Atomic Energy Commission, September–December 1946
By September 1946, Baruch felt the need to discuss strategy and receive further instructions from
President Truman about how to approach the diplomatic impasse at the UNAEC. The Soviets had
categorically rejected the Baruch Plan. Baruch prepared a letter explaining the situation to the
President. As he saw it, the United States had two options. The first was to pursue a majority
report—not unanimity—and bring it to a vote in the UNAEC. This approach would solidify the
American position and likely receive support from friendly delegations but would probably be
rejected by the Soviet Union and Poland and ultimately fail in the Security Council. The second
was to avoid a diplomatic break between the United States and the Soviet Union on atomic energy
through a recess of the UNAEC. This approach would not aggravate the international situation and
would leave open the possibility of resuming discussions within the UNAEC under more favorable
circumstances in the future. Baruch expressed his preference for the former approach, since three
of the delegations that were supportive of the American position—Egypt, Mexico, and the
Netherlands—would leave the UNAEC on January 1, 1947, which could weaken the American
diplomatic position on the international control over atomic energy. Baruch concluded that
American national security should not depend on its plans for the international control of atomic
energy and that the United States should be prepared to take the necessary national measures
should diplomacy fail (Hewlett and Anderson 1962, 596–597).
Meanwhile, the Baruch Plan had become front-page news in the United States. The former
Vice President, Henry A. Wallace, had written a letter to President Truman criticizing the
American plan for the international control of atomic energy. How could the United States expect
other states—especially the Soviet Union—to accept binding limitations on their national
development of atomic energy while the United States was free to maintain and develop its arsenal
of atomic weapons? It was no surprise that the Soviet Union rejected the American plan, which
presupposed uneven trust and unequal commitments. If the United States insisted on a policy of
“stages,” then Wallace could foresee only deadlock on the international control of atomic energy,
followed by a nuclear arms race between the United States and the Soviet Union. Wallace saw it
as a vital confidence-building measure that the United States cease the production of atomic
bombs. The “Wallace Affair” put the White House temporarily on the defensive but ultimately did
not change American policy. Wallace was asked to resign from Truman’s cabinet and Baruch made
a line-by-line rebuttal against Wallace’s charges (Hewlett and Anderson 1962). As with Bohr in 1944,
Wallace’s dissent was ignored and his loyalty questioned (Lieberman 1970).
For a brief moment in October, the Scientific and Technical Committee in the UNAEC
seemed to make progress, with a unanimous decision to adopt its technical report and fruitful
discussions about technical safeguards and mining practices for preventing the diversion of raw
materials. Yet progress on technical matters had no impact on the political questions before the
Atomic Energy Commission. On October 29, Soviet Foreign Minister Molotov resumed the attack
on the Baruch Plan within the United Nations First Committee, accusing the United States of the
“monopolistic possession of the atomic bomb” and calling on states to adopt the Soviet proposal
of a treaty to prohibit the atomic bomb (Hewlett and Anderson 1962, 608). Then, on November 5,
President Truman and Secretary Byrnes directed Baruch to pursue a 10-2 vote in the Atomic
Energy Commission. The U.S. Government had made the decision to shift the goal of American
diplomacy from security against the atomic bomb through the international control over atomic
energy to a political “propaganda victory” against the Soviet Union in the emerging Cold War
(Shils 1947, 871; Bernstein 1974, 1043; Gerber 1982, 93).
From that moment on, the American delegation assumed the diplomatic offensive to
produce a vote in the UNAEC to adopt a report based on the Baruch Plan. On December 5, Baruch
once again addressed the Commission. His speech began by reminding the other dignitaries of
their duty to humanity:
My fellow members of the Atomic Energy Commission. The primary responsibility
for originating a system to protect the world against the atomic bomb has been
placed squarely in our hands… The stakes are greater than ever before offered
mankind [sic]—peace and security (Department of State 1960, 44–45).
According to Baruch, humanity stood at the “edge of danger.” On one side lay the “awakened
conscience of humanity” for international control over the “instruments of mass destruction” and
even the “elimination of war itself.” On the other side lay the “chaos of fear” of the atomic bomb
that would only bring humanity “nearer to our doom” (Department of State 1960, 45–46). The
American plan for the international control of atomic energy provided the best hope for the security
of humanity against the existential threat of the atomic bomb: “The outline here presented is the
bone and the sinew of any effective international control that may be—that shall be—that must be
established if the civilized world [sic] is not to be ended” (Department of State 1960, 46; original
emphasis). The United States was prepared to “surrender… the absolute weapon,” an offer which
was “generous and just,” but could not accept “unilateral disarmament by which America gives up
the bomb, to no result except our own weakening” (Department of State 1960, 46). The United
States welcomed the cooperation of other states, “especially” the Soviet Union, to establish an
international authority that could protect the world against the atomic bomb.
I beg you to remember that to delay may be to die. I beg you to believe that the
United States seeks no special advantage. I beg you to hold fast to the principle of
seeking the good of all, and not the advantage of one (Department of State 1960,
45).
Baruch presented a draft resolution to the Atomic Energy Commission based on the main
elements of the Baruch Plan. On December 13, the UN General Assembly passed Resolution 41(I),
which urged the “expeditious fulfilment” of the Commission’s work. Baruch seized this
opportunity to push for a vote. Gromyko objected that the American proposal did not conform to
the UNGA resolution and requested additional time for study and discussion. Baruch agreed to a
“short delay” of no more than three days. When Gromyko again sought to delay a decision, the
Commission voted against Gromyko’s call for a delay and the Soviet delegation ceased its efforts
at resistance. On December 31, 1946, the UNAEC held an anticlimactic vote on its “First Report
of the Atomic Energy Commission to the Security Council,” based on the Baruch Plan: 10 states
voted in favor and two—Poland and the Soviet Union—abstained. The 10-2 vote meant there was
no hope for adoption of the report by the UN Security Council. The international control over
atomic energy had failed.
4.3.6 Demise
In the third historical stage (June–December 1946), the international security constellation on the
atomic bomb became a high-level process of multilateral diplomacy within the newly established
UN Atomic Energy Commission. The principal securitizing actors and audiences were the
diplomatic representatives of the United States and Soviet Union—especially Baruch and
Gromyko—as well as a broader audience of the other delegations (see Figure 4.3). Officially, the
American policy on the international control of atomic energy—the Baruch Plan—was based on
the narrative of humanity securitization, calling for the creation of an international organization
with unprecedented authority and powers to neutralize the atomic threat to humanity. In practice,
national securitization continued to exercise a powerful influence over American policy, from the
American delegation’s insistence on the principle of “stages”—i.e., that an effective system of
international control must precede the United States relinquishing the atomic bomb—to the
growing dependence of the American military on the atomic bomb as a counterbalance to Soviet
conventional military power in Europe (e.g., the Bikini Atoll nuclear tests).
Conversely, the Soviet policy—or Gromyko Plan—contested the security narrative of
humanity securitization, framing the atomic bomb within the familiar category of “weapons of
mass destruction” and calling for the established international practice of a treaty to renounce and
destroy the atomic bomb—i.e., “normal politics”—while leaving open the possibility of “other
measures” in the future. Behind the scenes, the Soviet Union was committed to moving forward
with the secret development of the atomic bomb—that is, national securitization of the bomb. In
the end, these conflicting securitization narratives came to a head within the Atomic Energy
Commission. The diplomatic impasse between the American and Soviet representatives and their
respective plans culminated in an inconsequential 10-2 vote in the UNAEC and the demise of the
international control over atomic energy, “the last, best hope” for humankind (Department of
State 1960, 16).
Figure 4.3: “Demise,” June–December 1946
Notes: Figure 4.3 depicts the third stage of the security constellation on the atomic bomb. The United States
becomes the principal international securitizing actor with a macrosecuritization discourse, with the
Soviet Union and other nations in the UNAEC as its international audience. The Soviet Union becomes the
principal international actor with a macropoliticization discourse, with the United States and other nations
as its international audience.
4.4 Macrosecuritization Failure of the Atomic Bomb
Why did the international control over atomic energy fail? On the surface, the issue looks like a
classic case of bargaining failure in international relations (Fearon 1995), whereby neither the
United States nor the Soviet Union was willing to make the concessions required by the other to
make an agreement possible. The American plan called for the creation of an effective system of
international control over atomic energy—an Atomic Development Authority—which would be
implemented in stages and subject to penalties and sanctions free from the veto of the UN Security
Council. Only then would the United States give up the atomic bomb. The Soviet plan insisted on
an international treaty to outlaw the atomic bomb and immediate nuclear disarmament. Only then
would the Soviet Union consider a system of international control, but one that respected the
principles enshrined in the UN Charter, including the veto. The insistence of the United States and
the Soviet Union on irreconcilable plans made great power consensus impossible.
The historical analysis shows that the inability of the great powers to reach a diplomatic
consensus in the UNAEC on the international control of atomic energy and elimination of the
atomic bomb was actually the outcome of a series of policy decisions by the great powers on the
atomic bomb between 1942 and 1946 (see Appendix 9 for a timeline).
- The American decision to launch the Manhattan Project;
- Anglo-American cooperation and Soviet exclusion (i.e., the Quebec Agreement);
- The American decision to use the atomic bomb against Japan;
- The Soviet decision to make the atomic bomb;
- American atomic diplomacy and Soviet intransigence in peace negotiations;
- The agreements between the great powers to establish the UNAEC (i.e., the Truman-Attlee-King Declaration and Moscow Communiqué);
- The American proposal for the international control of atomic energy (i.e., the Acheson-Lilienthal Plan and Baruch Plan);
- The Soviet proposal for a treaty to outlaw the atomic bomb (i.e., the Gromyko Plan);
- The American decision to continue to develop and test the atomic bomb (e.g., the Bikini Tests);
- The stalemate and vote in the UNAEC.
Behind these critical junctures in the emergence, evolution, and demise of the international control
over atomic energy, the discourse analysis shows that the atomic bomb was subject to two
conflicting securitization narratives—humanity securitization and national securitization—until
diplomacy between the great powers in the Atomic Energy Commission finally forced a fateful
decision between them.
On the one hand, the narrative of humanity securitization framed the atomic bomb as an
existential threat to humanity that required an extraordinary system of international control over
atomic energy for the security and survival of humankind. On the other hand, the narrative of
national securitization framed the other great power as a threat to national security and the atomic
bomb as an essential source of military power. The failure of great power consensus on the
international control over atomic energy had its origins in the triumph of national securitization
over humanity securitization as the dominant security narrative on the atomic bomb. The
theoretical framework developed here emphasizes three variables to explain the primacy of
national securitization in the thinking and decisions of the great powers on the atomic bomb: (1)
the instability of the postwar distribution of power in the international system; (2) the power and
interests of the principal securitizing actors within the domestic security constellations of the
United States and the Soviet Union; and (3) the beliefs and perceptions of the American and Soviet
political leadership about the atomic bomb and the intentions of the other great power. Overall,
within the context of the emerging great power rivalry between the United States and the Soviet
Union—the Cold War—great power consensus on macrosecuritization failed because fear of “the
Other” came to subsume fear of “the Bomb.”
4.4.1 The Instability of the Postwar World
The first system-level driver behind macrosecuritization failure was the unstable distribution of
power in the postwar world. One of the structural forces behind the instability of the postwar world
was the transition from a multipolar to a bipolar international system. Kenneth Waltz (1964; 1988)
contended that established bipolar systems are relatively stable because of the reduced dangers of
uncertainty and miscalculation between two great powers, but the destabilizing mechanisms of
uncertainty and miscalculation are likely to be at their highest during a period of structural
transition, when the trajectory and future of the distribution of power are unclear (Sears 2018). In a
structural transition from multipolarity to bipolarity, both great powers are likely to face acute
uncertainties not only about the intentions and capabilities of the other, but also over the
capabilities and alignment of the former great powers in the overall balance-of-power.
In the aftermath of the Second World War, France, Germany, Italy, Japan, and even the
United Kingdom all fell from the ranks of the great powers, leaving the United States and the
Soviet Union as the only great powers—or “superpowers”—in the international system (Kennedy
1997, 357). The structural transition to bipolarity led almost immediately to great power rivalry
between the United States and the Soviet Union (Gaddis 1972). Both American and Soviet foreign
policy pursued relative gains in power and security at the expense of the other in the postwar order:
the Americans denied the Russians any influence over Japan, while the Russians excluded the
Americans from Eastern Europe. These tensions within the Grand Alliance were already evident
before the end of the Second World War—as demonstrated by Anglo-American exclusion of the
Soviet Union from the Manhattan Project—but the break in Soviet-American relations became
clear at the London Conference. By the time George Kennan sent his “Long Telegram” to
Washington in February 1946, outlining the principles of containment which would become a core
pillar of American grand strategy throughout the Cold War (Gaddis 1982), mutual suspicion and
mistrust had firmly taken hold in both Washington and Moscow. In the newly bipolar world, each
of the great powers became the “obsessing danger” of the other (Waltz 1964, 882). The Cold War
had begun (Gaddis 1972; 2005).
The instability of the postwar distribution of power was not merely a function of structural
transition, but also fundamental changes in the nature and distribution of military power brought
on by the atomic bomb. The atomic bomb constituted a revolutionary increase in the power of
destruction, which appeared to alter the basic character of military power and warfare (Brodie ed.
1946; Wohlstetter 1958; Herz 1959; Morgenthau 1964; Niebuhr 1963; Jervis 1989; Deudney
2007). As Oppenheimer wrote, the “radical character of atomic weapons lies… in their vastly
greater powers of destruction” (Masters and Way eds. 1946, 22). Even before the atomic bomb
had become a reality, the atomic scientists had emphasized its revolutionary destructive power.
The Bush-Conant Memorandum estimated the atomic bomb (the “super-bomb”) to be equivalent
to between 1,000 and 10,000 tonnes of high explosives, or roughly equal to the payload of between
100 and 1,000 B-29 bombers. The “future military potentialities” of thermonuclear weapons (the
“super-super bomb”) were even more extraordinary: “equivalent in blast damage to 1,000 raids of
1,000 B-29 Fortresses delivering their load of high explosives on one target” (Bush and Conant
1944, 2). In essence, the quantitative increase in the destructive power of the atomic bomb created
a qualitative change in the nature of warfare (Jervis 1989, 13). The revolutionary character of the
atomic bomb influenced strategic thinking on American national security. As Bernard Brodie
wrote:
What bothered the generals and admirals most was the startling efficiency of this
new weapon. It was so far ahead of the other weapons in destructive power as to
threaten to reduce even the giants of yesterday to dwarf size. In fact to speak of it
as just another weapon was highly misleading. It was a revolutionary development
which altered the basic character of war itself (Brodie ed. 1946, 2; emphasis
added).
The atomic bomb appeared to create a situation of radical vulnerability for all countries, including
the great powers. According to Einstein, “the construction of the atom bomb has brought about the
effect that all people living in cities are threatened, everywhere and constantly, with sudden
destruction” (Masters and Way eds. 1946, 76). The atomic bomb was a destabilizing force in the
postwar world because it seemed to overturn the existing foundations of military power and
national security. In Stalin’s words, the atomic bomb had “shaken the whole world” (Cochran et
al. 1995, 23).
The atomic bomb was also destabilizing because it produced an asymmetric distribution of
capabilities between the United States and the Soviet Union. In principle, the concept of “polarity”
implies a general equilibrium in the distribution of capabilities between the great powers (Waltz
1979). In practice, qualitative and/or quantitative differences in military capabilities can foster
perceptions of imbalance and insecurity (Jervis 1978). The atomic bomb was central to the great
powers’ calculations of their relative military power and vulnerabilities in the postwar world. The
United States possessed the atomic bomb, but its conventional military power was much
diminished by the “hysterical pace” of demobilization in Europe (Bernstein 1974, 1036). The
American military therefore became increasingly dependent on the atomic bomb as a
counterbalance to Soviet conventional superiority. The Joint Chiefs-of-Staff warned Baruch that
any short-term effort to prohibit the atomic bomb would have a “grave and adverse effect” on
American national security, since it would reduce American military power without a
“proportionate reduction” in the power of others (Bernstein 1974, 1036). The Soviet Union
possessed conventional military superiority but lacked the atomic bomb. The Soviet leadership
was anxious about the American atomic monopoly and wanted to produce the atomic bomb as
quickly as possible to restore the balance-of-power and “remove a great danger” to the Soviet
Union (Holloway 1981; 1994; Cochran et al. 1995, 23). In short, both the great powers had reason
to worry about their relative military power and vulnerabilities in an asymmetric distribution of
capabilities. The atomic bomb posed a serious distributional problem in the postwar world.
In sum, the structural transition from multipolarity to bipolarity, the revolutionary
transformation in the nature of military power and warfare, and the asymmetric distribution of
capabilities between the great powers all contributed to the instability of the distribution of power
in the international system. The instability of the postwar world—driven by the changing nature
and distribution of power—fueled the growing intensity of great power rivalry between the United
States and the Soviet Union, which left them both highly susceptible to a securitization narrative
of national securitization on the atomic bomb and decisively shaped their policies and
diplomacy on the international control over atomic energy. For the United States, the atomic
monopoly had become the mainstay of American military power and therefore the principle of
“stages” was a non-negotiable part of its plan for the international control over atomic energy
(Shils 1947; Lieberman 1970; Bernstein 1974; Herken 1980). The United States could not accept
unilateral disarmament in the absence of an effective system of international control, which
would—in Baruch’s words—have “no result except our own weakening” (Department of State
1960, 46). For the Soviet Union, the American “monopolistic possession of the atomic bomb” was
the principal threat to its national security. The Soviet Union could not accept the American
proposals “either as a whole or in their parts,” since they would maintain the American atomic
monopoly while blocking Soviet efforts to produce the atomic bomb. The Soviet plan
demanded the immediate negotiation of an international convention that would outlaw and
eliminate the American atomic monopoly. The instability of the postwar distribution of power was
therefore an important structural driver behind the divergence in the American and Soviet policies.
As David Kearn (2010, 56) concluded: “Whilst other delegates and observers may have seen some
possibility for progress if these ‘sequencing’ issues could be resolved, in reality neither the United
States nor the Soviets were in a position to compromise. The issue was indivisible.” The inability
of the great powers to achieve consensus on the international control over atomic energy and the
elimination of the atomic bomb—that is, macrosecuritization failure—was both cause and
consequence of the emerging “Cold War” (Sherwin 1973; Bernstein 1974; Herken 1980; Gerber
1982).
4.4.2 The American and Soviet Security Constellations
The second state-level variable behind macrosecuritization failure was the dynamics and
relationships between the principal securitizing actors and audiences within the domestic security
constellations of the great powers. How did the identities of and relationships between the key
securitizing actors, functional actors, and audiences influence the securitization narratives on the
atomic bomb within the great powers? The security constellations in the United States and the
Soviet Union looked very different, since the domestic political systems of American democracy
and Soviet communism shaped who could “speak” and “do security” on the atomic bomb. In both
cases, however, the political power and security interests of the principal securitizing actors
ensured that national securitization was the dominant securitization narrative behind the major
policy decisions on the atomic bomb.
In the United States, the security constellation on the atomic bomb was pluralistic and
competitive, with multiple securitizing actors and audiences. The principal securitizing actors
behind the narrative of humanity securitization were the atomic scientists, whose power and
influence depended on their specialized scientific and technical knowledge on atomic energy,
which gave them epistemic authority and legitimacy to speak security on the atomic bomb.
Initially, the secrecy of the Manhattan Project made this an elite phenomenon, with the atomic
scientists seeking to influence an audience of the key political leaders (e.g., Churchill, Roosevelt,
and Truman) and military officials (e.g., Groves, Marshall, and Stimson). After the Second World
War, the atomic scientists continued to lobby the government as their primary audience, but also
tried to influence the American public (e.g., by forming the Federation of Atomic Scientists and
publishing the Bulletin of the Atomic Scientists). At the height of their influence, some of the leading
atomic scientists—including Bush, Conant, and Oppenheimer—were instrumental in developing
the Acheson-Lilienthal Plan. However, as the atomic bomb shifted from being a scientific and
technical matter to one of defense and diplomacy, the atomic scientists found themselves on the
margins of political power. By the time the international control over atomic energy had become
a matter of multilateral diplomacy and negotiation within the UNAEC, the atomic scientists no
longer had any direct authority and influence over American policy.
Conversely, the principal securitizing actors behind a narrative of national securitization
occupied key positions within the United States Government, especially in the Departments of
State and Defense. They derived their power and influence over American policy from their
positions and functions within the state bureaucracy, which gave them significant political and
institutional authority to “speak” and “act” on matters of American foreign policy and national
defence. During the Second World War, the Manhattan Project was under the direction of the U.S.
Army, which gave the military—including Stimson and Groves—significant influence over
American policy, while Anglo-American cooperation ensured that Churchill and the British
government had a powerful external influence on American policy. The power and interests of
these securitizing actors shaped some of the major decisions on early-American policy on the
atomic bomb (Bernstein 1975), such as the exclusion of the Soviet Union from the Manhattan
Project and the use of the atomic bomb against Japan. After the Second World War, Congress took
control over the major decisions on the domestic policy of atomic energy but left primary
responsibility for international policy to the Departments of State and Defense. Once again
securitizing actors within the state bureaucracy—especially Secretary Byrnes and the Joint Chiefs-
of-Staff—ensured that the narrative of national securitization was central to the major decisions of
American foreign and defence policy on the atomic bomb, such as the pursuit of atomic diplomacy
towards the Soviet Union and the continued development of the atomic bomb (Alperovitz 1965).
During the negotiations within the UNAEC, the main voices behind American policy and
diplomacy were those of President Truman, Secretary Byrnes, and Bernard Baruch, all of whom
were deeply committed to protecting the American atomic monopoly and secrets until an effective
system of international control over atomic energy was in existence.
American democracy shaped the security constellation on the atomic bomb in other ways.
The Truman administration needed the support of Congress—especially the Senate—for the
international control over atomic energy, since Congress held the political authority to ratify or
reject any treaty. According to Gaddis (1972, 332), “As distrust of Russia grew during 1946, the
[Truman] Administration began to shape its policy, not according to what the Russians might
accept, but in terms of what Congress would not condemn.” The Truman administration’s selection
of Baruch as the American representative to the UNAEC was primarily a political choice to ensure
the support of Congress (Hewlett and Anderson 1962; Lieberman 1970; Bernstein 1974).
Additionally, the structure of the state bureaucracy, especially the lack of policy coordination
between the Departments of State and Defense, meant that American foreign policy and defence
policy were not always aligned, as demonstrated by the American military conducting the Bikini
nuclear tests only weeks after the presentation of the Baruch Plan at the UNAEC. Finally, the
American public had important but countervailing influences on American policy. The American
public was generally supportive of the policy of international control over atomic energy. One poll
in September 1946 found that 67% of polled Americans supported the international control over
atomic energy—compared to 28% opposed and 5% undecided—including if the United States had
to choose between two courses of action: supporting an international organization or making “more
and better atomic bombs” (Erskine 1963, 165). However, the public’s unequivocal demand to
bring American soldiers home after the war drove rapid demobilization, which indirectly
deepened American military dependence on the atomic bomb.
In the Soviet Union, the security constellation on the atomic bomb was hierarchic in its
political structure and homogenous in its securitization narrative. The totalitarian machinery of
centralized political power and the suppression of public discourse meant that there were
relatively few securitizing actors and an even smaller audience behind the securitization of the
atomic bomb. The Soviet atomic scientists had neither the resources nor the organization to
constitute a securitizing actor. Before the Second World War, the atomic scientists suffered
repeated “purges” and the Soviet Union had fallen behind in scientific research on atomic
energy (Holloway 1981, 170). During the war, most of the atomic scientists redirected their work
towards the more urgent requirements of the military situation (Holloway 1981, 170–171). The leading
atomic scientist in the Soviet Union, Kurchatov, tried on multiple occasions to call attention to the
military and economic implications of atomic energy, but faced obstacles from scientists skeptical
about atomic energy and from short-term priorities for scarce resources (Holloway 1994). Nor did the
Soviet intelligence community—i.e., the NKVD and GRU—constitute an effective securitizing
actor, despite possessing intelligence which suggested that the Americans, British, and Germans
all had secret programs on atomic energy (Holloway 1981, 176; Holloway 1994, 82–84). The most
significant securitizing move came in May 1942, when a junior military officer, Flyorov, wrote a
letter to Stalin which warned that the leading atomic scientists were no longer publishing and that
foreign powers were probably conducting secret research on the atomic bomb (Holloway 1981;
1994). These securitizing actors influenced the Soviet decision by the end of 1942 to recommence
scientific research on atomic energy, but not on a scale which would suggest that the Soviet
leadership saw the atomic bomb either as an existential threat or as a decisive weapon in its struggle
for survival against Nazi Germany (Holloway 1994, 90).
Ultimately, the Soviet decision to make the atomic bomb rested on the authority of one
man: Stalin (Holloway 1994). Stalin was the primary audience in the Soviet security constellation.
When Stalin made the decision to launch a crash program to make the atomic bomb in August
1945, there is no record of any discussion or debate amongst the Soviet leadership (Holloway
1994, 131). Rather, Stalin simply summoned Vannikov and Kurchatov and said, “A single demand
of you, comrades. Provide us with atomic weapons in the shortest possible time!” (Cochran et al.,
1995, 23). This was not a request, but a directive. Stalin’s decision was a direct response to the
events of Hiroshima and Nagasaki, reinforced by American atomic diplomacy and continued
Anglo-American cooperation (Holloway 1981; 1994; Cirincione 2007). By the time the
international control over atomic energy was under negotiation in the UNAEC, the only other
voices in the Kremlin with any influence over Soviet policy—especially Foreign Minister Molotov
and Soviet diplomat Gromyko—had fully embraced the narrative of national securitization. There
was no parallel narrative of humanity securitization within the domestic security constellation of
the Soviet Union.
Overall, the security constellations in the United States and the Soviet Union favored a
securitization narrative of national securitization over humanity securitization on the atomic bomb.
While the domestic political systems of American democracy and Soviet communism produced
different patterns of security constellations—pluralistic and competitive in the United States,
hierarchic and homogenous in the Soviet Union—in both cases national securitization became the
dominant security narrative of powerful securitizing actors within the state. In the United States,
the main securitizing actors behind the narrative of humanity securitization—the atomic
scientists—gradually found themselves on the margins of political power, as they no longer
occupied key positions within the state bureaucracy and their epistemic authority waned, whereas
the main securitizing actors behind the narrative of national securitization—especially those within
the Departments of State and Defense—exercised a persistent and powerful influence over
American policy. In the Soviet Union, the primary audience was Stalin and the only securitizing
actors with direct access to him embraced a narrative of national securitization.
4.4.3 Fear of a “Threatening Other”
The third individual-level variable behind macrosecuritization failure was the threat-related
beliefs and perceptions of the American and Soviet political leadership about the atomic bomb and
the other great power. In both the United States and the Soviet Union, differences in political
ideology and recent historical memories of the war contributed to a sense that one’s own foreign
policy was benign and just while the other’s intentions were untrustworthy or hostile. The
collective beliefs of political leaders in a “benevolent self” and a “threatening other” contributed
to mutual suspicions and mistrust between the great powers and shaped their threat perceptions of
the atomic bomb (Wendt 1992; Haas 2018), making them more susceptible to a securitization
narrative of national securitization and undermining the prospects for great power consensus on
macrosecuritization. Both the American and Soviet political leadership concluded that possession
of the atomic bomb by an adversarial great power constituted a greater peril than the atomic bomb
itself, and that their own possession of the atomic bomb was vital to national security.
American liberalism and the war had a profound impact on how the American leadership
understood the United States’ place and purpose in the postwar world and its relationship with the
Soviet Union. The American “city upon a hill” worldview contributed to the belief that American
political values represented the universal aspirations of humankind. In his radio address to the
American people on “V-J Day,” President Truman described the Second World War as a “victory
of liberty over tyranny.” In the 1941 Atlantic Charter, the American and British governments had
established the “common principles” of sovereignty and self-determination, free trade and free
movement of peoples and goods, and territorial non-aggrandizement and disarmament as
“essential” for world peace and security (Roosevelt and Churchill 1941). American foreign policy
aimed to spread liberal democracy to the liberated nations of Europe and their defeated enemies in
order to construct a liberal international order (Ikenberry 2001, 163–165). The Americans firmly
believed that they had both might and right on their side.
However, the power and interests of the Soviet Union threatened the American vision of a
liberal international order. The Truman administration perceived the Soviet military occupation
and support for puppet regimes as a threat to the freedom and self-determination of Eastern Europe
(Gaddis 1972, 133–137). More fundamentally, American political leaders had a profound
ideological mistrust of Soviet communism, rooted in its negation of political liberty and private
ownership. At Potsdam, for example, Stimson wondered how the Americans could trust a
government whose authority depended on the totalitarian machinery of secret police and political
censorship, or how the Soviet leadership would agree to a system of international control with
intrusive inspections of military and industrial installations when they would not even permit the
entry of foreign journalists to monitor elections in Eastern Europe. This did not, however, prevent
Stimson from entertaining the quixotic idea, based in American liberal idealism, of a quid pro quo
of the United States giving up the bomb in exchange for the political liberalization of the Soviet
Union (Hewlett and Anderson 1962, 388).
American liberalism and historical memory of the war also shaped the political leadership’s
beliefs about the atomic bomb and the American plan for the international control over atomic
energy. American exceptionalism contributed to the (self-)perception of American power and
intentions as benign but the power and intentions of others as threatening. There is perhaps no
clearer demonstration of the relationship between American exceptionalism and the atomic bomb
than when, on the day of the destruction of Hiroshima, President Truman declared the United
States “trustees of this new force—to prevent its misuse, and to turn it into the channels of
service to mankind [sic].” The Second World War also shaped American fears of the atomic bomb.
Foremost in mind was the threat of a surprise attack with atomic bombs—a nuclear Pearl Harbor. One
military intelligence assessment concluded that “our most urgent military problem is to reorganize
ourselves to survive a vastly more destructive ‘Pearl Harbor’ than occurred in 1941” (Brodie ed.
1946, 72). Similarly, Stimson’s (1945) memorandum to the President on the atomic bomb
suggested, “the future may see a time when such a weapon may be constructed in secret and used
suddenly and effectively with devastating power by a wilful nation or group against an
unsuspecting nation or group of much greater size and material power.” Stimson also suggested
that “probably the only nation which could enter into production within the next few years is
Russia.”
The belief that American possession of the atomic bomb did not pose a threat to other
countries, but that atomic energy in the hands of other states would endanger world peace and
security, was fundamental to American thinking. The American plan for the international control
of atomic energy—including both the Acheson-Lilienthal Plan and the Baruch Plan—required
other states to come under a system of international control before the United States would give
up the bomb. At the same time, the American leadership rejected the Soviet proposal as
unacceptable because a treaty to outlaw the atomic bomb in the absence of a system of international
control—i.e., “pious words and thoughts”—could afford no guarantee of safety to the United
States to justify giving up the security of the atomic bomb. The strongest critique came from Henry
Wallace, who rightly pointed out that the American plan demanded an unequal commitment from
the United States and the Soviet Union (Lieberman 1970). Wallace’s dissenting opinion did
nothing to change American policy, which was widely believed to be magnanimous and just. As Baruch saw
it, the United States had “right as well as might on its side” (Hewlett and Anderson 1962, 591).
Barton Bernstein (1974, 1004) concludes that the Baruch Plan, “widely heralded as magnanimous,
was unsatisfactory to the Russians because it protected the American nuclear monopoly and
threatened Soviet security and industrial development.” Ultimately, the Americans demanded a
higher degree of trust upfront from other states than they were willing to give in return.
Similarly, political ideology and historical experience influenced how the Soviet leadership
understood the place of the Soviet Union and its relationship with the United States in the postwar
world. The ideological belief of Marxist-Leninism in the inherent antagonism between capitalism
and communism—a struggle driven by the materialist forces of history (Lenin 1918)—combined
with the Soviet Union’s position as the sole communist state contributed to the Soviet leadership’s
sense of isolation and mistrust towards the “Western powers,” which became a characteristic
feature of Soviet foreign policy after the Second World War (Gaddis 1972). Furthermore, historical
memory of the Nazi invasion—in violation of the 1939 Molotov-Ribbentrop Pact (or
Nonaggression Pact)—as well as British and American support for the “White” (or “counter-
revolutionary”) forces in the Russian Civil War all contributed to Soviet perceptions of what
George Kennan (1947, 571) called “the semi-myth of implacable foreign hostility.” There is every
reason to believe that Marxist-Leninist ideology influenced the Soviet leadership’s understanding
of the ends and means of Soviet foreign policy. As Stalin wrote:
Could Russian Communists confine their work within the narrow national bounds
of the Russian revolution? Of course not. On the contrary, the whole situation…
impelled them to go beyond these bounds in their work, to transfer the struggle to
the international arena, to expose the ulcers of imperialism, to prove that the
collapse of capitalism was inevitable, to smash social chauvinism and social-
pacifism, and, finally, to overthrow capitalism in their own country and… facilitate
the task of overthrowing capitalism for the proletarians of all countries (Stalin 1953,
79–80).
The fact that the Soviet Union and the United States were the leading communist and capitalist
states in the postwar world meant that the ideological struggle between communism and capitalism
must also be a political struggle between the great powers.
Ideology and history also influenced the Soviet leadership’s perceptions of the atomic
bomb—and, especially, American possession of the atomic bomb. Marxist-Leninist ideology
postulated that “uneven and combined development” was the central materialist force of
international history (Rosenberg 2016). The Soviet leadership believed that the legitimacy and
security of the Soviet state depended on the relative economic productivity of communism and
capitalism. Indeed, Stalin saw Russia’s economic and technological underdevelopment as the main
cause of its historical suffering at the hands of foreign powers (Holloway 1981; 1994). Soviet
propaganda emphasized the need to “catch up and overtake” the Western capitalist countries
(Scherrer 2014). The Soviet Union achieved impressive relative economic growth through its rapid
industrialization during the 1930s, to the extent that the Soviet war-economy was able to
outcompete that of Nazi Germany (Kennedy 1989, 330–332). The atomic bomb, however,
represented a potent symbol of the economic and technological prowess of the United States.
According to Holloway (1994, 133), “As the most powerful symbol of American economic and
technological might, the atomic bomb was ipso facto something the Soviet Union had to have too.”
In a public speech on November 6, 1945, Molotov declared, “We will catch up on what we must
and we will achieve the flowering of our nation. We will have atomic energy, and much else”
(Holloway 1981, 185).
From the Soviet perspective, American foreign policy on the atomic bomb seemed to
confirm the ideological premises of the implacable hostility between capitalist and communist
powers: the exclusion of the Soviet Union—their wartime ally—from Anglo-American
cooperation on the Manhattan Project; the use of the atomic bomb once the Soviet Union had
entered the war against Japan; American atomic diplomacy during the peace negotiations; the
continued exclusion of the Soviet Union from the Truman-Attlee-King declaration; the further
development and testing of the atomic bomb at Bikini Atoll; and the general unwillingness of the
United States to share scientific and technical knowledge on atomic energy or give up its
monopolistic possession of the atomic bomb. Not surprisingly, the Soviet leadership was highly
suspicious and mistrustful of the American plan for the international control over atomic energy.
As Holloway (1994, 162–163) wrote:
Given the premises of Soviet policy, it is very unlikely that Stalin or his colleagues
believed that international control would be established. They did not expect help
from the United States in building the bomb; nor did they expect the United States
to give up its monopoly. On the contrary they expected the United States to try to
hold on to its monopoly for as long as possible, and to use it to put pressure on the
Soviet Union.
Thus, the Soviet Union made the counterproposal of an international treaty to prohibit the atomic
bomb, which would expose American hypocrisy and imperialism while giving the Soviet Union
time to make the atomic bomb.
The final question is whether American and Soviet political leaders perceived the atomic
bomb as an existential threat to humanity. Clearly, the Soviet leadership was extremely unreceptive
to the securitization narrative of humanity securitization on the atomic bomb. The Soviet diplomats
framed the atomic bomb within the established category of “weapons of mass destruction” and
proposed the normal international practice of a treaty to prohibit the possession and use of the
atomic bomb—that is, macro-politicization. More fundamentally, Stalin appears to have
understood the significance of the atomic bomb exclusively in terms of its implications for Soviet
national power and security. Hence, Stalin’s assertion that the balance of power had been
“destroyed” and that Soviet acquisition of the atomic bomb would “remove a great danger”
(Cochran et al. 1995, 23). Joseph Cirincione (2007, 19) argues that “For Stalin, the bomb wasn’t
a threat to all of humanity, but rather a source of security and power.” Holloway (1994, 166)
concurs, stating that “For Stalin the danger was not the atomic bomb as such, but the American
monopoly of the bomb. The obvious solution to this problem, in Stalin’s mind, was a Soviet atomic
bomb.”
Why the Soviet leadership did not perceive the atomic bomb as an existential threat to
humanity cannot be answered conclusively. It may have been that the political relationship
between the United States and the Soviet Union had become so bogged down in mutual suspicions
and mistrust that the Americans—scientists and diplomats alike—were simply unable to be a
credible securitizing actor to a Soviet audience. Perhaps the reluctance of the Americans to share
scientific and technical information on the atomic bomb—“atomic secrets”—weakened their
ability to convey the existential threat of nuclear war. Following the 10-2 vote in the UNAEC,
Edward Shils (1947, 865–867) reflected that the Soviet leadership’s failure to recognize the atomic
bomb as an existential threat was at least partially responsible for the diplomatic failure of the
international control over atomic energy:
Did the Soviet Union wish to have the atomic bomb subjected to international
control?… [A] considerable part of the Soviets’ delaying tactics are to be accounted
for by their ignorance of the nature of the atomic bomb and their uncertainty as to
its significance… Given their desire to remain at peace with the United States,
which we accept as axiomatic, it seems that only ignorance of the bomb’s
potentialities and the failure to reflect on the nature of an atomic armaments race
could allow them actually to obstruct, as they have, the establishment of the scheme
which could have helped to prevent an atomic armaments race and an atomic bomb
war which would be generated by a situation in which several large states already
hostile toward one another possess atomic bombs.
The Soviet leadership’s rejection of the securitization narrative of humanity securitization
weakened the prospects for great power consensus on the international control of atomic energy.
Nor did the American political leadership accept the narrative of humanity securitization
without reservations. Despite the rhetoric of his public speeches, President Truman may never
have truly believed that the atomic bomb was an existential threat to humanity (Cirincione 2007,
19). According to Holloway (1994, 166), “Neither Truman nor Stalin saw the atomic bomb as a
common danger for the human race.” Secretary Byrnes only grudgingly accepted the policy of
international control once atomic diplomacy had failed to deliver American foreign policy
objectives. The Joint Chiefs of Staff clearly saw Soviet military power as a greater danger than
the atomic bomb. Even Baruch’s thinking shifted far too quickly and drastically from seeing the
international control over atomic energy as the “last, best hope” for humanity in the summer to
celebrating a mere “propaganda victory” for the United States by the winter of 1946 (Hewlett and
Anderson 1962; Lieberman 1970).
Ultimately, the United States Government’s policy decisions on the atomic bomb suggest
that the American political leadership was not entirely persuaded by the narrative of humanity
securitization, since it did not do everything in its power to achieve the goal of the international
control over atomic energy. Specifically, the Americans were unwilling to share their scientific
and technical knowledge on atomic energy or accept unilateral limitations or reductions on their
arsenal of atomic bombs in the hopes of making possible an agreement with the Soviet Union. As
David Kearn (2010, 53-54) wrote:
One issue that remained unchanged throughout the process of developing the US
policy was the commitment to maintain the US atomic arsenal… Various historical
evaluations of the period have faulted Baruch’s hard-line approach to negotiations
with the Soviets in the AEC. However, this misses a more important point that
Baruch’s views were actually quite reflective of the broader consensus within the
American political and military leadership. Most importantly, there was absolutely
no support within the Truman administration for any plan that would relinquish
atomic weapons before a functioning ADA was in place. This view was so pervasive
as to be taken for granted, even within the scientific community. This position
would prove the central focus of the Soviet diplomatic counterattack in the Atomic
Energy Commission. Even then, the US commitment to its atomic arsenal would
remain non-negotiable (emphasis added).
Faced with conflicting securitization narratives about the two principal security threats of the
postwar world—the atomic bomb and the Soviet Union—the U.S. Government based its major
policy decisions on the premise that the Soviet Union posed the greater danger to American
national security, which precluded the United States from doing everything in its power to secure
a diplomatic consensus on the international control over atomic energy. In the end, the United
States chose to hinge its national security on the same weapon that posed an existential threat to
humanity; for fear of “the Other” had come to outweigh fear of “the Bomb.”
4.5 Conclusion
This chapter has examined the collapse of the international control over atomic energy as a
historical case of macrosecuritization failure. The American plan for the international control over
atomic energy—the Baruch Plan—was the result of a securitization narrative of humanity
securitization on the atomic bomb. This narrative framed the possibility of an arms race and atomic
warfare between the great powers as an existential threat to humanity, and called for the creation
of a new international organization—an Atomic Development Authority—with extraordinary
powers and authority over atomic energy in all countries to neutralize the danger that any state
would acquire the atomic bomb. However, the failure of the international control over atomic
energy had its origins in the conflicting securitization narrative of national securitization, which
framed a rival great power as a threat to national security and possession of the atomic bomb as a
vital source of military power. The evidence in this chapter supports the theoretical framework of
three variables to help explain why national securitization triumphed over humanity securitization
as the dominant narrative in shaping the thinking and decisions of the great powers on the atomic
bomb: (1) the instability of the postwar distribution of power in the international system; (2) the
power and interests of the principal securitizing actors within the domestic security constellations
of the United States and the Soviet Union; and (3) the threat-related beliefs and perceptions of the
American and Soviet political leadership about the atomic bomb and the other great power.
Ultimately, macrosecuritization failed because fear of “the Other” overshadowed fear of “the
Bomb.”
The historical case of the failure of the international control over atomic energy has broader
theoretical significance for understanding the obstacles to macrosecuritization under the structural
condition of international anarchy. The Charter of the United Nations affirmed the political
“sovereignty” of states as the ordering principle for the postwar international order, but the
international control over atomic energy would have required a fundamental revision of the
meaning and limits of national sovereignty. The Baruch Plan proposed the creation of an
international organization with, inter alia, dominion over natural resources, exclusive
responsibility for scientific research and technological development, authority to determine
acceptable and unacceptable behavior, unfettered right of access and inspection, and the
prerogative to punish violations and noncompliance on matters of atomic energy. This was a
radical solution to the problem of war in international relations: states would forfeit the right to the
unlimited pursuit of military power and national security—i.e., “self-help” (Waltz 1979)—by
accepting mutual constraints on their military capabilities and enforcement by an international
authority. In short, the Baruch Plan envisioned something of a Hobbesian exchange (Herz 1959;
Craig 2003; Deudney 2007), whereby states would transfer a degree of their national sovereignty
to an international organization in order to protect humanity from the existential threat of the
atomic bomb. “The essence of the proposal was,” in the view of Joseph Baratta (1985, 618), “the
modification of sovereignty.”
Fear of the atomic bomb was enough to provoke serious contemplation amongst states—
especially the United States—of the need for a political transformation of the international system
for the security and survival of humankind. Many atomic scientists—including Niels Bohr,
Albert Einstein, Arthur Compton, Leo Szilárd, and Robert Oppenheimer—publicly discussed the
possibility of a “world state” for security against the atomic bomb, with the slogan “one world, or
none!” (Masters and Way, eds. 1946). They were not alone. In Stimson’s words:
In this last great action of the Second World War we were given final proof that
war is death. War in the twentieth century has grown steadily more barbarous, more
destructive, more debased in all its aspects. Now, with the release of atomic energy,
man’s ability to destroy himself is very nearly complete [sic]. The bombs dropped
on Hiroshima and Nagasaki ended a war. They also made it wholly clear that we
must never have another war. This is the lesson men [sic] and leaders everywhere
must learn, and I believe that when they learn it they will find a way to lasting
peace. There is no other choice (Stimson 1947, 67).
Baruch’s speech to the UNAEC called sovereignty “today’s phrase for yesterday’s isolation” and
claimed that the peoples of the world were “not afraid of an internationalism that protects.”
The history of the international control over atomic energy is fascinating because it
glimpsed beyond the political structure of international anarchy and the nation-state towards the
hierarchic ordering principle of a world political entity that could “speak” and “do security” on
behalf of humankind. It was, however, exactly this political challenge to the sovereignty of states
that inspired the most resistance in the UNAEC, including from the diplomats who were generally
sympathetic to the Baruch Plan (Hewlett and Anderson 1962). The Soviet representative,
Gromyko, insisted on the principles of national sovereignty and the right to the veto, which were
enshrined in the UN Charter and seen as necessary to protect the Soviet Union from political
domination by the United States and “the West”—which, to the Soviet leadership, was effectively
what a “world state” would amount to (Baratta 1985, 606). Thus, the prospect of the international control over atomic
energy posed a fundamental challenge to the political sovereignty of states, and the reality of
international anarchy imposed a structural constraint on the prospect of the international control
over atomic energy. As Kearn (2010, 59) concluded:
In retrospect, the revolutionary redefinition of state sovereignty required for the
successful implementation of the Baruch Plan seems to preclude its adoption in an
anarchic realm inhabited by sovereign nation states. However, the conclusion
drawn by a broad consensus of the US political-military policy community was
precisely that the revolutionary nature of atomic weaponry demanded a
revolutionary redefinition of traditional sovereignty in order to maintain peace and
avoid nuclear catastrophe.
The question before the diplomats in the Atomic Energy Commission came down to the conflicting
premises of the security narratives of humanity securitization and national securitization. What
should be the referent object of security: humanity or the nation? In the end, national securitization
prevailed.
Chapter 5
Biological weapons have massive, unpredictable, and potentially uncontrollable
consequences. They may produce global epidemics and impair the health of future
generations… These important decisions have been taken as an initiative toward
peace. Mankind [sic] already carries in its own hands too many of the seeds of its
own destruction. By the example we set today, we hope to contribute to an
atmosphere of peace and understanding between nations and among men [sic].
— Richard Nixon, 1969
The Biological Weapons Convention
This chapter explores the macrosecuritization of biological weapons in the late 1960s and early
1970s. The Biological Weapons Convention (BWC) represents a case of successful
macrosecuritization to diminish the existential threat of biological weapons and warfare through
an unprecedented international treaty to prohibit an entire category of weapons of mass destruction
(WMD). The argument here is that the reduced intensity of great power rivalry between the United
States and the Soviet Union at this point of the Cold War left them more amenable to a security
narrative of humanity securitization on biological weapons and opened space for great power
consensus on macrosecuritization. At the time, biological weapons became subject to the
conflicting security narratives of humanity securitization and national securitization: the former
emphasized the indiscriminate, unpredictable, and uncontrollable nature of biological weapons
that could pose an existential threat to humanity; the latter focused on the strategic and tactical
implications of biological weapons for military power and national security.
The theoretical framework developed here points to three variables to explain the triumph
of humanity securitization as the dominant security narrative on biological weapons and the
possibility of great power consensus on the Biological Weapons Convention: (1) the stability of
the bipolar distribution of power in the international system; (2) the absence of powerful
securitizing actors behind the narrative of national securitization within the domestic security
constellation of the United States; and (3) the ideas and beliefs of political leaders about biological
weapons and great power relations. Overall, the relative stability of the bipolar system and reduced
intensity of the Cold War (i.e., “détente”), the inability of the American national security apparatus
to justify the continued military utility of biological weapons, the separation of biological and
chemical weapons in disarmament diplomacy, and growing concern for the indiscriminate,
unpredictable, and uncontrollable nature of biological weapons all contributed to the triumph of
humanity securitization over national securitization on biological weapons and the possibility of great
power consensus on macrosecuritization in the form of the Biological Weapons Convention.
This chapter offers a case study of the successful macrosecuritization of biological
weapons. It is shorter than the other chapters on the atomic bomb and artificial intelligence due to
the comparative simplicity of this case and the focus of the dissertation on the theoretical puzzle
of macrosecuritization failure. Nonetheless, this case of the successful macrosecuritization of
biological weapons offers opportunities for comparisons to the cases of macrosecuritization failure
on the atomic bomb and artificial intelligence. The rest of the chapter is organized as follows. The
first section examines the historical process behind the successful negotiation of the BWC, from
the reemergence of biological weapons on the international agenda in 1968 through to the
diplomatic breakthrough between the United States and the Soviet Union, which made the
signature of a treaty possible by 1972. The second section analyzes the macrosecuritization of
biological weapons in terms of the conflicting security narratives of humanity securitization and
national securitization, the identities and relationships between the principal securitizing actors
and audiences, and the security measures in the BWC to diminish the existential threat of biological
weapons. The third section examines the sources and dynamics behind great power consensus on
the macrosecuritization of biological weapons. The conclusion discusses the theoretical and
practical implications of this case of the successful macrosecuritization of biological weapons.
5.1 Origins of the Biological Weapons Convention
Biological and chemical weapons (BCW) had long been the subject of disarmament diplomacy to
prevent their use in war. At the 1899 Hague Conference, states signed a declaration to abstain from
the use of projectiles whose sole object was the diffusion of “asphyxiating or deleterious gases.” In the 1925 Geneva Protocol (“Protocol for the
Prohibition of the Use in War of Asphyxiating, Poisonous or Other Gases, and of Bacteriological
Methods of Warfare”), states parties agreed to prohibit “the use of bacteriological methods of
warfare.” On January 24, 1946, the United Nations General Assembly passed its very first
resolution, which established a Commission to make recommendations on “the elimination from
national armaments of atomic weapons and of all other major weapons adaptable to mass
destruction,” a general category widely considered to include nuclear, biological, chemical, and
radiological weapons. Yet the goal of the general elimination of weapons of mass destruction at
that time was no more successful than the failed attempt to achieve the international control over
atomic energy and nuclear disarmament.
Biological weapons reemerged on the international agenda in the late 1960s when the United
Kingdom raised the issue within the forum of the Eighteen-Nation Committee on Disarmament
(now the Conference on Disarmament) (Goldblat 1997; Cross and Klotz 2020). On August 6, 1968,
the British Disarmament Minister, Frederick (“Fred”) Mulley, stated that his government did not view
the Geneva Protocol as “entirely satisfactory” for addressing the dangers of biological warfare.
[I]ndeed the threat to humanity from the use of these agents is perhaps even greater
today than it was in 1925. As we seek to reduce and, I hope, ultimately to eliminate
the terrible threat of nuclear conflagration, we must not neglect to take steps also to
deal with the threat posed by these means of warfare which have a potential of
misery and suffering of comparable severity (USACDA 1969, 560; emphasis
added).
The British delegation tabled a working paper that recommended the separation of biological and
chemical weapons in disarmament diplomacy and called on states to prioritize the negotiation of
a treaty to prohibit biological weapons, which are “generally regarded with even greater
abhorrence” than chemical weapons. While the British working paper asserted that verification of
an international prohibition of biological weapons was “not possible,” a treaty would nevertheless
contribute to “world security” and could eventually create the conditions in which the use of
biological weapons in war would be “beyond contemplation” (USACDA 1969, 563)—that is, a
biological weapons taboo.
[W]e must make a choice—balance the risks of evasion if we go ahead with the
formulation of new obligations, against the risks for the world if we do nothing and
allow the fears of eventual use of microbiological methods of warfare to continue
and intensify. My choice is emphatically to go ahead; we cannot afford to do
nothing (USACDA 1969, 562).
The initial responses to the British proposal by other states, including the great powers,
were mixed. The Soviet representative recognized “the threat which the use of chemical and
bacteriological weapons represents for mankind” [sic], but accused the United Kingdom of
“solving problems which have long since been solved” (USACDA 1969, 574). Indeed, the Soviet
Union and Warsaw Pact countries initially opposed the British proposal for the separation of
biological and chemical weapons and the “dangerous” proposal to negotiate a new treaty, which
could “destroy” the Geneva Protocol, “an already existing, useful and important international
document on the prohibition of chemical and bacteriological weapons without having replaced it
by a better or indeed by any other international instrument” (USACDA 1969, 574). The American
delegate agreed with the United Kingdom’s assessment that biological weapons “are weapons of
mass destruction which constitute a danger to all mankind” [sic] and that its working paper
deserved “serious study” by the Committee, including the proposal for the separation of biological
and chemical weapons. While the U.S. Government had not yet decided on the broader question
of a new treaty, the American representative affirmed that “the world needs to be told of the nature
of these weapons, and what their use might entail for mankind [sic]. The problems are of great
complexity, yet the dangers are of mass devastation” (USACDA 1969, 583).
5.1.1 An Indiscriminate, Unpredictable, and Uncontrollable Weapon
Biological weapons became the subject of a narrative of humanity securitization that framed the
possession and use of biological weapons as a catastrophic or existential threat to humankind.
Intergovernmental organizations played an important role in advancing this securitization narrative
on biological weapons, with two influential reports on biological and chemical weapons published
by the UN Secretary General and the World Health Organization. On December 20, 1968, the UN
General Assembly adopted a resolution (2454A), which stated that “the possibility of the use of
chemical and bacteriological weapons constitutes a serious threat to mankind” [sic] and requested
that the Secretary General produce a report in consultations with scientific and technical experts
on the nature and implications of biological and chemical weapons (UNGA 1968).
The July 1969 report by the UNSG (Chemical and Bacteriological (Biological) Weapons
and the Effects of their Possible Use) framed the threat from biological and chemical weapons in
stark terms:
All weapons of war are destructive of human life, but chemical and bacteriological
(biological) weapons stand in a class of their own as armaments which exercise
their effects solely on living matter. The idea that bacteriological (biological)
weapons could deliberately be used to spread disease generates a sense of horror.
The fact that certain chemical and bacteriological (biological) agents are potentially
unconfined in their effects, both in space and time, and that their large-scale use
could conceivably have deleterious and irreversible effects on the balance of nature
adds to the sense of insecurity and tension which the existence of this class of
weapons engenders… The general conclusion of the report can thus be summed up
in a few lines. Were these weapons ever to be used on a large scale in war, no one
could predict how enduring the effects would be and how they would affect the
structure of society and the environment in which we live (UNSG 1969, 87–88;
emphasis added).
While the UNSG report examined the dangers of biological and chemical weapons together, it
emphasized the unpredictability of biological weapons that could introduce “new epidemic
diseases…which could result in deaths on the scale which characterized the mediaeval plagues”
(UNSG 1969, 71). Moreover, this category of WMD could be produced “cheaply, quickly, and
secretly” by both developed and developing countries and therefore constituted a proliferation
threat “even more dangerous than nuclear weapons” (UNSG 1969, viii). While advances in the
biological sciences had made important contributions to “the good of mankind” [sic], they also
“opened up the possibility of exploiting the idea of chemical and bacteriological (biological)
warfare weapons, some of which could endanger man’s future” [sic] (UNSG 1969, 3). The threat
of biological warfare would remain “so long as a number of States proceed with their development,
perfection, production and stockpiling” (UNSG 1969, 3).
The UN Secretary General, U Thant, also requested a report from the WHO on the
implications of biological and chemical weapons for public health. The WHO report (Health
Aspects of Chemical and Biological Weapons) was published on November 28, 1969. Its principal
finding was that “chemical and biological weapons pose a special threat to civilians” (WHO 1970,
10). The “special” character of the threat was due to the “indiscriminate nature” of biological and
chemical weapons and their “high degree of uncertainty and unpredictability” (WHO 1970, 10-
11). The large-scale use of biological and chemical weapons would not only “overwhelm”
healthcare resources and facilities in the short-term, but “could also cause lasting changes of an
unpredictable nature in man’s environment” [sic] (WHO 1970, 11). The report singled out
biological weapons for their indiscriminate and unpredictable nature: “A biological attack intended
to be highly lethal might prove relatively ineffective, whereas an attack intended to be merely
incapacitating might kill an unexpectedly large proportion of the target population” (WHO 1970,
17–18). While the WHO report highlighted the difficulty of quantitatively assessing casualties
from biological warfare, which could exceed those of natural pandemics due to the higher viral
loads of weaponized pathogens, it suggested that an attack by a single aircraft could
result in health problems of “unprecedented magnitude” and that the spread of epidemics—such
as plague, smallpox, or influenza—could result in “many millions of illnesses and deaths” (WHO
1970, 19). The WHO concluded:
It is therefore clear that in the last analysis the best interests of all Member States
and mankind [sic] in general will be served by the rapid implementation of the
resolutions… that would help ensure outlawing the development and use in all
circumstances of chemical and biological agents as weapons of war (WHO 1970,
19–20).
There is some ambiguity in the UNSG and WHO reports about the nature of the threat from
biological weapons. On the one hand, both reports framed biological and chemical weapons within
the well-established category of weapons of mass destruction, thereby contributing to the false
equivalency between these weapons in terms of their potential threat to humanity. They also
emphasized that both biological and chemical weapons posed a “special threat” to civilian
populations, which could raise concerns for international humanitarian law and human security
but would not necessarily constitute an existential threat to humanity (Sears 2020a). On the other
hand, both reports recognized the distinct nature of biological and chemical weapons—the former
a living organism that could spread uncontrollably, the latter an inert substance that could in some
sense be employed like conventional weapons. Ultimately, the UNSG and WHO reports
highlighted the indiscriminate, unpredictable, and uncontrollable nature of biological weapons that
could pose an unprecedented danger of epidemic spread of a virus in human, animal, or plant life,
and which could, in certain circumstances, threaten the survival of humanity. In doing so, these
intergovernmental organizations played a crucial role as securitizing actors behind the security
narrative of humanity securitization, which framed biological weapons as an existential threat to
humanity and called on states to take urgent international action to prohibit them.
5.1.2 The Great Powers and the Biological Weapons Convention
At the same time, the U.S. Government undertook a national security review process to assess the
role of biological and chemical weapons in American national security policy (Tucker and Mahan
2009). The United States had initiated biological weapons research during the Second World War
and by the late 1960s had acquired a biological weapons capability to deter, attack, or retaliate
against the Soviet Union (Tucker 2002). Not being party to the Geneva Protocol, the United
States had come under domestic and international pressure for its use of chemical weapons during
the Vietnam War. In January 1969, members of Congress pressured the Nixon administration
to clarify American BCW policy. On May 28, 1969, President Richard Nixon signed National
Security Study Memorandum (NSSM) 59, which initiated a multi-agency review process by the
U.S. national security establishment—chaired by National Security Advisor, Henry Kissinger—
on American BCW policy. Meanwhile, the U.S. Army announced on July 8, 1969, that an accident
had occurred on Okinawa, exposing 23 American soldiers and one civilian to nerve gases from
sarin-filled bombs, which sparked anti-American riots in Japan and ratcheted up pressure on the
government (Tucker 2002, 118).
The main strategic question about biological weapons was whether the American
biological weapons capability “added significantly” to the nation’s nuclear arsenal for deterrence
or retaliation against a biological weapons attack (Tucker and Mahan 2009, 3). Kissinger had also
requested that the President’s Science Advisory Committee (PSAC) prepare a separate report on
biological and chemical weapons. The PSAC report concluded that biological weapons had serious
drawbacks as a weapon, since biological pathogens were “slower acting, less reliable in the field,
and unpredictable in their effects, and had a shorter shelf life in storage” (Tucker and Mahan 2009,
4). The PSAC also warned that a microbial pathogen could remain in the environment long after a
war had ended and mutate into a more deadly strain, which could pose a serious threat to public
health. The report recommended halting the American production and stockpiling of biological
weapons, while maintaining a “defensive” research program.
The Pentagon was internally divided on the question of biological weapons. Two defense
papers had arrived at opposite conclusions: one questioned their military utility for deterrence and
coercion, while the other suggested that they were reliable and controllable and recommended the
expansion of American biological weapons capabilities. However, Defense Secretary Melvin
Laird became convinced of the limited strategic and tactical utility of biological weapons—as
opposed to chemical weapons—and was well-aware of the mounting political pressure against
them, both domestically and internationally. Moreover, unlike the chemical industry, biological
weapons did not have a powerful constituency within or outside the Pentagon. On November 18,
1969, the National Security Council (NSC) convened at the White House on the issue of American
BCW policy. The Chairman of the Joint Chiefs of Staff, General Earle Wheeler, was the sole voice
in favor of maintaining an “offensive” biological weapons capability and opposing ratification of
the Geneva Protocol. All the other NSC principals and their respective agencies recommended a
policy of the United States unilaterally renouncing an “offensive” biological weapons program and
destroying its existing stockpiles of weapons, while maintaining a “defensive” research program
to hedge against “technological surprise” (Tucker 2002, 126–127). They also advocated U.S.
ratification of the Geneva Protocol and the continuation of disarmament diplomacy towards the
negotiation of a treaty to prohibit biological weapons. The main arguments against biological
weapons were their limited military utility due to their lack of controllability and predictability and
the sufficiency of nuclear weapons for strategic deterrence (Tucker and Mahan 2009, 8–9).
Defense Secretary Laird argued in favor of the separation of biological and chemical weapons,
claiming that while chemical weapons had military utility, biological weapons did not contribute
significant strategic or tactical value to American national security.
President Nixon accepted the majority view of the NSC on biological weapons. Although
it is unlikely that Nixon perceived biological weapons as a moral issue—but rather as a political
and military one—he was persuaded by the PSAC report’s findings about the unpredictable and
uncontrollable nature of biological weapons, which not only limited their tactical utility on the
battlefield, but also undermined their reliability for strategic deterrence. At a time when domestic
and international opinion was turning against the Vietnam War, President Nixon perceived the
renunciation of biological weapons as an opportunity to present himself as a “man of peace”
(Tucker 2002, 128). On November 25, 1969, President Nixon publicly announced a new American
BCW posture, including the decision that the United States would cease production and destroy
its stockpiles of biological weapons—the first ever unilateral renunciation by a great power of an
entire category of weapons of mass destruction. He also announced his intention to resubmit the
1925 Geneva Protocol to Congress for ratification and expressed his support for the British draft
treaty on the prohibition of biological weapons.
Biological weapons have massive, unpredictable, and potentially uncontrollable
consequences. They may produce global epidemics and impair the health of future
generations… These important decisions have been taken as an initiative toward
peace. Mankind [sic] already carries in its own hands too many of the seeds of its
own destruction. By the example we set today, we hope to contribute to an
atmosphere of peace and understanding between nations and among men [sic]
(Department of State 1969, 461–462).
In the short period of time between 1968 and 1969, the United States had gone from being an
object of international condemnation for its posture on biological and chemical weapons to one of
the leading states on the macrosecuritization of biological weapons.
At the same time, the Soviet Union also turned its attention towards BCW diplomacy. On
September 19, 1969, Soviet Foreign Minister Andrei Gromyko delivered a major speech in the UN
First Committee declaring that the Soviet Union would, in cooperation with other Warsaw Pact
nations, seek the elimination of biological and chemical weapons. Then, on November 24, 1969,
the Soviet representative introduced a draft treaty on biological and chemical weapons to the UN
First Committee, stating that “the use of chemical and bacteriological (biological) weapons would
constitute a serious threat to mankind” [sic] and calling for urgent negotiations of a treaty to
prohibit the development, production, and stockpiling of biological and chemical weapons and on
their destruction (USACDA 1970a, 577). The very next day, the Soviet representative gave a
lengthy address on biological and chemical weapons, calling them “extremely nefarious” and
“serious perils for mankind” [sic].
There is an increasing awareness by the peoples of the world of the growing danger
of the waging of war with the use of chemical and bacteriological weapons… The
progress made in chemical and biological sciences brings great benefits to mankind
[sic] but also makes it possible to develop chemical and bacteriological weapons.
Today in the world there are hundreds of thousands of tons of poisonous substances
produced every year. The stockpiles are sufficient to cause incalculable harm to the
world’s population and to animal and vegetable life on our planet. The adoption of
measures for the final prohibition of chemical and bacteriological weapons would
be extremely timely (USACDA 1970a, 581).
Thus, by the end of 1969, the issue was no longer whether states should negotiate a new
treaty for the international prohibition of biological weapons, but whether biological weapons
should be treated separately or in tandem with chemical weapons. The Soviet position diverged
from the British and American positions in asserting that biological and chemical weapons
“must… be considered together” and that the proposal to treat them separately was “extremely
inappropriate and even dangerous” (USACDA 1970a, 585). The question of the separate treatment
of biological and chemical weapons remained the focal point of international debate until March
30, 1971, when the Soviet Union suddenly reversed its position and tabled a draft proposal for an
international convention to prohibit biological weapons at the Conference of the Committee on
Disarmament (formerly the Eighteen Nation Committee on Disarmament). Formally, the Soviet
Union maintained its position on the need to prohibit both biological and chemical weapons, but
blamed “some Western Powers” for being “unwilling at present to renounce chemical means of
warfare” (USACDA 1970b, 461). The Soviet Union nevertheless declared its readiness to
negotiate a convention solely on biological weapons as a “first step” towards the general
elimination of biological and chemical weapons.
This shift in Soviet policy opened the possibility for great power consensus on a Biological
Weapons Convention. As one piece in The New York Times reported:
The Soviet Union’s decision to accept a ban on biological weapons separate from a
prohibition against chemical weapons has cleared the way for another agreement at
the Geneva disarmament conference. The United States had previously taken a
similar position. The two major powers are thus teamed up once more at Geneva to
promote a treaty forbidding weapons which neither should rationally consider using
anyway… A treaty banning biological weapons should have some utility in
discouraging further irrational experimentation in a field that could produce such
hideous damage… Such questions, touching the heart of national interests and
sovereignty, must be faced at Geneva and at the SALT talks in Vienna if meaningful
arms control and disarmament are to be achieved in time to save the world from the
threat of unlimited conflict (The New York Times 1971, 28).
On August 5, 1971, the United States and the Soviet Union tabled separate but identical draft
conventions on the prohibition of biological weapons (USACDA 1970b, 456). On December 6,
1971, the UN General Assembly voted 110 to 0 in favor of the Biological Weapons Convention.
The final draft of the treaty (“Convention on the Prohibition of the Development, Production, and
Stockpiling of Bacteriological [Biological] and Toxin Weapons and on Their Destruction”) was
opened for signature on April 10, 1972, and entered into force on March 26, 1975.
5.2 The Macrosecuritization of Biological Weapons: Discourse,
Actors, and Action
The discourse on biological weapons that emerged between 1968 and 1972 reflected the core
rhetorical features of a securitization narrative of humanity securitization. First, it clearly adopted
the referent object of a universal humanity. This was evident in the speeches of diplomats at the
Eighteen Nation Committee on Disarmament. The British delegate, Fred Mulley, spoke of the “threat
to humanity” from biological weapons (USACDA 1969, 560). The American representative called
biological weapons “weapons of mass destruction which constitute a danger to all mankind” [sic]
(USACDA 1969, 583). And the Soviet diplomat, Gromyko, agreed that biological weapons posed
a “threat…for mankind” [sic] (USACDA 1969, 574). This language was also apparent in the
resolutions and reports by intergovernmental organizations. The UNGA called biological weapons
a “serious threat to mankind” [sic], while the WHO spoke of the “best interests of… all mankind”
[sic] (WHO 1970, 20). In the end, the referent object of a universal humanity was affirmed in the
preamble of the Biological Weapons Convention:
Convinced of the importance and urgency of elimination from the arsenals of
States, through effective measures, such dangerous weapons of mass destruction as
those using chemical or bacteriological (biological) agents… Determined, for the
sake of all mankind [sic], to exclude completely the possibility of bacteriological
(biological) agents and toxins being used as weapons; Convinced that such use
would be repugnant to the conscience of mankind [sic] and that no effort should be
spared to minimize this risk… (UNODA 2021).
Secondly, this security narrative framed biological weapons as, at the minimum, a
catastrophic threat, and, at the maximum, an existential threat to humanity. Specifically, the
security narrative on biological weapons highlighted their uncertain, indiscriminate, and
unpredictable effects, including the possibility of the uncontrollable spread of a pandemic leading
to catastrophic consequences “on the scale… [of] the mediaeval plagues” (UNSG 1969, 71), and
which could even “pose an awesome threat to man’s survival” [sic] (WHO 1970, 122). This
uncertainty about the potential consequences of biological warfare meant that “no one could
predict how enduring the effects would be and how they would affect the structure of society and
the environment in which we live” (UNSG 1969, 87–88).
Thirdly, the narrative of humanity securitization on biological weapons offered hope for
security and survival through an international prohibition against biological weapons; for only the
complete elimination of these weapons from the military arsenals of states could neutralize the
threat of biological warfare to humankind. As the UNSG report concluded, biological warfare
“could endanger man’s future [sic], and the situation will remain threatening so long as a number
of States proceed with their development, perfection, production and stockpiling” of biological
weapons (UNSG 1969, 3). Security therefore required urgent action by states to ensure the
elimination of biological weapons: “we cannot afford to do nothing.” In short, the discourse on
biological weapons followed the general rhetorical structure of a security narrative of humanity
securitization: biological weapons pose an existential threat to humanity and therefore their
elimination is necessary for survival!
At the same time, the security narrative of humanity securitization competed with other
ways of understanding the issue. In particular, biological weapons were typically framed in terms
of the established category of “weapons of mass destruction.” The WMD framing had ambiguous
effects on the macrosecuritization of biological weapons. On the one hand, it worked to elevate
the importance of biological weapons to the level of nuclear weapons. As British representative
Fred Mulley claimed, the threat of biological warfare was of “comparable severity” to nuclear war
(USACDA 1969, 560). On the other hand, it made it more difficult to separate the dangers of
biological weapons from chemical weapons, despite the physical impossibility of chemical
weapons producing an epidemic spread that could threaten all of humanity. Another persuasive
narrative was to frame biological weapons as a “special threat” to civilians, which, because of their
indiscriminate and uncontrollable nature, could spread throughout the civilian population and
endure beyond a war (WHO 1970, 10). This emphasis on the indiscriminate nature of biological
weapons and the threat to civilians built on previously established principles of international
humanitarian law and would be invoked again decades later to ban landmines and cluster
munitions (Nadelmann 1990; Price 1998). While this discourse of a “special threat” to civilians
helped to build public and international pressure against biological weapons, it represented a more
limited security concern than framing biological weapons as an existential threat to humanity—
i.e., one of human security rather than the security of humanity (Sears 2020a). Biological weapons
were also subject to a narrative of national securitization, which aimed to frame the possession of
biological and/or chemical weapons by other states as a threat to national security and which
emphasized the military utility of BCW for strategic deterrence and on the battlefield. Though it
failed, national securitization represented a competing securitization narrative on biological
weapons. In short, humanity securitization emerged as a legitimate securitization narrative on
biological weapons in international relations, but one that coexisted alongside other security
narratives on the threat of biological weapons.
Who were the securitizing actors, de-securitizing actors, functional actors, and audiences
that constituted the international security constellation on biological weapons (see Figure 5.1)?
The principal securitizing actors behind the narrative of humanity securitization were states and
intergovernmental organizations. The macrosecuritization of biological weapons was a matter of
official disarmament diplomacy between states. In particular, the United Kingdom played a key
role in putting biological weapons back on the international agenda in the late 1960s through a
series of diplomatic statements, policy papers, and a draft convention. The essence of the British
strategy was the move to separate biological and chemical weapons, point out the weaknesses of
the Geneva Protocol, and call for a general prohibition on biological weapons. In the United
States, the Nixon administration’s decision to unilaterally renounce and destroy its biological
weapons converted the United States into a leading state—a great power patron—on the
macrosecuritization of biological weapons. Additionally, intergovernmental organizations
provided a crucial forum for diplomatic negotiations, particularly the disarmament machinery of
the Eighteen Nation Committee on Disarmament. The UNSG and WHO provided expert scientific
and technical analysis on the nature of biological weapons and the dangers of biological warfare.
There were also de-securitizing actors, including a few recalcitrant members of the U.S. National
Security Council, as well as the Soviet Union while it resisted the separation of biological and
chemical weapons.
The main functional actors and audiences of macrosecuritization were states, especially the
great powers. It was clear that an international treaty on the prohibition of biological weapons
would require the support of both the United States and the Soviet Union. Following the U.S.
national security review on American BCW policy, the U.S. Government determined that
biological weapons—unlike chemical weapons—lacked military utility and did not offer any
additional strategic value for deterrence beyond nuclear weapons, opening up space for the United
States to take on a leadership role in disarmament diplomacy and unilaterally cease its “offensive”
biological weapons programs and destroy its existing stockpiles of weapons. The Soviet Union
was another important audience. Despite initial resistance, the Soviet Union eventually agreed to
the separation of biological and chemical weapons, clearing the way for the Biological Weapons
Convention.
Figure 5.1: Actors and Audiences behind the Biological Weapons Convention
Notes: Shapes represent the functional differences between actors: macrosecuritizing actors are circles,
audiences are squares, and functional actors are diamonds. Colours represent the differences in the identities
of actors: states are in blue and intergovernmental organizations are in red.
The Biological Weapons Convention was a landmark agreement for disarmament
diplomacy. It was the first treaty to establish an international prohibition against an entire category
of weapons of mass destruction. Indeed, the BWC set a precedent for future international
prohibition regimes, including on chemical weapons, landmines, and cluster munitions. Article I
established the responsibility of states never to “develop, produce, stockpile or otherwise acquire
or retain” biological weapons or their means of delivery in “types” or “quantities that have no
justification… [for] peaceful purposes.” Article II established the responsibility of states to
“destroy” or “divert to peaceful purposes” within nine months of the convention’s entry into force
“all agents, toxins, weapons, equipment and means of delivery” covered by the treaty. The BWC
required states to eliminate and foreswear “offensive” biological weapons under international law.
It therefore represented a major breakthrough in disarmament diplomacy on weapons of mass
destruction and a reduction of the existential threat of biological warfare by establishing a strong
international taboo against the use and possession of biological weapons.
At the same time, the success of the Biological Weapons Convention should not be
overstated. First, the BWC did not create a robust treaty regime with new international powers for
monitoring, verification, and compliance. Instead, Article V entails only a general obligation for
states parties “to consult one another and to co-operate in solving any problems which may arise
in relation to… the Convention,” and Article VI only provides states parties with legal recourse in
cases of suspected noncompliance to “lodge a complaint” with the UN Security Council. In the
absence of measures for verification and compliance, the BWC resembles a nudum pactum more
than a robust international regime—exactly the type of international agreement that the United
States had rejected as unsatisfactory for the atomic bomb, like the Gromyko Plan. We now know
that despite the Soviet rhetoric against biological and chemical weapons, the Soviet Union
launched a large-scale clandestine biological weapons program—Biopreparat—after having
signed the BWC (Goldblat 1997; Koblentz 2003; 2010; Zilinskas 2016; Cross and Klotz 2020), a
case of great power noncompliance that clearly demonstrates the limitations of the BWC for
neutralizing the existential threat of biological weapons. There have been other notable cases of
states maintaining offensive biological weapons programs, including South Africa, Iraq, and North
Korea (Koblentz 2003; 2010; Kim et al. 2017; Cross and Klotz 2020).
Secondly, the BWC did not prohibit “defensive” research on biological weapons or their
defenses, which means that states could continue to develop a “latent” biological weapons
capability (Goldblat 1997). While the non-production of biological weapons in peace time is
preferable to simple declarations of “no-first-use,” the difficulty of distinguishing between
“offensive” and “defensive” research (Jervis 1978), and the absence of international inspections or
even domestic oversight (Tucker 2002), meant that states could continue to develop the scientific
and technical knowhow in peace time that could be used to quickly develop biological weapons in
war time. Thus, the Biological Weapons Convention represents a case of successful
macrosecuritization because it established an unprecedented ban against an entire category of
weapons of mass destruction and decreased the risk of biological warfare through an international
norm against the possession and use of biological weapons. It does not, however, represent a
complete success of macrosecuritization, since without a robust system for monitoring and
verification or a ban on military research the BWC failed to neutralize the existential threat of
biological weapons.
5.3 The Sources and Dynamics of Macrosecuritization on
Biological Weapons
What made the Biological Weapons Convention possible was great power consensus on the
macrosecuritization of biological weapons. This in turn was made possible by the triumph of a
narrative of humanity securitization over national securitization in shaping the international
security constellation on biological weapons. The theoretical framework developed here points to
three variables that made the conditions right for a narrative of humanity securitization to prevail
over national securitization in the case of the macrosecuritization of biological weapons.
The first system-level forces behind macrosecuritization were the structure and stability of
the distribution of power in the international system, which reduced the intensity of great power
rivalry between the United States and the Soviet Union. By the late 1960s, the United States and
the Soviet Union had taken steps towards the deescalation of the Cold War and the normalization
of great power relations. One of the most dangerous crises of the Cold War—the Cuban Missile
Crisis—now lay in the past, and the United States and the Soviet Union were settling into the
reality that nuclear deterrence and “mutually assured destruction” required mutual restraint and
toleration of one another’s existence. Under the Nixon administration’s foreign policy of détente—
which no longer framed the existence of Soviet and/or Chinese communism as an existential threat
to American liberal democracy—the United States moved to normalize its diplomatic relations
with the Soviet Union and the People’s Republic of China (Gaddis 2005). Moreover, the successful
negotiation of the 1968 Treaty on the Non-Proliferation of Nuclear Weapons seemed to breathe
new life into disarmament diplomacy at Geneva by demonstrating that the Cold War rivals could
work together on shared security interests, like preventing nuclear proliferation. This spirit of
great power cooperation reached its zenith with the “Strategic Arms Limitation Talks,” a series of
bilateral arms control negotiations that took place in Helsinki and Vienna between November 1969 and May
1972, which, in addition to placing quantitative and qualitative limitations on their nuclear
arsenals, established some basic political principles for managing the relations between the great
powers, including mutual recognition and respect for sovereignty (Sargent 2015).
Behind this warming in the relations between the great powers lay a relatively stable
distribution of power and capabilities, whereby bipolarity had become an entrenched structural
feature of the international system by the late 1960s (Kennedy 1987). The Correlates of War
Project shows a rough equilibrium in the “national material capabilities” of the two great powers
during this period (Singer et al. 1972) (see Table 5.1). According to the Stockholm International
Peace Research Institute (1969, 29), the United States and the Soviet Union accounted for “some
70 per cent of world military expenditures in 1968.” In 1970, the American armed forces numbered
3.2 million, while the Soviet armed forces were 3.3 million in strength (IISS 1970, 2–6). The
nuclear arms race also seemed to have reached a rough parity in both nuclear warheads and
delivery systems (SIPRI 1969; IISS 1970). The International Institute for Strategic Studies
estimated the total numbers of nuclear warheads of the United States and the Soviet Union to be
7,502 and 5,662, respectively (IISS 1970, 89), and both had deployed a “nuclear triad” of strategic
bombers, intercontinental ballistic missiles, and ballistic missile submarines that made them
capable of “strik[ing] at every significant target in the territory of the other super-power or of its
allies” (IISS 1970, 86). Overall, the historical context of a relatively stable bipolar system and the
reduced intensity of the Cold War created a historical period that was conducive to great power
cooperation and consensus on shared threats to international peace and security, including the
macrosecuritization of biological weapons.
Table 5.1: National Material Capabilities, 1968–1972

              1968    1969    1970    1971    1972
    USA       0.20    0.20    0.18    0.17    0.16
    USSR      0.17    0.17    0.17    0.17    0.17

Source: Correlates of War Project, “National Material Capabilities”
Note: Composite Index of National Capability (CINC) scores
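For reference, the CINC score reported in Table 5.1 is defined by the Correlates of War Project as the average of a state’s share of six world totals (military expenditure, military personnel, primary energy consumption, iron and steel production, urban population, and total population):

```latex
\mathrm{CINC}_{i,t} \;=\; \frac{1}{6}\sum_{k=1}^{6}\frac{c_{i,k,t}}{\sum_{j} c_{j,k,t}}
```

where \(c_{i,k,t}\) is state \(i\)’s value on component \(k\) in year \(t\). A score of 0.20 therefore indicates that the United States held roughly one-fifth of the world’s aggregate material capabilities in that year, underscoring the rough bipolar parity with the Soviet Union discussed above.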
The second state-level dynamics behind macrosecuritization were the internal structures
and relationships that constituted the domestic security constellations of the great powers,
especially the power and interests of securitizing actors vis-à-vis state audiences. In the United
States, the absence of a powerful securitizing actor behind the national securitization of biological
weapons created space for humanity securitization to become the default security narrative within
the American security constellation. The nature of American democracy meant that the Nixon
administration faced growing domestic and international pressures to clarify its stance on
biological weapons. The Nixon administration launched a national security review of American
BCW policy, which determined that biological weapons lacked military utility and were therefore
unnecessary to American national security interests. After extensive consultations within the
American national security establishment, the NSC assessed that the unpredictable and
uncontrollable nature of biological weapons detracted from their tactical utility on the battlefield,
while the American nuclear arsenal meant that they did not add significant strategic value to
deterrence (Tucker 2002; Tucker and Mahan 2009)—an interesting linkage whereby the
persistence of one existential threat permitted action to reduce another, since the possession of
nuclear weapons enabled the American decision to renounce biological weapons.
The U.S. Army and Joint Chiefs of Staff, who wished to maintain biological weapons,
struggled to come up with a compelling national security justification for retaining them. Unlike
nuclear weapons and chemical weapons, the United States had never engaged in biological warfare
and seemed unable to contemplate a scenario in which it might use biological weapons. Even
General Wheeler, the lone voice in the NSC in favor of retaining biological weapons, supported
renouncing the “first use” of biological weapons (Tucker 2002, 126). At the same time, the NSC
did not share this assessment about the military utility of chemical weapons (Nixon 1969), which
were still in large-scale use during the Vietnam War, particularly tear gas and the herbicide known
as “Agent Orange.” The separation of biological and chemical weapons in disarmament diplomacy
therefore made it possible for the United States to maintain its posture on chemical weapons, while
downgrading the importance of biological weapons—effectively de-securitizing biological
weapons in American national security policy. In short, the failure of a powerful and cohesive
securitizing actor to make a clear and compelling case for the military utility of biological weapons
within the American national security establishment meant that national securitization did not
present a strong counter-narrative to humanity securitization within the American security
constellation.
In the Soviet Union, the foreign policy and national defence establishments were the
preeminent securitizing actors on biological weapons within the Soviet security constellation
(Zilinskas 2016, 30–31). The Ministry of Defence (MoD) adopted a security narrative of national
securitization, which emphasized the military implications of advances in biotechnology and the
danger of the Soviet Union falling behind the United States in the biological sciences (Zilinskas
2016, 22–23). They were supported by prominent Soviet bioscientists in the USSR Academy of
Sciences who understood that a narrative of national securitization—or “playing the military
card”—was the best way to acquire large R&D investments by the state in biotechnology
(Zilinskas 2016, 22–23, 43). This securitization narrative appears to have influenced the Soviet
decision, made by the Politburo between 1969 and 1971 (Zilinskas 2016), to launch a secret large-
scale “offensive” biological weapons program, known as Biopreparat.
At the same time, the Ministry of Foreign Affairs (MoFA) adopted a narrative of humanity
securitization in disarmament diplomacy for the purposes of political propaganda and pressure
against the Western powers. In 1952, the Soviet Union had introduced a draft resolution in the UN
Security Council calling on all countries—including the United States—to ratify the Geneva
Protocol (Tucker 2002, 111). The Soviet diplomats at Geneva continued to use disarmament
diplomacy on biological and chemical weapons to put pressure on the United States and other
Western nations. When Foreign Minister Gromyko delivered a major speech at the UN First
Committee calling for a BCW ban in September 1969, the Soviet position continued to put the
Americans at a political disadvantage in disarmament diplomacy. The reversal of the American
position on biological weapons and endorsement of the British proposal for the separation of
biological and chemical weapons seemed to temporarily turn the tides against the Soviet Union;
for it was Soviet insistence on treating biological and chemical weapons together which appeared
to prevent progress towards a treaty on biological weapons. The shift in the Soviet position to
officially support the British proposal on biological weapons while publicly criticizing Western
nations for their position on chemical weapons allowed the Soviet Union to once again seize the
diplomatic initiative. Thus, the Soviet security constellation was characterized by both a security
narrative of humanity securitization, adopted by the MoFA for foreign policy, and a narrative of
national securitization, invoked by the MoD in national defence policy, which both shaped the
Politburo’s decisions on biological weapons. Ultimately, the closed nature of Soviet communism
and the absence of monitoring and verification measures in the BWC meant that the Soviet Union
could pursue its foreign policy interests of international leadership on disarmament diplomacy
while proceeding with the clandestine development of biological weapons (Zilinskas 2016, 43).
The third individual-level dynamics behind macrosecuritization were the ideas and beliefs
of political leaders and policymakers, which shaped how they perceived the threat from biological
weapons. In the United States, the Nixon-Kissinger administration’s approach to foreign policy
and bilateral relations towards the Soviet Union had come under the influence of the strategic
principle of détente (Gaddis 1982; 2005), which made the White House less susceptible to a
securitization narrative of national securitization on biological weapons, since the general
improvement in bilateral relations and concrete achievements in disarmament and arms control
made it easier for the United States to perceive the intentions and capabilities of the Soviet Union
on biological weapons as non-threatening—incorrectly, in retrospect (Zilinskas 2016). Also
important was the belief amongst the American political leadership—shared by President Nixon,
National Security Advisor Kissinger, and Defence Secretary Laird—that biological weapons
lacked military utility, which meant that the United States could give up biological weapons
without sacrificing its national security interests. This made it possible for the White House to
consider other interests, such as President Nixon’s personal desire to be seen as a “man of peace”
within the context of the Vietnam War and score a political victory in both domestic and
international politics through the unilateral renunciation of biological weapons.
The Soviet political leadership under Leonid Brezhnev appears to have been more
susceptible to a narrative of national securitization. In particular, the Soviet leadership mistrusted
the American renunciation of biological weapons (Zilinskas 2016, 29–30), feared that the Soviet
Union had fallen behind technologically in the latest developments in biotechnology (Zilinskas
2016, 22–23), and hoped that it might achieve a military advantage from a clandestine biological
weapons program (Zilinskas 2016, 42). The fact, however, that the Soviet Union chose to pursue
a secret program shows that the Soviet leadership did not perceive biological weapons as
significant to strategic deterrence, since a capability that others believe does not exist cannot be
expected to deter. It also shows the importance that the Soviet political leadership attached to
concerns about status and prestige in international relations, especially the desire to be seen as a
responsible great power and a leader on disarmament diplomacy.
Finally, humanity securitization benefited from a clear and compelling narrative about the
indiscriminate, unpredictable, and uncontrollable effects of biological weapons. This narrative was
supported by an epistemic consensus amongst scientific and technical experts from both the
military and public health sectors, with the UNSG, WHO, and PSAC reports all providing in-depth
scientific and technical analysis which asserted the inherent unpredictability and uncontrollability
of pathogens and pandemics. Uncertainty about the potential scale of destruction contributed to
the credibility of framing biological weapons as an existential threat to humanity, since “no one
could predict how enduring the effects would be and how they would affect the structure of society
and the environment in which we live” (UNSG 1969, 87–88). This narrative clearly shaped how
political leaders and policymakers perceived the threat from biological weapons. President Nixon
stated that biological weapons “have massive, unpredictable, and potentially uncontrollable
consequences,” while the Soviet representative for disarmament, George Roschin, warned that
they could cause “incalculable harm to the world’s population” (USACDA 1969, 581). Against
the existential threat of biological warfare, there could be only one solution: the total elimination
of biological weapons. This securitization narrative of the indiscriminate, unpredictable, and
uncontrollable nature of biological weapons and the necessity of their complete elimination was widely accepted by states and shaped the international security constellation on
biological weapons.
5.3.1 The Limits of Macrosecuritization
While the narrative of humanity securitization contributed to great power consensus on a treaty to
prohibit biological weapons, ultimately states proved unwilling to take extraordinary measures to
eliminate the existential threat of biological weapons. They did not, for instance, establish an
international authority with new powers for monitoring and verification or sanctions for non-
compliance (Goldblat 1997; Tucker 2002; Cross and Klotz 2020). On the surface, the absence of
monitoring and verification in the BWC was framed as a technical matter. From the outset, British
Minister Fred Mulley framed the verification of a treaty on biological weapons as technically
infeasible and politically undesirable:
After much study we have been obliged to conclude that no comparable system [to
nuclear safeguards] is possible for microbiological or chemical weapons. Any such
system would be so intrusive as to be quite unacceptable, and even then could not
be fully effective. The principal difficulty arises from the fact that almost all the
material and equipment with which we are trying to deal here have legitimate
peaceful purposes; and it would be wrong to inhibit work of real value to humanity
in combating disease, for example, and impracticable to inspect every laboratory in
every country. We must accept, therefore, that no verification is possible in the
sense of the term as we normally use it in disarmament discussions (UNODA 1968,
561–562).
There is, however, good reason to believe that the verification problem was more political than
technical in nature. After all, no verification system can be 100% reliable and so the question in
disarmament and arms control is always “how much verification is enough?” (Krass 1985; Tulliu
2003). The inability or unwillingness to “inspect every laboratory in every country” could have
led to a robust system, a modest system, or no system. Ultimately, states chose no verification
system. Similarly, the question of (non-)compliance was left out of the discussion entirely. The
BWC does not specify any sanctions or enforcement mechanisms beyond the possibility of
referring cases of suspected noncompliance to the UN Security Council, where the veto powers of
the permanent members could prevail (Goldblat 1997).
Although the great power rivalry between the United States and the Soviet Union had
diminished in intensity by the late 1960s, the Cold War remains one of the main reasons for the
weaknesses of the Biological Weapons Convention (Cross and Klotz 2020). Indeed, the 1993
Chemical Weapons Convention provides a useful comparison: it was negotiated in the aftermath
of the Cold War and established a verification system—the Organization for the Prohibition of
Chemical Weapons—despite facing similar technical challenges of onsite inspections of chemical
facilities. During the Cold War, the Soviet Union strongly resisted intrusive onsite inspections
(Tucker and Mahan 2009; Cross and Klotz 2020), while after the Cold War the United States
assumed the securitizing role as a hegemonic patron for disarmament on chemical weapons.
Another explanation for the weakness of the BWC is that states were not wholly persuaded
by the securitization narrative of humanity securitization, which framed biological weapons as an
existential threat to humanity. While technical assessments of biological weapons—like the
UNSG, WHO, and PSAC reports—suggested the possibility that the epidemic spread of a highly
virulent pathogen could pose an existential threat to humanity, most of the analysis emphasized
the more limited consequences for civilian populations. Paradoxically, the high degree of
uncertainty about the potential effects of biological weapons both contributed to and detracted from the narrative of humanity securitization on biological weapons. Although biological weapons
could threaten the survival of humankind, thereby justifying an international prohibition against
biological weapons, the greater likelihood of more limited—though still potentially catastrophic—
consequences meant that the interest in protecting humanity, though a legitimate concern, existed
alongside other political and security interests of states.
5.4 Conclusion
The case of the successful macrosecuritization of biological weapons offers some theoretical and
practical lessons about the possibilities of macrosecuritization in international relations. Not all of
the historical circumstances that made macrosecuritization possible are generalizable to other
cases, but some may be. The first was the importance of great power consensus to making a
disarmament treaty a politically viable solution to the existential threat of biological weapons.
Only when the United States and the Soviet Union came to an agreement and submitted separate
but identical draft conventions did a treaty become possible. The second was the success of
macrosecuritization despite the existence of conflicting securitization narratives of humanity
securitization and national securitization. What made the narrative of humanity securitization
prevail in the macrosecuritization of biological weapons?
The theoretical framework developed here emphasizes three variables that strengthened
the narrative of humanity securitization on biological weapons and made great power consensus
on macrosecuritization possible. The first system-level variable was the stable distribution of
power in the international system, which reduced the intensity of great power rivalries and left
them more open to a narrative of humanity securitization. The relatively stable bipolar system that
existed by the late 1960s made it possible for the United States and the Soviet Union to deescalate
the Cold War, pursue the normalization of their diplomatic relations (i.e., détente), and revive
disarmament and arms control diplomacy at Geneva to address shared threats to international peace
and security, including nuclear proliferation and biological weapons. The second state-level
variable was the weakness of the securitizing actors who adopted a narrative of national
securitization within the domestic security constellation of the United States. The U.S. Army was
the sole securitizing actor to invoke a narrative of national securitization but was unable to make
a persuasive case for retaining biological weapons to the broader American national security
establishment within the context of the NSC’s review of American BCW policy. The diplomatic
strategy of separating biological and chemical weapons in disarmament diplomacy and the
rhetorical challenge to the military utility of biological weapons undercut the narrative of national
securitization within the American security constellation. The third individual-level variable was
the ideas and beliefs of political leaders and policymakers. At this time, the Nixon administration’s
foreign policy towards the Soviet Union was inspired by the idea of détente to normalize relations
with the Soviet Union and reduce the intensity of great power rivalry. Moreover, Nixon’s personal
desire to be seen as a “man of peace” within the context of the Vietnam War made the policy of
the unilateral renunciation of biological weapons politically attractive to the American president.
And the securitization narrative about the indiscriminate, unpredictable, and uncontrollable effects
of biological weapons, supported by an epistemic consensus of scientific and technical experts in
the defence and public health sectors, had a strong impact on how political leaders and
policymakers understood the threat of biological weapons. Overall, these conditions favored a
narrative of humanity securitization over national securitization in international relations and made
great power consensus possible on an international treaty to prohibit biological weapons.
Of course, one should be cautious about drawing overly strong conclusions from this case,
since the historical circumstances or the particularities of biological weapons may limit its
generalizability to other cases of macrosecuritization. For instance, the structural context of a
stable bipolar system is not generalizable to periods when the differential growth of power raises
the possibility of a structural transition in the international system. Similarly, the separation of
biological and chemical weapons in disarmament diplomacy and contestation of the military utility
of biological weapons may provide some practical lessons on how to challenge a narrative of
national securitization, but this rhetorical challenge to the national securitization of biological
weapons depended in part on the continuing existence of nuclear weapons for strategic deterrence.
In other words, the issue linkage with the existential threat of nuclear weapons made it possible to reduce the existential threat of biological weapons.
The Biological Weapons Convention represents a case of macrosecuritization in
international relations, but it would be wrong to claim that the BWC eliminated the existential
threat of biological weapons and warfare. The weaknesses of the BWC as an international treaty
regime, especially the absence of monitoring and verification, make noncompliance a risk. The Soviet
decision to launch a clandestine biological weapons program and cases of noncompliance by other
states—including Iraq, North Korea, and South Africa (Cross and Klotz 2020)—show that the
BWC has failed to neutralize the existential threat of biological weapons. Yet it would be equally
incorrect to claim that the BWC has done nothing to reduce the existential threat of biological
weapons, for the treaty has generated a strong international norm—or “taboo”—against the
possession and use of biological weapons (Nadelmann 1990; Price 1998; Finnemore and Sikkink
1998). This international norm against biological weapons continues to exercise a strong influence
over the conduct of states in peace and war. States have neither used nor threatened the use of
biological weapons in war, nor do they claim to possess biological weapons for strategic deterrence
as they do with nuclear weapons. The BWC therefore represents a case of limited or partial
macrosecuritization because it diminished but did not eliminate the existential threat of biological
weapons. Since continuing advances in the biological sciences and biotechnology have created
new capabilities and dangers, including bioengineered pathogens, the existential threat of the
development and use of biological weapons by state or non-state actors may be growing today.
The BWC lacks the intrusive measures for monitoring and verification needed to offset these emerging risks; nevertheless, revitalizing and expanding this international treaty regime to keep pace with twenty-first-century realities may still offer the most promising avenue for mitigating the existential threat that biological weapons pose to humanity.
Chapter 6
Artificial intelligence is the future, not only for Russia, but for all humankind. It
comes with colossal opportunities, but also threats that are difficult to predict.
Whoever becomes the leader in this sphere will become the ruler of the world.
— Vladimir Putin, 2017
The AI Revolution in International Relations
Artificial intelligence (AI) has quickly emerged as an issue at the forefront of international
relations. Since 2017, dozens of states—including all the great and major powers—have developed
“national AI strategies” to define their goals and interests, mobilize new funding and resources,
and establish dedicated agencies and offices to position themselves for global AI leadership. This
chapter explores the dynamics of the “AI revolution” in international relations in terms of two
conflicting securitization narratives about artificial intelligence. On the one hand, AI is the subject
of a narrative of humanity securitization, in which a growing number of AI experts frame the
development of “artificial general intelligence” (AGI) and/or “superintelligence” (ASI) as an
existential threat to humanity (2014–present). On the other hand, AI has become the focus of a
national securitization narrative that frames AI as a strategic technology with transformative
implications for the political, economic, and security interests of nations (2017–present). Although
states are generally aware of concerns about “existential AI risks,” this macrosecuritization
discourse has failed to become the primary AI narrative—or even a serious contender—in
international relations or to exercise a significant influence over the national AI strategies of states.
Why has the macrosecuritization of AI failed in international relations? This chapter argues
that the macrosecuritization of AI has failed because a competing securitization narrative of
national securitization has ultimately proven more compelling to the great and major powers. The
theoretical framework points to three variables to explain the emergence of national securitization
as the dominant narrative on AI in international relations. The first system-level driver of
macrosecuritization failure is the instability of the distribution of power in the international system.
The structural forces of differential growth in the nature and distribution of power and a structural
transition from unipolarity to bipolarity have led to the resurgence of great power rivalries and the
growing intensity of “technology competition,” especially between the United States and China.
The second state-level dynamics behind macrosecuritization failure involve the internal political
structures and relationships between securitizing actors and audiences that constitute the domestic
security constellations of the great powers. Importantly, the principal securitizing actors who wield
significant power and influence over the development and implementation of national AI
strategies—particularly the national security apparatuses and scientific advisory councils—have
embraced a narrative of national securitization on artificial intelligence, while the main securitizing
actors behind the narrative of humanity securitization—especially AI experts—have limited access
to and influence over state audiences. The third individual-level dynamics behind
macrosecuritization failure pertain to the ideas and beliefs of political leaders and policymakers
about artificial intelligence. In particular, the absence of epistemic consensus within the AI
community about the possibility, timing, and consequences of AGI/ASI has led to a general
skepticism or ignorance of existential AI risks amongst policymakers, while the idea of a “race”
for global AI leadership has exerted a powerful influence over how political leaders understand
the significance of the AI revolution in international relations.
The rest of the chapter is organized as follows. The first section explores the emergence of
a securitization narrative of humanity securitization amongst AI experts that frames AI—
specifically AGI/ASI—as an existential threat to humanity. The second section examines the
national AI strategies of the great and major powers in order to better understand the interests and
concerns of the leading states in the international system. Together, these sections reveal a
disjuncture between the securitization narrative of humanity securitization within the AI
community and the narrative of national securitization that characterizes the national AI strategies
of states. The final section shows how the dynamics of an unstable distribution of power, the power
and interests of securitizing actors within state bureaucracies, and the beliefs and perceptions of
policymakers have made national securitization the dominant securitization narrative on AI in international relations, fueling the resurgence of great power rivalries and undermining the prospects for macrosecuritization.
6.1 The Emergence of a Securitization Narrative of Humanity
Securitization on Artificial Intelligence
Whatever the aims of specific applications, the general aim behind the development of artificial
intelligence is to create systems that match or exceed the intelligence of human beings. “What if
we do succeed?”, ask Stuart Russell, Professor of Computer Science at the University of California, Berkeley, and Peter Norvig, Director of Research at Google, in the final pages of their leading textbook, Artificial Intelligence: A Modern Approach.
[I]t seems likely that a large-scale success in AI—the creation of human-level
intelligence and beyond—would change the lives of a majority of humankind. The
very nature of our work and play would be altered, as would our view of
intelligence, consciousness, and the future destiny of the human race. AI systems at
this level of capability could threaten human autonomy, freedom, and even survival.
For these reasons, we cannot divorce AI research from its ethical consequences
(Russell and Norvig 2010, 1051; emphasis added).
What is AI? Artificial intelligence is a general term used to describe—somewhat
tautologically—computer or digital technologies “that are capable of performing tasks commonly
thought to require intelligence” (Brundage et al. 2018, 9). Importantly, AI systems possess both
the characteristics of agency (i.e., an entity that perceives and acts in and on a digital and/or
physical environment) and rationality (i.e., they make decisions and take actions to achieve goals
or objectives) (Russell 2019, 42–48). While the history of AI shows that progress is subject to
periods of growth (“AI summers”) and stagnation (“AI winters”) (Bostrom 2014; Gonsalves 2018),
the overall trajectory of the technology has been towards increasingly capable AI systems.
Currently, progress in AI is being driven by a combination of gains in hardware (e.g., the
exponential growth of computing power), software (e.g., “machine learning” algorithms and
techniques, especially “neural networks” and “deep learning”), and data (e.g., the abundance of
digital information on the Internet) (Shanahan 2015). While AI experts may disagree about which
of these fundamental drivers is most important—for instance, Ray Kurzweil (1999; 2005) believes
it is computing power, Stuart Russell (2019) claims it is algorithms, and Kai-Fu Lee (2018, 14–15) argues it is data—progress in all three has produced increasingly powerful AI systems.
Not surprisingly, AI has captured the attention of governments, companies, and the general
public in recent years. This has been fueled by some high-profile demonstrations of AI successes,
such as the victory of DeepMind’s “AlphaGo” system in May 2017 over the world’s top-ranked player, Ke Jie, at the strategy boardgame “Go.” Yet AI today remains limited to “narrow” systems
that can achieve or surpass human-level intelligence only in specific domains. These domains are
far simpler than the world in which humans live. For example, Go is a strategy game with two
players, fixed goals and rules, turn-based decisions, perfect information, and clear winners and
losers. However, scientists—and science fiction writers—have long speculated about the
possibility of “artificial general intelligence” (AGI), or the ability of AI to perform essentially all
the tasks—and more—that humans do. One of the first AI experts to seriously consider the
possibility of AGI was Alan Turing (1950), the “father” of computer science and the person responsible for developing the machine that cracked the Nazis’ “Enigma” cipher. Turing described a
test—the “Turing Test”—for machine intelligence whereby an AI would seek to persuade a human
being that it too was human—something which online “chatbots” do every day.
Since then, AI experts have frequently raised the concern that AI systems could one day
far surpass human-level general intelligence, or “superintelligence” (ASI), which could threaten
the survival of humankind (see Good 1965; Moravec 1988; Vinge 1993; Joy 2000; Kurzweil 2005; Yudkowsky 2008; Bostrom 2014; Shanahan 2015; Tegmark 2017; Russell 2019). For instance, in
1965, Irving John Good—a British mathematician and cryptologist who worked with Turing—
speculated that the first “ultraintelligent machine” would be the last invention that humans would
ever need to make.
Let an ultraintelligent machine be defined as a machine that can far surpass all the
intellectual activities of any man [sic] however clever. Since the design of machines
is one of these intellectual activities, an ultraintelligent machine could design even
better machines; there would unquestionably be an “intelligence explosion,” and
the intelligence of man [sic] would be left far behind. Thus the first ultraintelligent
machine is the last invention that man [sic] need ever make, provided that the
machine is docile enough to tell us how to keep it under control (Good 1965, 33;
emphasis added).
In 1988, Hans Moravec, Professor at the Robotics Institute at Carnegie Mellon, published
the book, Mind Children, which claimed that humans would soon find themselves in “a world in
which the human race has been swept away by the tide of cultural change, usurped by its own
progeny.” Machines would achieve an intelligence “transcending everything we know” and “grow
to confront immense and fundamental challenges in the larger universe,” while humans would be
left to “silently fade away” (Moravec 1988, 1). In 1993, Vernor Vinge, Professor of Mathematics and Computer Science at San Diego State University, predicted that humans were on the verge of creating
“superhuman intelligence” and “shortly after, the human era will be ended.”
The acceleration of technological progress has been the central feature of this
century. I argue in this paper that we are on the edge of change comparable to the
rise of human life on Earth. The precise cause of this change is the imminent
creation by technology of entities with greater than human intelligence (Vinge
1993, 12; emphasis added).
“How bad could the Post-Human era be?”, Vinge asked. “Pretty bad… the physical extinction of the human race is one possibility” (Vinge 1993, 16).
In 2000, Bill Joy, computer engineer and elected member of the U.S. National Academy
of Engineering, published a widely read article in Wired magazine, entitled “Why the Future
Doesn’t Need Us.” Joy warned that a world of growing complexity and increasingly powerful
computers could be one in which “the fate of the human race would be at the mercy of the
machines.”
It might be argued that the human race would never be foolish enough to hand over
all the power to the machines. But we are suggesting neither that the human race
would voluntarily turn power over to the machines nor that the machines would
willfully seize power. What we do suggest is that the human race might easily
permit itself to drift into a position of such dependence on the machines that it
would have no practical choice but to accept all of the machines’ decisions. As
society and the problems that face it become more and more complex and machines
become more and more intelligent, people will let machines make more of their
decisions for them, simply because machine-made decisions will bring better
results than man-made ones [sic]. Eventually a stage may be reached at which the
decisions necessary to keep the system running will be so complex that human
beings will be incapable of making them intelligently. At that stage the machines
will be in effective control. People won’t be able to just turn the machines off,
because they will be so dependent on them that turning them off would amount to
suicide (emphasis added).
Joy’s article powerfully raises the existential question of “meaningful human control” in a world
in which AI and technological complexity exceed human intelligence.
Thus, AI experts have long speculated about a future in which progress in AI could match
and then exceed human intelligence and have suggested that the development of AGI/ASI could
constitute an existential threat to humanity. While these warnings by AI experts reflect the
rhetorical style of a macrosecuritization discourse, they have typically occurred in relative isolation
rather than catalyzing serious concern within the AI community, much less the general public. For
this reason, they do not meet the standard of a macrosecuritizing move or constitute a case of
macrosecuritization (failure).
6.1.1 Existential AI Risks
In recent years, there has been a notable shift in the thinking about AI, both within and beyond the
AI community, with it becoming increasingly legitimate to frame AI as posing an existential threat
to humankind (Omohundro 2009; Muller 2014; Russell et al. 2015; Sotala and Yampolskiy 2015;
Dafoe 2018; Sotala 2018; Everitt et al. 2018; Totschnig 2019; Hoang 2019; Critch and Krueger
2020; Alfonseca 2021). This is not to say that there is consensus about “existential AI risks” within
the AI community. On the contrary, there is substantial debate amongst AI experts about many
dimensions of the AGI/ASI question, including its possibility, timing, and consequences (Boden
2016, 147). Although it is a contested narrative (Muller and Bostrom 2014; Grace et al. 2018; Ball
2020; Clarke et al. 2021), there are a growing number of voices within the AI community that have
taken on a macrosecuritization discourse in framing AI as an existential threat to humanity.
During the past decade, several leading AI experts have published books and edited
volumes (Yampolskiy ed. 2019) on existential AI risks, including Nick Bostrom (2014), Murray
Shanahan (2015), Max Tegmark (2017), and Stuart Russell (2019). Moreover, high-profile
scientists and tech-industry leaders have warned about existential AI risks. Elon Musk has called
AI a “fundamental existential risk for human civilization” and the “biggest existential threat” to
humanity (McFarland 2014; Domonoske 2017). Stephen Hawking suggested that “The
development of full artificial intelligence could spell the end of the human race” or “the worst
event in the history of our civilization” (Cellan-Jones 2014; Kharpal 2017). Microsoft Research Director Eric Horvitz warned that “We could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes—and that such powerful systems would threaten humanity” (Pagallo 2016, 211). Bill Gates, too, admitted to being in “the
camp that is concerned about super intelligence” as an existential threat (Holley 2015).
Unlike the isolated warnings of AI experts in the past, there is now an established research
agenda on existential AI risks (for a literature review, see Everitt et al. 2018). In 2015, dozens of
AI experts and scientists published an “open letter,” currently with more than 8,000 signatories
(FLI 2021), calling for research on “robust and beneficial artificial intelligence” (Russell et al.
2015). The letter asserted that,
There is now a broad consensus that AI research is progressing steadily, and that
its impact on society is likely to increase. The potential benefits are huge, since
everything that civilization has to offer is a product of human intelligence…
Because of the great potential of AI, it is important to research how to reap its
benefits while avoiding potential pitfalls… We recommend expanded research
aimed at ensuring that increasingly capable AI systems are robust and beneficial:
our AI systems must do what we want them to do (Future of Life Institute 2021).
Multiple research institutes have launched research projects on AI safety and existential AI risks,
such as the Machine Intelligence Research Institute (MIRI), the Center for Human-Compatible AI
(CHAI) at the University of California, Berkeley, the “100 Year Study on Artificial Intelligence” (AI100) at Stanford University, the Future of Humanity Institute at Oxford University, and OpenAI. In China, the Beijing Academy of Artificial Intelligence (BAAI)—endorsed by Tsinghua University, Peking University, the Chinese Academy of Sciences, Alibaba, Baidu, and Tencent—announced the “Beijing AI Principles,” which call AI a “common challenge for all humanity,” suggest the need for “long-term planning,” including for the “potential risks” from AGI/ASI, and state that AI must “always be beneficial to society and nature in the future”
(BAAI 2019). Taken together, the growing number of warnings expressed by experts in the AI
community within a relatively short period of time constitute the emergence of a securitization
narrative of humanity securitization on AI (or a “macrosecuritizing move”), which frames the
potential development of AGI/ASI as an existential threat to humanity.
Arguably, this securitization narrative on AI was catalyzed by the publication of Nick
Bostrom’s 2014 book, Superintelligence. As Bostrom writes in the preface:
If some day we build machine brains that surpass human brains in general
intelligence, then this new superintelligence could become very powerful. And, as
the fate of gorillas now depends more on us humans than on the gorillas themselves,
so the fate of our species would depend on the actions of the machine
superintelligence… This is quite possibly the most important and most daunting
challenge humanity has ever faced. And—whether we succeed or fail—it is
probably the last challenge we will ever face (Bostrom 2014, vii; emphasis added).
Bostrom’s book provides a comprehensive theoretical exploration of the existential threat of
AGI/ASI, which has significantly shaped the discussion of existential AI risks in and beyond the
AI community. Bostrom begins by describing past and current progress in AI and some possible
pathways to superintelligence, such as "brute-force" increases in computing power, "whole-brain
emulation" in neuroscience, or "machine learning" algorithms (Bostrom 2014, 22; also Shanahan
2015). Drawing on Good's (1965) and Vinge's (1993) idea of an "intelligence explosion," Bostrom
(2014) describes the danger of an AI system that becomes more capable than humans at improving
its intelligence (e.g., by optimizing its algorithm), unleashing a runaway process of accelerating
returns in its capabilities (“recursive self-improvement”). The dynamics of a hypothetical
intelligence explosion point to the danger of a “fast takeoff,” whereby an AI system that reaches
human-level general intelligence could achieve superintelligence soon after (Bostrom 2014, 62-
64), at which point it could gain a “decisive strategic advantage” that would defeat all human
efforts to modify, curtail, or control it (Bostrom 2014, 78).
Why might superintelligence pose an existential threat to humanity? In one scenario (“AI
takeover”), the ASI undertakes a process of “recursive self-improvement” to augment its
intelligence, “covert preparation” in developing a strategy to achieve its long-term goals while
concealing its intentions from humans, and finally initiates a “strike” in which it “eliminates the
human species and any automatic systems humans have created that could offer intelligent
opposition to the execution of the AI’s plans” (Bostrom 2014, 97). In this scenario, the ASI
extinguishes humanity because the latter represents a threat or obstacle to achieving the former’s
fundamental goal. In another scenario (“perverse instantiation”), the ASI pursues some seemingly
benign goal—e.g., maximizing paperclip production or calculating the digits of pi—in a way that
threatens human survival—e.g., by converting the biosphere into paperclips or computing
resources, humans included (Bostrom 2014, 119–124). In this scenario, human extinction occurs
because the ASI’s fundamental goal is not properly “aligned” with human survival—much like
how many plant, insect, and animal species are threatened with extinction because their survival
is not aligned with humanity’s goals. The simple fix of making human prosperity and survival the
fundamental goal of an ASI does not necessarily eliminate existential AI risks, since the ASI may
find that the most efficient solution to optimizing human “happiness” or “security” is by
stimulating the pleasure centers of human brains (in effect, drugs), or by keeping humans in
perfectly secure facilities (in effect, prisons). In practice, AI experts have catalogued a wide array of unexpected
failures—or “surprising creativity” (Lehman et al. 2020)—in the ways that AI systems have
pursued their goals.
Although AI disasters are a staple of science fiction, Bostrom (2014) and others (Pinker
2018; Russell 2019) have cautioned against "anthropomorphizing" AI by attributing human
motivations or emotions to machines—such as greed, spite, or envy—or presuming that an AI
would be "conscious" in the same way that humans are. Whether AI is "conscious" may be
immaterial to whether it poses an existential threat to humanity; for the problem is not the presence
of human-like consciousness or motivations but rather the possession of super-human capabilities
and misaligned goals. Indeed, if an ASI is “goal-oriented”—as AI systems are typically designed
to be—then it could be expected to pursue instrumental objectives—particularly, self-
improvement, resource acquisition, and self-preservation—that threaten human survival
(Omohundro 2008; Bostrom 2014; Russell 2019). Thus, human extinction may be the “default
outcome” of an AI that far surpasses the intelligence of human beings (Bostrom 2014, 115–116).
6.1.2 The Control Problem
The securitization narrative of humanity securitization on artificial intelligence typically frames
“security” (or “safety”) from existential AI risks in terms of maintaining “human control” over
AI—that is, “retaining absolute power over machines that are more powerful than us” (Russell
2019, xii). Though in principle security from existential AI risks could be achieved through
practices of restraint (Deudney 2007; Sears 2020)—such as Bill Joy's (2000) suggestion that
humans relinquish the AI dream altogether—the conventional wisdom asserts that security
through restraint is infeasible. As Vinge wrote (1993, 15):
[I]f the technological Singularity can happen, it will. Even if all the governments
of the world were to understand the “threat” and be in deadly fear of it, progress
toward the goal would continue… In fact, the competitive advantage—economic,
military, even artistic—of every advance in automation is so compelling that
passing laws, or having customs, that forbid such things merely assures that
someone else will get them first.
In short, if humans can create an AGI/ASI system, they will (Vinge 1993; Russell 2019).
This discourse frames the essence of the AI “control problem” as figuring out how to make
sure that an AGI/ASI system remains compatible with humanity’s prosperity and survival before
it reaches or exceeds human-level general intelligence (Bostrom 2014; Russell 2019). For once an
AGI/ASI exists, it may be impossible for humanity to maintain control over it, since methods that
seek either to curtail its capabilities or to modify its goals—such as containment (“boxing”) or
safeguards (“kill switches”)—are likely to be easily defeated (Bostrom 2014; Alfonseca et al.
2021). Most solutions to the control problem emphasize “value alignment” (Bostrom 2014; Everitt
et al. 2018; Eckersley 2019; Hoang 2019; Critch and Krueger 2020). There are two general
approaches to AI value alignment, each with its own problems.
The first deductive or normative approach would seek to “teach” human values or goals
directly to the AI, taking its cue from “supervised learning” in AI research and development. In
addition to the many technical complications of accurately expressing abstract moral principles in
programming language, the deeper philosophical question is “which human values?” Should an
AGI/ASI system be guided by, for instance, Aristotle’s “virtue ethics,” Immanuel Kant’s
“deontological ethics,” John Stuart Mill’s “consequentialist ethics,” or some non-Western system
of moral philosophy, such as Buddhism, Confucianism, or Indigenous knowledge? Some AI
experts simply advocate a particular ethical system, like Russell’s (2019, 217) endorsement of
utilitarianism. Bostrom (2014, 254) proposes a “common good principle,” whereby AI “should be
developed only for the benefit of all of humanity and in the service of widely shared ethical ideals.”
Yet it is uncertain what such general moral principles would mean in practice. The fact that humans
have debated moral philosophy for thousands of years does not bode well for the normative-
deductive approach to the control problem (Everitt et al. 2018, 13). The more likely scenario is
that whichever group were to develop the AGI/ASI would end up choosing its preferred moral
values for all of humanity (Carayannis and Draper 2022).
The second inductive or behavioral approach would have AI systems “learn” indirectly by
observing human behavior and seeking to empirically discover human goals or values. Arguably
the leading approach for thinking about the AI control problem is “inverse reinforcement
learning,” where an AI system would learn an agent’s “utility function” by observing its behavior
(Everitt et al. 2018). Eliezer Yudkowsky (2004, 6) proposes the principle of "coherent
extrapolated volition,” whereby AI would seek to predict what humans would want “if we knew
more, thought faster, [and] were more the people we wished we were.” However, it is far from
clear that the inductive-behavioral approach would eliminate existential AI risks. Indeed, it could
lead an ASI system to run “large-N” experiments on human subjects—real or simulated—to gain
ever-increasing data about human preferences. More generally, the inductive-behavioral approach
seems to present ample opportunities for AI to learn the wrong lessons from humans: might the
ASI system learn humanity's "lust for power," like a Nietzschean Über-maschine, or its
materialistic excesses, like a wealth-maximizing robo-economicus? The danger that AI could learn
perverse lessons from humans was illustrated in March 2016 by Microsoft’s Twitter “chatbot”—
innocuously named “Tay”—which used machine-learning to interact with humans on social
media. Within 16 hours, Microsoft took Tay offline after it had "learned" sexism and racism
(Vincent 2016). If an AGI/ASI system were to learn its fundamental goals from humans, then
perhaps “world domination” would not be farfetched.
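The inductive-behavioral logic described above can be made concrete with a minimal sketch of inverse reinforcement learning. The example below is purely illustrative and not drawn from any source cited in this chapter: the action names, candidate utility functions, and numbers are all hypothetical, and real IRL systems operate over far richer state and policy spaces. It shows an observer inferring which of several candidate utility functions best explains an agent's observed choices, under the standard assumption that the agent is "Boltzmann-rational," i.e., picks actions with probabilities proportional to their exponentiated utilities.

```python
import math

# Hypothetical action set and candidate utility functions the observer
# considers plausible (all values invented for illustration).
ACTIONS = ["work", "rest", "explore"]

CANDIDATES = {
    "productivity": {"work": 2.0, "rest": 0.2, "explore": 0.5},
    "leisure":      {"work": 0.3, "rest": 2.0, "explore": 1.0},
    "curiosity":    {"work": 0.5, "rest": 0.4, "explore": 2.0},
}

def action_probs(utility, beta=1.0):
    """Softmax (Boltzmann) choice probabilities implied by a utility function."""
    exps = {a: math.exp(beta * utility[a]) for a in ACTIONS}
    z = sum(exps.values())
    return {a: exps[a] / z for a in ACTIONS}

def log_likelihood(utility, observed):
    """Log-probability of the observed action sequence under a utility function."""
    probs = action_probs(utility)
    return sum(math.log(probs[a]) for a in observed)

# Observed behavior: the agent mostly explores.
observed = ["explore", "explore", "work", "explore", "rest", "explore"]

# Inverse reinforcement learning step: choose the candidate utility function
# that maximizes the likelihood of the observed behavior.
best = max(CANDIDATES, key=lambda name: log_likelihood(CANDIDATES[name], observed))
print(best)  # the "curiosity" utility best explains the observed choices
```

The sketch also illustrates the worry raised in the text: the inferred utility function is only as benign as the behavior it was learned from, so an observer watching perverse behavior would infer perverse goals.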
In general, the AI community understands existential AI risks as a problem of technology
and thinks in terms of technical solutions to the control problem (Totschnig 2019). Since the AI
community possesses expert scientific and technical knowledge about AI, its emphasis on
technical solutions shapes the narrative of humanity securitization on AI security/safety. It also
means that the AI community holds the position of a functional security actor, since its activities
have a direct impact on both the threat of, and security from, artificial intelligence. As Russell
argues, “It is essential that the AI community own the risks and work to mitigate them. The risks,
to the extent that we understand them, are neither minimal nor insuperable. We need to do a
substantial amount of work to avoid them” (Russell 2019, 160). For some, security from existential
AI risks requires the AI community to rethink its basic approach to artificial intelligence—or
“reshaping and rebuilding the foundations of AI” (Russell 2019, 160). For example, Eric Drexler
(2019) proposes “reframing superintelligence” in terms of “comprehensive AI services” rather
than as rational “agents.” Stuart Russell argues that the “standard model” of AI that seeks to
optimize a fixed, exogenously determined goal is “fundamentally flawed.” Instead, the AI
community must develop an entirely “new approach” to AI that “can be expected to achieve our
objectives” (Russell 2019, 247). Russell offers three principles for what he calls “provably
beneficial machines”:
1. The machine’s only objective is to maximize the realization of human
preferences.
2. The machine is initially uncertain about what those preferences are.
3. The ultimate source of information about human preference is human behavior
(Russell 2019, 173).
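Russell's three principles can be illustrated with a toy version of his "off-switch" argument: a machine that is uncertain about human preferences (the second principle) and treats human behavior as evidence about them (the third) can rationally prefer to let a human decide whether to switch it off. The numbers below are hypothetical, chosen only to make the arithmetic transparent; this is a sketch of the intuition, not an implementation of Russell's formal model.

```python
# Toy illustration of the "off-switch" intuition (hypothetical values):
# a machine uncertain about the utility of its proposed action gains
# expected value by deferring to a human who can switch it off.

# The machine's belief about the utility (to the human) of its proposed
# action: equally likely to be helpful (+1.0) or harmful (-1.0).
beliefs = [(0.5, +1.0), (0.5, -1.0)]  # (probability, utility) pairs

def ev_act():
    """Expected value of acting unilaterally, ignoring the off-switch."""
    return sum(p * u for p, u in beliefs)

def ev_defer():
    """Expected value of deferring: an informed human permits the action
    only when its utility is positive, and otherwise switches the machine
    off (utility 0)."""
    return sum(p * max(u, 0.0) for p, u in beliefs)

print(ev_act(), ev_defer())  # 0.0 0.5: under uncertainty, deference is optimal
```

Deferring strictly dominates whenever the machine's uncertainty spans both helpful and harmful outcomes; if the machine were certain its action was beneficial, acting and deferring would coincide, which is why principle 2 (initial uncertainty) does the work in the argument.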
The call for a paradigm shift in the way that the AI community thinks about and develops AI
reflects the pattern of macrosecuritization: i.e., the call for extraordinary measures. In essence, the
AI community—including scientists, academics, engineers, businesses, and governments—must
abandon the existing approach to AI that has driven progress for decades, because it poses an
existential threat to humanity.
The AI community also recognizes the need for a global solution to the control problem,
since it has only a "limited" influence over the directions of international AI policy (Russell 2019,
184). In practice, AI R&D involves diverse stakeholders around the world, including academic
researchers and universities (e.g., Stanford, the Massachusetts Institute of Technology, Carnegie
Mellon), major technology firms (e.g., Google, Facebook, Amazon, Microsoft, IBM, Tencent,
Baidu, and Alibaba), and states (e.g., the United States, China, and the European Union). While
securitizing actors see international cooperation as crucial to AI security and safety, they lack a
clear and coherent plan for what this should look like. Some believe that transparency in R&D
could contribute to cooperation on AI (e.g., OpenAI), while others see transparency as potentially
exacerbating existential AI risks (Bostrom 2017). Some think that the governance of AI should be
centralized, while others believe it should be decentralized. Some look to disarmament and arms
control treaties (Maas 2019), while others see the Intergovernmental Panel on Climate Change as a
model for mitigating AI risks (Miailhe 2018). Some call for a robust international treaty regime to
restrain or prohibit the development of dangerous AI (Joy 2000), while others see such restrictions
on science and technology as infeasible and undesirable (Russell 2019). Ultimately, the narrative
of humanity securitization on AI sees existential AI risks as requiring both technical and political
solutions, for “When the future of humanity is at stake, hope and good intentions… are not enough”
(Russell 2019, 184).
6.2 The “AI Revolution” in International Relations
International relations is in the midst of an “AI revolution,” in the sense that states increasingly
perceive AI as a revolutionary technology that could turn established truths about the power and
interests of states in international relations on their heads (Jervis 1989, 15). Indeed, few issues
have emerged onto the agenda of international relations as quickly and decisively as artificial
intelligence. One of the salient features of the AI revolution is the active role that states are taking
to position themselves for global AI leadership in a competitive international environment. As one
observer writes, “The race to become the global leader in artificial intelligence has officially
begun” (Dutton 2018).
In March 2017, Canada unveiled the first national AI strategy (the “Pan-Canadian Artificial
Intelligence Strategy”), which aimed to “propel Canada to a position as leader in artificial
intelligence” and for Canada to “harness the benefits from artificial intelligence” (Canadian House
of Commons 2017, 103). Since then, dozens of states have followed suit in producing national AI
strategies (see Table 6.1), many of which set ambitious goals and objectives, allocate substantial
public funding and resources, and establish new laws and institutions to support national AI
research and development (Dutton 2018; Saran et al. 2018; FLI 2021). The European Union (EU)
has also developed a regional AI strategy to promote a “common European approach to AI” and
ensure that Europe becomes a “global leader” in AI for the shared benefit of “the whole of
European society and economy” (European Commission 2020, 2). Multilateral institutions like the
Group of Seven (G7), Group of Twenty (G20), Organization for Economic Cooperation and
Development (OECD), and the United Nations Educational, Scientific, and Cultural Organization
(UNESCO) have also made several diplomatic statements, recommendations, and communiques
on artificial intelligence.
Table 6.1: National AI Strategies

Country | Date | Strategy
Canada | March 2017 | Pan-Canadian AI Strategy
Japan | March 2017 | Artificial Intelligence Technology Strategy
Singapore | May 2017 | AI Singapore
China | July 2017 | Next Generation AI Plan
Australia | September 2017 | Digital Economy Strategy*
United Arab Emirates | October 2017 | UAE Strategy for Artificial Intelligence
Finland | December 2017 | Finland's Age of Artificial Intelligence
Denmark | January 2018 | Strategy for Denmark's Digital Growth*
Kenya | February 2018 | Blockchain & Artificial Intelligence task force*
France | March 2018 | AI for Humanity ("Villani Report")
Brazil | March 2018 | Brazilian Strategy for Digital Transformation*
Italy | March 2018 | AI: At the Service of Citizens
New Zealand | March 2018 | Artificial Intelligence: Shaping a Future New Zealand
United Kingdom | March 2018 | Sector Deal for AI
South Korea | May 2018 | Five-Year AI Development Plan
Estonia | May 2018 | Estonia's National AI Strategy ("Kratt Report")
Sweden | May 2018 | National Approach for Artificial Intelligence
India | June 2018 | National Strategy for Artificial Intelligence
Mexico | June 2018 | Towards an AI Strategy in Mexico
Austria | October 2018 | AI Mission Austria 2030
Argentina | November 2018 | Digital Agenda 2030*
Germany | November 2018 | AI Made in Germany
United States | February 2019 | American AI Initiative
Spain | March 2019 | RDI Strategy in Artificial Intelligence
Lithuania | April 2019 | Lithuanian Artificial Intelligence Strategy
Poland | August 2019 | Artificial Intelligence Development Policy in Poland
Netherlands | October 2019 | Strategic Action Plan for Artificial Intelligence
Russia | October 2019 | Russian National AI Strategy
Serbia | December 2019 | Strategy for the Development of AI
Norway | January 2020 | National Strategy for Artificial Intelligence
European Union | February 2020 | AI: A European Approach to Excellence and Trust
Chile | Announced | —
Malaysia | Announced | —
Tunisia | Announced | —
Uruguay | Announced | —

Sources: Dutton (2018); Future of Life Institute (2021); OECD (2021).
Note: As of Fall 2021. An asterisk indicates that AI is one part of a broader national strategy.
The national AI strategies of states provide a window into understanding the meaning of
the AI revolution in international relations. This section analyzes the discourse and content of the
national AI strategies of the leading states in the international system to better understand their
interests and concerns. On the surface, these national AI strategies reveal substantial variation in
the policies and priorities of states, including, inter alia, on issues of politics and governance,
economics and industry, security and defence, society and ethics, research and innovation, and
education and training (see Table 6.2) (Dutton 2018; Saran et al. 2018). More fundamentally, they
show that states have embraced national securitization as the dominant securitization narrative on
AI in international relations. This securitization narrative frames AI as a strategic technology with
revolutionary implications for the vital national interests of states, including their national security,
economic competitiveness, political values, and international position. It therefore calls on the
state to take extraordinary action in mobilizing national resources to position the nation for global
leadership in a competitive international AI environment; for failure could threaten the state’s vital
national interests or even national survival.
The rest of this section is organized into three parts. The first part explores the American
national AI strategy, which sees the United States as the “world leader” in AI and aims to maintain
American AI “supremacy.” The second part looks at the national and regional AI strategies of
European nations, which see themselves as falling behind the United States and China and seek to
strengthen regional cooperation in order to reverse their relative technological decline and ensure
Europe’s “voice” in the world. The third part analyzes the national AI strategies of China and
Russia, which aim to become global AI powers in order to achieve their great power ambitions.
6.2.1 Maintaining Supremacy: The American National AI Strategy
The United States has come to understand the significance of the AI revolution in terms of
maintaining technological supremacy and American hegemony. In May 2016, President Barack
Obama (2009–2017) initiated a process to spark public discussion and develop a national AI R&D
strategy (The White House 2016). The White House stated that AI presented “tremendous
opportunities,” but like any “transformative technology” also posed some “complex policy
challenges.” The Obama administration announced a series of public dialogues on AI and directed
the National Science and Technology Council (NSTC) to create a new subcommittee to coordinate
and make recommendations on a national AI R&D strategy.
In October 2016, the NSTC released two reports. The first report, Preparing for the Future
of Artificial Intelligence, set an optimistic tone, stating that AI had the “potential to improve
people’s lives by helping to solve some of the world’s greatest challenges and inefficiencies”
(NSTC 2016a, 1). The report also provided a definition and brief history of AI, a case study on
autonomous vehicles and aircraft, and a discussion of some of the main policy issues, including
governance questions about justice and fairness, the economic implications for the workforce, and
defence and security matters of cybersecurity and “lethal autonomous robots.” The report
concluded that while AI posed some “risks and challenges,” the U.S. Government should be able
to minimize the dangers of AI through “thoughtful attention to its potential and to managing its
risks” (NSTC 2016a, 5, 39).
The second report, National Artificial Intelligence Research and Development Strategic
Plan, clearly framed AI as a revolutionary technology:
Artificial intelligence (AI) is a transformative technology that holds promise for
tremendous societal and economic benefit. AI has the potential to revolutionize how
we live, work, learn, discover, and communicate. AI research can further our
national priorities, including increased economic prosperity, improved educational
opportunities and quality of life, and enhanced national and homeland security
(NSTC 2016b, 3; emphasis added).
The American AI R&D Strategic Plan set out a “high-level framework” looking beyond “near-
term” capabilities and towards the “longer-term transformational impacts of AI on society and the
world” (NSTC 2016b, 7). It affirmed an important role for the U.S. Government in identifying
“strategic priorities” and supporting R&D on “important technical and societal challenges,”
especially in “long-term, high risk” initiatives that the private sector is unlikely to prioritize due to
weak or absent market incentives (NSTC 2016b, 7, 15). “Thus, to maintain a world leadership
position in this area,” the report claimed, “the United States [Government] must focus its
investments on high-priority fundamental and long-term AI research” (NSTC 2016b, 5).
The NSTC stated that the “ultimate goal” was to produce “new AI knowledge and
technologies that provide a range of positive benefits to society, while minimizing the negative
impacts” (NSTC 2016b, 3):
Driving this AI R&D Strategic Plan is a hopeful vision of a future world in which
AI is safely used for significant benefit to all members of society. Further progress
in AI could enhance wellbeing in nearly all sectors of society, potentially leading
to advancements in national priorities, including increased economic prosperity,
improved quality of life, and strengthened national security (NSTC 2016b, 8).
To accomplish this “hopeful vision,” the Strategic Plan outlined seven objectives for an American
national AI R&D strategy:
Make long-term investments in research to ensure that the United States remains a world
AI leader;
Develop methods for effective human-AI collaboration;
Understand and address the ethical, legal, and societal implications of AI;
Ensure the safety and security of AI to produce systems that are reliable, dependable, and
trustworthy;
Develop and make accessible high-quality public datasets for AI training and testing;
Set standards and benchmarks to measure and evaluate AI progress;
Understand the needs of the AI R&D workforce and support a strong community of AI
researchers (NSTC 2016b, 3–4).
During the Donald Trump administration (2017–2021), the U.S. Government increasingly
framed AI as a strategic technology of vital importance to American national interests, including
national security. In December 2017, the Trump administration released its National Security
Strategy (NSS), which asserted that "great power competition" had returned and that "revisionist
powers” like China and Russia were seeking to shift the balance-of-power and threaten American
values and interests (The White House 2017, 25–27). The NSS drew a strong link between
technology and national security, claiming that the United States "face[s] simultaneous threats
from different actors across multiple arenas—all accelerated by technology” (The White House
2017, 26). The United States must seek to “maintain our competitive advantage” by investing in
emerging technologies, including artificial intelligence (The White House 2017, 20).
The U.S. Department of Defense (DoD) has also played a crucial role in elevating AI’s
strategic importance. In January 2018, the DoD released a white paper on its National Defense
Strategy (NDS), which identified “long-term, strategic competition” with other great and major
powers—especially China and Russia—as the principal threat to the national prosperity and
security of the United States (DoD 2018, 1–2). Importantly, the NDS linked the threat of great
power competition to the interest of maintaining the American “technological advantage” in an
international environment characterized by the rapid change and diffusion of emerging
technologies. The NDS explicitly identified AI as one of the technologies that will ensure the
ability to “fight and win the wars of the future” (DoD 2018, 3).
Later that year, the DoD established the Joint Artificial Intelligence Center (JAIC) to
“harness the game-changing power of AI” and developed an Artificial Intelligence Strategy (AIS)
for the U.S. armed forces (JAIC 2021). In February 2019, it released a white paper summarizing
the AIS (“Harnessing AI to Advance our Security and Prosperity”). The AIS directed the DoD “to
accelerate the adoption of AI and the creation of a force fit for our time.”
A strong, technologically advanced Department is essential for protecting the
security of our nation, preserving access to markets that will improve our standard
of living, and ensuring that we are capable of passing intact to the younger
generations the freedoms we currently enjoy… As stewards of the security and
prosperity of the American public, we will leverage the creativity and agility of our
nation to address the technical, ethical, and societal challenges posed by AI and
leverage its opportunities in order to preserve the peace and provide security for
future generations (DoD 2019, 4; emphasis added).
One of the core premises of the AIS is the need for the United States to maintain its position
at the “forefront” of technological advances like AI “to ensure an enduring competitive military
advantage against those who threaten our security and safety” (DoD 2019, 5). Importantly, the AIS
emphasizes great power competition in technology as a threat to national security in order to justify
the speed, scope, and scale by which the American military must embrace AI.
Other nations, particularly China and Russia, are making significant investments
in AI for military purposes, including in applications that raise questions regarding
international norms and human rights. These investments threaten to erode our
technological and operational advantages and destabilize the free and open
international order. The United States, together with its allies and partners, must
adopt AI to maintain its strategic position, prevail on future battlefields, and
safeguard this order…. The costs of not implementing this strategy are clear. Failure
to adopt AI will result in legacy systems irrelevant to the defense of our people,
eroding cohesion among allies and partners, reduced access to markets that will
contribute to a decline in our prosperity and standard of living, and growing
challenges to societies that have been built upon individual freedoms (DoD 2019,
5; emphasis added).
The DoD’s AI Strategy directed the U.S. armed forces to accelerate the adoption of AI
through a “strategic approach” that covers five key areas:
Delivering AI for “next-generation capabilities” to support “day-to-day operations” and
“yield strategic advantages”;
Scaling AI’s impact through “decentralized development and experimentation” to exploit
the “innovative character” of the armed forces;
Building “strong partnerships” with industry, academia, and allies on AI R&D to address
“the world’s most pressing challenges”;
Cultivating a “leading AI workforce” through teaching and training and by attracting
“world-class AI talent”;
Leading in military AI ethics and safety to produce AI systems that are “resilient, robust,
reliable, and secure” and consistent with American values and international law (DoD
2019, 10–15).
The AIS concludes with a statement that emphasizes, on the one hand, the U.S. national interest
of maintaining American AI supremacy over great power rivals, and, on the other hand, the need
for “human-centered” AI that respects American values and freedoms (DoD 2019, 17).
A critical moment in the development of American national AI policy came on February
11, 2019, when President Trump issued Executive Order 13859, “Maintaining American
Leadership in Artificial Intelligence.” The executive order reaffirmed the importance of AI to
American national interests and established the goal of maintaining the United States as the “world
leader” in AI:
Artificial Intelligence (AI) promises to drive growth of the United States economy,
enhance our economic and national security, and improve our quality of life. The
United States is the world leader in AI research and development (R&D) and
deployment. Continued American leadership in AI is of paramount importance to
maintaining the economic and national security of the United States and to shaping
the global evolution of AI in a manner consistent with our Nation’s values, policies,
and priorities (The White House 2019, 1; emphasis added).
The U.S. national AI strategy (the “American AI Initiative”) set the overall goal of maintaining
American AI leadership, based on five principles and priorities for American AI R&D:
To drive technological breakthroughs in AI across government, industry, and academia to
encourage scientific discovery, economic competitiveness, and national security;
To promote appropriate technical standards and reduce barriers in order to safely test and
deploy AI technologies by new and existing industries;
To train current and future generations of American workers so that they possess the skills
to develop and apply AI in today’s economy and for the jobs of the future;
To foster public trust and confidence in AI amongst the American people and protect civil
liberties, privacy, and American values;
To promote an international environment that supports American AI R&D and industries,
while protecting the American technological advantage and critical AI technologies from
acquisition by strategic competitors and adversarial nations (The White House 2019, 1–2).
The rhetoric behind the American AI Initiative clearly reflects the centrality of national security to
the White House’s thinking about artificial intelligence. While not mentioning China or Russia
specifically, the American national AI strategy reaffirms the relationship between great power
rivalries and technology competition, with the Trump Administration asserting the need to protect
American AI technology from “acquisition by strategic competitors and adversarial nations.”
On May 21, 2019, the U.S. Senate introduced a bill to establish a “coordinated Federal
initiative to accelerate research and development on artificial intelligence for the economic and
national security of the United States, and for other purposes.” The National Artificial Intelligence
Initiative Act (NAIIA) became law on January 1, 2021. The purposes of the NAIIA are:
To ensure continued United States leadership in AI research and development;
To lead the world in the development and use of trustworthy AI systems in the public and
private sectors;
To prepare the present and future American workforce for the integration of AI systems
across all sectors of the economy and society; and
To coordinate ongoing AI R&D activities among the civilian agencies, the DoD, and the
intelligence community.
The NAIIA provided substantial new AI R&D funding to support national AI research institutes,
as well as a National Artificial Intelligence Initiative Office to oversee implementation, a National
AI Research Resource Task Force, and a National AI Advisory Committee for coordination across
government. Although it does not mention existential AI risks, the NAIIA required the development
within two years of a “voluntary risk management framework” to set standards and best
practices for “trustworthy” AI systems.
Since 2020, the United States has perceived growing challenges to its goal of maintaining
American AI supremacy, especially from China. The 2021 Annual Threat Assessment of the U.S.
Intelligence Community asserts that:
Following decades of investments and efforts by multiple countries that have
increased their technological capability, US leadership in emerging technologies is
increasingly challenged, primarily by China. We anticipate that with a more level
playing field, new technological developments will increasingly emerge from
multiple countries and with less warning (Office of the Director of National
Intelligence 2021, 20; emphasis added).
The Final Report of the U.S. National Security Commission on Artificial Intelligence (2021)
provides a window into contemporary American thinking:
Americans have not yet grappled with just how profoundly the artificial intelligence
(AI) revolution will impact our economy, national security, and welfare… America
is not prepared to defend or compete in the AI era. This is the tough reality we must
face. And it is this reality that demands comprehensive, whole-of-nation action. Our
final report presents a strategy to defend against AI threats, responsibly employ AI
for national security, and win the broader technology competition for the sake of
our prosperity, security, and welfare (NSCAI 2021, 1; emphasis added).
Thus, the United States sees great power competition as posing a serious threat to American AI
supremacy, singling out China and Russia as its principal rivals (CRS 2020, 20–26). In particular,
the Commission claims that “AI is intensifying strategic competition with China” and that it
“take[s] seriously China’s ambition to surpass the United States as the world’s AI leader within a
decade” (NSCAI 2021, 2). The stakes are high: “For the first time since World War II, America’s
technological predominance—the backbone of its economic and military power—is under threat,”
and “our armed forces’ competitive military-technical advantage could be lost within the next
decade if they do not accelerate the adoption of AI across their missions” (NSCAI 2021, 7, 9). The
Commission asserts that the United States is “not organizing or investing to win the technology
competition,” and must substantially increase its AI resources “to protect its security, promote its
prosperity, and safeguard the future of democracy” (NSCAI 2021, 8). “The United States must act
now,” it concludes, “to win the AI era” (NSCAI 2021, 8).
6.2.2 Fear of Decline: European National and Regional AI Strategies
The AI revolution has also swept across Europe. As the European Commission asserts, “Like
electricity in the past, artificial intelligence is transforming our world” (European Commission
2018c, 1). Several European nations are seeking to position themselves—and Europe as a whole—
for global AI leadership, with France, Germany, and the United Kingdom (UK) at the front of the
pack. On March 29, 2018, French President Emmanuel Macron delivered a speech at the Collège
de France, where he outlined France’s national AI strategy—ironically called “AI for Humanity,”
considering its strong nationalism. President Macron announced that France would invest 1.5
billion euros ($1.85 billion) over five years to strengthen France’s AI R&D ecosystem, make the
country’s centralized datasets available for research and innovation, and develop an ethical
framework to meet the opportunities and challenges of AI.
Macron’s speech followed on the heels of a parliamentary mission spearheaded by Cédric
Villani between September 2017 and March 2018, which culminated in a major report, For a
Meaningful Artificial Intelligence: Towards a French and European Strategy (or “The Villani
Report”). The Villani Report framed AI as the “key” to power and prosperity for France and the
world.
[F]rom now on, artificial intelligence will play a much more important role than it
has done so far. It is no longer merely a research field confined to laboratories or to
a specific application. It will become one of the keys to the future. Indeed, we are
living in an ever more completely digital world. A world of data... Therefore,
artificial intelligence is one of the keys to power in tomorrow’s digital world (Villani
2018, 5–6; emphasis added).
The Villani Report describes the international AI environment as a competitive system dominated
by “AI heavyweights,” like the United States and China, and “emerging AI powers,” such as
Canada, Israel, and the United Kingdom (Villani 2018, 5–8). The Report makes several
policy recommendations for a French and European AI strategy: (i) develop a common data
strategy to “safeguard sovereignty” and “strategic autonomy”; (ii) target the four “strategic
sectors” of health, transportation, the environment, and security and defence where France
possesses a “comparative advantage”; (iii) boost financial and technical resources for the French
R&D ecosystem; (iv) plan for the impact of AI on labour; (v) make AI more environmentally
friendly; (vi) open up the “black box” of AI; and (vii) ensure that AI promotes social inclusivity,
diversity, and equality (Villani 2018, 8–17; also Dutton 2018; FLI 2021). What is most striking
about the Villani Report is what the rhetoric reveals about France’s concerns about the decline and
irrelevance of Europe. The Report asserts that “the role of the State must be reaffirmed” and that
a “coordinated response at [the] European level” is necessary so that France and Europe can
guarantee their “political independence” rather than become “cybercolonies” of the United States
and China, restore the “balance of power,” ensure that their “voices are heard,” and “take their
place on the world AI stage” (Villani 2018, 6–7). In short, the AI revolution represents
both a challenge and an opportunity for France and Europe to restore their power and influence in
the world.
In November 2018, Germany published its national AI strategy, Artificial Intelligence
Strategy: AI Made in Germany. The German strategy states that the Federal Government will “take
on the task” of rapidly advancing the field of AI “for the benefit of society at large.”
We want to safeguard Germany’s outstanding position as a research centre, to build
up the competitiveness of German industry, and to promote the many ways to use
AI in all parts of society in order to achieve tangible progress in society in the
interest of its citizens (Federal Government of Germany 2018, 6).
The strategy articulates three goals or aspirations “of equal ranking”:
We want to make Germany and Europe a leading centre for AI and thus help
safeguard Germany’s competitiveness in the future. We want a responsible
development and use of AI which serves the good of society. We will integrate AI
in society in ethical, legal, cultural and institutional terms in the context of a broad
societal dialogue and active political measures (Federal Government of Germany
2018, 6–7).
To achieve these goals, Germany’s national AI strategy allocates substantial new funding,
including 500 million euros ($595 million) in 2019, and around 3 billion euros ($3.6 billion)
between 2018 and 2025 (Federal Government of Germany 2018, 12). It also aims to strengthen
both national AI cooperation, by establishing and expanding its “Centers of Excellence for AI,”
and regional AI cooperation, through a “Franco-German research and development network”
and a “European innovation cluster.” Indeed, Germany envisions a symbiotic relationship between
German and European AI, and so Germany’s national AI strategy should be understood within the
framework of Europe’s regional strategy.
The United Kingdom released its national AI strategy, the Artificial Intelligence Sector
Deal, in April 2018 as part of its broader “Industrial Strategy.” The strategy identifies AI as one
of four “key challenges,” sets the ambitious goal of making the United Kingdom “the world’s most
innovative economy,” and asserts the need for the UK to “act now” to ensure its place “at the
forefront of the industries that will shape our futures and have a transformative impact on society”
(UK Government 2018, 8, 11).
A revolution in AI technology is already emerging. If we act now, we can lead it
from the front. But if we ‘wait and see’ other countries will seize the advantage.
Together, we can make the UK a global leader in this technology that will change
our lives (UK Government 2018, 4; emphasis added).
The UK’s national AI strategy combines a sense of optimism about the future with an urgency to
take quick and decisive action: “We are at the cusp of one of the most exciting times in our lives,
and if we get our strategy for AI right, then we will be able to reap the rewards for our economy
for decades to come” (UK Government 2018, 11). The AI Sector Deal presents a “long-term
strategy” that emphasizes the need to build “strong partnerships” between the private and public
sectors, develop and attract “the best” in global AI talent, and make big investments in the AI R&D
“ecosystem.” The UK’s national AI strategy also establishes new institutions, including an “AI
Council” to bring together AI leaders from academia and industry, a Centre for Data Ethics and
Innovation, and a government “Office for AI” to oversee implementation. Ultimately, the UK’s
national AI strategy outlines an ambitious and comprehensive plan to maintain the UK’s position
as a global AI leader by “focusing on the areas where we can compete globally.”
The European Union has also taken major steps towards developing a regional strategy for
AI, which it has called “game-changing” (European Commission 2018b, 5) and “one of the most
important technologies of the 21st century” (European Commission 2018c, 1). Importantly, the
European Commission claims that “international competition [on AI] is fiercer than ever,” with
“massive investments” from the United States and China. Therefore, “To ensure its success,
coordination at European level is essential” (European Commission 2018c, 1). The first step
towards the development of a regional AI strategy came in October 2017, when the European
Council asked the Commission to prepare a “European approach” to AI. On April 10, 2018, EU
Member States—including the United Kingdom—signed a “Declaration on Cooperation on
Artificial Intelligence,” in which members states agreed on:
Boosting Europe’s AI technology and industrial capacity;
Addressing socio-economic challenges;
Ensuring an adequate legal and ethical AI framework.
This declaration on European AI cooperation aimed to strengthen “the EU’s competitiveness,
attractiveness and excellence in [AI] R&D,” and transform a broad and diverse community of AI
stakeholders into a “European AI Alliance.” In practical terms, EU Member States agreed to
cooperate on, inter alia, (i) the allocation of R&D funding; (ii) development of AI research centers
and “Innovation Hubs”; (iii) access to public sector data; (iv) exchange of “best practices” on
ethical and legal frameworks; and (v) dialogue about opportunities and challenges. Finally, the
declaration affirmed the principle that would become known as “human-centric AI”: to “ensure
that humans remain at the centre of the development, deployment and decision-making of AI,
prevent the harmful creation and use of AI applications, and advance public understanding of AI”
(European Commission 2018a, 3).
On April 25, 2018, the EU Commission released a communication, “Artificial Intelligence
for Europe,” which interpreted the stakes of the AI revolution in dramatic fashion:
Like the steam engine or electricity in the past, AI is transforming our world, our
society and our industry. Growth in computing power, availability of data and
progress in algorithms have turned AI into one of the most strategic technologies of
the 21st century. The stakes could not be higher. The way we approach AI will
define the world we live in. Amid fierce global competition, a solid European
framework is needed (European Commission 2018b, 2; emphasis added).
In evaluating Europe’s relative “position in a competitive international [AI] landscape,” the EU
Commission assessed that “Europe is home to a world-leading AI research community,” but that
“Europe is behind” in private AI investment compared to the United States and China (European
Commission 2018b, 5). Nevertheless, the European Commission claimed that by strengthening
regional cooperation, “the EU can lead the way in developing and using AI for good and for all”
(European Commission 2018b, 3). The European Commission concluded confidently: “The main
ingredients are there for the EU to become a leader in the AI revolution, in its own way and based
on its values… Together, we can place the power of AI at the service of human progress” (European
Commission 2018b, 20).
By the end of 2018, the EU Commission had articulated the main tenets of the EU’s
regional AI strategy, or the Coordinated Plan on Artificial Intelligence. The strategy outlined a
vision of “AI made in Europe” to “ensure that the EU as a whole can compete globally” (European
Commission 2018c, 1–2). The Plan does not aim to replace national AI strategies, but rather to provide
a “strategic framework” for achieving “common objectives and complementary efforts” across
Europe (European Commission 2018c, 2). Indeed, the key message from Brussels is one of global
competition through regional cooperation: “Stronger coordination is essential for Europe to
become the world-leading region for developing and deploying cutting-edge, ethical and secure
AI” (European Commission 2018d, 1). The EU Plan identified four priorities: (i) maximize
investments through national, regional, and public-private partnerships; (ii) create
“European data spaces” to streamline data sharing across borders while ensuring compliance with
data protection regulations; (iii) nurture talent, skills, and life-long learning through advanced
education and training in “digital skills”; and (iv) develop ethical and trustworthy AI to ensure
respect for “fundamental rights” and ultimately “bring Europe’s ethical approach to the global
stage” (European Commission 2018d, 2). It also announced an increased financial commitment
for investment in AI—at least €20 billion ($23.8 billion) of private and public investment by the
end of 2020 and more than €20 billion annually for the following decade (European Commission
2018c, 3)—claiming that “AI is not a nice-to-have, it is about our future” (European Commission
2018d, 1).
Since then, the EU has continued to develop its regional AI strategy. On April 8, 2019, a
High-Level Expert Group on AI published a report on Ethics Guidelines for Trustworthy AI. On
February 18, 2020, the European Commission released a White Paper, On Artificial Intelligence:
A European Approach to Excellence and Trust. Finally, on April 21, 2021, the European
Commission published two further reports on AI. These new developments in Europe’s regional
strategy can be summed up in its two main goals of fostering an “ecosystem of excellence” and an
“ecosystem of trust” in AI (European Commission 2020, 3; also European Commission 2021). The
principle of “excellence” is based on the EU’s perception of “fierce global competition” in the
international AI environment and the goal of making Europe a world AI leader (European
Commission 2020, 1). The principle of “trust” is based on the EU’s concerns about the ethical
implications and “high-risk” applications of AI and the aim of making sure that AI is “trustworthy”
and “human-centric” (European Commission 2021a, 6–7). It also makes clear that the EU’s
understanding of “human-centric AI” is based on “European values” (European Commission 2020,
1), and hopes that the “European approach” to AI will be the foundation for “new ambitious global
norms” (European Commission 2021, 4–5).
In summary, Europe’s national and regional AI strategies seek to position Europe for global
AI leadership in a competitive international environment dominated by the United States and
China. The twin goals of AI “excellence” and “trust” serve the purpose of ensuring that Europe
has the technological capabilities to preserve its political independence, economic
competitiveness, and security and defence, and to assert its values and voice on the world stage.
Thus, the best way to understand the meaning of the AI revolution in Europe is from the perspective
of Europe’s fear of relative decline and the desire to ensure its relevance in the world. In the words
of President Macron:
When you look at artificial intelligence today, the two leaders are the US and
China… And Europe has not exactly the same collective preferences as US or
China. If we want to defend our way to deal with privacy, our collective preference
for individual freedom versus technological progress, integrity of human beings
and human DNA, if you want to manage your own choice of society, your choice
of civilization, you have to be able to be an acting part of this AI revolution. That’s
the condition of having a say in designing and defining the rules of AI. That is one
of the main reasons why I want to be part of this revolution and even to be one of
its leaders (Thompson 2018).
6.2.3 Great Power Ambitions: China and Russia’s National AI Strategies
China and Russia’s national AI strategies should be interpreted from the vantage point of their
respective great power ambitions. Both China and Russia are highly motivated by concerns about
prestige and status in international relations, especially great power status (Larson and Shevchenko
2010; Schweller and Pu 2011; Paul et al. 2014; Goldstein 2020). Importantly, they see a strong
relationship between great power status and technology leadership and are seeking to position
themselves as global AI leaders. At the 19th Party Congress in October 2017, China’s President, Xi
Jinping, articulated the aspiration for China to become a “science and technology superpower” (in
Ding 2018, 7). Russia’s President, Vladimir Putin, specifically linked AI leadership to great power
status—or even global hegemony—claiming that, “Whoever becomes the leader in this sphere will
become the ruler of the world” (Vincent 2017).
China’s national AI strategy stands out for its ambition to become the world AI leader (State
Council 2017). Before 2017, Chinese AI policy was addressed either at the level of local
governments or alongside other emerging technologies. China’s “Made in China 2025” and 13th Five-Year Plan
established the goal of managing China’s economic transition from the “world’s factory” based on
low-cost manufacturing to an advanced industrial economy based on high-tech manufacturing
(U.S. Chamber of Commerce 2017). The Chinese Communist Party (CCP) initially perceived AI
“merely as one technology among many” (Roberts et al. 2021, 60). In March 2016, China
experienced a “Sputnik Moment” when Google’s AlphaGo defeated Lee Sedol, the world
champion of “Go.” Since then, “China has caught AI fever” (Lee 2018).
On July 8, 2017, the State Council of the People’s Republic of China released its New
Generation of Artificial Intelligence Development Plan. China’s national strategy frames AI as a
revolutionary technology that “will profoundly change human social life and the world” (State
Council 2017, 1). It outlines China’s AI ambitions in three stages:
By 2020, China’s overall AI capabilities should “keep up” with the most advanced
countries and the “core” AI industry should be worth more than 150 billion yuan ($23.2
billion).
By 2025, China will achieve “world-leading level” in some AI capabilities and make AI
“the main driving force” in China’s economic and industrial transition towards an “artificial
intelligence society,” with a core AI industry valued at more than 400 billion yuan ($61.9
billion).
By 2030, China will become the world AI leader and the AI “innovation center of the
world,” with a core AI industry exceeding 1 trillion yuan ($154.7 billion) (State Council
2017, 5–6).
China’s AI strategy identifies four core national interests (State Council 2017, 2–3; also
Allen 2018; Roberts et al. 2021). The first is national security and international competition.
China’s national AI strategy is deeply shaped by its perception of an increasingly competitive
international system. China sees AI as a “strategic technology” with profound implications for its
national security interests:
Artificial intelligence has become the new focus of international competition.
Artificial intelligence is thought to be the strategic technology leading the future,
the world’s major developed countries are taking the development of AI as a major
strategy to enhance national competitiveness and protect national security… in the
new round of international science and technology competition. At present, China’s
national security and international competition situation are more complex, so we
must look at the world, layout the artificial intelligence development on the national
strategic level, grasp firmly the strategic initiative of international competition
during the new stage of artificial intelligence development, create new competitive
advantage, open up new spaces of development, and effectively protect national
security (State Council 2017, 2; emphasis added).
Indeed, the People’s Liberation Army (PLA) perceives AI as revolutionary military technology
that could fundamentally transform the nature of warfare (Kania 2017). The PLA’s pursuit of
military AI anticipates a transition from today’s “informationized” warfare to a future of
“intelligentized” warfare. For China, AI offers an opportunity to no longer play “catch up” in
military power (Gilli and Gilli 2018/2019), but rather to “leapfrog development” and deploy a
“trump card” in military technology in wars of the future (Kania 2017; Allen 2019). China’s
national AI strategy seeks to strengthen “military-civilian integration” to achieve timely and
effective advances in military AI capabilities for national security (State Council 2017, 5, 21). As
President Xi Jinping stated in 2017, “under a situation of increasingly fierce international military
competition, only the innovators win” (Kania 2020, 2).
The second core interest is economic growth and development. China’s strategy asserts
that AI “has become a new engine of economic development” and “the core driving force of a new
round of industrial transformation” (State Council 2017, 2). China’s strategy sees AI as a way to
produce “new technologies, new products, [and] new industries,” promote the structural transition
of the economy, and, more generally, “inject new kinetic energy into China’s economic
development” and “achieve the remarkable jump of social productivity as a whole.” To do so,
China’s AI strategy adopts its well-established economic practice of supporting “national
champions,” especially technology giants like Alibaba, Baidu, Huawei, and Tencent (Jing and Dai
2017; Roberts et al. 2021), as well as supporting the development of “tech parks” in its urban
centers, including Beijing, Shanghai, Guangdong, and Guizhou (Daxue Consulting 2020, 11). The
CCP has also begun wielding anti-monopoly laws in a “crackdown” against major technology
firms to ensure political supremacy over China’s technology industry and that national priorities
drive commercial innovations (Feng and Shen 2021; Yang 2021).
The third core interest is social governance. China’s AI strategy asserts that AI presents
“new opportunities for social construction” (State Council 2017, 2). China believes that it has made
significant progress towards “building a well-off society,” but still faces “grim” challenges, like
an aging population, environmental pressures, and resource constraints. China can use AI to more
accurately perceive, predict, and warn about trends in “social security,” which could become
“indispensable for the effective maintenance of social stability.” China sees an important role for
AI in areas such as law enforcement, judicial decision-making, and monitoring and surveillance.
In recent years, China’s system of AI-powered mass surveillance has attracted international
attention, especially its “social credit system” that ranks citizens in terms of “trustworthiness”
(Kobie 2019), and for the mass surveillance and detention of the Uyghur ethnic minority in
Xinjiang (Allen-Ebrahimian 2019; Buckley and Mozur 2019). More generally, China’s strategy
perceives the potential of AI to “promote social interaction and mutual trust” (State Council 2017,
21).
The final core interest concerns AI ethics, safety, and risks. China’s strategy notes that
“uncertainties” in the development of AI “bring new challenges” and that AI is a “disruptive
technology that can affect government management, economic security, social stability, and global
governance.” “While vigorously developing artificial intelligence,” the strategy asserts, “we
must attach great importance to the possible safety risk challenges, strengthen the forward-looking
prevention and restraint guidance, minimize risk and ensure the safe, reliable and controllable
development of artificial intelligence” (State Council 2017, 2–3). However, China’s AI strategy
does not make clear how it understands AI safety and risks.
On December 12, 2017, China’s Ministry of Industry and Information Technology (MIIT)
released a “Three-Year Action Plan for Promoting the Development of a New Generation of
Artificial Intelligence Industry.” The Action Plan provides short-term guidance on the
implementation of China’s national AI strategy, including four “action goals”:
Scale-up the development of key AI products;
Significantly enhance core competencies in AI;
Deepen the development of smart manufacturing;
Establish the foundation for an AI industry support system.
The Action Plan provides a list of R&D priorities for “smart” technologies (e.g., intelligent
networked vehicles, intelligent unmanned aerial vehicles, video image identification systems) and
“core” capabilities (e.g., smart sensors, neural network chips, open-source platforms), along with
benchmarks for measuring technological progress. Behind these priorities and benchmarks is the
aim to achieve an “international competitive advantage” or “internationally advanced level” in AI
technologies (Triolo et al. 2018, 6). The MIIT asserts that “a new round of technological revolution
and industrial revolution is under way” and that China must “seize the historical opportunity” of
AI to “speed up” China’s economic growth and development. The aim is for China to become a
“manufacturing superpower,” a “cyber superpower,” and a “science and technology superpower”
(Triolo et al. 2018, 3–5).
In sum, China’s national AI strategy identifies four core interests: national security,
economic development, social governance, and ethics and safety. China’s overall ambition of
becoming the world leader in AI serves the historical purpose of achieving the “Chinese Dream”
of great power revival—or, in the words of Xi Jinping, “the great rejuvenation of the Chinese
nation” (Greer 2019). Indeed, China’s New Generation AI Development Plan explicitly adopts this
central tenet of “Xi Jinping Thought” as its “guiding principle”:
Science and technology innovation ability is the main direction, developing an
intelligent economy, building an intelligent society, safeguarding national security,
building an ecosystem of knowledge groups, technology groups, industrial groups,
and the mutual support of talents, institutions, and cultures, and anticipating risk
challenges and promoting humanity. Sustainable development as the center of
intelligence, comprehensively enhance social productivity, comprehensive national
strength and national competitiveness, and provide for the acceleration of building
an innovative country and the world's technological power, achieving the goal of
"two hundred years" and the Chinese nation's great rejuvenation of the Chinese
dream (State Council 2017, 4; emphasis added).
Russia’s national AI strategy is also shaped by its great power ambitions. Russia adopted
its national AI strategy in October 2019 by a Presidential Decree, signed by Vladimir Putin.
Russia’s strategy sets the goal of making Russia a “leading AI power” by 2030 (Office of the
President of the Russian Federation 2019, 1), and affirms that the “priority directions” of the
development and use of AI shall be determined by the “national goals and strategic objectives” of
the Russian Federation (Office of the President 2019, 8).
The goals of the development of artificial intelligence in the Russian Federation
shall consist of ensuring the improvement of the well-being and quality of life of
its population, ensuring national security and rule of law, and achieving the
sustainable competitiveness of the Russian economy, including leading positions
the world over in the field of artificial intelligence (Office of the President 2019, 9;
emphasis added).
In addition to declaring that Russia’s AI “goals and primary objectives” must serve the
“purpose of protecting national interests and implementing strategic national priorities” (Office of
the President 2019, 3), Russia’s national AI strategy establishes certain “basic principles” that are
“obligatory” during its implementation (Office of the President 2019, 7–8):
Human rights
Security
Transparency
Technological sovereignty
Innovation cycle integrity
Reasonable thrift
Support for competition
Of relevance is Russia’s understanding of “technological sovereignty,” which the strategy defines
as “the assurance of the necessary level of Russian Federation self-sufficiency in the field of
artificial intelligence, including that achieved through the predominant use of domestic artificial
intelligence technologies and technological solutions” (Office of the President 2019, 7–8).
Russia’s national strategy seeks to strengthen state support to achieve Russian “leadership”
in AI technology (Office of the President 2019, 8–9). The strategy claims that Russia possesses
“considerable potential” to become an “international leader” in AI due to its national strengths in
physics, mathematics, and programming; a “state-of-the-art” information and communication
infrastructure and Internet access; and an “active and steadily growing community of [AI]
specialists” (Office of the President 2019, 6). However, it also recognizes the serious challenges
posed by a competitive international AI environment.
The few leading participants on the global artificial intelligence market are taking
active steps to ensure their dominance on this market and to gain lasting competitive
advantages by creating substantive barriers to the achievement of competitive
positions by other market participants… Taking into account the current situation
on the global artificial intelligence market and medium-range forecasts for its
development, the implementation of Strategy [sic] at hand is a necessary condition
for the Russian Federation’s entry into the group of world leaders in the field of the
development and introduction of artificial intelligence technologies, and
consequently, for the country’s technological independence and competitiveness
(Office of the President 2019, 7; emphasis added).
Overall, Russia’s national strategy defines its “national goals” and “strategic objectives”
for AI primarily in terms of national security, economic competitiveness, and technological
sovereignty, and sees a vital role for the state. The strategy suggests that if Russia’s efforts are
“inadequate,” then its “scientific and technological development will slow,” and ultimately its
economic and technological competitiveness will “lag behind” the rest of the world (Office of the
President 2019, 7). Russia sees a clear relationship between its great power ambitions and
international leadership in science and technology. Thus, Russia must seek to become a “world AI
leader” by 2030 and the state must take an active role to overcome the obstacles of a competitive
international AI environment.
6.3 Speculative, Hypothetical, and a Long Way Off
How has the narrative of humanity securitization shaped the ways in which states understand the
AI revolution? The short answer is that the securitization narrative about AGI/ASI and existential
AI risks has had a negligible impact on the thinking and policies of states. An examination of the
national, regional, and multilateral strategies of states—including a broader sample of both
national governments and intergovernmental organizations (see Appendix 10)—reveals that only
half of the state and intergovernmental actors make any mention of AGI/ASI or some comparable
AI system (e.g., “strong AI”) (see Figure 6.1). The other half are either unaware of expert
discussions about AGI/ASI or believe that they do not merit serious policy considerations.
Even the state or intergovernmental actors that do raise the possibility of AGI/ASI in the
future do not seem to see this as an important consideration for contemporary national or
international policy. In a few cases, states mention the possibility of AGI/ASI only in passing. For
example, Germany defines “strong AI” as a system that has “the same intellectual capabilities as
humans, or even exceed[s] them,” but offers no assessment of the probability, timing, or
consequences of such a development (Federal Government of Germany 2018, 4–5). Similarly, the
European Commission mentions the possibility of AGI/ASI only in a footnote of a report entitled
Ethics Guidelines for Trustworthy AI:
While some consider that Artificial General Intelligence, Artificial Consciousness,
Artificial Moral Agents, Super-intelligence or Transformative AI can be examples
of such long-term concerns (currently non-existent), many others believe these to
be unrealistic (High-Level Expert Group on Artificial Intelligence 2019, 35).
Clearly, the EU High-Level Expert Group on AI finds itself closer to the latter position, for
AGI/ASI receives no further attention in the report.
In general, states tend to invoke language that downplays the possibility of AGI/ASI. For
instance, Russia uses the term “prospective” (Office of the President of the Russian Federation
2019, 4), the United States uses “notional” (NSTC 2016b, 7), the OECD uses “hypothetical”
(OECD 2019b, 22), and the European Commission uses “unrealistic” to describe the possibility of
AGI/ASI (High-Level Expert Group on Artificial Intelligence 2019, 35). India’s national AI
strategy appears to take the possibility of AGI/ASI more seriously, but still concludes that, “While
big strides have been made in Artificial Narrow Intelligence… The weight of expert opinion is
that we are a long way off the emergence of General AI” (NITI Aayog 2018, 15). Similarly, the
U.S. National Science and Technology Council claims that progress towards AGI has
“made little headway” and that the “current consensus of the private-sector expert community,
with which the NSTC Committee on Technology concurs, is that General AI will not be achieved
for at least decades” (NSTC 2016b, 7). These assessments suggest that states understand AGI/ASI
as, if anything, only a long-term concern.
If the prospect of AGI/ASI has received little attention from states, then concerns about
existential AI risks have been almost entirely ignored (see Figure 6.1). When states and
international actors mention the possibility of AGI/ASI, they are generally ambivalent about the
potential risks. India mentions that actors like OpenAI have been established to “discover and
enact the path to safe artificial general intelligence,” but does not clarify the meaning of a “safe”
(or dangerous) AGI/ASI system (NITI Aayog 2018, 63). Although China’s national AI strategy
makes no mention of AGI/ASI, the “Beijing AI Principles” state that “continuous research on the
potential risks” of AGI/ASI “should be encouraged,” and that “strategic designs should be
considered to ensure that AI will always be beneficial to society and nature in the future” (BAAI
2019). However, the BAAI does not clarify whether the “potential risks” of AGI/ASI constitute
an existential threat to humanity. Russia suggests that AGI could lead to “positive changes” as well
as “negative consequences,” but does not elaborate on the nature of these consequences for
humanity (Office of the President of the Russian Federation 2019, 5–6). Instead, Russia suggests
that AGI/ASI should be one of the goals of national policy: “Basic scientific research must be
aimed at the creation of fundamentally new scientific results, including the creation of artificial
general intelligence (strong artificial intelligence)” (Office of the President 2019, 10).
Figure 6.1: AGI/ASI and Existential AI Risks
Notes: The sample includes the national AI strategies of Canada, China, France, Germany, India, Japan, Russia, the
United Kingdom, and the United States, as well as the regional and multilateral strategies of the European Union, G7,
G20, OECD, and UNESCO.
One OECD report, Artificial Intelligence in Society, claims that AI will cause “new
opportunities, risks and challenges,” and that the “possible advent” of AGI would “greatly amplify
these consequences” (OECD 2019b, 22).
Artificial narrow intelligence (ANI) or “applied” AI is designed to accomplish a
specific problem-solving or reasoning task… Applied AI is often contrasted to a
(hypothetical) AGI. In AGI, autonomous machines would become capable of
general intelligent action. Like humans, they would generalise and abstract learning
across different cognitive functions. AGI would have a strong associative memory
and be capable of judgment and decision making. It could solve multifaceted
problems, learn through reading or experience, create concepts, perceive the world
and itself, invent and be creative, react to the unexpected in complex environments
and anticipate. With respect to a potential AGI, views vary widely. Experts caution
that discussions should be realistic in terms of time scales. They broadly agree that
ANI will generate significant new opportunities, risks and challenges. They also
agree that the possible advent of an AGI, perhaps sometime during the 21st century,
would greatly amplify these consequences (OECD 2019b, 22; emphasis added).
The OECD report also mentions several concepts that are germane to expert concerns about
existential AI risks, such as “recursive self-improvement,” “value alignment,” and “human
control” (OECD 2019b, 142). However, the report avoids framing AI as an existential threat to
humanity and is generally vague about the possibility, timing, and consequences of AGI/ASI.
Instead, the OECD report only suggests the need to “avoid strong assumptions” about the “upper
limits” of future AI capabilities and to “plan carefully for the possible development of artificial
general intelligence (AGI)” (OECD 2019b, 141). A more recent OECD report in June 2021
dropped even this minimal nod to concerns about existential AI risks.
The United Nations has also refrained from framing AI as an existential threat to humanity.
In his remarks to the UN General Assembly on January 22, 2020, UN Secretary-General António
Guterres referred to “geopolitical tensions, the climate crisis, global
mistrust and the downsides of technology” as the “four horsemen” that “endanger 21st-century
progress and imperil 21st-century possibilities.” In this speech, Guterres described AI as part of
the “dark side of the digital world”:
Artificial intelligence is generating breathtaking capacities and alarming
possibilities. Lethal autonomous weapons—machines with the power to kill on
their own, without human judgement and accountability—are bringing us into
unacceptable moral and political territory (Guterres 2020).
Unlike the “existential climate crisis,” however, Guterres frames AI as the less-than-existential
threat of lethal autonomous weapons systems (Guterres 2020). Similarly, UNESCO Director-
General Azoulay has described the “AI revolution” as the “dawn of a new era” and “humanity’s
new frontier” that “will lead to a new form of human civilization.” However, Director-General
Azoulay makes only an ambiguous reference to existential AI risks—that “the guiding principle
of AI is not to become autonomous or replace human intelligence”—while stating that AI offers
“tremendous opportunities for achieving the Sustainable Development Goals” (Azoulay 2021).
Importantly, nowhere in UNESCO’s draft “Recommendation on the Ethics of Artificial
Intelligence” is there any mention of AGI/ASI, or suggestion that AI could pose an existential
threat to humanity.
The only states that explicitly mention the expert concerns about existential AI risks are
France and the United States. France mentions the “existential threat” of AI in the Villani Report,
but only to disregard the danger as “speculative” (Villani 2018, 14, 113).
In different parts of the world, experts, regulators, academics, entrepreneurs and
citizens are discussing and sharing information about the undesirable effects [of
AI]—current or potential—caused by their use and about ways to reduce them…
Aside from these purely speculative considerations concerning AI’s “existential
threats” to humanity, debates tend to crystallize around the “everyday” algorithms
(Villani 2018, 113; emphasis added).
The United States also acknowledged the existence of expert concerns about existential AI risks
from “super-intelligent machines” in the 2016 NSTC report, Preparing for the Future of Artificial
Intelligence.
People have long speculated on the implications of computers becoming more
intelligent than humans. Some predict that a sufficiently intelligent AI could be
tasked with developing even better, more intelligent systems, and that these in turn
could be used to create systems with yet greater intelligence, and so on, leading in
principle to an “intelligence explosion” or “singularity” in which machines quickly
race far ahead of humans in intelligence. In a dystopian vision of this process, these
super-intelligent machines would exceed the ability of humanity to understand or
control. If computers could exert control over many critical systems, the result
could be havoc, with humans no longer in control of their destiny at best and extinct
at worst (NSTC 2016b, 8; emphasis added).
However, the NSTC ultimately downplays these “longer-term speculative risks” about AGI/ASI.
Thus, policymakers in France and the United States recognize the existence of expert concerns
about existential AI risks, but only to downplay or dismiss them.
In short, the narrative of humanity securitization on AI has failed to significantly impact
the ways in which states and intergovernmental organizations understand the meaning of the AI
revolution. When states do raise concerns about “AI safety” and “risks,” or when they endorse
normative principles such as “human-centric” and “trustworthy” AI, they tend to frame these issues
in terms of the societal implications of AI for, inter alia, human rights, data privacy, economic
disruption, or diversity and inclusion. In other words, states understand questions of AI safety,
ethics, and risks as falling within the realm of normal politics—i.e., the distribution of costs and
benefits of AI between and within societies—not as an existential threat to human survival. Despite
growing concerns about existential AI risks within the AI community, the securitization narrative
of humanity securitization has been unable to persuade an international audience of states to
seriously consider AI as an existential threat, much less take extraordinary action for the security
of humanity. Thus, the macrosecuritization of AI has failed.
6.4 The Sources and Dynamics of Macrosecuritization Failure
As the previous sections have shown, artificial intelligence has become the subject of two
competing securitization narratives, humanity securitization and national securitization. On the
one hand, humanity securitization frames humanity as its referent object, the creation of an
AGI/ASI system that matches or exceeds human intelligence as an existential threat, and the need
for a paradigm shift in AI R&D—or even abandoning the “AI dream” altogether—as
extraordinary measures for the safety and security of humankind. On the other hand, national
securitization frames the nation as its referent object, national technological decline and defeat in
a global “AI race” as a threat to national security, and the urgent mobilization of national resources
for global AI leadership as extraordinary measures to ensure national power, prosperity, and
security. The national AI strategies of the leading states in the international system reveal the
failure of the narrative of humanity securitization in international relations: not only have states
refrained from taking extraordinary measures to reduce existential AI risks (i.e., a failure of action),
but they rarely even recognize that AI could pose an existential threat to humanity (i.e., a failure
of discourse).
Why has the macrosecuritization of AI failed in international relations?
Macrosecuritization has failed both because states have rejected humanity securitization as a
legitimate securitization narrative on AI, and because the narrative of national securitization has
taken hold over the thinking and action of states. The theoretical framework developed here
explains the sources and dynamics of macrosecuritization failure on AI in terms of three variables,
which have ultimately favored national securitization as the dominant securitization narrative on
AI amongst the great and major powers. The first system-level force is an unstable distribution of
power in the international system, driven by the differential growth in the relative capabilities of
the United States and China and a structural transition from unipolarity towards bipolarity, which
is fueling the growing intensity of great power rivalries and “technology competition” in
international relations. The second state-level driver is that some of the principal securitizing actors
in the domestic security constellations of the great powers with significant influence over national
AI strategies—particularly within the national security apparatuses and scientific advisory
bodies—have embraced the narrative of national securitization. The third individual-level
dynamics are the beliefs and perceptions of political leaders and policymakers in the great and
major powers, whereby policymakers appear to be skeptical about AGI/ASI and existential AI
risks, while the idea of a “race” for global leadership has become central to their understanding of
the “AI revolution” in international relations.
6.4.1 The Instability of a Post-Unipolar World
The first system-level driver behind the macrosecuritization failure of AI is the structural
environment of an unstable distribution of power in the international system, which is fueling the
resurgence of great power rivalries and leaving the great and major powers more susceptible to a
securitization narrative of national securitization. The principal factor shaping the national AI
strategies of the leading states in the international system is concern about their relative
capabilities and position in an era of intensifying international competition, especially over
emerging technologies. The United States sees itself as the “world leader” in AI and its national
strategy seeks to maintain American AI supremacy against emerging great power competitors,
especially from China. The Europeans see themselves as falling behind the United States and
China and their national and regional AI strategies aim to strengthen regional cooperation in order
to reverse the trajectory of relative technological decline and ensure that Europe remains a
powerful “voice” in the world. China and Russia both have great power ambitions and their
national AI strategies seek global AI leadership.
Behind the intensity of great power rivalry is an unstable distribution of power in the
international system, characterized by the decline of American hegemony and a structural
transition to a post-unipolar world. There is a longstanding debate about the “durability” of the
“unipolar moment” (Krauthammer 1990/91): one side asserts that U.S. primacy is robust and
American hegemony enduring (Wohlforth 1999; Posen 2003; Brooks and Wohlforth 2008;
2015/16; Beckley 2011/12; 2018), while the other side contends that the United States faces
hegemonic decline and the inevitable rise of great power rivals (Kennedy 1987; Layne 1993; 2006;
Waltz 1993; 2000; Zakaria 2008; Mahbubani 2008; Kupchan 2012; Acharya 2017). Regardless of
whether the international system is indeed in the midst of a structural transition towards bipolarity
or multipolarity, the foreign policies of the great and major powers are increasingly shaped by the
premise of great power competition (Goddard and Nexon 2016; Kroenig 2020). The United States
has affirmed that political, economic, and military competition with great/major power rivals like
China and Russia poses the principal challenge to American national security and prosperity (NSS
2017, 2; NDS 2018, 2; The White House 2021, 6). China and Russia are also unmistakably
motivated by the interests of achieving great power status, contesting American (“liberal”)
hegemony, and restoring a multipolar balance-of-power (Larson and Shevchenko 2010; Schweller
and Pu 2011; Paul et al. 2014; Wright 2015; Bettiza and Lewis 2020). Russian foreign policy under
Vladimir Putin has become increasingly aggressive, including the use and threat of military force
against neighboring states and “grey zone” competition with the United States (Nitoiu 2017; Gotz
2017). Similarly, Chinese foreign policy under Xi Jinping has become more assertive (Friedberg
2014), moving from the guiding principle of “peaceful rise/development” (Bijian 2005) to the
“great rejuvenation of the Chinese nation” (Jinping 2017; Goldstein 2020). The structural
dynamics of great power politics reflect the pattern of the “rise and fall of great powers” (Kennedy
1987), whereby there is a conflict between the “status quo” interests of a hegemonic state
concerned with halting its relative decline and preserving dominance, and the “revisionist”
interests of rising powers with great power aspirations (Morgenthau 1948; Gilpin 1981; Schweller
1994; Mearsheimer 2001; Allison 2017).
One of the central dynamics of contemporary great power rivalry is its emphasis on
technology competition (Kennedy and Lim 2018; Steff et al. 2020; Allison et al. 2021). Nowhere
is this more obvious than in the “race” to become the world AI leader (Horowitz 2018; Lee 2018;
Allison 2019; Hwang 2020; Scharre 2021). Importantly, the global distribution of AI capabilities
reflects the pattern of differential growth between the great and major powers. While the United
States has maintained its overall leadership position in the international AI environment, it faces
serious competition from both China and the European Union (Ding 2018; OECD 2019c; Arnold
2020; Castro and McLaughlin 2021) (see Figures 6.3 and 6.4). For example, one recent report by
the Center for Data Innovation compared the relative AI capabilities of the United States, China,
and the European Union, and found that “the United States still holds a substantial overall lead,
but that China has continued to reduce the gap in some important areas.” It concluded that “absent
significant policy changes” in the United States, it is “likely” that China “will eventually close the
gap with the United States” (Castro and McLaughlin 2021, 1).
Figure 6.2: The International Distribution of AI Capabilities
Many AI analysts already see China as a rival to the United States (Kania 2017; Ding 2018;
Allen 2019; Castro and McLaughlin 2021; Roberts et al. 2021), and some have even called China
an “AI superpower” (Lee 2018; Westerheide 2020). As Kai-Fu Lee (2018) writes:
Not long ago, China lagged years, if not decades, behind the United States in
artificial intelligence. But over the past three years… Chinese AI companies and
researchers have already made up enormous ground on their American
counterparts, experimenting with innovative algorithms and business models that
promise to revolutionize China’s economy. Together, these businesses and scholars
have turned China into a bona fide AI superpower, the only true national
counterweight to the United States in this emerging technology (Lee 2018, 1;
emphasis added).
These analysts point to metrics where China has already overtaken the United States—such as
published research papers and patent applications (CISTP 2018; Castro and McLaughlin 2021)—
as evidence of China’s relative AI growth. They offer examples where China has surprised the
world with cutting-edge AI capabilities, such as Chinese researchers winning international contests
on computer vision and facial recognition (Kania 2017). And they point to China’s “structural
advantages” in AI R&D, including substantial government funding and investment, a large
domestic pool of researchers, a protected market for indigenous technology firms, a model of civil-
military integration, and vast amounts of data and weak privacy rights (Kania 2017; Lee 2018;
CRS 2020).
Not surprisingly, the United States sees China as “by far” its “closest competitor” in the
international AI environment (CRS 2020, 21). The U.S. Government has taken actions to slow
China’s relative growth through an economic strategy of “decoupling” the American and Chinese
technology sectors. It has introduced a variety of coercive measures against China, including
economic sanctions against Chinese tech companies (Swanson et al. 2019), a ban on American
investments in PLA-linked companies (Pamuk et al. 2020), restrictions on semiconductors and
technology transfers (Borak 2021; Feng and Pan 2021), and even the denial of student visas and
surveillance of faculty and graduate students of Chinese origin (Qin 2021). Furthermore, the United
States has taken steps to increase its relative AI capabilities by strengthening international
partnerships, including through a bilateral US-UK “Strategic Pact” on AI (Cureton 2020), the
decision to join the “Global Partnership on AI,” the launching of the American-led AI “Partnership
for Defense” with 13 allied nations (JAIC 2020), and establishing a US-EU Trade and Technology
Council to reduce the technology industry’s “dependency” on China (Magnier and Berming 2021).
In 2021, the United States passed major legislation making large investments in the American
technology industry, including a $250 billion industrial policy bill and a $1.2 trillion infrastructure
bill, which received bipartisan support as necessary measures to compete with China (Sanger et al.
2021; Tankersley 2021). These actions have been taken in the interest of American national
2021; Tankersley 2021). These actions have been taken in the interest of American national
security. In the words of President Biden:
The world’s leading powers are racing to develop and deploy emerging
technologies, such as artificial intelligence and quantum computing, that could
shape everything from the economic and military balance among states to the
future of work, wealth, and inequality within them… America must reinvest in
retaining our scientific and technological edge and once again lead, working
alongside our partners to establish the new rules and practices that will allow us to
seize the opportunities that advances in technology present (Biden 2021, 8–9;
emphasis added).
For its part, China has responded to the shift in American foreign policy and a less favorable
international environment with a push to achieve “technological self-sufficiency” (or “technology
security”) under its 14th Five-Year Plan (Wang et al. 2020; Shen 2021a; 2021b), including through
a major increase in R&D spending by 7% per year between 2021 and 2025 (Kharpal 2021). China
has also taken steps to restructure domestic markets and discipline technology firms to ensure that
they follow the CCP’s priorities (McDonald and Soo 2021). It has created “supply chain chiefs”
to oversee AI R&D and secure access to semiconductors and other crucial supplies (Qu 2021).
President Xi Jinping has declared that China must be prepared for “unprecedented” competition
over science and technology, which has become the “main battleground” for great power
competition (Feng 2021).
In sum, the structural context behind the AI revolution is an unstable distribution of power
in the international system, characterized by the decline of American hegemony and transition to
a post-unipolar world. The main structural force behind this instability is the differential growth in
power, where the changing nature and distribution of emerging technologies like AI can have a
profound effect on the relative power and position of states. This instability in the distribution of
power is fueling the growing intensity of great power rivalry, in which technology competition
over emerging capabilities like AI has taken root as one of its core dynamics. Overall, the
resurgence of great power rivalry has left states highly susceptible to a securitization narrative of
national securitization, which frames AI as a strategic technology and an essential source of
national power, prosperity, and security in the twenty-first century.
6.4.2 The Influence of the National Security Apparatus
The second state-level factor behind macrosecuritization failure is the adoption of a securitization
narrative of national securitization by powerful securitizing actors with a direct influence over the
development and implementation of the national AI strategies of the great and major powers. In
the United States, the White House, the Department of Defense, the National Science and
Technology Council, and even Congress have all accepted an AI narrative of national
securitization. In China, the State Council, the Ministry of Industry and Information Technology,
and the People’s Liberation Army have taken on the narrative of national securitization. And in
Europe, the EU Commission, the High-Level Expert Group on AI, and leading nations like France,
Germany, and the United Kingdom have all adopted a discourse of national securitization.
Conversely, the principal securitizing actors behind the narrative of humanity securitization are
individual experts (e.g., Nick Bostrom and Stuart Russell), research institutes (e.g., CSER, CHAI,
FHI, FLI, MIRI, and SERI), and technology companies (e.g., OpenAI) within the AI community.
Not only does the AI community lack a coherent and coordinated security narrative—e.g., about
the nature and timing of existential AI risks or how to mitigate them—but it also lacks the
structural and productive power of state bureaucracies as securitizing actors (Buzan et al. 1998;
Barnett and Duvall 2005). Thus, the balance of securitizing actors who possess the power and
authority to “speak” and “do security” on AI within the domestic security constellations of the
great and major powers strongly favours the narrative of national securitization.
Two types of state actors stand out for their influence over the national AI strategies of the
great and major powers: the military and scientific councils. In the United States, the DoD has
taken an increasingly hawkish perspective towards great power competition and the need to
maintain American technological supremacy as one of the pillars of national power, prosperity,
and security, particularly on AI. The DoD has established the JAIC and developed its own AI
Strategy, which clearly invokes the narrative of national securitization.
Our adversaries and competitors are aggressively working to define the future of
these powerful technologies according to their interests, values, and societal
models. Their investments threaten to erode U.S. military advantage, destabilize
the free and open international order, and challenge our values and traditions with
respect to human rights and individual liberties. The present moment is pivotal: we
must act to protect our security and advance our competiveness [sic], seizing the
initiative to lead the world in the development and adoption of transformative
defense AI solutions that are safe, ethical, and secure… The speed and scale of the
changes required are daunting, but we must embrace change if we are to reap the
benefits of continued security and prosperity for the future (DoD 2018b, 17;
emphasis added).
The U.S. Intelligence Community has also warned about the national security implications of AI,
especially as China—the United States’ “primary strategic competitor”—challenges American
leadership in emerging technologies (Office of the Director of National Intelligence 2021, 20).
Considering the influence that DoD and the Intelligence Community hold over American national
security policy, it is not surprising that their embrace of a national securitization narrative has had
a powerful effect in shaping the American national AI strategy.
In China, the PLA has exercised a similar influence over national policy (Kania 2017; Allen
2019). China’s leadership sees AI as the key to transforming the PLA into a “world-class military”
(Fedasiuk et al. 2021). The PLA’s strategic thinking already refers to the operational concept of
“intelligentized” warfare as the next military-technological revolution to replace “informatized”
warfare (Kania 2017, 4; 161). According to China’s 2019 Defense White Paper (China’s National
Defense in the New Era):
Driven by the new round of technological and industrial revolution, the application
of cutting-edge technologies such as artificial intelligence (AI), quantum
information, big data, cloud computing and the Internet of Things is gathering pace
in the military field. International military competition is undergoing historic
changes. New and high-tech military technologies based on IT are developing
rapidly… War is evolving in form towards informationized warfare, and intelligent
warfare is on the horizon (emphasis added).
While China has made “great progress” towards a “Revolution in Military Affairs with Chinese
characteristics,” the PLA “still lags far behind the world’s leading militaries,” and therefore
“technology surprise” and a “growing technological generation gap” are a threat to Chinese
national security (State Council 2019). The PLA is therefore investing heavily in AI and related
capabilities, such as drones, cloud computing, big data analytics, and quantum information (Office
of the Secretary of Defense 202, 161). Moreover, China’s policy of “Military-Civil Fusion”
requires all Chinese technology firms—especially “national champions,” like Huawei and
Tencent—to share technology and information with the military and intelligence agencies
(Fedasiuk et al. 2021). Overall, the PLA has become a powerful securitizing actor behind the
national securitization of AI in China’s domestic security constellation.
The other powerful securitizing actors that have taken on an important role in crafting
national AI strategies are scientific and technological councils and advisory bodies within
governments. In the United States, the NSTC took the reins on the early development of the
American national AI strategy with its “Strategic AI R&D Plan.” The Strategic Plan framed AI as
a “transformative technology” with the “potential to revolutionize how we live, work, learn,
discover, and communicate.” Importantly, the NSTC makes an explicit connection between the
goal of maintaining the U.S. “world leadership position” in AI R&D and the protection of its vital
national interests, “including increased economic prosperity, improved educational opportunities
and quality of life, and enhanced national and homeland security” (NSTC 2016b, 3, 15). In China,
the MIIT developed a three-year plan to promote the development of a “new generation AI
industry,” based on the “guiding ideology” of Xi Jinping Thought—i.e., the “great rejuvenation of
the Chinese nation”—and the goal of making China a “science and technology superpower” (MIIT
2018).
In France, the Parliamentary Mission led by Cédric Villani was even more explicit in taking
on a narrative of national securitization:
Artificial intelligence is one of the keys to power in tomorrow’s digital world…
France and Europe need to ensure that their voices are heard and must do their
utmost to remain independent. But there is a lot of competition: The United States
and China are at the forefront of this technology and their investments far exceed
those made in Europe… Considering that France and Europe can already be
regarded as “cybercolonies” in many aspects, it is essential that they resist all
forms of determinism by proposing a coordinated response at [the] European level.
This is why the role of the State must be reaffirmed: market forces alone are proving
an inadequate guarantee of true political independence… Now more than ever, we
have to provide a meaning to the AI revolution (Villani 2018, 6; emphasis added).
In short, both the national security bureaucracies and scientific and technology councils within the
great and major powers have embraced a security narrative of national securitization on artificial
intelligence. Their power and authority within the domestic security constellations of states have
guaranteed that this securitization narrative would exert a strong influence over the development
and implementation of the national AI strategies of the great and major powers.
6.4.3 The Skepticism of Policymakers
The third individual-level dynamic behind macrosecuritization failure concerns the beliefs and
perceptions of political leaders, especially the skepticism of policymakers towards AGI/ASI and
existential AI risks. The absence of epistemic consensus about AGI/ASI and existential AI risks
has significantly weakened the influence of the narrative of humanity securitization over
policymakers. While scientific uncertainty and expert debate may be inherent to all technological
revolutions, nowhere is this more apparent than within the field of artificial intelligence. Indeed,
there is even a lack of consensus about the meaning of the term “artificial intelligence,” with many
experts preferring the term “machine learning.”
Scientific uncertainty and debate surround the question of whether AI may one day match
or exceed human intelligence and what it could mean for humanity (Everitt et al. 2018; Duettman
ed. 2017). One 2014 survey, conducted by Vincent Muller and Nick Bostrom (2014), aimed to
discover the views of AI experts about the timing and consequences of “high-level machine
intelligence” and “superintelligence.” The survey found that:
The median estimate of respondents was for a one in two chance that high-level
machine intelligence will be developed around 2040-2050, rising to a nine in ten
chance by 2075. Experts expect that systems will move on to superintelligence in
less than 30 years thereafter. They estimate the chance is about one in three that
this development turns out to be ‘bad’ or ‘extremely bad’ for humanity (Muller and
Bostrom 2014, 1; emphasis added).
Another 2018 survey found there to be strongly divergent views about the possibility, timing, and
consequences of “high-level machine intelligence” (Grace et al. 2018). On the question of timing,
the survey asked AI experts “When will AI exceed human performance?” and found that
“researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years [or
2061]” (Grace et al. 2018, 729, 733). In terms of consequences, the survey revealed that the
majority of AI experts believed that AI would lead to “positive outcomes,” but that “catastrophic
risks” were possible: “The median probability was 25% for a ‘good’ outcome and 20% for an
‘extremely good’ outcome. By contrast, the probability was 10% for a bad outcome and 5% for an
outcome described as ‘Extremely Bad’ (e.g., human extinction)” (Grace et al. 2018, 733). A recent
survey asked AI safety and risk researchers for their views about various existential AI risk
scenarios and found there to be “considerable disagreement among researchers about which risk
scenarios are most likely”—although this does not imply that researchers believed existential AI
risks to be small or negligible (Clarke et al. 2021).
Questions about the possibility, timing, and consequences of AGI/ASI—especially
existential AI risks—are both polemical and polarizing within and beyond the AI community. As
Margaret Boden (2016, 147–148) writes:
This notion [of AGI/ASI] is hugely contentious. People disagree about whether it
could happen, whether it will happen, when it might happen, and whether it would
be a good or bad thing. Singularity believers (S-believers) argue that AI advances
make the Singularity inevitable. Some welcome this. They foresee humanity’s
problems solved. War, disease, hunger, boredom, even personal death… all
banished. Others predict the end of humanity—or anyway, of civilized life as we
know it… By contrast, the Singularity skeptics (S-skeptics) don’t expect the
Singularity to happen—and certainly not in the foreseeable future. They allow that
AI provides plenty to worry about. But they don’t see an existential threat.
The controversy over AI has led to both vocal proponents (Omohundro 2009; Bostrom 2014;
Sotala and Yampolskiy 2015; Shanahan 2015; Tegmark 2017; Hawking 2018; Russell 2019) and
opponents of the narrative of humanity securitization (Boden 2016; Walsh 2017; Pinker 2018; Lee
2018). The absence of epistemic consensus on the feasibility of AGI/ASI and whether AI poses an
existential threat to humanity means that the AI community is essentially divided between
securitizing actors and de-securitizing actors.
Not surprisingly, expert uncertainty about the future of AI is mirrored in the technocratic
assessments of states. For instance, France’s Villani Report states that “There are considerable
uncertainties on the effects of the development of artificial intelligence, automation and robotics”
(Villani 2018, 11), and that “it goes without saying that much remains uncertain, that the lines are
moving constantly and that the forecasts constantly need clarifying” (Villani 2018, 85). The EU’s
High-Level Expert Group also points to the difficulty of “extrapolating into the future with a longer
time horizon,” noting that “the high-impact nature of these concerns [about AGI/ASI], combined with
the current uncertainty in corresponding developments, calls for regular assessments of these
topics” (European Commission 2019, 35). According to the American NSTC:
[A] survey of AI researchers found that 80 percent of respondents believed that
human-level General AI will eventually be achieved, and half believed it is at least
50 percent likely to be achieved by the year 2040. Most respondents also believed
that General AI will eventually surpass humans in general intelligence. While these
particular predictions are highly uncertain… such surveys of expert judgment are
useful, especially when they are repeated frequently enough to measure changes in
judgment over time (NSTC 2016a, 23).
Clearly, the absence of epistemic consensus about AGI/ASI within the AI community has had an
impact on the beliefs and perceptions of policymakers. But scientific uncertainty alone does not
explain why policymakers have leaned towards skepticism rather than caution with respect to
existential AI risks. After all, in other instances—including biological weapons—policymakers
have responded to scientific uncertainty about an existential threat through the logic of the
“precautionary principle” (Clarke 2006): that is, taking immediate action for safety and security
rather than waiting to see if something poses an existential threat (Aradau and Munster 2007).
Scientific uncertainty and lack of consensus mean that predictions about AI—even
amongst experts—may seem more like speculation than science, fancy than fact. At least two
factors may account for the high degree of skepticism amongst policymakers about existential AI
risks. The first is that AI experts have a poor track record for predictions about AGI/ASI. At the
1956 Dartmouth Workshop, 10 scientists gathered for a summer with the purpose of creating an
AGI-like system: “We think that a significant advance can be made in one or more of these
problems if a carefully selected group of scientists work on it together for a summer” (McCarthy
et al. 1955, 1). They failed. Since then, progress in AI has experienced recurrent “summers” and
“winters,” and AI experts have frequently erred in their predictions about AGI/ASI. For example,
I. J. Good (1965, 78) claimed: “It is more probable than not that, within the twentieth century, an
ultraintelligent machine will be built and that it will be the last invention that man [sic] need
make.” Hans Moravec (1988, 6) predicted that “Robots with human intelligence will be common
within fifty years.” And Vernor Vinge (1993, 11) asserted that “Within thirty years, we will have
the technological means to create superhuman intelligence. Shortly after, the human era will be
ended.” The history of “AI winters” and failed predictions cast doubt on contemporary concerns
about AGI/ASI amongst experts and policymakers alike.
The second is what might be called the “Hollywood effect.” As Yorick Wilks (2017, 65)
writes, the idea of “superintelligence” has been a “staple of science fiction” and inevitably
influences the “pessimistic view about AI’s future.” Many science fiction films are based on plots
about humanity’s loss-of-control over AI, including 2001: A Space Odyssey (1968), Her (2013),
and Ex Machina (2014). Most of the science fiction movies that depict AI as an existential threat
to humanity—like Terminator (1984) or I, Robot (2004)—commit the error of anthropomorphizing
the nature or goals of AI. As Stuart Russell (2016, 58) writes:
Hollywood’s theory that spontaneously evil machine consciousness will drive
armies of killer robots is just silly. The real problem relates to the possibility that
AI may become incredibly good at achieving something other than what we really
want.
There are a few films that come closer, such as Disney’s WALL-E (2008), which depicts Bill Joy’s
concern of humanity entrusting its fate to AI decision-making in a world of technological
complexity, or Dr. Strangelove (1964), which depicts the dangers of automation for nuclear
deterrence in the “doomsday machine” (Geist and Lohn 2018; Boulanin ed. 2019).
Even when science fiction movies accurately reflect expert concerns, they may contribute
to shared beliefs that existential AI risks are purely fictional, providing ammunition to skeptics and
making it harder for policymakers to take these concerns seriously. According to France’s Villani
Report:
Its visionary nature makes AI one of the most fascinating scientific endeavors of
our time; and as such its development has always been accompanied by the wildest,
most alarming and far-fetched fantasies that have deeply colored the general
population’s ideas about AI and the way researchers themselves relate to their own
discipline. (Science) fiction, fantasy and mass projections have accompanied the
development of artificial intelligence and sometimes influence its long-term
objectives… [I]t is probably this relationship between fictional projections and
scientific research which constitutes the essence of what is known as AI (Villani
2018, 4; emphasis added).
The Report dismisses fears about the “omnipotence of artificial intelligence” as “fantasies” and
“existential threats” as “purely speculative” (Villani 2018, 5, 113). Instead of this “gloom-
mongering approach,” the Villani Report recommends that, “We need to address this challenge
head-on, without succumbing to panic, in spite of the major uncertainties weighing down on us”
(Villani 2018, 81–82).
The American NSTC report, Preparing for the Future of Artificial Intelligence, provides a
window into the U.S. Government’s thinking about existential AI risks. The NSTC acknowledges
the concerns of AI experts and “influential industry leaders” that super-intelligent machines
“would exceed the ability of humanity to understand or control” (NSTC 2016b, 7–8; original
emphasis), but compares this “dystopian vision” to scenarios that have “long been the subject of
science fiction stories.” Similarly, the NSTC’s National AI R&D Strategic Plan mentions the risk
that AI may “eventually” be capable of “recursive self-improvement” and identifies the need for
research on the “safety of self-modifying systems” and on “long-term AI safety and value-
alignment” (NSTC 2016a, 28, 30)—an esoteric way of talking about AGI/ASI and existential AI
risks. The report also cites many of the leading AI experts who have voiced concerns about
existential AI risks, including Stuart Russell, Max Tegmark, and Roman Yampolskiy (NSTC 2016a,
28; footnote 93). However, the NSTC reframes the securitization narrative about existential AI
risks in terms of the “normal” language of AI safety, including the “research challenge” of
improving “explainability” and “transparency” and the need for AI systems to be designed to be
“reliable, dependable, and trustworthy” (NSTC 2016a, 27–28).
The NSTC came to the following conclusion about how concerns about existential AI risks
should factor into the American national AI strategy:
The NSTC Committee on Technology’s assessment is that long-term concerns
about super-intelligent General AI should have little impact on current policy. The
policies the Federal Government should adopt in the near-to-medium term if these
fears are justified are almost exactly the same policies the Federal Government
should adopt if they are not justified. The best way to build capacity for addressing
the longer-term speculative risks is to attack the less extreme risks already seen
today, such as current security, privacy, and safety risks, while investing in research
on longer-term capabilities and how their challenges might be managed.
Additionally, as research and applications in the field continue to mature,
practitioners of AI in government and business should approach advances with
appropriate consideration of the long-term societal and ethical questions—in
addition to just the technical questions—that such advances portend. Although
prudence dictates some attention to the possibility that harmful super-intelligence
might someday become possible, these concerns should not be the main driver of
public policy for AI (NSTC 2016b, 8; emphasis added).
In effect, the NSTC’s recommendation to the U.S. Government is that the American national AI
strategy should not be based on concerns about the survival of humankind—at least not today.
Although the NSTC does not dismiss these concerns entirely, it comes to the—highly
questionable²—conclusion that American national AI policy would be “almost exactly the same”
regardless of whether AI poses an existential threat to humanity. While “prudence” suggests that
policymakers should pay “some attention” to concerns about existential AI risks, this should
mostly be in the form of “research” as the “field continue[s] to mature.” The NSTC therefore
understands existential AI risks as a long-term concern and a technical problem for research. In
short, it is a matter of normal politics, not extraordinary action.
The absence of epistemic consensus about AGI/ASI within the AI community has led to
skepticism amongst policymakers, who either downplay or dismiss concerns about existential AI
risks. Today, the political leaders and policymakers of the great and major powers do not perceive
AI as an existential threat to humanity that requires extraordinary action for security and survival.
6.5 Conclusion
There is a growing consensus that the world is in the midst of an “AI revolution,” which has the
potential to transform human societies. Not surprisingly, AI has shifted from being a technical
domain of interest only to computer scientists and technology firms to a serious concern of states
and a matter of international relations. Within just a few short years, dozens of states—including
all the great and major powers—have produced national AI strategies that establish ambitious
goals and interests, mobilize public funding and resources, and create new agencies and regulations
to position themselves for global AI leadership.

² This presumes that current policy decisions will not have unintended long-term consequences that make
it more difficult to address existential AI risks should AGI/ASI become possible in the future. This ignores
the possibility of, for example, making decisions today that lead to a self-reinforcing and competitive
international AI environment, which undermines the possibility of international cooperation on AGI/ASI
safety and security in the future.
There is, however, a contradiction between how some within the AI community and many
national governments understand the stakes of the AI revolution. As this chapter has shown, AI
has become the subject of two competing securitization narratives, which frame the referent object,
threat, and security measures with respect to AI in different ways. On the one hand, the narrative
of humanity securitization by a growing number of AI experts voices serious concerns about
existential AI risks (Bostrom 2014; Shanahan 2015; Tegmark 2017; Russell 2019). In essence, this
narrative claims that progress in AI could culminate in the creation of an AI system that matches
or exceeds human intelligence—i.e., artificial general intelligence or superintelligence,
respectively—at which point the AI could become misaligned with human values and beyond
human control, an existential threat to human survival. For this reason, humanity should take the
AI control problem seriously and prioritize action to reduce existential AI risks, whether this means
changing the paradigm for AI R&D (Russell 2019) or abandoning the AI dream all together (Joy
2000). On the other hand, the narrative of national securitization emphasizes the implications of
AI for the national power, prosperity, and security of states. This narrative frames AI as a
revolutionary strategic technology that is becoming an essential source of national power and
security in a competitive international system. Therefore, the state must take urgent action in
mobilizing national resources to ensure global AI leadership; for victory in the AI race promises
national power and prosperity, while defeat could threaten the very survival of the state or its way
of life.
The discourse and content analysis of national and regional AI strategies shows that
national securitization has become the dominant securitization narrative on AI, while the narrative
of humanity securitization has had little to no impact on how states understand the AI revolution
in international relations. Indeed, not only have states refrained from taking extraordinary
measures to reduce existential AI risks (i.e., a failure of action), but they rarely even recognize that
AI could pose an existential threat to humanity (i.e., a failure of discourse). Instead, the primary
concern shaping the thinking and actions of the great and major powers is their relative power and
position in a competitive international AI environment. The United States sees itself as the world
AI leader and seeks to maintain AI supremacy against emerging great power rivals, particularly
China. The European nations perceive themselves as falling behind the United States and China
and seek to strengthen regional AI cooperation to reverse their decline and ensure their relevance
in the world. China and Russia want to become global AI leaders in order to achieve their great
power ambitions. While most of the leading states do raise concerns about AI risks, or endorse
normative principles like “human-centric” and “trustworthy” AI, these concerns tend to be limited
to the social ramifications of AI for, inter alia, human rights, data privacy, economic disruption,
and diversity and inclusion. In other words, they perceive the risks of AI in terms of safety and
ethics—i.e., normal politics about the distribution of costs and benefits of AI between and within
societies—not as a matter of survival. When states do raise the question of AGI/ASI, it is invariably
to suggest its speculative and/or temporally distant nature, which serves to dismiss or downplay
concerns about existential AI risks.
Why has the macrosecuritization of AI failed in international relations? The basic argument
here is that macrosecuritization has failed both because states have rejected humanity
securitization as a legitimate securitization narrative, and because the narrative of national
securitization has taken hold over the thinking and action of states in the AI domain. The
theoretical framework here contends that three variables explain why national securitization has
become the dominant narrative on AI amongst the great and major powers. The first is the system-
level force of an unstable distribution of power, characterized by the structural transition towards
a post-unipolar world. Specifically, the differential growth in the relative power of the United
States and China is driving the prospect of American “hegemonic decline” and the “rise of China.”
This has not only fueled the resurgence of great power rivalries but also the increasing intensity of
“technology competition,” with AI being one of the core emerging capabilities behind the shifting
distribution of power. The second is the state-level driver of the domestic security constellations,
whereby securitizing actors within state bureaucracies with significant power and influence over
the development and implementation of national AI strategies have adopted a narrative of national
securitization. These include, for example, the DoD and NSTC in the United States, the PLA in
China, and the EU Commission in Europe. The third is the individual-level dynamic of the beliefs
and perceptions of political leaders and policymakers towards AI, especially skepticism towards
the possibility of AGI/ASI and the idea of an AI race for global leadership, which has become
central to how policymakers understand the meaning of the AI revolution in international relations.
Importantly, the absence of epistemic consensus within the AI community about the possibility,
timing, and consequences of AGI/ASI has made it possible for policymakers to downplay or
ignore existential AI risks within the national AI strategies of states.
Together, these conditions have favored an AI narrative of national securitization in
international relations. This has not only undermined any prospect of great power consensus on
macrosecuritization but is also leading to international dynamics that could exacerbate the dangers
of existential AI risks. A competitive international AI environment is one in which states are more
likely to prioritize “speed” over “safety” in AI R&D (Scharre 2018), or emphasize the pursuit of
relative advantages in national AI capabilities over mutual limitations and restraint in international
relations (Sears 2020a). Unfortunately, the national securitization of AI in the present could have
long-term implications for existential AI risks by fostering a competitive international AI
environment that may be difficult to escape, even if states and non-state actors pay lip service to
normative principles such as “human-centric” and “trustworthy” AI. For if states come to perceive
the creation of AGI/ASI as technically feasible, then the great powers may compete to produce the
first AGI/ASI system—an AI race that humanity could lose.
Chapter 7
The dilemma before us seems obvious enough. Threats to people’s lives and well-
being arise increasingly from processes that are world-wide in scope. The
possibility of general nuclear war has been the most dramatic expression of our
shared predicament, but potentially massive ecological… cause at least as much
concern… We are faced, in short, with demands for some sort of world security, but
have learned to think and act only in terms of the security of states.
— R.B.J. Walker, 1990
We must cast our eyes to the future with hope. But we must also do so without
illusion. Today I want to speak to you in stark and simple terms about the challenges
we face. I see “four horsemen” in our midst—four looming threats that endanger
21st-century progress and imperil 21st-century possibilities.
— António Guterres, 2020
Conclusion
Humanity in the twenty-first century lives under the specter of the proliferation of existential
threats that have their origins in human agency and could bring about the collapse of modern global
civilization or even human extinction (Leslie 1996; Rees 2003; Posner 2004; Bostrom and
Cirkovic eds. 2008; Torres 2017; Ord 2020). Despite its vast and growing knowledge and
prosperity (Pinker 2018), humanity has made radical changes to the technological and ecological
conditions of its material environment on a scale that make possible its self-destruction (Sears
2021a). As Toby Ord (2020, 3) writes, “We stand at a crucial moment in the history of our species.
Fueled by technological progress, our power has grown so great that for the first time in humanity’s
long history, we have the capacity to destroy ourselves—severing our entire future and everything
we could become.” An appreciation for the revolutionary circumstances of humanity’s increasing
power and prospects of self-destruction raises the question of what political and social forces are
behind humanity’s failure to prevent and mitigate such dangerous conditions from emerging and
persisting (Sears 2020b). Why has humanity failed to take effective action to reduce and eliminate
the existential threats to its security and survival?
Humanity’s failure to act decisively in the interest of its security and survival poses a
theoretical and empirical puzzle for International Relations and International Security. Indeed, it
appears to challenge many of the core premises and assumptions of some of the leading “schools”
of IR theory, including the premise in neorealist theory that the fundamental interest of states is
security and survival, the ontological and/or methodological assumption of rationalism which
posits that states are rational actors, and the constitutive and causal logic of securitization theory,
which sees security as the discursive and political act to frame a particular issue as an existential
threat to a referent object and mobilize extraordinary measures for survival. International Relations
theory would appear to suggest that rational, security-seeking states should take extraordinary
action to neutralize the existential threats to humanity’s survival. The historical record, however,
tells a different story. Since the Second World War, there have been multiple historical episodes
in which a securitizing actor with a reasonable claim to legitimacy frames an issue as an existential
threat to humanity and calls for extraordinary measures for survival (i.e., a macrosecuritization
move). In a few instances, states have accepted that an issue constitutes an existential threat to
humanity and taken action beyond the normal practices of international relations to prevent or
mitigate the danger (i.e., macrosecuritization). More frequently, however, states have either
contested this framing of an issue as an existential threat or failed to take extraordinary measures
to reduce the threat (i.e., macrosecuritization failure). In short, the historical record reveals an
empirical pattern of the recurrent failure of macrosecuritization. This raises an important question
for theory and policy in international relations: Why does the macrosecuritization of humanity fail?
7.1 Synthesis of the Argument and Findings
This dissertation has argued that the prospect of great power consensus and the problem of great power
rivalry together constitute the principal force that shapes the success and failure of macrosecuritization in
international relations. When the great powers come to a shared understanding that an issue poses
an existential threat to humanity and agree to take extraordinary measures for survival, then great
power consensus has led to macrosecuritization—as demonstrated by the Nuclear Non-
Proliferation Treaty to mitigate the spread of nuclear weapons (1961–1967), the Biological
Weapons Convention to establish an international prohibition regime against biological weapons
(1968–1972), the Montreal Protocol to protect Earth’s Ozone layer from depletion (1987–1989),
and the Strategic Arms Reductions Treaties (START I and II) to make deep cuts to existing nuclear
arsenals (1982–1991). Conversely, when one or more of the great powers contests this
understanding of an issue or rejects the call for extraordinary measures, then the outcome has been
macrosecuritization failure—as demonstrated by the failure of the great powers to achieve the
international control over atomic energy (1942–1946), the limited progress towards the mitigation
of climate change (1979–1991; 2017–present), the lack of support by nuclear weapons states for
the Nuclear Prohibition Treaty (2007–2017), the disregard for the existential risks of artificial
intelligence (2014–present), or the failure to mobilize sufficient attention and resources for the
protection of biodiversity (2018–present). History shows that great power consensus or its absence
has a decisive impact on the macrosecuritization of humankind.
What, then, shapes the prospects for great power consensus on macrosecuritization? As the
case studies in this dissertation have illuminated, the historical instances of macrosecuritization
(failure) have been subject to conflicting securitization narratives, which frame an issue in terms
of distinct—and ultimately opposing—narratives about the referent object (i.e., who or what must
survive?), existential threats (i.e., what poses a threat to survival?), and emergency measures for
security (i.e., what must be done to ensure survival?). While a given security issue may in principle
be open to multiple plausible securitization narratives, the empirical analysis has shown a historical
pattern of contestation between two generalizable securitization narratives with important
implications for international relations, whether on the atomic bomb, biological weapons, or
artificial intelligence. On the one hand, humanity securitization frames an issue as an existential
threat to humanity and calls on states to take international action for the survival of humankind.
On the other hand, national securitization frames an issue as a threat to the nation and calls on the
state to take unilateral action for the power, prosperity, and security of the nation. Ultimately, these
securitization narratives exist within a competitive dynamic, since each seeks to define the
appropriate story about how states think about and act towards a security issue in international
relations. Thus, only one securitization narrative can triumph.
The atomic bomb was framed both as an existential threat to humanity that required an
international authority with extraordinary powers to prevent an arms race and nuclear war, and as
a threat to national survival that demanded the acquisition and possession of the atomic bomb for
national security against a great power rival. Biological weapons were framed both in terms of
their potential military and strategic importance for national security within the established
category of “weapons of mass destruction,” and in terms of their uniquely indiscriminate,
unpredictable, and uncontrollable effects that could constitute an existential threat to humankind.
Artificial intelligence has been framed both in terms of the existential risks associated with the
possible creation of “artificial general intelligence” or “superintelligence,” and as a strategic
technology with revolutionary implications for the national power, prosperity, and security of
states in a competitive international system. In the case of biological weapons, the humanity
securitization narrative persuaded the great powers to take the unprecedented step of negotiating a
treaty for an international prohibition against an entire category of weapons of mass destruction.
In the other two cases, securitization narratives of national securitization convinced the great
powers to prioritize their national interests over the security of humankind.
While the historical record shows that macrosecuritization is not doomed to failure in
international relations, the empirical analysis indicates that the outcomes of cases of
macrosecuritization hinge on the contest between the narratives of humanity securitization
and national securitization over the great powers. Under what circumstances should we expect
humanity securitization to prevail as the dominant securitization narrative? The theoretical
framework developed here has attempted to explain the relative influence of these conflicting
securitization narratives over the great powers in terms of three variables at different levels of
analysis: (1) the stability of the distribution of power and capabilities in the international system;
(2) the power and interests of securitizing actors vis-à-vis state audiences within domestic security
constellations; and (3) the beliefs and perceptions of the political leaders and policymakers of the
great powers.
The first variable concerns the system-level forces behind the stability of the distribution
of power in the international system. The theoretical logic here is that the stability of the
distribution of power shapes the intensity of great power rivalry and the possibility of cooperation:
a stable distribution of power reduces the intensity of great power rivalries and makes them more
receptive to a narrative of humanity securitization, while an unstable distribution of power
increases the intensity of great power rivalries and makes them more susceptible to a narrative of
national securitization. The main structural property that shapes the distribution of power is the
polarity of the international system, while the primary driver of (in)stability is the changing nature
and/or distribution of capabilities, which leads to the rise and fall of great powers over time. The
case of the international control over atomic energy shows how the unstable distribution of power
in the aftermath of the Second World War undermined the possibility of forging great power
consensus between the United States and the Soviet Union on the creation of an international
authority that could have neutralized the existential threat of the atomic bomb. Instead, the
structural transition from a multipolar to a bipolar system, the asymmetric distribution of military
capabilities between the United States and the Soviet Union, and the revolutionary nature of the
atomic bomb for military power, all made for an unstable distribution of power that left the great
powers more concerned about the implications of the atomic bomb for their national power and
security than the dangers of a nuclear arms race and nuclear war for the survival of humankind.
The case of the BWC shows how a stable distribution of power can dampen the intensity
of great power rivalries and create opportunities for great power consensus on macrosecuritization.
By the late 1960s, the bipolar system had become the entrenched structure of international relations,
and the United States and the Soviet Union took steps to normalize their diplomatic relations
(“détente”) and cooperate on matters of disarmament and arms control. The escalation of the great
power rivalry in the early 1960s had brought the world to the brink of nuclear war during the Cuban
Missile Crisis (October 16th–28th, 1962), a memory which contributed to the de-escalation of the
Cold War and great power cooperation on existential threats, such as nuclear proliferation and
biological weapons. The case of AI again shows how an unstable distribution of power can
intensify great power rivalry and undermine great power consensus to neutralize an existential
threat to humanity. The growing concern within the AI community about the existential risks
associated with the potential development of AGI/ASI has occurred within the context of a
structural transition away from a unipolar system characterized by American hegemony towards
either a bipolar or multipolar international system characterized by the resurgence of great power
competition, primarily between the United States and China. The dynamics of contemporary great
power rivalry are increasingly focused on technology competition, including on AI, with the great
and major powers seeking to position themselves for leadership in the “AI revolution” to ensure
their national power, prosperity, and security in a competitive international system.
The second variable concerns the state-level dynamics of security constellations in
domestic politics, especially the power and interests of the principal securitizing actors vis-à-vis
the state. The theoretical logic is that the adoption of a particular securitization narrative by
powerful actors within domestic politics influences the state as an audience: when powerful
securitizing actors adopt a narrative of humanity securitization, then macrosecuritization is more
likely, but when powerful securitizing actors adopt a narrative of national securitization, then
macrosecuritization failure is more likely. What makes an actor “powerful” in securitization? In
general, powerful securitizing actors are those who possess the authority or legitimacy to “speak”
and “do security” on an issue. As the empirical analysis in this dissertation shows, this typically
favors the “voices” of securitizing actors with the institutionalized authority of government,
especially the governmental departments and agencies that are responsible for national security
and foreign policy, or which possess specialized knowledge and technical expertise on an issue. In
the case of the international control over atomic energy, the atomic scientists of the Manhattan
Project were powerful securitizing actors that embraced a narrative of humanity securitization and
exercised significant influence over American policy (i.e., the Acheson-Lilienthal Plan and
Baruch Plan). Over time, however, their influence waned as the atomic scientists left their posts
within the government, while the Department of Defence and Department of State increasingly
adopted a narrative of national securitization and held primary responsibility over American
defence and foreign policy, including on the atomic bomb.
In the case of the BWC, the leading securitizing actors behind the narrative of humanity
securitization initially came from the international environment, especially the British diplomat to
the Eighteen-Nation Committee on Disarmament, Frederick Mulley, and experts within the United
Nations and the World Health Organization. This put diplomatic and political pressure on the
United States and the Soviet Union to reexamine their postures on biological and chemical
weapons. The American decision to unilaterally dismantle its biological weapons arsenal and to
diplomatically support the negotiation of the BWC was driven less by powerful securitizing actors
adopting a narrative of humanity securitization than by the weakness of the securitizing actors who
pushed a narrative of national securitization within the U.S. National Security Council. Ultimately,
the Nixon administration determined that it could score a diplomatic and political victory in
supporting the BWC without compromising American national security interests. In the case of
AI, the leading securitizing actors behind the narrative of humanity securitization have been
predominantly scholars and researchers—and the occasional tech industry leader—within the AI
community, who have struggled to have their concerns heard and taken seriously by an audience
of states and intergovernmental organizations within the broader international conversation about
AI. Conversely, powerful actors within state bureaucracies with responsibility for crafting and
implementing policy—including scientific and technical councils and the national security
apparatus—have adopted a narrative of national securitization, which has ensured that the pursuit
of national power, prosperity, and security are central to the national AI strategies of states.
The third variable concerns the individual-level dynamics of the threat-related ideas and
beliefs of political leaders and policymakers of the great powers. The theoretical logic here is that
the ideas and beliefs of political leaders shape how they interpret and perceive a securitization
narrative. In the case of the international control over atomic energy, American and Soviet political
leaders were strongly influenced by historical experience and political ideology. The political
ideology of liberalism made American political leaders highly suspicious of Soviet totalitarian
communism and naive about the benign nature of the American atomic monopoly, while the
ideology of Marxist-Leninism led the Soviet leadership to believe in the irreconcilable conflict
between communism and the imperialist-capitalism of Western nations. Moreover, both the
American and the Soviet leadership were wary of the other side because of their recent historical
experience of surprise attack and invasion during the Second World War. In short, political
ideology and historical experience shaped the ideas and beliefs of political leaders in ways that
undermined the narrative of humanity securitization.
In the BWC, the idea that biological weapons were indiscriminate, unpredictable, and
uncontrollable weapons not only contributed to the perception that biological warfare could pose
a catastrophic or existential threat to humanity, but also to the belief amongst the political leadership
that these weapons lacked military utility for strategic deterrence or on the battlefield. Also
important to the American decision were the idea of “détente” as a guiding principle of U.S. foreign
policy during the Nixon-Kissinger administration and President Nixon’s desire to be seen as a
“man of peace” within the context of the Vietnam War. In the case of AI, the idea that AI represents
a strategic technology with revolutionary implications for societies and states has strongly
influenced the perceptions of political leaders in the great and major powers, including Donald
Trump, Emmanuel Macron, Vladimir Putin, and Xi Jinping. At the same time, the absence of
epistemic consensus within the AI community about the possibility, timing, or consequences of
AGI/ASI has undermined the narrative of humanity securitization, since expert concerns about
existential AI risks appear far more hypothetical—even like science fiction—than the implications of
AI for the national interests of states.
In short, the success or failure of macrosecuritization depends on whether a securitization
narrative of humanity securitization or national securitization prevails over the great powers. When
macrosecuritization has failed, it has been because the dynamics of great power rivalries have led
fear of “the Other” to outweigh fear of an existential threat to humanity, undermining the
possibility of great power consensus on macrosecuritization. Since the theoretical logic behind this
argument was developed largely based on the empirical findings from the case studies of the
international control over atomic energy, the Biological Weapons Convention, and the AI
revolution, a promising avenue for future research on macrosecuritization (failure) in international
relations is to determine whether this theoretical logic holds in other cases, such as the
environmental cases of climate change and biodiversity loss. The rudimentary empirical research
done on these other cases in Chapter 3 suggests the importance of great power consensus or its
absence for the outcomes of macrosecuritization. It is therefore plausible that the theoretical
argument and framework developed in this dissertation should apply to these other cases as well.
7.2 The Final Tragedy of Great Power Politics?
This argument has broad theoretical and practical implications for our understanding of the
relationship between international relations and the efforts to prevent and mitigate the existential
threats to humankind. It calls attention to two contemporary trends of preponderant importance.
The first is the resurgence of great power rivalries and the growing strength of securitization
narratives of national securitization in international relations. The impacts of great power rivalries
are already being felt throughout the world in terms of the growing use or threat of military force
by the great and major powers, such as China’s growing assertiveness with respect to its territorial
and maritime sovereignty claims in East Asia and the Western Pacific (e.g., over the
Senkaku/Diaoyu Islands in the East China Sea, the “nine-dash line” in the South China Sea, and
Taiwan), and Russia’s willingness to use military force in its regional conflicts (e.g., the 1999
Chechen War, the 2008 Georgian War, the 2014 annexation of Crimea and support for pro-
Russian separatist forces in the Donbass region, the 2015 intervention in the Syrian civil war, and
the 2022 invasion of Ukraine). The United States has responded to the military challenges from
China and Russia through demonstrations of its military power projection, such as its “Freedom
of Navigation Operations” in the South China Sea, or joint military exercises with allies in Europe
(e.g., “Swift Response” with NATO members) and East Asia (e.g., the Malabar exercises with
India and Japan).
The great and major powers are also jockeying for geopolitical power and influence
through the expansion of military bases, strategic arms and trade deals, and alliance diplomacy.
China, for example, has opened its first overseas military base in Djibouti, created military
installations on artificial islands in the South China Sea, and concluded a “secret” security
agreement with the Solomon Islands, in addition to expanding its economic influence over
neighboring countries through the Asian Infrastructure Investment Bank (AIIB) and the Belt and
Road Initiative (BRI). The United States, too, is busily building a new security architecture to
contain China’s growing power and influence in East Asia and the Pacific region (e.g., the so-
called “Quad” between Australia, India, Japan, and the United States, and “AUKUS” between
Australia, the United Kingdom, and the United States), and strengthening the “San Francisco
system” of its regional bilateral alliances (e.g., Japan, the Philippines, South Korea, and Thailand)
and partnerships (e.g., Indonesia, Malaysia, Singapore, Vietnam). It is simultaneously revitalizing
NATO to counter the threat of Russian aggression, especially in the wake of Russia’s invasion of
Ukraine (e.g., the potential entrance of Finland and Sweden into NATO, Germany’s decision to
increase its defence spending by over $100 billion, and the expanded deployment of military forces
to NATO’s eastern frontier). The great and major powers are investing heavily in force
modernization and state-of-the-art military capabilities (International Institute for Strategic Studies
2021), including cyber weapons, space lift and communications, hypersonic missiles, lethal
autonomous weapons systems, missile defence, and nuclear weapons. China in particular has made
major advances in its military modernization and now possesses significant capabilities to deter
and defend against U.S. military power projection in the Western Pacific (so-called “anti-
access/area-denial capabilities”) (Montgomery 2014), and is expanding its own capabilities for
military power projection (e.g., aircraft carriers and a navy that exceeds that of the United States
in number of principal surface combatants) (Office of the Secretary of Defence 2020, ii). Western
analysts have predicted that China’s military spending could surpass that of the United States
at some point during the 2020s (International Institute for Strategic Studies 2013, 42), and that the
United States may have already lost the ability to deter or defeat China in a military conflict over
Taiwan (O’Hanlon 2022).
What has driven this return of great power rivalries to the forefront of international
relations? The system-level driver behind the resurgence of great power rivalries is differential
growth in the distribution of power and a structural transition to a post-unipolar world. After
roughly 30 years, the “unipolar moment” appears to be at an end (Krauthammer 1990/91). The
structural forces of differential growth have produced a structural transition from a unipolar to a
bipolar system, characterized by the United States and China as the two great powers in the
international system. Currently, China is the only nation that rivals the United States in all the core
elements of national capabilities, including military capabilities and defence spending, economic
size and financial capital, a large population and expansive territory, and advanced scientific
research and technological innovation. This structural transition follows the historical pattern of a
declining hegemon and a rising great power (Organski 1958; Modelski 1978; Gilpin 1981; 1988;
Levy 1987; DiCicco and Levy 1999), which is likely to exacerbate the dangers of great power
rivalry and conflict as the United States seeks to prevent its relative decline and China aims to
assert itself as a great power (Mearsheimer 2014; Allison 2017).
It is also possible that the changing distribution of power could lead to a multipolar system.
There are several major powers in the international system, including the European Union, India,
Japan, Russia, and the United Kingdom. If, for instance, the 27-nation European Union continues
to become more “state-like” in terms of its political capacity to centrally coordinate and mobilize
the continent’s substantial capabilities, or if India increases its relative power to a level comparable
with China and the United States, then the international system would become multipolar. The
structural transition from a unipolar system to either a bipolar or multipolar system has significant
implications for both the intensity of great power rivalries (Layne 1993; Wohlforth 1999; Monteiro
2014; Sears 2018), and the international response to existential threats to humankind. The
instability of the distribution of power during a period of structural transition to a post-unipolar
world is likely to make the great and major powers more susceptible to securitization narratives of
national securitization and to prioritize concerns about national power and security over their shared
interest in human survival.
The state-level dynamic behind the resurgence of great power rivalries is the
reorientation of domestic security constellations towards strategic competition with other great and
major powers. In the United States, for example, the national security apparatus has framed great
power competition as the principal threat to American national security interests. During the
Trump administration, the 2017 U.S. National Security Strategy declared that “great power
competition [has] returned” and framed China and Russia as “revisionist powers,” which seek to
“shape a world antithetical to U.S. values and interests” and “challenge American power,
influence, and interests, attempting to erode American security and prosperity” (The White House
2017, 2, 25–27). Similarly, the U.S. Department of Defense’s 2018 National Defense Strategy
stated that “The central challenge to U.S. prosperity and security is the reemergence of long-term,
strategic competition” against the “authoritarian” and “revisionist powers,” China and Russia
(DoD 2018, 2). The Director of National Intelligence’s 2019 Worldwide Threat Assessment claimed
that “threats to US national security will expand and diversify in the coming year, driven in part
by China and Russia as they respectively compete more intensely with the United States and its
traditional allies and partners… As China and Russia seek to expand their global influence, they
are eroding once well-established security norms and increasing the risk of regional conflicts”
(Office of the Director of National Intelligence 2019, 4).
Under the Biden administration, the U.S. national security apparatus has maintained its
emphasis on great power rivalry. The White House’s 2021 Interim National Security Strategy
claims that the “world is at an inflection point,” driven by a changing “distribution of power across
the world” (The White House 2021, 3, 7). The Strategy identifies an “increasingly assertive China”
and a “destabilizing Russia” as the principal strategic challenges to American national security
interests (The White House 2021, 14). Similarly, the U.S. national intelligence community’s 2021
Annual Threat Assessment reiterated its emphasis on “great power competition”:
China, in particular, has rapidly become more assertive. It is the only competitor
potentially capable of combining its economic, diplomatic, military, and
technological power to mount a sustained challenge to a stable and open
international system. Russia remains determined to enhance its global influence and
play a disruptive role on the world stage. Both Beijing and Moscow have invested
heavily in efforts meant to check U.S. strengths and prevent us from defending our
interests and allies around the world (Office of the Director of National Intelligence
2021, 4).
This narrative has had a powerful effect on the audience of the American government and public.
Indeed, within the increasingly polarized domestic politics of the United States, the only thing that
Democrats and Republicans appear to agree upon is the need to counter great power rivals, like
China and Russia. Overall, the adoption of a securitization narrative of national securitization that
emphasizes great power rivalry by powerful securitizing actors within the national security
apparatus of the United States and other great and major powers is likely to weaken their
receptiveness to a narrative of humanity securitization, thereby undermining the prospects for great
power consensus on issues relevant to the security of humankind.
The individual-level catalysts behind the resurgence of great power rivalries are the ideas
and beliefs of political leaders, who increasingly perceive the ends of foreign policy in terms of
revisionist and reactionary calls to restore national greatness. In the United States, former President
Donald J. Trump won the 2016 presidential election on a campaign to “make America great again”
(MAGA), an idea that presupposes a former greatness that has been lost and must be recovered.
President Biden also sees it as the historical mission of the United States to “revitalize” American
democracy and national strength in order to “outpace every challenger,” especially “antagonistic
authoritarian powers,” such as China and Russia (The White House 2021, 3, 7). Indeed, the Biden
administration has taken to framing great power rivalry as an ideological struggle between the
forces of democracy and autocracy (The White House 2021, 1).
In Russia, Vladimir Putin has called the collapse of the Soviet Union “the greatest
geopolitical catastrophe of the [twentieth] century,” and has aimed to recover for Russia its lost
power and status in the international system. In the post-Brexit United Kingdom, Boris Johnson
has articulated the vision of a “global Britain,” which seeks to expand the capabilities and influence
of the United Kingdom on the world stage. And in China, Xi Jinping has articulated the grandiose
goal of the “great rejuvenation of the Chinese nation” (or the “Chinese dream”), which glorifies
China’s imperial history and hegemony, blames hostile foreign powers for China’s decline and
suffering (the “Century of Humiliation”), and seeks to recover China’s great power status, by force
if necessary. Indeed, Xi has warned against any efforts by foreign powers to contain or prevent China’s
rise, saying: “The Chinese people will never allow foreign forces to bully, oppress or enslave us.
Whoever nurses delusions of doing that will crack their heads and spill blood on a Great Wall of
steel built from the flesh and blood of 1.4 billion Chinese” (Baker and Teh 2021). There is
evidently a strong link between the nationalistic views and rhetoric of these political leaders and
their interest in achieving or maintaining great power status. Importantly, the ideas and beliefs of
political leaders appear to make them highly susceptible to securitization narratives of national
securitization, which is not only likely to exacerbate great power rivalries but also to undermine
any narrative of humanity securitization that seeks to put the survival of humanity ahead of the
national interest.
The second trend is the widening spectrum of existential threats that have their origins in
human agency and could bring about the collapse of modern global civilization or even human
extinction, such as nuclear war, climate change, biodiversity loss, bioengineered pathogens, and
artificial intelligence (see Appendix 1). During the Cold War, the struggle for power and security
between the great powers occurred under the shadow of nuclear annihilation. On a few occasions,
the United States and the Soviet Union came extremely close to nuclear war. Fortunately, the great
powers managed to avoid nuclear war for nearly half a century through a combination of nuclear
deterrence and arms control. Yet even fear of “mutually assured destruction” was insufficient to
put an end to great power rivalry or catalyze extraordinary action to neutralize the existential threat
of nuclear war. Since humanity now faces not only the military threat of nuclear war, but also
environmental dangers like climate change and biodiversity loss, and technological risks like
biotechnology and artificial intelligence, the resurgence of great power rivalry in the twenty-first
century appears to be even more perilous than during the Cold War (Sears 2021b). What might be
the implications of great power rivalries for this growing spectrum of existential threats?
The resurgence of great power rivalries is likely to undermine any efforts to diminish and
neutralize the existential threat of nuclear war. Rather, the great and major powers are actively
seeking to modernize and expand their nuclear arsenals. Russia has developed novel nuclear
capabilities, such as hypersonic missiles and a submarine-launched nuclear torpedo that could
destroy coastal cities with a radioactive tsunami (Huet 2022). China has also made significant
gains in its delivery systems and allegedly seeks to possess 1,000 nuclear warheads by 2030
(Cooper 2021), which would make China the third nuclear power to possess a sufficiently large
nuclear arsenal to pose an existential threat to humanity (Sears 2021b). Military technological
change is introducing new nuclear risks. Nuclear deterrence is becoming increasingly complex
with the development of “C4ISR” systems (command, control, communications, computers,
intelligence, surveillance, and reconnaissance), which are influenced by technological
developments in, inter alia, cybersecurity (e.g., computer worms and “zero days”), missile defense
(e.g., kinetic and directed-energy weapons), and precision-strike capabilities (e.g., drones, cruise
missiles, and hypersonic weapons). The “survivability” of a state’s second-strike capability no
longer depends solely on the deployment and protection of a “nuclear triad” (i.e., bombers, missiles,
and submarines), but also on the vulnerability and resilience of C4ISR systems to cyber-attacks,
anti-satellite weapons, and precision-strike capabilities (Gartzke and Lindsay 2017; Miller and
Fontaine 2017). Russia and China have both expressed concerns that the United States could
launch a well-coordinated “disarming” strike—perhaps through a combination of disruptive cyber-
attacks and anti-satellite capabilities against C4ISR systems, and precision conventional or nuclear
strikes against fixed silos, mobile launchers, and strategic bombers on the ground—and then rely
on missile defense to “mop up” a diminished and disorganized retaliation (Miller and Fontaine
2017, 25). To make matters worse, the “entanglement” of conventional and nuclear military
systems increases the risk that conventional conflicts could escalate to nuclear war, especially if a
conventional war threatens a state’s second-strike capabilities, generating a “use-it-or-lose-it”
dilemma (Acton 2018).
There are also worrying signs of growing nuclear risks due to the decline of arms control,
the introduction of new strategic doctrines, and the deterioration of the “nuclear taboo”
(Tannenwald 2018). In recent years, the United States and Russia have systematically dismantled
the nuclear arms control regime, including the demise of the Anti-Ballistic Missile Treaty, the
Intermediate-Range Nuclear Forces Treaty, and the Open Skies Treaty, as well as the uncertain
future of the New START Agreement and widespread discontent with the Nuclear Non-
Proliferation Treaty. They have also introduced new strategic doctrines lowering the thresholds for
the use of nuclear weapons, such as Russia’s “escalate to de-escalate” doctrine and the U.S. 2018
Nuclear Posture Review, which open the possibility of nuclear responses to conventional military
force. And some political leaders have adopted a cavalier attitude about nuclear weapons, such as
U.S. President Donald Trump’s threats to unleash “fire and fury” on Pyongyang, or Vladimir
Putin’s recent threats to use nuclear weapons within the context of the War in Ukraine. Far from
taking action to reduce and neutralize an existential threat, the great and major powers are acting
in ways that increase the risk of nuclear war. Arguably, the threat of nuclear war is higher today
than at any time since the early 1960s.
Great power rivalries also threaten to undermine the international response to
environmental dangers, like climate change and biodiversity loss. Great power rivalries are not
only likely to generate competitive international dynamics and crises that increase the pressures
on states to prioritize their relative economic growth and prosperity—like the energy crisis
generated by international sanctions and supply chain disruptions following Russia’s invasion of
Ukraine—but also to weaken the mutual trust necessary for overcoming global collective action
problems through international cooperation and policy coordination. The decision in 2017 to
withdraw the United States from the Paris Agreement on Climate Change was motivated, in part,
by such relative gains/losses thinking, which the Trump administration believed would “unfairly
disadvantage” U.S. national economic competitiveness relative to other nations, like China and
India (The White House 2017). When it comes to relative gains/losses, Russia appears to be the
geopolitical “winner” from climate change over the next decades, with rising temperatures and the
melting of permafrost promising to open huge swathes of territory in Siberia for agriculture, and
coastal regions and sea-lanes in the Arctic for Russian naval power and commercial traffic
(Lustgarten 2020). While the United States and China have identified climate change as an issue
for bilateral great power cooperation against a shared threat (or “win-win” cooperation, as China
calls it) (U.S. Department of State 2021), great power cooperation has thus far failed to generate
extraordinary action to reduce carbon emissions. Indeed, China has even suggested that it could
make cooperation on climate change contingent on overall progress in the U.S.-China bilateral
relationship (Kaliq 2021), which threatens to hold climate action hostage to great power rivalry.
If states fail to keep climate change within “safe planetary boundaries” (Rockström et al.
2009), then they may turn towards the far more dangerous “Plan B” of geoengineering—or the
“purposeful manipulation by humans of global-scale Earth System processes” (Steffen et al. 2007,
619). The technical feasibility of geoengineering, including “solar radiation management,” has
serious consequences for international relations. If climate change threatens the national survival
of some states before others, then they may perceive geoengineering as their best option for
national security and take unilateral action to intervene in Earth’s climate system (Corry 2017).
Geoengineering could pose a security dilemma, whereby one state’s efforts to increase its security
from climate change by intervening in Earth’s climate system could inadvertently threaten the
environmental security of others. In the “Anthropocene,” the struggle for power and security
between the great powers could extend to geopolitical control over Earth’s climate systems (Sears
2021a). At that point, the stability of Earth’s natural environment could become subject to the
instability of the international system—a prospect that would likely exacerbate the existential
threat of climate change.
Finally, great power rivalries can aggravate the existential threats from increasingly
powerful emerging technologies, such as biotechnology, nanotechnology, and artificial
intelligence (Danzig 2018; Scharre 2019; Deudney 2020). Great power rivalries are likely to
weaken international cooperation to regulate and restrain dangerous technologies, especially if
they generate a competitive international R&D environment that leads to a general retrenchment
from international collaboration on science and technology (de Troulliod 2020). In such an
international environment, decisions about high-risk emerging technologies may be made, at best,
through multilateral coordination between like-minded states, or, at worst, by the unilateral
decisions of individual state or non-state actors. Even if states do not possess malign intentions,
national decisions could have existential implications simply due to the increasing power of
technology. For example, advances in biotechnology—like the revolutionary gene-editing
technology, CRISPR/Cas-9—could bring biological evolution “fully under human control”
(Doudna and Sternberg 2018, xvi). As one of CRISPR’s discoverers, Jennifer Doudna, writes,
“What will we, a fractious species whose members can’t agree on much, choose to do with this
awesome power?” (Doudna and Sternberg 2018, xvi). When increasingly powerful biotechnology
makes it possible for humans to rationally manipulate the DNA of all living organisms—including
humans—differences in national policy could amount to evolution by national selection.
Moreover, if states take a laissez-faire approach to science and technology, then such decisions
could be ceded to individual technology firms or scientific researchers. For example, in November
2018, the Chinese researcher He Jiankui reportedly used CRISPR/Cas-9 to edit the germline of twin
baby girls, making them the first genetically modified human beings able to pass their edited genes
on to future generations. In July 2016, Elon Musk launched Neuralink, a company that
aims to develop brain-computer interfaces to merge the human brain with machines. Great power
rivalries could exacerbate the existential threats to humanity by weakening international
cooperation and policy coordination on increasingly powerful technologies.
Importantly, great power rivalries are likely to shape state decisions about what
technologies to produce, how they should be used, and towards what ends (Sears 2021a), as
demonstrated by the decision to pursue the atomic bomb within the context of the Second World
War and the failure of the international control over atomic energy in its aftermath. If AI experts
determine that AGI/ASI is technically feasible, would the great powers work together to prioritize
AGI/ASI safety to reduce existential risks, or would they pursue the rapid and clandestine
development of an AGI/ASI system in the hopes that it could become a decisive instrument of
national power and security? When new technologies promise strategic first-mover advantages,
then states may calculate that the national security imperative of being first—or fear of being
second—outweighs any concerns about safety, a type of security-safety dilemma that appears to
apply to AI (Bostrom 2014; Horowitz 2018; Ramamoorthy and Yampolskiy 2018; Lee 2018;
Scharre 2019). Again, the historic example of the atomic bomb and the nuclear arms race suggests
that great power rivalries could generate a competitive international environment, whereby the
state’s pursuit of national power and security is prioritized over the security and survival of
humankind. Currently, the emergence of a competitive international AI environment—an “AI
race”—does not bode well for future international cooperation on AGI/ASI safety, since
international AI competition in the present could lead to self-reinforcing dynamics that make
international cooperation on AGI/ASI more difficult in the future. The belief that national AI
strategies today can proceed as if they have no bearing on future risks, should AGI/ASI become
feasible, is misguided and dangerous. Overall, a future characterized by the resurgence of
great power rivalries under the growing specter of existential threats is an extremely dangerous
world for humankind.
7.3 Who Shall Speak and Do Security for Humankind?
The theoretical and empirical study of macrosecuritization (failure) raises a question with
important normative and practical implications for international relations: Who can “speak” and
“do security” on the existential threats to humankind? The enduring structural feature of an
“anarchic” international system means that there is no world political entity with the authority and
capabilities to speak and do security on behalf of humanity. Macrosecuritization—i.e., the
phenomenon of calling something an existential threat to humanity and mobilizing extraordinary
measures to neutralize it—is inherently a process of securitization under anarchy and is subject to
the actions of states and the relations between them. States, however, are self-regarding political
entities that are predisposed to think and act in terms of their national interests rather than the
interests of humankind. As Kenneth Waltz (1979, 109) wrote,
Strong sense of peril and doom may lead to a clear definition of ends that must be
achieved. Their achievement is not thereby made possible. The possibility of
effective action depends on the ability to provide necessary means. It depends even
more so on the existence of conditions that permit nations and other organizations
to follow appropriate policies and strategies. World-shaking problems cry for
global solutions, but there is no global agency to provide them. Necessities do not
create possibilities. Wishing that final causes were efficient ones does not make
them so. Great tasks can be accomplished only by agents of great capability. That
is why states, and especially the major ones, are called on to do what is necessary
for the world’s survival. But states have to do whatever they think necessary for
their own preservation, since no one can be relied on to do it for them.
When the securitization of humanity comes into conflict with the securitization of the nation,
national securitization is likely to prevail. In this sense, macrosecuritization failure is the normal
pattern of international relations.
One can imagine an alternative world in which the securitizing actors, functional actors,
and audiences of macrosecuritization would exist within a hierarchic security constellation, similar
to the ways in which “lower-level” processes of securitization occur under the hierarchic political
authority of the state, which possesses the authority and capability to act on behalf of the national
community. The survival of humanity represents a powerful justification for the formation of a
“world state” (Morgenthau 1945; Herz 1959; Craig 2003; Wendt 2003; Deudney 2007; Bostrom
2019; Sætra 2022). Indeed, to the extent that the legitimacy of states is contingent upon their
capacity to ensure the security of citizens and the nation (Mittiga 2021), the inadequacy of states
and the international system to take effective action to protect citizens, nations, and humanity from
existential threats poses a challenge to the legitimacy of the state as the basic political structure of
the modern world (Sears 2021a). The formation of a world state would entail a revolutionary
transformation of the political structure of international relations—in some sense, the end of inter-
national or inter-state relations—whereby states would transfer some degree of their political
sovereignty to a world political entity, whether in the form of a centralized state (Craig 2003) or a
federation of states (Deudney 2007), which would possess the authority and capability to speak
and do security on behalf of humankind. The transfer of political sovereignty from nation-states to
a world state would represent a Hobbesian escape from international anarchy for the security of
humankind (Craig 2003).
This sort of modification of sovereignty was exactly what the atomic scientists envisioned
as the solution to the existential threat of the atomic bomb: “one world, or none” (Masters and
Way, eds. 1946). The Baruch Plan for the international control over atomic energy glimpsed
beyond the political reality of international anarchy to imagine a world order in which a
supranational organization—an Atomic Development Authority—would possess the authority and
capability to protect humanity from the peril of the atomic bomb (Baratta 1985; Kearn 2010). Its
failure speaks to the political difficulties behind the formation of a world state, with states jealously
guarding their sovereignty and independence even when the survival of humanity is at stake.
We therefore appear to be at an impasse between two fundamental realities of politics and
security. On the one hand, the state appears to us as an immutable object, so that any desire to
replace it with a world political entity of higher authority and greater capability seems
immediately naive or foolish. On the other hand, the recurrent failure of states to take effective
action to reduce and neutralize existential threats seems to expose humanity to grave and
unnecessary dangers. We cannot wish away the state any more than we can pretend that the
existential threats facing humanity do not exist or are less serious than they are. And yet this seems
to be exactly the approach that states have taken by tacitly accepting the possibility of nuclear
annihilation as the normal condition of international relations. As Hans Morgenthau (1961) wrote,
An age whose objective conditions of existence have been radically transformed by
the possibility of nuclear death evades the need for a radical transformation of its
thought and action by thinking and acting as though nothing of radical import had
happened. This refusal to adapt thought and action to radically new conditions has
spelled the doom of men [sic] and civilizations before. It is likely to do so again.
What then can be done? In the absence of a world political entity that can speak and act for
humankind, security from the existential threats to humanity depends on those states with the
capacity for extraordinary action: the great powers. Yet as we have seen, the great powers have
frequently failed to heed the calls for extraordinary action for the security of humanity, especially
when great power rivalries leave them susceptible to a security narrative of national
securitization that prioritizes the national interest over humanity’s survival. One political solution
is therefore to challenge the narrative of national securitization and to advance a narrative of
humanity securitization, so that the dynamics of great power rivalries serve the ends of
macrosecuritization.
This could be done through a narrative of humanity securitization that emphasizes great
power responsibility and management of existential threats (Bull 1977; Waltz 1979, 194-199; Cui
and Buzan 2016; Bernstein 2020). The great powers possess both unequal status and differential
responsibility in international relations, as demonstrated by the permanent positions of China,
France, Russia, the United Kingdom, and the United States in the UN Security Council, which
gives them both greater authority and responsibilities over the management of international
security. When the great powers pursue prestige and status, they do so by seeking recognition of
their power and position by other states (Paul et al. 2014). The ability to take extraordinary action
to reduce and neutralize the existential threats to humanity would appear to offer significant
opportunities for the great powers to demonstrate their power and capabilities to other states, or
for major powers to join the ranks of the great powers. Competition between the great and major
powers to demonstrate leadership on diminishing existential threats could become a mark of
international status and prestige, similar to how great powers have delivered other international
public goods in the past, such as upholding the balance-of-power, protecting the free flow of
international trade and maritime navigation, or maintaining international peace and security (Bull
1977; Gilpin 1987).
If such a narrative of great power responsibility were to take hold, then the resurgence of
great power rivalries could become a force for macrosecuritization. On nuclear weapons, the
reduction of nuclear arsenals to levels that fall below the threshold of an existential threat but
still provide a strategic deterrent could become the mark of a responsible great power (i.e., a
policy of “minimum credible deterrence”). On climate change, becoming the first great power to
achieve carbon neutrality could become a symbol of great power status, like the “space race”
during the Cold War (e.g., a “race to carbon zero”). On artificial intelligence, the great powers
could pursue the highest standards of safety in AI R&D as a demonstration of great power
responsibility (e.g., “provably beneficial AI”), and contribute to this becoming an international
norm by making technology transfers contingent on such standards and placing sanctions on states
with unsafe or irresponsible AI practices.
Since the great power rivalry between the United States and China is quickly becoming the
dominant security constellation in international relations, much as the U.S.-Soviet rivalry was during the Cold War, it is bound to have
implications for the prospects of great power consensus and macrosecuritization. In the long-term,
a stable bipolar system could create new opportunities for great power consensus on
macrosecuritization, as it did on nuclear proliferation and biological weapons in the past. In the
short-term, the danger is that the great powers begin to perceive every issue through the
security narrative of national securitization, making them less likely to achieve great power
consensus on macrosecuritization, as fear of “the Other” comes to outweigh fear of existential
threats. This creates a pressing need to resist the narrative of national securitization and to oppose
it with a stronger narrative of humanity securitization. The purpose of a narrative of humanity
securitization that emphasizes great power responsibility and management of existential threats is
to make great power rivalries work for and not against the security and survival of humankind.
The ability to “speak” and “do security” for humankind presents novel opportunities for the great
powers to demonstrate their power and capabilities in international relations. Otherwise, the
resurgence of great power rivalries under the growing specter of existential threats could end in
tragedy for humankind.
Bibliography
Abrahms, Max. (2008) What Terrorists Really Want: Terrorist Motivations and Counterterrorism
Strategy. International Security 32(4): 78–105.
Acharya, Amitav. (2017) After Liberal Hegemony: The Advent of a Multiplex World Order. Ethics &
International Affairs 31(3): 271–285.
Acheson, Dean, et al. (1946) A Report on the International Control of Atomic Energy. Washington: U.S.
Department of State.
Acheson, Dean. (1945) Memorandum by the Acting Secretary of State to President Truman, September
25. Foreign Relations of the United States. Washington: Government Printing Office.
Adler, Emanuel. (1992) The Emergence of Cooperation: National Epistemic Communities and the
International Evolution of the Idea of Nuclear Arms Control. International Organization 46(1):
101–145.
———. (2005) The Spread of Security Communities: Communities of Practice, Self-Restraint, and
NATO’s Post–Cold War Transformation. European Journal of International Relations 14(2):
195–230.
———. (2020) Control Power as a Special Case of Protean Power: Thoughts on Peter Katzenstein and
Lucia Seybert’s Protean Power: Exploring the Uncertain and Unexpected in World Politics.
International Theory 12: 422–423.
Adler, Emanuel, and Michael Barnett, eds. (1998) Security Communities. Cambridge: Cambridge
University Press.
Adler, Emanuel, and Steven Bernstein. (2005) Knowledge in Power: The Epistemic Construction of
Global Governance. In Power and Global Governance, eds. Michael Barnett and Raymond
Duvall. Cambridge: Cambridge University Press.
Alfonseca, Manuel, Manuel Cebrian, Antonio Fernández Anta, Lorenzo Coviello, Andrés Albeliuk, and
Iyad Rahwan. (2021) Superintelligence Cannot Be Contained: Lessons from Computability
Theory. Journal of Artificial Intelligence Research 70: 65–76.
Allan, Bentley B. (2017) Second Only to Nuclear War: Science and the Making of Existential Threat in
Global Climate Governance. International Studies Quarterly 61: 809–820.
Allen, Gregory C. (2019) Understanding China’s AI Strategy: Clues to Chinese Strategic Thinking on
Artificial Intelligence and National Security. Washington: Center for a New American Security.
Allen-Ebrahimian, Bethany. (2019) Exposed: China’s Operating Manual for Mass Internment and Arrest
by Algorithm. International Consortium of Investigative Journalists, November 24.
<https://www.icij.org/investigations/china-cables/exposed-chinas-operating-manuals-for-mass-
internment-and-arrest-by-algorithm/>.
Allison, Graham. (1971) Essence of Decision: Explaining the Cuban Missile Crisis. United States of
America: Harper Collins Publishers.
———. (2017) Destined for War: Can America and China Escape Thucydides’ Trap? Boston: Houghton
Mifflin Harcourt.
Alperovitz, Gar. (1965) Atomic Diplomacy: Hiroshima and Potsdam: The Use of the Atomic Bomb and
the American Confrontation with Soviet Power. New York: Vintage Books.
Alvarez, Luis, Walter Alvarez, Frank Asaro, and Helen V. Michel. (1980) Extraterrestrial Cause for the
Cretaceous-Tertiary Extinction. Science 208(4448): 1095–1108.
Aoki Inoue, Cristina Yumie, and Paula Franco Moreira. (2016) Many Worlds, Many Nature(s), One
Planet: Indigenous Knowledge in the Anthropocene. Revista Brasileira de Política Internacional
59(2): 1–19.
Aradau, Claudia, and Rens van Munster. (2007) Governing Terrorism Through Risk: Taking Precautions,
(un)Knowing the Future. European Journal of International Relations 13(1): 89–115.
Armstrong, Stuart, Nick Bostrom, and Carl Shulman. (2013) Racing to the Precipice: A Model of
Artificial Intelligence Development. Technical Report 2013-1. Future of Humanity Institute,
Oxford University: 1–8.
Arnold, Zachary. (2020) What Investment Trends Reveal about the Global AI Landscape. Brookings.
<https://www.brookings.edu/techstream/what-investment-trends-reveal-about-the-global-ai-
landscape/>.
Axelrod, Robert. (1984) The Evolution of Cooperation. New York: Basic Books.
Axelrod, Robert, and Robert O. Keohane. (1985) Achieving Cooperation under Anarchy: Strategies and
Institutions. World Politics 38(1): 226–254.
Azoulay, Audrey. (2021) Towards an Ethics of Artificial Intelligence. United Nations.
<https://www.un.org/en/chronicle/article/towards-ethics-artificial-intelligence>.
BAAI. (2019) Beijing AI Principles. BAAI. <https://www.baai.ac.cn/news/beijing-ai-principles-en.html>.
Baker, Sinéad, and Cheryl Teh. (2021) Xi Jinping Said Other Countries Will ‘Crack their Heads and Spill
Blood’ If They Come after China, a Stark Warning to Mark 100 Years of the Communist Party.
Insider, July 1. <https://www.businessinsider.com/xi-countries-opposing-china-crack-heads-spill-
blood-2021-7>.
Ball, Philip. (2020) The AI Delusion: Why Humans Trump Machines: Artificial Intelligence May Never
Match the Brain. Prospect Magazine, January 25.
<https://www.prospectmagazine.co.uk/magazine/the-ai-delusion-why-humans-trump-machines-
robots-artificial-intelligence-alpha-go-deepmind-marcus-davis-koch-mitchell-review>.
Balzacq, Thierry, ed. (2011) Securitization Theory: How Security Problems Emerge and Dissolve.
London: Routledge.
———. (2019) Securitization Theory: Past, Present, and Future. Polity 51(2): 331–348.
Balzacq, Thierry, Sarah Leonard, and Jan Ruzicka. (2015) Securitization Revisited: Theory and Cases.
International Relations 30(4): 494–531.
Balzacq, Thierry, Stefano Guzzini, Michael C. Williams, Ole Wæver, and Heikki Patomäki. (2014)
Forum: What Kind of Theory—If Any—Is Securitization? International Relations 29(1): 1–41.
Baratta, Joseph Preston. (1985) Was the Baruch Plan a Proposal of World Government? The International
History Review 7(4): 592–621.
Barnett, Michael, and Raymond Duvall. (2005) Power in International Politics. International
Organization 59(1): 39–75.
Baruch, Bernard. (1960) The Baruch Plan: Statement by the United States Representative (Baruch) to the
United Nations Atomic Energy Commission, 1946, June 14. In Documents on Disarmament, 1945-
1959: Volume I. Washington: U.S. Department of State.
Baum, Seth, Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson, Karin Kuhlemann,
Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Phil Torres,
Alexey Turchin and Roman V. Yampolskiy. (2019) Long-Term Trajectories of Human
Civilization. Foresight 21(1): 53–83.
Beard, S. J., Lauren Holt, Asaf Tzachor, Luke Kemp, Shahar Avin, Phil Torres, and Haydn Belfield.
(2021) Assessing Climate Change’s Contribution to Global Catastrophic Risk. Futures 127.
Beck, Ulrich. (2006) Living in the World Risk Society. Economy and Society 35(3): 329–345.
———. (2007) World at Risk. Cambridge: Polity.
Beckley, Michael. (2011/12) China’s Century? Why America’s Edge Will Endure. International Security
36(3): 41–78.
———. (2018) Unrivaled: Why America Will Remain the World’s Sole Superpower. Ithaca: Cornell
University Press.
Bengtsson, Louise, and Mark Rhinard. (2019) Securitisation Across Borders: The Case of ‘Health
Security’ Cooperation in the European Union. West European Politics 42(2): 346–368.
Bennett, Andrew, and Jeffrey T. Checkel, eds. (2014) Process Tracing: From Metaphor to Analytic Tool.
Cambridge: Cambridge University Press.
Bernstein, Barton J. (1974) The Quest for Security: American Foreign Policy and International Control of
Atomic Energy, 1942-1946. The Journal of American History 60(4): 1003–1044.
———. (1975) Roosevelt, Truman, and the Atomic Bomb, 1941-1945: A Reinterpretation. Political Science
Quarterly 90(1): 23–69.
Bernstein, Steven. (2020) The Absence of Great Power Responsibility in Global Environmental Politics.
European Journal of International Relations 26(1): 8–32.
Bernstein, Steven, Richard Ned Lebow, Janice Gross Stein, and Steven Weber. (2000) God Gave Physics
the Easy Problems: Adapting Social Science to an Unpredictable World. European Journal of
International Relations 6(1): 43–76.
Berling, Trine Villumsen. (2011) Science and Securitization: Objectivation, the Authority of the Speaker
and Mobilization of Scientific Facts. Security Dialogue 42(4): 385–397.
Bettiza, Gregorio, and David Lewis. (2020) Authoritarian Powers and Norm Contestation in the Liberal
International Order: Theorizing the Power Politics of Ideas and Identity. Journal of Global Security
Studies 5(4): 559–577.
Bijian, Zheng. (2005) China’s ‘Peaceful Rise’ to Great-Power Status. Foreign Affairs 84(5): 18–24.
Blair, Bruce. (1985) Strategic Command and Control: Redefining the Nuclear Threat. Brookings
Institution Press.
Borak, Masha. (2021) US-China Tech War: Taiwan’s TSMC Joins American Chip Coalition in Another
Blow to China’s Self-Sufficiency Drive. South China Morning Post, May 12.
<https://www.scmp.com/tech/tech-war/article/3133235/us-china-tech-war-taiwans-tsmc-joins-
american-chip-coalition-another>.
Bostrom, Nick. (2001) Anthropic Bias: Observation Selection Effects in Science and Philosophy. New
York: Routledge.
———. (2002) Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards. Journal of
Evolution and Technology 9: 1–36.
———. (2005) Transhumanist Values. <https://www.nickbostrom.com/ethics/values.pdf>.
———. (2013) Existential Risk Prevention as Global Priority. Global Policy 4(1): 15–31.
———. (2014) Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.
———. (2019) The Vulnerable World Hypothesis. Global Policy 10(4): 455–476.
Bostrom, Nick and Milan Cirkovic, eds. (2008) Global Catastrophic Risks. Oxford: Oxford University
Press.
Brodie, Bernard. (1959) The Anatomy of Deterrence. World Politics 11(2): 173–191.
Brodie, Bernard. (1959) Strategy in the Missile Age. Santa Monica: The RAND Corporation.
Brooks, Stephen G., and William C. Wohlforth. (2008) World Out of Balance: International Relations
and the Challenge of U.S. Primacy. New Jersey: Princeton University Press.
———. (2015/16) The Rise and Fall of the Great Powers in the Twenty-First Century: China’s Rise and the
Fate of America’s Global Position. International Security 40(3): 7–53.
Buckley, Chris, and Paul Mozur. (2019) How China Uses High-Tech Surveillance to Subdue Minorities.
The New York Times, May 22. <https://www.nytimes.com/2019/05/22/world/asia/china-
surveillance-xinjiang.html>.
Bueger, Christian. (2013) Communities of Security Practice at Work? The Emerging African Maritime
Security Regime. African Security 6: 297–316.
Bull, Hedley. (1977) The Anarchical Society: A Study of Order in World Politics. London: Macmillan
Press.
Bulletin of the Atomic Scientists. (1945) Pearl Harbor Anniversary and the Moscow Conference. The
Bulletin of the Atomic Scientists 1(1): 1.
Buzan, Barry. (2006) Will the 'Global War on Terrorism' Be the New Cold War? International Affairs
82(6): 1101–1118.
Buzan, Barry, Charles Jones, and Richard Little. (1993) The Logic of Anarchy: Neorealism to Structural
Realism. New York: Columbia University Press.
Buzan, Barry, and Mathias Albert. (2010) Differentiation: A Sociological Approach to International
Relations Theory. European Journal of International Relations 16(3): 315–337.
Buzan, Barry, and Ole Wæver. (2003) Regions and Powers. Cambridge: Cambridge University Press.
———. (2009) Macrosecuritisation and Security Constellations: Reconsidering Scale in Securitisation
Theory. Review of International Studies 35: 253–276.
Buzan, Barry, Ole Wæver, and Jaap de Wilde. (1998) Security: A New Framework for Analysis.
Boulder, Colorado: Lynne Rienner Publishers, Inc.
Brundage, Miles, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, and Allan Dafoe
(2018) The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.
London: Oxford University.
Cardoso dos Santos, Marcos. (2018) Identity and Discourse in Securitization Theory. Contexto
Internacional 40(2): 229–248.
Carlson, Robert. (2003) The Pace and Proliferation of Biological Technologies. Biosecurity and
Bioterrorism 1(3): 1–12.
Carneiro, Robert L. (1970) A Theory of the Origin of the State. Science, New Series 169(3947): 733–738.
Carr, E. H. (1939) The Twenty Years’ Crisis, 1919–1939: An Introduction to the Study of International
Relations. London: The MacMillan Press Ltd.
Carr, E. H. (1961) What Is History? London: Penguin Books.
Castro, Daniel, and Michael McLaughlin. (2021) Who Is Winning the AI Race: China, the EU, or the
United States? Center for Data Innovation.
Cellan-Jones, Rory. (2014) Stephen Hawking Warns Artificial Intelligence Could End Mankind. BBC,
December 2. <https://www.bbc.com/news/technology-30290540>.
Childs, Craig. (2013) Apocalyptic Planet: A Field Guide to the Future of the Earth. Vintage.
China Institute for Science and Technology Policy (CISTP). (2018) China AI Development Report, 2018.
China: Tsinghua University.
Churchill, Winston S. (1925) Shall We Commit Suicide? Historical Outlook 16(2): 56–58.
Cirincione, Joseph. (2007) Bomb Scare: The History and Future of Nuclear Weapons. New York:
Columbia University Press.
Clarke, Sam, Alexis Carlier, and Jonas Schuett. (2021) Survey on AI Existential Risk Scenarios. Less
Wrong, June 8. <https://www.lesswrong.com/posts/WiXePTj7KeEycbiwK/survey-on-ai-
existential-risk-scenarios>.
Clarke, Steve. (2005) Future Technologies, Dystopic Futures and the Precautionary Principle. Ethics and
Information Technology 7: 121–126.
Climate Emergency Declaration and Mobilisation in Action (Cedamia). (2021) Climate Emergency
Declarations. <https://www.cedamia.org/global/>.
Cochran, Thomas B., Robert S. Norris, and Oleg A. Bukharin. (1995) Making the Russian Bomb: From
Stalin to Yeltsin. Boulder: Natural Resources Defense Council.
Congressional Research Service (CRS). (2020) CRS Report: Artificial Intelligence and National Security.
Washington: Government Printing Office.
Cooper, Helene. (2021) China Could Have 1,000 Nuclear Warheads by 2030, Pentagon Says. The New
York Times, November 3. <https://www.nytimes.com/2021/11/03/us/politics/china-military-
nuclear.html?smid=tw-nytimes&smtyp=cur>.
Corry, Olaf. (2012) Securitisation and ‘Riskification’: Second-Order Security and the Politics of Climate
Change. Millennium: Journal of International Studies 40(2): 235–258.
Côté, Adam. (2016) Agents Without Agency: Assessing the Role of the Audience in Securitization Theory.
Security Dialogue 47(6): 541–558.
Craig, Campbell. (2003) Glimmer of a New Leviathan: Total War in the Realism of Niebuhr,
Morgenthau, and Waltz. New York: Columbia University Press.
Critch, Andrew, and David Krueger. (2020) AI Research Considerations for Human Existential Safety
(ARCHES). <https://arxiv.org/pdf/2006.04948.pdf>.
Cross, Glenn, and Lynn Klotz. (2020) Twenty-First Century Perspectives on the Biological Weapon
Convention: Continued Relevance or Toothless Paper Tiger. Bulletin of the Atomic Scientists 76(4):
185–191.
Cui, Shunji, and Barry Buzan. (2016) Great Power Management in International Society. The Chinese
Journal of International Politics 9(2): 181–210.
Cureton, Demond. (2020) Trump Gov't Signs US-UK Strategic Pact on AI To Counter 'Repressive' China
Amid Beijing's Tech Rise. Sputnik, September 25. <https://sputniknews.com/20200925/trump-
govt-signs-us-uk-strategic-pact-on-ai-to-counter-repressive-china-amid-beijings-tech-rise-
1080571969.html>.
Dafoe, Allan. (2017/18) AI Governance: A Research Agenda. Oxford: Future of Humanity Institute.
Dalaqua, Renata H. (2013) Securing our Survival (SOS): Non-State Actors and the Campaign for a
Nuclear Weapons Convention through the Prism of Securitisation Theory. Brazilian Political
Science Review 7(3): 90–117.
Daxue Consulting. (2020) The AI Ecosystem in China, 2020. Beijing: Daxue Consulting.
De Troulliod, Julien. (2020). US-China Rivalry: When Great Power Competition Endangers Global
Science. Bulletin of Atomic Scientists, October 16. <https://thebulletin.org/2020/10/us-china-
rivalry-when-great-power-competition-endangers-global-science/>.
Deudney, Daniel H. (1991) Environment and Security: Muddled Thinking. The Bulletin of Atomic
Scientists 47(3): 22.
———. (2000) Geopolitics as Theory: Historical Security Materialism. European Journal of International
Relations 6(1): 77–107.
———. (2007) Bounding Power: Republican Security Theory from the Polis to the Global Village. New
Jersey: Princeton University Press.
———. (2020) Dark Skies: Space, Expansionism, Planetary Geopolitics, and the Ends of Humanity.
Oxford: Oxford University Press.
Deudney, Daniel H., and G. John Ikenberry. (2021) The Intellectual Foundations of the Biden Revolution.
Foreign Policy Magazine, July 2. <https://foreignpolicy.com/2021/07/02/biden-revolution-
roosevelt-tradition-us-foreign-policy-school-international-relations-interdependence/>.
Deutsch, Karl W., et al. (1957) Political Community and the North Atlantic Area: International
Organizations in the Light of Historical Experience. New Jersey: Princeton University Press.
Diamond, Jared. (2005) Collapse: How Societies Choose to Fail or Succeed. New York: Penguin Books.
DiCicco, Jonathan M., and Jack S. Levy. (1999) Power Shifts and Problem Shifts: The Evolution of the
Power Transition Research Program. Journal of Conflict Resolution 43(6): 675–704.
Dijkstra, Hylke, Petar Petrov, and Esther Versluis. (2018) Governing Risks in International Security.
Contemporary Security Policy 39(4): 537–543.
Dillon, Michael. (2007) Governing Terror: The State of Emergency of Biopolitical Emergence.
International Political Sociology 1(1): 7–28.
Ding, Jeffrey. (2018) Deciphering China’s AI Dream: The Context, Components, Capabilities, and
Consequences of China’s Strategy to Lead the World in AI. Oxford: Future of Humanity Institute,
Oxford University.
Domonoske, Camila. (2017) Elon Musk Warns Governors: Artificial Intelligence Poses Existential Risk.
NPR, July 17. <https://www.npr.org/sections/thetwo-way/2017/07/17/537686649/elon-musk-
warns-governors-artificial-intelligence-poses-existential-risk>.
Dosseto, Anthony, Simon P. Turner, and James A. Van-Orman, eds. (2011) Timescales of Magmatic
Processes: From Core to Atmosphere. Wiley-Blackwell.
Doudna, Jennifer A., and Samuel H. Sternberg. (2017) A Crack in Creation: Gene Editing and the
Unthinkable Power to Control Evolution. United States of America: Mariner Books.
Doyle, Michael W. (1983) Kant, Liberal Legacies, and Foreign Affairs. Philosophy and Public Affairs
12(3): 205–235.
———. (1986) Liberalism and World Politics. The American Political Science Review 80(4): 1151–1169.
Drezner, Daniel. (2021) Power and International Relations: A Temporal View. European Journal of
International Relations 27(1): 29–52.
Drexler, K. Eric. (1986) Engines of Creation: The Coming Era of Nanotechnology. United States:
Doubleday.
———. (2006) Engines of Creation 2.0: The Coming Era of Nanotechnology. K. Eric Drexler.
Drolet, Jean-Francois, and Michael C. Williams. (2019) The View from MARS: US Paleoconservatism
and Ideological Challenges to the Liberal World Order. International Journal 74(1): 15–31.
Dunne, Tim, Lene Hansen, and Colin Wight. (2013) The End of International Relations Theory?
European Journal of International Relations 19(3): 405–425.
Dutton, Tim. (2018) Building an AI World: Report on National and Regional AI Strategies. Canada:
CIFAR.
Ehrlich, Paul R., and Anne Ehrlich. (1968) The Population Bomb: Population Control or Race to
Oblivion? United States: Sierra Club and Ballantine Books.
Ehrlich, Paul R., Carl Sagan, Donald Kennedy, and Walter Orr Roberts. (1984) The Cold and the Dark:
The World after Nuclear War. New York: W. W. Norton & Company.
Einstein, Albert. (1947). Letter: Emergency Committee of Atomic Scientists Incorporated.
<https://sgp.fas.org/eprint/einstein.html>.
———. (2021) Einstein’s Letter to President Roosevelt 1939. Atomic Archive.
<https://www.atomicarchive.com/resources/documents/beginnings/einstein.html>.
Elman, Colin, and Miriam Fendius Elman. (2002) How Not to Be Lakatos Intolerant: Appraising
Progress in IR Research. International Studies Quarterly 46(2): 231–262.
Enserink, Martin. (2011) Scientists Brace for Media Storm Around Controversial Flu Studies. Science
Magazine. <https://www.sciencemag.org/news/2011/11/scientists-brace-media-storm-around-
controversial-flu-studies>.
Erskine, Hazel Gaudet. (1963) The Polls: Atomic Weapons and Nuclear Energy. Public Opinion
Quarterly 27(2): 155–190.
European Commission. (2018a) Cooperation on Artificial Intelligence. Brussels: European Commission.
<https://digital-strategy.ec.europa.eu/en/news/eu-member-states-sign-cooperate-artificial-
intelligence>.
———. (2018b) Artificial Intelligence for Europe. Brussels: European Commission.
<https://ec.europa.eu/transparency/documents-register/detail?ref=COM(2018)237&lang=en>.
———. (2018c) Coordinated Plan on Artificial Intelligence. Brussels: European Commission.
<https://www.kowi.de/Portaldata/2/Resources/fp/2018-COM-CP-Artificial-Intelligence-
Annex.pdf>.
———. (2018d) Member States and Commission to Work Together to Boost Artificial Intelligence ‘Made
in Europe’. <https://digital-strategy.ec.europa.eu/en/news/member-states-and-commission-work-
together-boost-artificial-intelligence-made-europe>.
———. (2020) White Paper: On Artificial Intelligence—A European Approach to Excellence and Trust.
Brussels: European Commission. <https://ec.europa.eu/info/sites/default/files/commission-white-
paper-artificial-intelligence-feb2020_en.pdf>.
European Commission, High-Level Expert Group on Artificial Intelligence. (2019) Ethics Guidelines for
Trustworthy AI. Brussels: European Commission. <https://digital-
strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai>.
Everitt, Tom, Gary Lea, and Marcus Hutter. (2018) AGI Safety Literature Review. International Joint
Conference on Artificial Intelligence (IJCAI). arXiv: 1805.01109.
Fearon, James D. (1995) Rationalist Explanations for War. International Organization 49(3): 379–414.
Feng, Coco. (2021) Chinese President Xi Jinping Seeks to Rally Country’s Scientists for
‘Unprecedented’ Contest. South China Morning Post, May 29.
<https://www.scmp.com/news/china/politics/article/3135328/chinese-president-xi-jinping-seeks-
rally-countrys-scientists>.
Feng, Coco, and Che Pan. (2021) US-China Tech War: Supercomputer Sanctions on China Begin to
Bite as Taiwan’s TSMC Said to Suspend Chip Orders. South China Morning Post, April 13.
<https://www.scmp.com/tech/tech-war/article/3129362/us-china-tech-war-supercomputer-
sanctions-china-begin-bite-taiwans>.
Feng, Coco, and Xinmei Shen. (2021) China Tech Crackdown: In 2021, Technology Giants Came under
Intense Scrutiny after Sleeping Watchdogs Awakened. South China Morning Post, December 22.
<https://www.scmp.com/tech/big-tech/article/3160529/china-tech-crackdown-2021-technology-
giants-came-under-intense>.
Finnemore, Martha, and Kathryn Sikkink. (1998) International Norm Dynamics and Political Change.
International Organization 52(4): 887–917.
Fisher, Richard. (2019) The Perils of Short-Termism: Civilization’s Greatest Threat. BBC Future,
January 9. <https://www.bbc.com/future/article/20190109-the-perils-of-short-termism-
civilisations-greatest-threat>.
Floyd, Rita. (2011) Can Securitization Theory Be Used in Normative Analysis? Towards a Just
Securitization Theory. Security Dialogue 42(4/5): 427–439.
Flyvbjerg, Bent. (2006) Five Misunderstandings about Case Studies. Qualitative Inquiry 12(2): 219–
245.
Franck, James, et al. (1945) Report of the Committee on Political and Social Problems Manhattan Project
"Metallurgical Laboratory” (“The Franck Report”). Chicago: University of Chicago.
Friedberg, Aaron L. (2014) The Sources of Chinese Conduct: Explaining Beijing’s Assertiveness. The
Washington Quarterly 37(4): 133–150.
Frischknecht, Friedrich. (2003) The History of Biological Warfare. EMBO Reports 4: 47–52.
Furmanski, Martin. (2014) Threatened Pandemics and Laboratory Escapes: Self-Fulfilling Prophecies.
Bulletin of Atomic Scientists. <https://thebulletin.org/2014/03/threatened-pandemics-and-
laboratory-escapes-self-fulfilling-prophecies/>.
Future of Life Institute. (2015) An Open Letter: Research Priorities for Robust and Beneficial Artificial
Intelligence. <https://futureoflife.org/ai-open-letter/>.
Gaddis, John Lewis. (1972) The United States and the Origins of the Cold War, 1941-1947. New York:
Columbia University Press.
———. (1982) Strategies of Containment: A Critical Appraisal of American National Security Policy during
the Cold War. Oxford: Oxford University Press.
———. (2005) The Cold War: A New History. New York: Penguin Books.
Garrick, Jon, ed. (2017) First International Colloquium on Catastrophic and Existential Risk.
California: UCLA.
Gat, Azar. (2006) War in Human Civilization. Oxford: Oxford University Press.
———. (2009) So Why Do People Fight? Evolutionary Theory and the Causes of War. European Journal of
International Relations 15(4): 571–599.
Geddes, Barbara. (1990) How the Cases You Choose Affect the Answers You Get: Selection Bias in
Comparative Politics. Political Analysis 2: 131–150.
Gerber, Larry G. (1982) The Baruch Plan and the Origins of the Cold War. Diplomatic History 6(4): 69–
96.
Gilli, Andrea, and Mauro Gilli. (2018/19) Why China Has Not Caught Up Yet: Military-Technological
Superiority and the Limits of Imitation, Reverse Engineering, and Cyber Espionage.
International Security 43(3): 141–189.
Gilpin, Robert. (1981) War & Change in World Politics. New Jersey: Princeton University Press.
———. (1987) The Political Economy of International Relations. New Jersey: Princeton University Press.
———. (1988) The Theory of Hegemonic War. The Journal of Interdisciplinary History 18(4): 591–613.
Glaser, Charles L. (1994/95) Realists as Optimists: Cooperation as Self-Help. International Security
19(3): 50–90.
———. (2010) Rational Theory of International Politics: The Logic of Competition and Cooperation. New
Jersey: Princeton University Press.
Goddard, Stacie E., and Daniel H. Nexon. (2016) The Dynamics of Global Power Politics: A
Framework for Analysis. Journal of Global Security Studies 1(1): 4–18.
Goldblat, Jozef. (1997) The Biological Weapons Convention: An Overview. International Review of the
Red Cross 318: 251–265.
Goldstein, Avery. (2020) China’s Grand Strategy under Xi Jinping: Reassurance, Reform, and
Resistance. International Security 45(1): 164–201.
Gonsalves, Tad. (2018) The Summers and Winters of Artificial Intelligence. In Encyclopedia of
Information Science and Technology. <https://www.igi-global.com/chapter/the-summers-and-
winters-of-artificial-intelligence/183737?camid=4v1>.
Good, I. J. (1966) Speculations Concerning the First Ultraintelligent Machine. Advances in Computers 6:
31–88.
Gosling, F. G. (1999) The Manhattan Project: Making the Atomic Bomb. Washington: History Division,
Department of Energy.
Gotz, Elias. (2017) Putin, the State, and War: The Causes of Russia’s Near Abroad Assertion Revisited.
International Studies Review 19: 228–253.
Grace, Katja, John Salvatier, Allan Dafoe, Baobao Zhang, and Owain Evans. (2018) When Will AI Exceed
Human Performance? Evidence from AI Experts. Journal of Artificial Intelligence Research.
<https://arxiv.org/abs/1705.08807>.
Greer, Tanner. (2019) Xi Jinping in Translation: China’s Guiding Ideology. Palladium, May 31.
<https://palladiummag.com/2019/05/31/xi-jinping-in-translation-chinas-guiding-ideology/>.
Grieco, Joseph M. (1988) Anarchy and the Limits of Cooperation: A Realist Critique of the Newest
Liberal Institutionalism. International Organization 42(3): 485–507.
Group of 7 (G7). (2017) Multistakeholder Exchange on Human Centric AI for Our Societies.
<http://www.g8.utoronto.ca/ict/2017-ict-annex2-AI.html>.
———. (2018a) Charlevoix Common Vision for the Future of Artificial Intelligence.
<https://www.international.gc.ca/world-monde/assets/pdfs/international_relations-
relations_internationales/g7/2018-06-09-artificial-intelligence-artificielle-en.pdf>.
———. (2018b) G7 Innovation Ministers’ Statement on Artificial Intelligence.
<http://www.g8.utoronto.ca/employment/2018-labour-annex-b-en.html>.
Group of 20 (G20). (2019) G20 Ministerial Statement on Trade and Digital Economy.
<http://www.g20.utoronto.ca/2019/2019-g20-trade.html>.
Grove, Jairus Victor. (2019) Savage Ecology: War and Geopolitics at the End of the World. United
States of America: Duke University Press.
Gunitsky, Seva. (2019) Rival Visions of Parsimony. International Studies Quarterly 63(3): 707–716.
Guterres, António. (2020) Remarks to the General Assembly on the Secretary General’s Priorities for
2020. United Nations. <https://www.un.org/sg/en/content/sg/speeches/2020-01-22/remarks-
general-assembly-priorities-for-2020>.
Guzzini, Stefano. (2011) Securitization as a Causal Mechanism. Security Dialogue 42(4–5): 329–341.
Haas, Mark. (2005) The Ideological Origins of Great Power Politics, 1789-1989. Ithaca: Cornell
University Press.
Harrington, Cameron. (2016) The Ends of the World: International Relations and the Anthropocene.
Millennium: Journal of International Studies 44(3): 478–498.
Hawking, Stephen. (2018) Brief Answers to the Big Questions. New York: Bantam Books.
Helfand, Ira. (2003) Nuclear Famine: Two Billion People At Risk? Physicians for Social Responsibility.
<https://www.psr.org/wp-content/uploads/2018/04/two-billion-at-risk.pdf>.
Henderson, Donald A. (2011) The Eradication of Smallpox—An Overview of the Past, Present, and
Future. Vaccine 29(4): D7—D9.
Herbst, Jeffrey. (2000) States and Power in Africa. New Jersey: Princeton University Press.
Herfst, S., et al. (2012) Airborne Transmission of Influenza A/H5N1 Virus Between Ferrets. Science
336(6088): 1534–1541.
Herken, Gregg. (1980) ‘A Most Deadly Illusion’: The Atomic Secret and American Nuclear Weapons
Policy, 1945-1950. Pacific Historical Review 49: 51–76.
Herz, John H. (1950) Idealist Internationalism and the Security Dilemma. World Politics 2(2): 157–180.
———. (1957) Rise and Demise of the Territorial State. World Politics 9 (4): 473–93.
———. (1959) International Politics in the Atomic Age. New York: Columbia University Press.
Hewlett, Richard G., and Oscar E. Anderson, Jr. (1962) The New World, 1939-1946: A History of the
United States Atomic Energy Commission, Volume I. Pennsylvania: The Pennsylvania State
University.
Hoffman, David E. (2009) The Dead Hand: The Untold Story of the Cold War Arms Race and Its
Dangerous Legacy. New York: Doubleday.
Holley, Peter. (2015) Bill Gates on Dangers of Artificial Intelligence: ‘I Don’t Understand Why Some
People Are Not Concerned’. The Washington Post. <https://www.washingtonpost.com/news/the-
switch/wp/2015/01/28/bill-gates-on-dangers-of-artificial-intelligence-dont-understand-why-
some-people-are-not-concerned/>.
Holloway, David. (1979/80). Research Note: Soviet Thermonuclear Development. International Security
4(3): 192–197.
———. (1981) Entering the Nuclear Arms Race: The Soviet Decision to Build the Atomic Bomb, 1939-45.
Social Studies of Science 11(2): 159–197.
———. (1994) Stalin and the Bomb: The Soviet Union and Atomic Energy, 1939-1956. New Haven: Yale
University Press.
———. (2020) The Soviet Union and the Baruch Plan. Wilson Center. <https://www.wilsoncenter.org/blog-
post/soviet-union-and-baruch-plan>.
Hom, Andrew R. (2018) Silent Order: The Temporal Turn in Critical International Relations.
Millennium: Journal of International Studies 46(3): 303–330.
Homer-Dixon, Thomas, Brian Walker, Reinette Biggs, Anne-Sophie Crépin, Carl Folke, Eric F. Lambin,
and Johan Rockström. (2015) Synchronous Failure: The Emerging Causal Architecture of Global
Crisis. Ecology and Society 20(3): 6–21.
Horowitz, Michael. (2018) Artificial Intelligence, International Competition, and the Balance of Power.
Texas National Security Review 1(3): 36–57.
Howell, Alison, and Melanie Richter-Montpetit. (2020) Is Securitization Theory Racist? Civilizationism,
Methodological Whiteness, and Antiblack Thought in the Copenhagen School. Security Dialogue
51(1): 3–22.
Huet, Natalie. (2022) What Is Russia’s Poseidon Nuclear Drone and Could It Wipe Out the UK in a
Radioactive Tsunami? Euronews.next, May 5. <https://www.euronews.com/next/2022/05/04/
what-is-russia-s-poseidon-nuclear-drone-and-could-it-wipe-out-the-uk-in-a-radioactive-tsun>.
Hutchings, Kimberly. (2008) Time and World Politics: Thinking the Present. Manchester: Manchester
University Press.
Huysmans, Jef. (1998) Security! What Do You Mean? From Concept to Thick Signifier. European
Journal of International Relations 4(2): 226–255.
———. (2006) The Politics of Insecurity: Fear, Migration, and Asylum in the EU. London: Routledge.
Hwang, Tim. (2020) Shaping the Terrain of AI Competition. Washington, D.C.: Centre for Security and
Emerging Technology (CSET).
Ikenberry, G. John. (2000) After Victory: Institutions, Strategic Restraint, and the Rebuilding of Order
after Major Wars. New Jersey: Princeton University Press.
Ikenberry, G. John, and Daniel H. Nexon. (2019) Hegemony Studies 3.0: The Dynamics of Hegemonic
Orders. Security Studies 28(3): 395–421.
Jackson, Patrick Thaddeus. (2011) The Conduct of Inquiry in International Relations. London:
Routledge.
Jamieson, Dale. (2014) Reason in a Dark Time: Why the Struggle Against Climate Change Failed—and
What It Means for Our Future. Oxford: Oxford University Press.
Jervis, Robert. (1976) Perception and Misperception in International Politics. New Jersey: Princeton
University Press.
———. (1978) Cooperation Under the Security Dilemma. World Politics 30(2): 167–214.
———. (1989) The Meaning of the Nuclear Revolution: Statecraft and the Prospect of Armageddon. Ithaca:
Cornell University Press.
———. (1997) System Effects: Complexity in Political and Social Life. New Jersey: Princeton University
Press.
———. (1999) Realism, Neoliberalism, and Cooperation: Understanding the Debate. International
Security 24(1): 42–63.
———. (2002) Theories of War in an Era of Leading-Power Peace. The American Political Science Review
96(1): 1–14.
———. (2009) Unipolarity: A Structural Perspective. World Politics 61(1): 188–213.
Jing, Meng, and Sarah Dai. (2017) China Recruits Baidu, Alibaba and Tencent to AI ‘National Team’.
South China Morning Post, November 21. <https://www.scmp.com/tech/china-
tech/article/2120913/china-recruits-baidu-alibaba-and-tencent-ai-national-team>.
Jinping, Xi. (2017) Secure a Decisive Victory in Building a Moderately Prosperous Society in All
Respects and Strive for the Great Success of Socialism with Chinese Characteristics for a New
Era. China Daily, October 18. <https://www.chinadaily.com.cn/china/
19thcpcnationalcongress/2017-11/04/content_34115212.htm>.
Joint Artificial Intelligence Center (JAIC). (2020) JAIC Facilitates First-Ever International AI Dialogue
for Defense. JAIC, September 16. <https://www.ai.mil/news_09_16_20-jaic_facilitates_first-
ever_international_ai_dialogue_for_defense.html>.
Jore, S. H. (2019) The Conceptual and Scientific Demarcation of Security in Contrast to Safety.
European Journal for Security Research 4: 157–174.
Joy, Bill. (2000) Why the Future Doesn’t Need Us. Wired. <https://www.wired.com/2000/04/joy-2/>.
Kadercan, Burak. (2013) Making Sense of Survival: Refining the Treatment of State Preferences. Review
of International Studies 39: 1015–1037.
Kahn, Herman. (1960) On Thermonuclear War. RAND Corporation.
———. (1962) Thinking about the Unthinkable. Avon Library Books.
Kaliq, Riyaz. (2021) China Links Climate Cooperation with US to Overall Bilateral Ties. AA,
February 9. <https://www.aa.com.tr/en/asia-pacific/china-links-climate-cooperation-with-us-to-
overall-bilateral-ties/2353525>.
Kania, Elsa B. (2017) Battlefield Singularity: Artificial Intelligence, Military Revolution, and China’s
Future Military Power. Washington: Center for a New American Security.
Katzenstein, Peter J., and Lucia A. Seybert, eds. (2017) Protean Power: Exploring the Uncertain and
Unexpected in World Politics. Cambridge: Cambridge University Press.
Kearn, David W., Jr. (2010) The Baruch Plan and the Quest for Atomic Disarmament. Diplomacy &
Statecraft 21: 41–67.
Keeley, Lawrence. (1996) War Before Civilization: The Myth of the Peaceful Savage. Oxford: Oxford
University Press.
Kennedy, Andrew B., and Darren J. Lim. (2018) The Innovation Imperative: Technology and US-China
Rivalry in the Twenty-First Century. International Affairs 94(3): 553–572.
Kennedy, Paul. (1987) The Rise and Fall of the Great Powers: Economic Change and Military Conflict
from 1500 to 2000. New York: Vintage Books.
Keohane, Robert O. (1984) After Hegemony: Cooperation and Discord in the World Political Economy.
New Jersey: Princeton University Press.
Keohane, Robert O., ed. (1985) Neorealism and Its Critics. New York: Columbia University Press.
Keohane, Robert O., and Joseph S. Nye Jr. (1977) Power and Interdependence. Boston: Little, Brown.
Kharpal, Arjun. (2021) China Spending on Research and Development to Rise 7% Per Year in Push for
Major Tech Breakthroughs. CNBC, March 4. <https://www.cnbc.com/2021/03/05/china-to-boost-
research-and-development-spend-in-push-for-tech-breakthroughs.html>.
Kharpal, Arjun. (2017) Stephen Hawking Says AI Could Be the ‘Worst Event in the History of Our
Civilization’. CNBC. <https://www.cnbc.com/2017/11/06/stephen-hawking-ai-could-be-worst-
event-in-civilization.html>.
Kim, Hyun-Kyung, Elizabeth Philipp, and Hattie Chung. (2017) North Korea’s Biological Weapons Program:
The Known and Unknown. Cambridge, Massachusetts: Harvard Kennedy School, Belfer Center
for Science and International Affairs.
King, Gary, Robert O. Keohane, and Sidney Verba. (1994) Designing Social Inquiry: Scientific
Inference in Qualitative Research. New Jersey: Princeton University Press.
Kirk, Jessica. (2020) From Threat to Risk? Exceptionalism and Logics of Health Security. International
Studies Quarterly 0: 1–11.
Knight, Frank H. (1921) Risk, Uncertainty, and Profit. Boston: Houghton Mifflin Company.
Knight, Will. (2019) Why Does Beijing Suddenly Care about AI Ethics? MIT Technology Review.
<https://www.technologyreview.com/2019/05/31/135129/why-does-china-suddenly-care-about-
ai-ethics-and-privacy/>.
Kobie, Nicole. (2019) The Complicated Truth about China’s Social Credit System. Wired, June 7.
<https://www.wired.co.uk/article/china-social-credit-system-explained>.
Koblentz, Gregory D. (2003/04) Pathogens as Weapons: The International Security Implications of
Biological Warfare. International Security 28(3): 84–122.
———. (2010) Biosecurity Reconsidered: Calibrating Biological Threats and Response. International
Security 34(4): 96–132.
———. (2020) A Biotech Firm Made a Smallpox-Like Virus on Purpose. Nobody Seems to Care. Bulletin
of Atomic Scientists, February 21. <https://thebulletin.org/2020/02/a-biotech-firm-made-a-
smallpox-like-virus-on-purpose-nobody-seems-to-care/>.
Kolbert, Elizabeth. (2014) The Sixth Extinction: An Unnatural History. New York: Picador.
Krauthammer, Charles. (1990/91) The Unipolar Moment Author. Foreign Affairs 70(1): 23–33.
Krass, Allan S. (1985) Verification: How Much Is Enough? Stockholm: Stockholm International Peace
Research Institute.
Kreienkamp, Julia, and Tom Pegram. (2021) Governing Complexity: Design Principles for the
Governance of Complex Catastrophic Risks. International Studies Review 23(3): 779–806.
Kroenig, Matthew. (2020) The Return of Great Power Rivalry: Democracy versus Autocracy from the
Ancient World to the U.S. and China. Oxford: Oxford University Press.
Kuhlemann, Karin. (2018) Complexity, Creeping Normalcy and Conceit: Sexy and Unsexy Catastrophic
Risks. Foresight 21(1): 35–52.
Kupferschmidt, Kai. (2017) How Canadian Researchers Reconstituted an Extinct Poxvirus for $100,000
Using Mail-Order DNA. Science. <https://www.sciencemag.org/news/2017/07/how-canadian-
researchers-reconstituted-extinct-poxvirus-100000-using-mail-order-dna>.
Kurzweil, Ray. (1999) The Age of Spiritual Machines: When Computers Exceed Human Intelligence. New
York: Penguin Books.
———. (2005) The Singularity is Near: When Humans Transcend Biology. Viking.
Kydd, Andrew H., and Barbara F. Walter. (2006) The Strategies of Terrorism. International Security
31(1): 49–80.
Lake, David A. (1996) Anarchy, Hierarchy, and the Variety of International Relations. International
Organization 50(1): 1–33.
———. (2007) Escape from the State of Nature: Authority and Hierarchy in World Politics. International
Security 32(1): 47–79.
———. (2010) Two Cheers for Bargaining Theory: Assessing Rationalist Explanations of the Iraq War.
International Security 35(3): 7–52.
———. (2013) Theory is Dead, Long Live Theory: The End of the Great Debates and the Rise of
Eclecticism in International Relations. European Journal of International Relations 19(3):
567–587.
Larson, Deborah Welch, and Alexei Shevchenko. (2010) Status Seekers: Chinese and Russian Responses
to U.S. Primacy. International Security 34(4): 63–95.
Lawrence, Michael, and Thomas Homer-Dixon. (2021) Mechanisms of Societal Collapse: The Inter
Workings of Catastrophe. Conference Paper: International Studies Association Annual
Convention.
Layne, Christopher. (1993) The Unipolar Illusion: Why New Great Powers Will Rise. International
Security 17(4): 5–51.
———. (2006) The Unipolar Illusion Revisited: The Coming End of the United States’ Unipolar Moment.
International Security 32(2): 7–41.
Lee, Kai-Fu. (2018) AI Super-Powers: China, Silicon Valley, and the New World Order. Houghton Mifflin
Harcourt.
Legro, Jeffrey W., and Andrew Moravcsik. (1999) Is Anybody Still a Realist? International Security
24(2): 5–55.
Lehman, Joel, et al. (2020) The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from
the Evolutionary Computation and Artificial Life Research Communities. Artificial Life 26(2):
274–306.
Lenin, Vladimir I. (2018) The State and Revolution: The Marxist Theory of the State and the Task of the
Proletarian Revolution. Zodiac and Brian Baggins.
Leslie, John. (1996) The End of the World: The Science and Ethics of Human Extinction. London:
Routledge.
Levin, Kelly, Benjamin Cashore, Steven Bernstein, and Graeme Auld. (2012) Overcoming the Tragedy
of Super Wicked Problems: Constraining Our Future Selves to Ameliorate Global Climate
Change. Policy Science 45: 123–152.
Levy, Jack. (1985) Theories of General War. World Politics 37(3): 344–374.
———. (2008) Deterrence and Coercive Diplomacy: The Contributions of Alexander George. Political
Psychology 29(4): 537–552.
———. (2008) Case Studies: Types, Designs, and Logics of Inference. Conflict Management and Peace
Science 25: 1–18.
Lieberman, Joseph I. (1970) The Scorpion and the Tarantula: The Struggle to Control Atomic Weapons,
1945-1949. Boston: Houghton Mifflin Company.
Liu, Hin-Yan, Kristian Cedervall Lauta, and Matthijs Michiel Maas. (2018) Governing Boring Apocalypses: A
New Typology of Existential Vulnerabilities and Exposures for Existential Risk Research.
Futures 102: 6–19.
McDonald, Matt. (2008) Securitization and the Construction of Security. European Journal of
International Relations 14(4): 563–587.
Mahbubani, Kishore. (2008) The New Asian Hemisphere: The Irresistible Shift of Global Power to the
East. New York: PublicAffairs.
Mann, Charles C. (2012) 1493: Uncovering the New World Columbus Created. United States: Vintage
Books.
Matheny, Jason G. (2007) Reducing the Risk of Human Extinction. Risk Analysis 27(5): 1335–1344.
McDonald, Joe, and Zen Soo. (2021) China Tightens Political Control of Internet Giants. ABC News,
October 3. <https://abcnews.go.com/Business/wireStory/china-tightens-political-control-internet-
giants-80387365>.
McFarland, Matt. (2014) Elon Musk: ‘With Artificial Intelligence We Are Summoning the Demon’. The
Washington Post. <https://www.washingtonpost.com/news/innovations/wp/2014/10/24/elon-
musk-with-artificial-intelligence-we-are-summoning-the-demon/>.
McInnes, Colin, and Simon Rushton. (2011) HIV/AIDS and Securitization Theory. European Journal of
International Relations 19(1): 115–138.
McIntosh, Christopher. (2015) Theory across Time: The Privileging of Time-Less Theory in
International Relations. International Theory 7(3): 466–68.
Meadows, Donella H., Dennis L. Meadows, Jorgen Randers, and William W. Behrens. (1972) The Limits
to Growth. New York: Universe Books.
Mearsheimer, John J. (1994/95) The False Promise of International Institutions. International Security
19(3): 5–49.
———. (2001) The Tragedy of Great Power Politics. New York: W. W. Norton & Company.
Mearsheimer, John J., and Stephen M. Walt. (2013) Leaving Theory Behind: Why Simplistic Hypothesis
Testing Is Bad for International Relations. European Journal of International Relations 19(3):
427–457.
Mercer, Jonathan. (2010) Emotional Beliefs. International Organization 64(1): 1–31.
Merler, Stefano, Marco Ajelli, Laura Fumanelli, and Alessandro Vespignani. (2013) Containing the
Accidental Laboratory Escape of Potential Pandemic Influenza Viruses. BMC Medicine 11: 252–
62.
Mede, Niels G., and Mike S. Schäfer. (2020) Science-Related Populism: Conceptualizing Populist
Demands toward Science. Public Understanding of Science 29(5): 473–491.
Methmann Chris, and Delf Rothe. (2012) Politics for the Day After Tomorrow: The Logic of Apocalypse
in Global Climate Politics. Security Dialogue 43(4): 323–344.
Miailhe, Nicolas. (2018) AI & Global Governance: Why We Need an Intergovernmental Panel for
Artificial Intelligence. United Nations University, December 20.
<https://cpr.unu.edu/publications/articles/ai-global-governance-why-we-need-an-
intergovernmental-panel-for-artificial-intelligence.html>.
Minson, Christopher. (2020) Nuclear War Map. <http://nuclearwarmap.com/>.
Millet, Piers, and Andrew Snyder-Beattie. (2017) Existential Risk and Cost-Effective Biosecurity. Health
Security 15(4): 373–383.
Milliken, Jennifer. (1999) The Study of Discourse in International Relations: A Critique of Research and
Methods. European Journal of International Relations 5(2): 225–54.
Mitchell, Audra. (2017) Is IR Going Extinct? European Journal of International Relations 23(1): 3–25.
Mitchell, Audra, and Aadita Chaudhury. (2020) Worlding Beyond ‘the’ ‘End’ of ‘the World’: White
Apocalyptic Visions and BIPOC Futurisms. International Relations 0(0): 1–24.
Modelski, George. (1978) The Long Cycle of Global Politics and the Nation-State. Comparative Studies
in Society and History 20(2): 214–235.
Monteiro, Nuno P. (2011/12) Unrest Assured: Why Unipolarity Is Not Peaceful. International Security
36(3): 9–40.
———. (2014) Theory of Unipolar Politics. Cambridge: Cambridge University Press.
Moravec, Hans. (1988) Mind Children: The Future of Robot and Human Intelligence. Cambridge,
Massachusetts: Harvard University Press.
Moravcsik, Andrew. (1997) Taking Preferences Seriously: A Liberal Theory of International Politics.
International Organization 51(4): 513–553.
Morgenthau, Hans J. (1948) Politics among Nations: The Struggle for Power and Peace. New York:
Alfred A. Knopf.
———. (1948) [2006]. Politics among Nations: The Struggle for Power and Peace, 7th Edition. Boston:
McGraw Hill.
———. (1949) The Primacy of the National Interest. The American Scholar 18(2): 207–212.
———. (1956) Has Atomic War Really Become Impossible? The Bulletin of Atomic Scientists 12(1): 7–9.
———. (1964) The Four Paradoxes of Nuclear Strategy. The American Political Science Review 58(1):
23–35.
Morneau, William Francis. (2017) Building A Strong Middle Class: Budget 2017. Ottawa: House of
Commons. <https://www.budget.gc.ca/2017/docs/plan/budget-2017-en.pdf>.
Moynihan, Thomas. (2020) X-Risk: How Humanity Discovered Its Own Extinction. Urbanomic.
Mueller, John. (1989) Retreat from Doomsday: The Obsolescence of Major Power War. Basic Books.
Nadelmann, Ethan A. (1990) Global Prohibition Regimes: The Evolution of Norms in International
Society. International Organization 44(4): 479–525.
National Science and Technology Council (NSTC). (2016a) The National Artificial Intelligence Research
and Development Strategic Plan. Washington: Office of the President of the United States.
———. (2016b). Preparing for the Future of Artificial Intelligence. Washington: Office of the President of
the United States.
Nathan, Christopher, and Keith Hyams. (2021) Global Policymakers and Catastrophic Risk. Policy
Sciences (0): 1–19.
Naudé, Wim, and Nicola Dimitri. (2020) The Race for an Artificial General Intelligence: Implications for
Public Policy. AI & Society 35: 367–379.
Niebuhr, Reinhold. (1963) The Nuclear Dilemma. Chicago Review 16(3): 5–11.
Nielson, David, and Michael Tierney. (2003) Delegation to International Organizations: Agency Theory
and World Bank Environmental Reform. International Organization 57(2): 241–276.
Nincic, Miroslav. (1999) The National Interest and Its Interpretation. The Review of Politics 61(1): 29–
55.
Nitoiu, Christian. (2017) Aspirations to Great Power Status: Russia’s Path to Assertiveness in the
International Arena under Putin. Political Studies Review 15(1): 39–48.
Nogee, Joseph L. (2012) Soviet Policy Towards International Control of Atomic Energy. Literary
Licensing, LLC.
Norris, Robert S., and Hans M. Kristensen. (2006) Global Nuclear Stockpiles, 1945–2006. Bulletin of
the Atomic Scientists 62(4): 64–66.
O’Hanlon, Michael E. (2022) But CAN the United States Defend Taiwan? Brookings, June 1.
<https://www.brookings.edu/blog/order-from-chaos/2022/06/01/but-can-the-united-states-
defend-taiwan/>.
Office of the Director of National Intelligence. (2021) Annual Threat Assessment of the U.S. Intelligence
Community. United States of America.
Office of the Secretary of Defense. (2020) Military and Security Developments Involving the People’s
Republic of China 2020: Annual Report to Congress. United States of America: U.S. Department
of Defense.
Office of Technology Assessment. (1979) The Effects of Nuclear War. Washington: Government Printing
Office.
Olesker, Ronnie. (2018) The Securitization Dilemma: Legitimacy in Securitization Studies. Critical
Studies on Security 6(3): 1–18.
Omohundro, Stephen. (2008) The Basic AI Drives. Proceedings of the First AGI Conference.
<https://dl.acm.org/doi/10.5555/1566174.1566226>.
Oneal, John R., and Bruce M. Russett. (1999) The Kantian Peace: The Pacific Benefits of Democracy,
Interdependence, and International Organizations, 1885-1992. World Politics 52(2): 1–37.
OpenAI. (2020) OpenAI: Mission. <https://openai.com/>.
Ord, Toby. (2020) The Precipice: Existential Risk and the Future of Humanity. London: Bloomsbury
Publishing.
Organization for Economic Cooperation and Development (OECD). (2019a) Recommendation of the
Council on Artificial Intelligence. <https://legalinstruments.oecd.org/en/instruments/oecd-legal-
0449>.
———. (2019b) Artificial Intelligence in Society. Paris: OECD Publishing. <https://www.oecd-
ilibrary.org/science-and-technology/artificial-intelligence-in-society_eedfee77-en>.
———. (2019c) World Corporate Top R&D Investors: Shaping the Future of Technologies and of AI.
Luxembourg: Publications Office of the European Union.
———. (2021) State of Implementation of the OECD AI Principles: Insights from National Policies. Paris:
OECD Publishing. <https://doi.org/10.1787/1cd40c44-en>.
———. (2021) OECD AI Policy Observatory. <https://oecd.ai/>.
Organski, A. F. K. (1958) World Politics. New York: Alfred A. Knopf.
Paglia, Eric. (2018) The Socio-Scientific Construction of Global Climate Crisis. Geopolitics 23(1): 96–
123.
Pamuk, Humeyra, Alexandra Alper, and Idrees Ali. (2020) Trump Bans U.S. Investments in Companies
Linked to Chinese Military. Reuters, November 13. <https://www.reuters.com/article/usa-china-
securities-idUSKBN27T1MD>.
Papagrigorakis, Manolis, et al. (2013) The Plague of Athens: An Ancient Act of Bioterrorism?
Biosecurity and Bioterrorism 11(3): 228–229.
Pape, Robert A. (2005) Dying to Win: The Strategic Logic of Suicide Terrorism. New York: Random
House.
Parfit, Derek. (1984) Reasons and Persons. Oxford: Oxford University Press.
Paul, T.V. (2009) The Tradition of Nonuse of Nuclear Weapons. Stanford: Stanford University Press.
Paul, T. V., Deborah Welch Larson, and William C. Wohlforth. (2014) Status in World Politics.
Cambridge: Cambridge University Press.
Pelopidas, Benoit. (2020) Power, Luck, and Scholarly Responsibility at the End of the World(s).
International Theory 12(3): 459-470.
Perrow, Charles. (1984) Normal Accidents: Living with High-Risk Technologies. Basic Books.
Peters, Adele. (2018) This MIT Project Says Nuclear Fusion Is 15 Years Away (No, Really, This Time).
<https://www.fastcompany.com/40541615/this-mit-project-says-nuclear-fusion-is-15-years-
away-no-really-this-time>.
Phillips, Andrew, and J.C. Sharman. (2015) International Order in Diversity: War, Trade, and Rule in the
Indian Ocean. Cambridge: Cambridge University Press.
Pinker, Steven. (2011) The Better Angels of Our Nature: Why Violence Has Declined. Viking Books.
———. (2018) Enlightenment Now: The Case for Reason, Science, Humanism, and Progress. New York:
Viking.
Popper, Karl. (1959) The Logic of Scientific Discovery. London: Routledge.
Posen, Barry. (2001/02) The Struggle Against Terrorism: Grand Strategy, Strategy and Tactics.
International Security 26(3): 39–55.
———. (2003) Command of the Commons: The Military Foundations of U.S. Hegemony. International
Security 28(1): 5–46.
Posner, Richard. (2004) Catastrophe: Risk and Response. Oxford: Oxford University Press.
Price, Richard. (1998) Reversing the Gun Sights: Transnational Civil Society Targets Land Mines.
International Organization 52(3): 613-644.
Qin, Amy, (2021) As U.S. Hunts for Chinese Spies, University Scientists Warn of Backlash. The New
York Times, November 28. <https://www.nytimes.com/2021/11/28/world/asia/china-university-
spies.html>.
Qu, Tracy. (2021) US-China Tech War: AI, Semiconductors Get Quasi-Military Commanders as ‘Supply
Chain Chiefs’ to Boost Self-Sufficiency. South China Morning Post, July 26.
<https://www.scmp.com/tech/tech-war/article/3138785/us-china-tech-war-ai-semiconductors-
get-quasi-military-commanders>.
Quinn, Adam, and Nicholas Kitchen. (2019) Understanding American Power: Conceptual Clarity,
Strategic Priorities, and the Decline Debate. Global Policy 10(1): 5–18.
Ramamoorthy, Anand, and Roman Yampolskiy. (2018) Beyond MAD?: The Race for Artificial General
Intelligence. ITU Journal 1(2).
Rathbun, Brian C. (2007) Uncertain about Uncertainty: Understanding the Multiple Meanings of a Crucial
Concept in International Relations Theory. International Studies Quarterly 51: 533–557.
Rees, Martin. (2003) Our Final Hour: A Scientist’s Warning: How Terror, Error, and Environmental
Disaster Threaten Humankind’s Future in this Century—On Earth and Beyond. New York: Basic
Books.
———. (2018) On the Future: Prospects for Humanity. New Jersey: Princeton University Press.
Rhodes, Richard. (1986) The Making of the Atomic Bomb. New York: Simon & Schuster.
Rich, Nathaniel. (2019) Losing Earth: A Recent History. New York: Picador.
Richards, C. E., R. C. Lupton, and J. M. Allwood. (2021) Re-Framing the Threat of Global Warming:
An Empirical Causal Loop Diagram of Climate Change, Food Insecurity and Societal Collapse.
Climactic Change 164 (49): 1–19.
Ripple, William J., et al. (2017) World Scientists’ Warning to Humanity: A Second Notice. BioScience
67(12): 1026–1028.
Ripple, William J., Christopher Wolf, Thomas M. Newsome, Phoebe Barnard, and William R. Moomaw.
(2020) World Scientists’ Warning of a Climate Emergency. BioScience 70 (1): 8–12.
Ripple, William J., et al. (2021) World Scientists’ Warning of a Climate Emergency 2021. BioScience
71(9): 894–898.
Ripsman, Norrin M., Jeffrey W. Taliaferro, and Steven E. Lobell. (2016) Neoclassical Realist Theory of
International Politics. Oxford: Oxford University Press.
Risse, Thomas. (2000) ‘Let’s Argue!’: Communicative Action in World Politics. International
Organization 54(1): 1–40.
Roberts, Huw, Josh Cowls, Jessica Morley, Mariarosaria Taddeo, Vincent Wang, and Luciano Floridi.
(2021) The Chinese Approach to Artificial Intelligence: An Analysis of Policy, Ethics, and
Regulation. AI & Society 36: 59–77.
Robock, Alan, and Owen Toon. (2012) Self-Assured Destruction: The Climate Impacts of Nuclear War.
Bulletin of the Atomic Scientists 68(5): 66–74.
Roe, Paul. (2012) Is Securitization a ‘Negative’ Concept? Revisiting the Normative Debate over Normal
Versus Extraordinary Politics. Security Dialogue 43(3): 249–266
Roffey, R., A. Tegnell, and F. Elgh. (2002) Biological Warfare in a Historical Perspective. Clinical
Microbiology and Infection 8(8): 450–454.
Rosenberg, Justin. (2016) International Relations in the Prison of Political Science. International
Relations 30(2): 127–153.
Roser, Max. (2016) War and Peace. Our World in Data. <https://ourworldindata.org/war-and-peace>.
Ruggie, John Gerard. (1982) International Regimes, Transactions, and Change: Embedded Liberalism in
the Postwar Economic Order. International Organization 36(2): 379–415.
———. (1993) Territoriality and Beyond: Problematizing Modernity in International Relations.
International Organization 47(1): 139–174.
Russell, Stuart. (2019) Human Compatible: Artificial Intelligence and the Problem of Control. United
States of America: Viking.
Russell, Stuart and Peter Norvig. (2010) Artificial Intelligence: A Modern Approach: Third Edition. New
Jersey: Prentice Hall.
Ruzicka, Jan. (2019) Failed Securitization: Why It Matters. Polity 51(2): 365–377.
Sagan, Carl. (1983) Nuclear War and Climatic Catastrophe: Some Policy Implications. Foreign Affairs
62(2): 257–92.
Sagan, Scott. (1995) The Limits of Safety: Organizations, Accidents, and Nuclear Weapons. New Jersey:
Princeton University Press.
Salter, Mark B. (2011) When Securitization Fails; The Hard Case of Counter-Terrorism Programs. In
Securitization Theory: How Security Problems Emerge and Dissolve, ed. Thierry Balzacq.
London: Routledge.
Sargent, Daniel J. (2015) A Superpower Transformed: The Remaking of American Foreign Relations in
the 1970s. Oxford: Oxford University Press.
Scharre, Paul. (2018) Army of None: Autonomous Weapons and the Future of War. New York: W. W.
Norton & Company.
———. (2021) Debunking the AI Arms Race Theory. Texas National Security Review 4(3): 122–132.
Schell, Jonathan. (1982) The Fate of the Earth. New York: Alfred A. Knopf.
Schelling, Thomas C. (1960) The Strategy of Conflict. Cambridge, Massachusetts: Harvard University.
———. (1966) Arms and Influence. New Haven: Yale University Press.
Scherrer, Jutta. (2014) ‘To Catch Up and Overtake’ the West: Soviet Discourse on Socialist
Competition. In Competition in Socialist Society, eds. Katalin Miklóssy and Melanie Ilic.
London: Routledge.
Schweller, Randall L. (1994) Bandwagoning for Profit: Bringing the Revisionist State Back In.
International Security 19(1): 72–107.
Schweller, Randall L., and Xiaoyu Pu. (2011) After Unipolarity: China’s Visions of International Order
in an Era of U.S. Decline. International Security 36(1): 41–72.
Shanahan, Murray. (2015) The Technological Singularity. Boston: The MIT Press.
Shen, Xinmei. (2021a) US-China Tech War: Xi Jinping Doubles Down on ‘Technology Security’
Measures as Part of Nation’s Five-Year Plan. South China Morning Post, November 19.
<https://www.scmp.com/tech/policy/article/3156707/us-china-tech-war-xi-jinping-doubles-
down-technology-security-measures?module=hard_link&pgtype=article>.
Shen, Xinmei. (2021b) US-China Tech War: Beijing Draws Up Three-Year Plan to Revamp State
Technology System. South China Morning Post, November 25.
<https://www.scmp.com/tech/policy/article/3157384/us-china-tech-war-beijing-draws-three-
year-plan-revamp-state-technology?module=perpetual_scroll&pgtype=
article&campaign=3157384>.
Sherwin, Martin J. (1973) The Atomic Bomb and the Origins of the Cold War: U.S. Atomic-Energy
Policy and Diplomacy, 1941-45. The American Historical Review 78(4): 945–968.
Schlosser, Eric. (2013) Command and Control: Nuclear Weapons, the Damascus Accident, and the
Illusion of Safety. New York: Penguin Books.
Schwalm, Christopher R., Spencer Glendon, and Philip B. Duffy. (2020) RCP8.5 Tracks Cumulative
CO2 Emissions. PNAS 117(33): 19656–19657.
Scranton, Roy. (2015) Learning to Die in the Anthropocene: Reflections on the End of Civilization.
California: City Lights Publishers.
Sears, Nathan Alexander. (2016) China, Russia, and the Long ‘Unipolar Moment’: How Balancing
Failures Are Actually Extending U.S. Hegemony. The Diplomat, April 27.
<https://thediplomat.com/2016/04/china-russia-and-the-unipolar-moment/>.
Sears, Nathan Alexander. (2017) The Neoclassical Realist Research Program: Between Progressive
Promise and Degenerative Dangers. International Politics Reviews 5: 21–31.
———. (2018) The Instability of a Post-Unipolar World. Insight & Inquiry 11(1): 63-77. URL:
<https://uwaterloo.ca/political-science/sites/ca.political-science/files/uploads/files/
insight_inquiry_2018.pdf>.
———. (2020a) Existential Security: Towards a Security Framework for the Survival of Humanity. Global
Policy 11(2): 255–266. DOI: https://doi.org/10.1111/1758-5899.12800.
———. (2020b) Anarchy, Technology, and the Self-Destruction Hypothesis: Human Survival and the Fermi
Paradox. World Futures. DOI:10.1080/02604027.2020.1819116
———. (2021a) International Politics in the Age of Existential Threats. Journal of Global Security Studies
5(3). DOI: 10.1093/jogss/ogaa027.
———. (2021b) Omnicidal Powers: Existential Threats and the Distribution of the Forces of Total
Destruction in International Politics. <https://www.researchgate.net/publication/350500094_
Great_Powers_Polarity_and_Existential_Threats_to_Humanity_An_Analysis_of_the_Distributio
n_of_the_Forces_of_Total_Destruction_in_International_Security>.
Selgelid, Michael. (2016) Gain-of-Function Research: Ethical Analysis. Science and Engineering Ethics
22(4): 923–64.
Seybert, Lucia A., and Peter J. Katzenstein. (2017) Protean Power: Exploring the Uncertain and
Unexpected in World Politics. Cambridge: Cambridge University Press.
Sil, Rudra, and Peter J. Katzenstein. (2010) Analytic Eclecticism in the Study of World Politics:
Reconfiguring Problems and Mechanisms across Research Traditions. Perspectives on Politics
8(2): 411–431.
Silver, David, et al. (2017) Mastering the Game of Go without Human Knowledge. Nature 550 (7676):
354–359.
Singer, J. David, Stuart Bremer, and John Stuckey. (1972) Capability Distribution, Uncertainty, and
Major Power War, 1820-1965. In Peace, War, and Numbers, ed. Bruce Russett. Beverly Hills:
Sage.
Shils, Edward. (1947) The Failure of the United Nations Atomic Energy Commission: An Interpretation.
The University of Chicago Law Review 15(4): 855–876.
Snyder, Jack. (1991) Myths of Empire: Domestic Politics and International Ambition. Ithaca: Cornell
University Press.
Somit, Albert. (1990) Review Essay: Humans, Chimps, and Bonobos. The Biological Bases of
Aggression, War, and Peacemaking. The Journal of Conflict Resolution 34(3): 553–582.
Sotala, Kaj, and Roman V. Yampolskiy. (2015) Responses to Catastrophic AGI Risk: A Survey. Physica
Scripta 90: 1–33.
Spratt, David, and Ian Dunlop. (2019) Existential Climate-Related Security Risk: A Scenario Approach.
Australia: Breakthrough, National Centre for Climate Restoration.
Stalin, Joseph V. (1953) The Historical Roots of Leninism. In From Marx to Mao. Moscow: Foreign
Languages Publishing House.
State Council of China. (2017) Notice of the State Council Issuing the New Generation of Artificial
Intelligence Development Plan. <https://flia.org/notice-state-council-issuing-new-generation-
artificial-intelligence-development-plan/>.
Steffen, Will, Johan Rockström, Katherine Richardson, Timothy M. Lenton, Carl Folke, Diana Liverman,
Colin P. Summerhayes, Anthony D. Barnosky, Sarah E. Cornell, Michel Crucifix, Jonathan F.
Donges, Ingo Fetzer, Steven J. Lade, Marten Scheffer, Ricarda Winkelmann, and Hans Joachim
Schellnhuber. (2018) Trajectories of the Earth System in the Anthropocene. Proceedings of the
National Academy of Sciences 115(33): 8252–8259.
Stimson, Henry L. (1945) Memorandum Discussed with the President. <https://nsarchive2.gwu.edu
//NSAEBB/NSAEBB162/3b.pdf>.
Stritzel, Holger. (2012) Securitization, Power, Intertextuality: Discourse Theory and the Translations of
Organized Crime. Security Dialogue 43(6): 549–567.
Stockholm International Peace Research Institute (SIPRI). (2020) World Nuclear Forces.
<https://www.sipri.org/sites/default/files/YB20%2010%20WNF.pdf>.
Sutherland, R. S. (1961/62). Reviewed Work(s): Soviet Policy towards International Control of Atomic
Energy, by Joseph L. Nogee. International Journal 17(1): 77–78.
Swanson, Ana, Paul Mozur, and Steve Lohr. (2019) U.S. Blacklists More Chinese Tech Companies Over
National Security Concerns. The New York Times, June 21. <https://www.nytimes.com/
2019/06/21/us/politics/us-china-trade-blacklist.html>.
Tainter, Joseph A. (1988) The Collapse of Complex Societies. Cambridge: Cambridge University Press.
Tankersley, Jim. (2021) Biden Sells Infrastructure Improvements as a Way to Counter China. The New
York Times, November 16. <https://www.nytimes.com/2021/11/16/us/politics/biden-
infrastructure-china.html>.
Tannenwald, Nina. (2008) The Nuclear Taboo: The United States and the Non-use of Nuclear Weapons
Since 1945. Cambridge: Cambridge University Press.
———. (2018) How Strong Is the Nuclear Taboo Today? The Washington Quarterly 41(3): 89–109.
Tanner, Harold M. (2009) China: A History. Indianapolis: Hackett Publishing Company, Inc.
Tegmark, Max. (2018) Life 3.0: Being Human in the Age of Artificial Intelligence. Vintage.
Thayer, Bradley A. (2000) Bringing in Darwin: Evolutionary Theory, Realism, and International Politics.
International Security 25(2): 124–151.
The White House. (2016) Preparing for the Future of Artificial Intelligence
<https://obamawhitehouse.archives.gov/blog/2016/05/03/preparing-future-artificial-
intelligence>.
———. (2017) National Security Strategy of the United States. Washington: The White House.
———. (2019) Artificial Intelligence for the American People.
<https://trumpwhitehouse.archives.gov/ai/executive-order-ai/>.
———. (2021) Interim National Security Strategic Guidance. Washington: The White House.
Thomas, Nicholas, and Catherine Yuk-ping Lo. (2020) The Macrosecuritization of Antimicrobial
Resistance in China. Journal of Global Security Studies 5(2): 361–378.
Thompson, William R. (1986) Polarity, the Long Cycle, and Global Power Warfare. The Journal of
Conflict Resolution 30(4): 587–615.
Tilly, Charles. (1990) Coercion, Capital, and European States, AD 990–1990. Oxford: Basil Blackwell.
Tonix Pharmaceuticals. (2020) Press Releases: Tonix Pharmaceuticals Presented Results from a
Preclinical Study of TNX-801, a Potential Vaccine to Prevent Smallpox and Monkeypox, in a
Poster Presentation at the 2020 American Society for Microbiology (ASM) Biothreats Conference.
<https://ir.tonixpharma.com/press-releases/detail/1186>.
Toon, Owen B., Alan Robock, and Richard P. Turco. (2008) Environmental Consequences of Nuclear
War. Physics Today: 37–42.
Torres, Phil. (2017a) Morality, Foresight & Human Flourishing: An Introduction to Existential Risks.
Durham, North Carolina: Pitchstone Publishing.
———. (2017b) The End: What Science and Religion Tell Us about the Apocalypse. Durham, North
Carolina: Pitchstone Publishing. <https://www.xriskology.com/>.
Trombetta, Maria Julia. (2008) Environmental Security and Climate Change: Analysing the Discourse.
Cambridge Review of International Affairs 21(4): 585–602.
Truman, Harry S. (1945a) August 6 1945: Announcement of the A-Bomb at Hiroshima. Miller Center.
<https://millercenter.org/the-presidency/presidential-speeches/august-6-1945-statement-
president-announcing-use-bomb#:~:text=We%20shall%20destroy%20their%20docks,
26%20was%20issued%20at%20Potsdam>.
———. (1945b) August 9 1945: Radio Report to the American People on the Potsdam Conference. Miller
Center. <https://millercenter.org/the-presidency/presidential-speeches/august-9-1945-radio-
report-american-people-potsdam-conference>.
Tucker, Jonathan B. (2002) A Farewell to Germs: The U.S. Renunciation of Biological and Toxin Warfare,
1969-70. International Security 27(1): 107–148.
Tucker, Jonathan B., and Erin R. Mahan. (2009) President Nixon’s Decision to Renounce U.S. Offensive
Biological Weapons. Washington: National Defense University Press.
Tulliu, Steve, and Thomas Schmalberger. (2003) Coming to Terms with Security: A Lexicon for Arms
Control, Disarmament and Confidence-Building. Geneva: United Nations Institute for
Disarmament Research.
Turing, A. M. (1950) Computing Machinery and Intelligence. Mind 59(236): 433–460.
Ullman, Richard H. (1983) Redefining Security. International Security 8(1): 129–153.
United Nations Education, Scientific, and Cultural Organization (UNESCO). (2021) AI Ethics: Another
Step Closer to the Adoption of UNESCO’s Recommendation. <https://en.unesco.org/news/ai-
ethics-another-step-closer-adoption-unescos-recommendation-0>.
United Nations General Assembly (UNGA). (1968) Resolution 2454(A): Questions of General and
Complete Disarmament. <https://undocs.org/en/A/RES/2454(XXIII)>.
United Nations Office of Disarmament Affairs (UNODA). (2021) Biological Weapons Convention.
<https://www.un.org/disarmament/biological-weapons/>.
United Nations Secretary General (UNSG) (1969) Chemical and Bacteriological (Biological) Weapons
and the Effects of Their Possible Use. New York: United Nations.
United States Arms Control & Disarmament Agency (USACDA). (1969) Documents on Disarmament,
1968. Washington: Government Printing Office.
———. (1970a). Documents on Disarmament, 1969. Washington: Government Printing Office.
———. (1970b). Documents on Disarmament, 1971. Washington: Government Printing Office.
United States Department of State. (1960) Documents on Disarmament, 1945-1959: Volume I.
Washington: Department of State.
———. (2021) U.S.-China Joint Statement Addressing the Climate Crisis. <https://www.state.gov/u-s-
china-joint-statement-addressing-the-climate-crisis/>.
Vincent, James. (2016) Twitter Taught Microsoft’s AI Chatbot to Be a Racist Asshole in Less Than a
Day. The Verge. <https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist>.
———. (2017) Putin Says the Nation That Leads in AI ‘Will Be the Ruler of the World’. The Verge,
September 4. <https://www.theverge.com/2017/9/4/16251226/russia-ai-putin-rule-the-world>.
Vinge, Vernor. (1993) The Coming Technological Singularity: How to Survive in the Post-Human Era.
United States of America: NASA Lewis Research Center.
Von Lucke, Franziskus, Zehra Wellman, and Thomas Diez. (2014) What’s at Stake in Securitising Climate
Change? Towards a Differentiated Approach. Geopolitics 19: 857–884.
Vox, Lisa. (2017) Existential Threats: American Apocalyptic Beliefs in the Technological Era.
Pennsylvania: University of Pennsylvania Press.
Vuori, Juha A. (2010) A Timely Prophet? The Doomsday Clock as a Visualization of Securitization
Moves with a Global Referent Object. Security Dialogue 41(3): 255–277.
Wæver, Ole. (1995) Securitization and Desecuritization. In On Security, ed. Ronnie D. Lipschutz. New
York: Columbia University Press.
———. (2009) Waltz’s Theory of Theory. International Relations 23(2): 201–222.
Wakefield, Jane. (2021) Elon Musk's Neuralink ‘Shows Monkey Playing Pong with Mind’. BBC World
News, April 9. <https://www.bbc.com/news/technology-56688812>.
Wall Street Journal (WSJ). (2017) Humans Mourn Loss After Google Is Unmasked as China’s Go
Master. <https://www.wsj.com/articles/ai-program-vanquishes-human-players-of-go-in-china-
1483601561>.
Wallace-Wells, David. (2019) The Uninhabitable Earth: Life After Warming. United States: Tim Duggan
Books.
Wallach, Wendell and Colin Allen. (2008) Moral Machines: Teaching Robots Right from Wrong. Oxford:
Oxford University Press.
Walt, Stephen M. (1991) The Renaissance of Security Studies. International Studies Quarterly 35(2):
211–239.
Waltz, Kenneth N. (1954) Man, the State, and War: A Theoretical Analysis. New York: Columbia
University Press.
———. (1964) The Stability of a Bipolar World. Daedalus 93(3): 891–909.
———. (1967) The Politics of Peace. International Studies Quarterly 11(3): 199-211.
———. (1979) Theory of International Politics. Illinois: Waveland Press, Inc.
———. (1981) The Spread of Nuclear Weapons: More May Be Better. The Adelphi Papers 21(171).
———. (1988) The Origins of War in Neorealist Theory. The Journal of Interdisciplinary History 18 (4):
615–28.
———. (1990) Nuclear Myths and Political Realities. The American Political Science Review 84(3): 731–
45.
———. (1993) The Emerging Structure of International Politics. International Security 18(2): 44–79.
———. (1997) Evaluating Theories. The American Political Science Review 91(4): 913–917.
———. (2000) Structural Realism after the Cold War. International Security 25(1): 5–41.
Wang, Orange, William Zheng, Jun Mai, and Echo Xie. (2020) Five-Year Plan: China Moves to
Technology Self-Sufficiency. South China Morning Post, October 30.
<https://www.scmp.com/news/china/politics/article/3107709/five-year-plan-china-officials-
flesh-out-details-plenum>.
Ward, Peter. (2008) Under A Green Sky: Global Warming, the Mass Extinctions of the Past, and What
They Can Tell Us About Our Future. Harper Perennial.
Watson, Scott. (2011) The ‘Human’ as Referent Object? Humanitarianism as Securitization. Security
Dialogue 42(1): 3–20.
Webb, S. (2015) If the Universe Is Teeming with Aliens… Where Is Everybody? Seventy-Five Solutions to
the Fermi Paradox and the Problem of Extraterrestrial Life, 2nd ed. Springer.
Weitzman, Martin L. (2009) On Modeling and Interpreting the Economics of Catastrophic Climate
Change. The Review of Economics and Statistics 91(1): 1–19.
Wendt, Alexander. (1992) Anarchy Is What States Make of It: The Social Construction of Power Politics.
International Organization 46(2): 391–425.
———. (1998) On Constitution and Causation in International Relations. Review of International Studies
24: 101–117.
———. (1999) Social Theory of International Politics. Cambridge: Cambridge University Press.
Westerheide, Fabian. (2020) China—The First Artificial Intelligence Superpower. Forbes, January 14.
<https://www.forbes.com/sites/cognitiveworld/2020/01/14/china-artificial-intelligence-
superpower/?sh=75b824282f05>.
Wertheim, Joel O. (2010) The Re-Emergence of H1N1 Influenza Virus in 1977: A Cautionary Tale for
Estimating Divergence Times Using Biologically Unrealistic Sampling Dates. PLOS ONE 5(6):
e11184.
Wheelis, Mark. (2002) Biological Warfare at the 1346 Siege of Caffa. Emerging Infectious Diseases
8(9): 971–975.
Williams, Michael C. (2003) Words, Images, Enemies: Securitization and International Politics.
International Studies Quarterly 47: 511–531.
Wilson, Edward O. (2014) The Meaning of Human Existence. Liveright.
Wohlforth, William C. (1999) The Stability of a Unipolar World. International Security 24(1): 5–41.
Wohlstetter, Albert. (1959) The Delicate Balance of Terror. Foreign Affairs 37: 211–34.
Wolfers, Arnold. (1952) ‘National Security’ as an Ambiguous Symbol. Political Science Quarterly 67(4):
481–502.
World Health Organization. (2018) A Checklist for Pandemic Influenza Risk and Impact Management:
Building Capacity for Pandemic Response. World Health Organization.
<https://apps.who.int/iris/bitstream/handle/10665/259884/9789241513623-eng.pdf>.
Wright, Thomas. (2015) The Rise and Fall of the Unipolar Concert. The Washington Quarterly 37(4): 7–
24.
Yampolskiy, Roman V., ed. (2019) Artificial Intelligence Safety and Security. Florida: CRC Press.
Yang, Yuan. (2021) How China Is Targeting Big Tech. Financial Times, June 18.
<https://www.ft.com/content/baad4a14-efac-4601-8ce4-406d5fd8f2a7>.
Yudkowsky, Eliezer. (2004) Coherent Extrapolated Volition. California: The Singularity Institute.
———. (2008) Artificial Intelligence as a Positive and Negative Factor in Global Risk. In Global
Catastrophic Risks, eds. Nick Bostrom and Milan M. Ćirković. Oxford: Oxford University Press.
Yuk-ping Lo, Catherine, and Nicholas Thomas. (2018) The Macrosecuritization of Antimicrobial
Resistance in Asia. Australian Journal of International Affairs 72(6): 567–583.
Zakaria, Fareed. (1992) Realism and Domestic Politics: A Review Essay. International Security 17(1):
177–198.
———. (2008) The Post-American World. New York: Norton.
Zilinskas, Raymond A. (2016) The Soviet Biological Weapons Program and Its Legacy in Today’s
Russia. Washington: National Defense University Press.
Zuberi, Matin. (1999) The Missed Opportunity to Stop the H-Bomb. Strategic Analysis 23(2): 183–201.
Appendices
Appendix 1: Nuclear War
Scenario: A “total” nuclear war occurs between the United States and Russia, leading to hundreds
of millions of immediate casualties, followed by billions more from the consequences of “nuclear
winter.”
Since at least the thermonuclear revolution of the 1950s, nuclear war has posed an existential threat
to human survival. At its peak during the Cold War, global nuclear arsenals swelled to over 70,000
warheads—an astonishing level of redundancy that could destroy the world many times over
(Schell 1982). Nuclear weapons increased not only in number but also in size. On November 1st,
1952, the United States successfully detonated the first thermonuclear explosion (the “Ivy Mike”
nuclear test), with a yield of 10.4 megatons—approximately 450 times more powerful than the
bomb that destroyed Nagasaki. Then, in 1961, the Soviet Union exploded the biggest nuclear
weapon in history (the “Tsar Bomba”), with a yield of 50 megatons—roughly 1,500 times greater
than the combined destructive power of the atomic bombs that destroyed Hiroshima and Nagasaki.
Both the United States and the Soviet Union also made significant innovations in the delivery
systems for nuclear warfare, including long-range strategic bombers, submarine-launched ballistic
missiles (SLBMs), intercontinental ballistic missiles (ICBMs), multiple independently targetable
reentry vehicles (MIRVs), and intelligence, surveillance, target acquisition and reconnaissance
satellites and communications (ISTAR). The continuous development and deployment of a
“nuclear triad” of bombers, submarines, and missiles—and the systems of command-and-control
to sustain them—meant that the United States and the Soviet Union were capable of waging
nuclear war at a moment’s notice.
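The yield comparisons above are simple ratios. A minimal back-of-the-envelope sketch, assuming commonly cited approximate yields for the Hiroshima (~15 kilotons) and Nagasaki (~21 kilotons) bombs; published yield estimates vary, so the resulting multiples are rough:

```python
# Back-of-the-envelope yield ratios for the tests discussed above.
# All yields are approximate public estimates and vary across sources.
KT_PER_MT = 1_000  # kilotons per megaton

hiroshima_kt = 15            # "Little Boy", commonly cited as ~13-18 kt
nagasaki_kt = 21             # "Fat Man", commonly cited as ~19-23 kt
ivy_mike_kt = 10.4 * KT_PER_MT   # first thermonuclear test, 1952
tsar_bomba_kt = 50 * KT_PER_MT   # largest detonation in history, 1961

print(f"Ivy Mike vs Nagasaki: ~{ivy_mike_kt / nagasaki_kt:,.0f}x")
print(f"Tsar Bomba vs Hiroshima + Nagasaki combined: "
      f"~{tsar_bomba_kt / (hiroshima_kt + nagasaki_kt):,.0f}x")
```

Depending on which yield estimates are used, the first ratio lands in the mid-hundreds and the second in the low thousands, consistent with the rough multiples cited in the text.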
Despite the dramatic reductions in nuclear arsenals since the end of the Cold War, nuclear
war still poses an existential threat to humanity. Global nuclear arsenals have fallen to an estimated
13,150 warheads, but the United States and Russia still possess 5,600 and 6,257 warheads,
respectively, including a combined total of 3,250 deployed warheads. What would be the results
of a “total” nuclear war between the United States and Russia? One recent model of a nuclear
strike against the United States—involving 1,066 warheads (566 megatons) on 387 known
targets—estimates 184.5 million casualties during the first two hours within the United States
alone (Minson 2020). This would presumably provoke a second strike against Russia, greatly
increasing the number of total casualties. A nuclear war of this magnitude would likely cause an
environmental catastrophe—a “nuclear winter”—from the effects of the huge quantities of dust
and smoke released into the atmosphere from the explosions and firestorms. The result of nuclear
war would be the reduction of sunlight, the decline in precipitation, ultraviolet radiation from the
depletion of the ozone layer, toxic gases and chemical from the combustion of synthetic materials,
and a dramatic and prolonged drop in global average temperatures to well below freezing in both
the Northern and Southern Hemispheres (Schell 1982; Ehrlich et al. 1984; Sagan 1984; Toon et al.
2008; Robock and Toon 2012). The “cold and the dark” after nuclear war would critically endanger
Earth’s environmental system upon which humanity—and all life—depends. Nuclear winter could
result in billions of deaths around the world from “nuclear famine” (Helfand 2013). And no one
could rule out the possibility that total nuclear war and winter “could cause the extermination of
humankind and most of the planet’s wildlife species” (Ehrlich et al. 1984, xvi). As Carl Sagan
(1983, 291–292) concluded:
In summary, cold, dark, radioactivity, pyrotoxins and ultraviolet light following a
nuclear war—including some scenarios involving only a small fraction of the world
strategic arsenals—would imperil every survivor on the planet. There is a real
danger of the extinction of humanity. A threshold exists at which the climatic
catastrophe could be triggered, very roughly around 500–2,000 strategic warheads.
How might a total nuclear war occur? Since 1945, when the United States dropped two
atomic bombs on Hiroshima and Nagasaki to put an end to the Second World War, the nuclear
weapons states have managed to maintain the “nuclear peace” through a combination of strategic
military deterrence (Brodie 1959; Wohlstetter 1959; Schelling 1960; Waltz 1990), a moral “taboo”
against the use of nuclear weapons (Mueller 1989; Tannenwald 2008; Paul 2009), and “good luck”
(Pelopidas 2020). However, a nuclear war could occur—in the words of John F. Kennedy—by
“accident, miscalculation, or madness.” Nuclear war could occur by accident through human error
and/or technical accident. There are dozens of known instances in which failures in the human and
technological systems of nuclear command-and-control have led to preparations for nuclear war
(Blair 1985; Sagan 1993; Schlosser 2013)—from malfunctioning computer chips, to early-warning
systems misinterpreting natural phenomena as incoming missiles, and even a blunder involving
a training tape simulating nuclear war. The risks of nuclear accidents may grow with the increasing
automation of military systems (Scharre 2018; Geist and Lohn 2018; Boulanin ed. 2019).
Ultimately, the systems that have been created to maintain control over nuclear weapons have
vulnerabilities, and accidents can and do occur.
Nuclear war could also occur by miscalculation. The danger is that a conflict between the
great and major powers escalates to nuclear war. The structural effects of a changing balance of
power in international relations are likely to produce more frequent and more consequential conflicts
between the great and major powers than has been the case since the end of the Cold War. While
the “rise and fall” of great powers (Kennedy 1989) and “hegemonic decline” (Gilpin 1981) are
familiar problems in international relations, there is no historic precedent for the end of unipolarity
in the nuclear age, and it would be dangerous to assume that the sources of international peace and
stability in a unipolar (Wohlforth 1999) or bipolar system (Waltz 1964) are equally robust during
an era of structural transition to a post-unipolar world (Sears 2018). Maintaining the nuclear peace
between a declining hegemon (the United States), a rising great power (China), and a frustrated
former-great power (Russia) is one of the most important challenges in contemporary international
security. Furthermore, the dangers of conflict escalation may be exacerbated by changing military
technology. Nuclear deterrence has become increasingly complex with “C4ISR” systems
(command, control, communications, computers, intelligence, surveillance, and reconnaissance),
which are
influenced by technological developments in, inter alia, cybersecurity (e.g., computer worms and
“zero days”) (Gartzke and Lindsay 2017), missile defense (e.g., kinetic and direct-energy
weapons), and precision strike capabilities (e.g., drones, cruise missiles, and hypersonic weapons)
(Miller and Fontaine 2017). The “survivability” of a state’s second-strike capability no longer
depends solely on the deployment and protection of a “nuclear triad,” but also on the vulnerability
and resilience of C4ISR systems to cyber-attacks, anti-satellite weapons, and precision-strike
capabilities. Russia and China have both expressed concerns that the United States could launch a
well-coordinated “disarming” strike—perhaps through a combination of disruptive cyber-attacks
and anti-satellite capabilities against C4ISR systems, and precision conventional or nuclear strikes
against fixed silos, mobile launchers, and strategic bombers on the ground—and then rely on
missile defense to “mop up” a diminished and disorganized retaliation (Miller and Fontaine 2017,
25). To make matters worse, the “entanglement” of conventional and nuclear military systems
increases the risk that conventional conflicts could escalate to nuclear war, especially if a
conventional war threatens a state’s second-strike capabilities, generating a “use-it-or-lose-it”
dilemma (Acton 2018).
Finally, nuclear war could occur by madness. In nuclear deterrence, there are strategic
incentives for political leaders to appear irrational—like the Nixon administration’s “madman
theory”—or to take deliberately dangerous actions—as in the policy of “brinkmanship”—in order
to dissuade an adversary from taking provocative actions, or to coerce an adversary into backing
down or making concessions (Schelling 1960; Kahn 1962; McManus 2019). Indeed, nuclear
deterrence can make the seemingly absurd strategically rational, such as the system of automatic
nuclear retaliation that was represented by the “doomsday machine” in Stanley Kubrick’s satirical
film, Dr. Strangelove, an idea the Soviet Union actually pursued in its “Dead Hand” program,
which survives in the semi-automatic system known as “Perimeter” (Hoffman 2009).
In recent years, the United States and Russia have both announced lower thresholds for the use of
nuclear weapons against non-nuclear attacks. Russia’s 2014 Military Doctrine suggests
that it could use nuclear weapons in retaliation to conventional warfare that “threatens the very
existence of the Russian state”—the so-called “escalate to de-escalate” doctrine (Isachenkov
2014). Similarly, the U.S. 2018 Nuclear Posture Review states that the United States would
consider using nuclear weapons in the event of “significant nonnuclear strategic attacks… on U.S.
or allied nuclear forces, their command and control, or warning and attack assessment capabilities”
(Department of Defense 2018, 21). Since nuclear war poses an existential threat to humanity, one
can only hope that the political leadership of the nuclear weapons states possess the right
temperament to maintain the “nuclear taboo” (Tannenwald 2018), but a look at the political leaders
of the world’s nuclear powers in recent years—including Donald Trump, Vladimir Putin, Xi
Jinping, Benjamin Netanyahu, Narendra Modi, and Kim Jong-un—does not inspire much
confidence.
Appendix 2: Climate Change
Scenario: Anthropogenic global warming passes critical “tipping points” and “runaway” climate
change takes hold, pushing Earth’s climate beyond stable Holocene limits and towards a
“Hothouse Earth” climate of a 4°C or 5°C increase or more in global average temperatures.
Humanity has become the driving force behind Earth’s changing climate, which has led a growing
number of scientists and scholars to call the contemporary geological epoch “the Anthropocene”
(Crutzen 2002; Steffen et al. 2007). While Earth’s climate is subject to natural fluctuations over
the course of geological history, carbon dioxide (CO2) concentrations in the atmosphere—the
main source of a warmer climate—have reached their highest levels in millions of years, which
scientists overwhelmingly attribute to human activity (Kump et al. 2010; IPCC 2018; 2019;
Ripple et al. 2020). The main anthropogenic drivers of climate change are the enormous scale of
greenhouse gas emissions from the burning of fossil fuels (e.g., coal, oil, and natural gas),
combined with the rapid deterioration of Earth’s carbon sinks for storing greenhouse gases because
of deforestation for agriculture (e.g., livestock and monocultures) and resource extraction (e.g.,
mining and oil), as well as the warming of the oceans (Steffen et al. 2007; 2015; McNeill and
Engelke 2016). As a result, global average temperatures now exceed preindustrial levels by over
1.0°C (IPCC 2018).
Climate change could pose an existential threat to humanity if the planet’s climate system
reaches a “Hothouse Earth” state (Steffen et al. 2018; Wallace-Wells 2019; Spratt and Dunlop
2019; Ripple et al. 2020); that is, a climate system characterized by the stabilization of global
temperatures at much hotter levels than have existed for millions of years on Earth (Steffen et al.
2018). The implications of climate change for humanity are notoriously difficult to predict (Beard
et al. 2021), but there are essentially two mechanisms that threaten humanity. The first is the direct
effect of extreme heat. While human societies possess some capacity for adaptation and resilience
to climate change, the physiological response of human beings to heat stress imposes physical
limits—with a hard limit at roughly 35°C wet-bulb temperature (Sherwood et al. 2010).
Temperatures in this range have already been recorded in some coastal subtropical locations (Raymond
et al. 2020). A rise in global average temperatures by 3–4°C would increase the risk of heat stress,
while 7°C could render some regions uninhabitable, and 11–12°C would leave much of the planet
too hot for human habitation (Sherwood et al. 2010). Some future climate modeling scenarios—
for instance, the “Representative Concentration Pathway 8.5” (RCP 8.5) (IPCC 2018)—suggest
not only that these temperatures are possible in the long term (i.e., beyond 2100), but that they
are actually consistent with current emissions pathways (Schwalm et al. 2020). The second
mechanism is the
indirect effects of climate change. They include, inter alia, environmental pressures on water and
food scarcity (e.g., droughts from less-dispersed rainfall, and lower wheat-yields at higher
temperatures); rising sea-levels affecting coastal regions (e.g., Miami and Shanghai), or even
swallowing entire countries (e.g., Tuvalu and the Maldives); extreme and unpredictable weather
and natural disasters (e.g., hurricanes and forest fires); a reduction in the biodiversity of flora and
fauna (e.g., plant, insect, and animal species); the possible emergence of new bacteria and viruses
(e.g., ancient viruses preserved in glacier ice); and large-scale human migration away from the
most affected regions (e.g., sub-Saharan Africa, Central America, and the Middle East) (World
Bank 2012; IPCC 2018; Wallace-Wells 2019). Overall, the direct and indirect effects of climate
change could greatly increase the environmental pressures on humanity, which could threaten
modern global civilization (Diamond 2005; Homer-Dixon et al. 2015; Lawrence and Homer-Dixon
2021; Beard et al. 2021; Richards et al. 2021).
At what point could climate change cross the threshold to a Hothouse Earth state? While
the complexity of Earth’s environmental systems makes this difficult to determine, recent research
suggests that such a “planetary threshold” may exist at or around 2.0°C global average
temperatures above preindustrial levels (Steffen et al. 2018, 3). The danger is that a ~2.0°C
increase in global average temperatures could cross certain “tipping points,” whereby positive
feedback in Earth’s climate system leads to a “runaway” process of non-linear and self-sustaining
warming beyond human control (Steffen et al. 2018, 5). For example, the thawing of Arctic
permafrost is one feedback that could produce additional warming, releasing huge quantities of
methane (a gas with a far greater warming effect than carbon dioxide) currently trapped in the
frozen ground, while the retreat of ice sheets and sea ice reduces the reflectivity (albedo) of
Earth’s surface. Similarly, as the oceans warm, they become less able to absorb additional carbon
dioxide, leaving more greenhouse gases in the atmosphere. Rising temperatures also
increase the risk of forest fires, which release more CO2 and reduce the number of trees to absorb
carbon dioxide. Some of the tipping points in Earth’s climate system could be crossed at relatively
low-levels of warming (Kump et al. 2010; Steffen et al. 2018).
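The tipping-point dynamic sketched above can be illustrated with a deliberately crude toy model; it is not a climate model, and the parameter values are arbitrary illustrations. Below a threshold, warming simply tracks external forcing; once the threshold is crossed, a self-reinforcing feedback term keeps adding warming even after the forcing stops:

```python
# Toy illustration of a climate "tipping point" (illustrative only: the
# parameters are arbitrary and this is not a physical climate model).

def simulate(forcing_per_step, steps, threshold=2.0, feedback=0.1):
    """Warming anomaly (deg C) over time. External forcing acts only for
    the first half of the run. Once the anomaly reaches `threshold`, a
    self-reinforcing feedback adds `feedback * T` per step on its own."""
    T, path = 0.0, [0.0]
    for step in range(steps):
        external = forcing_per_step if step < steps // 2 else 0.0
        internal = feedback * T if T >= threshold else 0.0
        T += external + internal
        path.append(round(T, 2))
    return path

below = simulate(forcing_per_step=0.09, steps=40)  # peaks under the 2.0 threshold
above = simulate(forcing_per_step=0.11, steps=40)  # crosses 2.0 before forcing ends

print("sub-threshold run stabilizes at:", below[-1])   # 1.8
print("post-threshold run keeps warming:", above[-1])
```

The point of the toy model is only the qualitative asymmetry: the run that stays below the threshold stabilizes when forcing ends, while the run that crosses it continues to warm on its own.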
The Paris Agreement on Climate Change sets the goal of limiting the increase in global
average temperatures to “well below” 2°C and to pursue efforts to limit the increase to 1.5°C. If
the Paris Agreement goals are met, then humanity could keep climate change below the threshold
of an existential threat. However, Climate Action Tracker (2021) estimates that “current policies”
will produce an increase of 2.9°C in global average temperatures by 2100 (range between +2.1 and
+3.9°C), while if states succeed in meeting their pledges and targets, global average temperatures
are still projected to increase by 2.6°C (range between +2.1 and +3.3°C). Even a scenario of
“optimistic targets” leads to 2.1°C of warming (range between +1.9 and +2.7°C). In other words,
the actual policies and actions of states are not on track to meet the goals of the Paris Agreement,
which increases the danger of a Hothouse Earth.
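The gap between the Paris goals and the Climate Action Tracker projections quoted above can be made explicit by comparing each projected range against the approximate 2.0°C planetary threshold (the figures are as cited in the text; the threshold itself is an estimate):

```python
# The Climate Action Tracker (2021) projections quoted in the text,
# compared against the approximate ~2.0 C Hothouse Earth threshold
# (Steffen et al. 2018). All figures are as cited above.

THRESHOLD_C = 2.0

projections = {  # scenario: (central, low, high) warming by 2100, deg C
    "current policies":    (2.9, 2.1, 3.9),
    "pledges and targets": (2.6, 2.1, 3.3),
    "optimistic targets":  (2.1, 1.9, 2.7),
}

for scenario, (central, low, high) in projections.items():
    if low >= THRESHOLD_C:
        verdict = "entire projected range exceeds the threshold"
    elif high < THRESHOLD_C:
        verdict = "entire projected range stays below the threshold"
    else:
        verdict = "below the threshold only at the low end of the range"
    print(f"{scenario}: {central} C (range {low}-{high} C): {verdict}")
```

Only the “optimistic targets” scenario even touches the safe side of the threshold, and only at the bottom of its uncertainty range, which is the sense in which current trajectories increase the danger of a Hothouse Earth.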
In general, humanity’s large-scale destruction of Earth’s natural environment poses the
existential threat of an inhospitable planet for humankind (Steffen et al. 2018; Wallace-Wells
2019; Ripple et al. 2020). The decline in several of Earth’s interconnected geological and
ecological systems is reducing planetary habitability, including biodiversity loss, deforestation,
desertification, ocean acidification, ozone-thinning, and pollution (Rockstrom et al. 2009). A
Hothouse Earth climate would be terra incognita for humanity (Steffen et al. 2007), which is why
a growing number of scientists and scholars are looking to the geological record of Earth’s “big
five” mass extinctions: they could provide a glimpse at what the future could look like if humanity
fails to arrest climate change (Ward 2008; Payne and Clapham 2012; Brannen 2017). Perhaps the
greatest warning sign of an inhospitable planet is the rapid and dramatic decline of biodiversity,
with approximately one million plant, insect, and animal species currently facing extinction
(United Nations 2019; World Wildlife Fund 2020)—what some are calling the “sixth extinction”
(Kolbert 2014; Brannen 2017). Humanity’s destruction of Earth’s natural environment not only
threatens the prosperity and survival of modern civilization and human beings, but the diversity
and flourishing of all life on Earth.
Appendix 3: Bioengineered Pathogens
Scenario: A highly virulent pathogen could be designed and released—either intentionally or
unintentionally by state or non-state actors—causing a global pandemic that threatens human
civilization and survival.
Human beings have always been susceptible to naturally occurring bacteria and viruses. When
lethality and communicability align, pandemics can be catastrophic, such as the plague that struck
Europe in 1347-1351 (the “Black Death”), killing between 30 and 50% of the population, and the
influenza virus in 1918-1921 (the “Spanish Flu”), which killed between 50 and 100 million
people—more than the First and Second World Wars combined. Nevertheless, natural selection may mitigate the threat
of pandemics, since lethality and communicability tend to work against each other: the more lethal
the pathogen, the more likely it will kill its host, and so the less opportunity it will have to spread
(Rees 2003; Posner 2004). General advances in hygiene and medicine have also tended to reduce
human vulnerability to pandemics, though this trend could reverse if poor practices with
antibiotics continue to drive antimicrobial resistance (O’Neill 2014).
However, advances in biotechnology could make “bioengineered pathogens” an existential
threat to humanity. The biological sciences are in the midst of a “revolution”, driven by advances
in biotechnology and synthetic biology, such as DNA sequencing and gene-editing. Leading the
charge is the discovery of the CRISPR/Cas9 system, which acts as both “guide” and “scissors” to
enable precision editing of the genome to potentially “rewrite the code of life” (Doudna and
Sternberg 2017, 84). In A Crack in Creation, Jennifer Doudna, one of CRISPR’s discoverers,
writes:
For billions of years, life progressed according to Darwin’s theory of evolution…
Today, things could not be more different. Scientists have succeeded in bringing
this primordial process fully under human control. Using powerful biotechnology
tools to tinker with DNA inside living cells, scientists can now manipulate and
rationally modify the genetic code that defines every species on the planet,
including our own (Doudna and Sternberg 2017, xiii).
While advances in the biosciences and biotechnology could produce enormous benefits for
humanity—including cures for some genetic diseases (Doudna and Sternberg 2017)—some
scientists have warned that they will also give humans the ability to take genetic blueprints of
viruses and manipulate them to make more virulent pathogens (Rees 2003; Posner 2004; Bostrom
and Cirkovic 2008). This could make possible the creation of “designer viruses” that are both
highly communicable, like smallpox, and highly lethal, like anthrax. Perhaps a pathogen could be
designed to be highly communicable with a slow incubation period, in order to maximize the time
for its spread and minimize the possibility of detection, and then overcome the human immune
system and defenses from vaccinations and antibiotics (Torres 2017).
Fortunately, the probability of deliberate use of biological weapons by states remains low.
The Biological Weapons Convention (BWC) prohibits the development of biological weapons,
with over 180 states parties. Yet most states possess at least a latent biological weapons capability,
and the BWC lacks a monitoring and verification system. Being a state party did not prevent the
Soviet Union from developing biological weapons during the Cold War (Gorvett 2017). Today,
North Korea maintains a biological weapons program that is believed to possess anthrax and
smallpox (Kim et al. 2017). The biggest obstacle to biological warfare is not international law, but
the limited military utility of a weapon that could unleash uncontrollable destruction.
Paradoxically, any advances in biotechnology that would provide states with greater control over
the effects of bioweapons could decrease international security by making these weapons “usable”
(Gronlund 2018), while increasing humanity’s existential security by limiting the scope of their
destructiveness.
Nevertheless, a pandemic could result from the accidental release of a pathogen. Scientific
research involving highly virulent pathogens inevitably entails biosafety risks, such as the “gain-
of-function” experiments to increase the transmissibility and/or virulence of pathogens (Gryphon
Scientific 2015; Selgelid 2016). For example, one controversial experiment involved the genetic
alteration of H5N1 avian influenza to be transmissible by air between ferrets, which one virologist
described as “probably one of the most dangerous viruses you can make” (Enserink 2011; Herfst
et al. 2012). While laboratories have safety procedures for containment, systems have flaws,
accidents occur, and viruses can escape (Merler et al. 2013; Furmanski 2014). In 2014, the Centers
for Disease Control and Prevention came under fire for repeated errors that exposed scientists to
Ebola, anthrax, and the flu (Grady and McNeil 2014). The biosafety threat may grow with the
decreasing costs and increasing accessibility of biotechnology to the general public. For example,
one crowdfunded venture aimed to distribute “DIY gene-editing kits” for $130, providing
“everything you need to make precision genome edits in bacteria at home” (Doudna and Sternberg
2017, 113). As Martin Rees (2018) warns, the democratization of “biohacking” could make
effective regulation impossible.
Perhaps the most frightening scenario, however, is “bioterrorism”. Biological weapons
have not been a weapon of choice for terrorists. This is probably not merely an issue of limited
expertise or resources, since many terrorist organizations are demonstrably highly capable
wielders of violence. In describing al-Qaeda’s 9/11 attacks, Barry Posen (2001, 40) wrote, “If this
had been a Western commando raid, it would be considered nothing short of brilliant.” However,
to the extent that terrorist organizations are “rational” actors with “limited” (political) objectives
(Pape 2003; Kydd and Walter 2006; Abrahms 2008), they will probably refrain from employing
biological weapons for similar reasons as states.
The danger lies in the alignment of “apocalyptic” aims and biotechnological capabilities
(Rees 2003; 2018; Torres 2017). Do (some) terrorists harbor apocalyptic aims? The answer, of
course, is “yes”. From “lone-wolf” individuals, like Eric Harris, to well-funded organizations, like
Aum Shinrikyo, there are terrorists whose principal “goal” is destruction. There are now both
religious “millenarians” who would like to bring about the “apocalypse”, and “environmentalists”
who would like to see the Earth return to its pre-human “natural” state (Torres 2017, 116–132).
The existence of such apocalyptic preferences is perhaps unsurprising: there are 7.6 billion people
on the planet. If every individual possessed a “Doomsday button,” humanity’s fate would be
sealed.
Given the apocalyptic aims of at least some individuals and groups, the question may turn
more on capabilities than intent. Frighteningly, biotechnology is no longer the “privilege of the
few.” Doudna and Sternberg (2017, 112–3) write:
Today thanks to the features of CRISPR, an aspiring scientist with the most basic
training can accomplish feats that would have been inconceivable just a few years
ago… What used to require years of work in a sophisticated biology laboratory can
now be performed in days by a high school student.
The proliferation threat is aggravated by access to scientific publications that describe how to make
virulent pathogens. The closing of the gap between apocalyptic aims and biotechnological
capabilities represents an existential threat to humanity.
Appendix 4: Artificial Intelligence
Scenario: An artificial intelligence system that achieves “superintelligence” eliminates
humankind if its fundamental goal is misaligned with humanity’s interests in prosperity and
survival; there appears to be no easy solution to the AI “control problem.”
Artificial intelligence (AI) is a general term that is used to describe digital technologies “that are
capable of performing tasks commonly thought to require intelligence” (Brundage et al. 2018, 9).
Humanity may be in the midst of an “AI revolution,” driven by a combination of gains in hardware
(e.g., the exponential growth in computing power described by “Moore’s Law”), software (e.g.,
“machine learning” algorithms and techniques, such as “neural networks” and “deep learning”),
and data (e.g., the abundance of digital information on the Internet) (Bostrom 2014; Shanahan
2015). While today’s AI systems are “narrow”—achieving or surpassing human-level intelligence
only in specific domains, like the AlphaGo system that triumphed over the world champion, Ke
Jie, in May 2017 at the strategy game “Go”—AI experts have long expressed the concern that AI
could one day match human-level “general” intelligence (AGI), or even achieve
“superintelligence” (ASI) that far exceeds our own (Turing 1950; Good 1966; Moravec 1988;
Vinge 1993; Joy 2000; Kurzweil 2005; Russell and Norvig 2010; Bostrom 2014; Shanahan 2015;
Tegmark 2018; Russell 2019). If, for instance, an AI system becomes better than humans at
optimizing its own algorithm, then this could unleash a process of recursive self-improvement—
an “intelligence explosion” (Good 1966)—in which the AI rapidly surpasses human intelligence
(Vinge 1993; Bostrom 2014).
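The “intelligence explosion” logic can be illustrated with a deliberately crude toy model; the growth rules and constants are arbitrary illustrations, not claims about real AI systems. If each self-improvement step yields a gain proportional to current capability, growth is exponential; if the gain compounds faster than that (here, proportional to capability squared), capability grows far faster:

```python
# Toy model of recursive self-improvement (illustrative only: the update
# rules and constants are arbitrary, not properties of real AI systems).

def self_improve(initial, rate, steps, superlinear=False):
    """Capability trajectory. With linear feedback the gain each step is
    rate * capability (exponential growth); with superlinear feedback it
    is rate * capability**2 (much faster than exponential)."""
    c, path = initial, [initial]
    for _ in range(steps):
        gain = rate * (c ** 2 if superlinear else c)
        c += gain
        path.append(c)
    return path

exponential = self_improve(initial=1.0, rate=0.1, steps=20)
explosive = self_improve(initial=1.0, rate=0.1, steps=20, superlinear=True)

print(f"linear feedback after 20 steps:      {exponential[-1]:.2f}")  # ~6.73
print(f"superlinear feedback after 20 steps: {explosive[-1]:.3g}")
```

The qualitative point is that the shape of the feedback, not the starting capability, determines whether improvement remains gradual or runs away, which is the intuition behind Good’s (1966) argument.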
Why would AGI/ASI constitute an existential threat to humanity? There are multiple
scenarios that illustrate the existential AGI/ASI risks. In the first scenario (“takeover” or “world
domination”), the AGI/ASI system determines humanity to be a threat to achieving its fundamental
goal, and therefore “eliminates the human species and any automatic systems humans have created
that could offer intelligent opposition to the execution of the AI’s plans” (Bostrom 2014, 95–7).
In a second scenario (“perverse instantiation”), the AGI/ASI system eliminates humankind
because its fundamental goal is not properly “aligned” with human survival. In essence, the
AGI/ASI system pursues some seemingly benign goal—e.g., maximizing paperclip production—
in a way that threatens humanity—e.g., by converting the biosphere into paperclips, humans
included (Bostrom 2014, 119–124). While experts caution against “anthropomorphizing” the goals
and motivations of AI (Bostrom 2014; Pinker 2018; Russell 2019), this may be immaterial to
whether AGI/ASI poses an existential threat, since it is the AI’s capabilities that are the main
source of danger. If the AGI/ASI system is “goal-oriented”—which today’s AI systems are
(Russell 2019)—then it could pursue instrumental objectives—e.g., survival, self-improvement,
and resource acquisition—that threaten human survival (Omohundro 2008; Bostrom 2014; Russell
2019). Ultimately, an AGI/ASI system could, under a wide-range of goals, take actions that
threaten human survival, meaning that human extinction could be the “default outcome” (Bostrom
2014, 115).
In a third scenario, humans could employ an AGI/ASI system in the pursuit of power,
wealth, security, or some other end, in ways that threaten human survival. The most obvious threat
is war. As warfare becomes increasingly fast and complex through the advance of technology
(Deudney 2018; Coker 2018; Bidwell et al. 2018)—including communications, networks, and
autonomy—humans may become increasingly dependent on AI to make and/or implement
decisions about the use of military force (Arquilla and Ronfeldt 2005; Horowitz 2018; Payne 2018;
Cummings et al. 2018; Scharre 2019). The relationship between the strategic pressure for military
advantage and the danger of technological loss-of-control could threaten human survival (Danzig
2018), especially if AGI/ASI undermines nuclear deterrence and creates new risks of escalation to
nuclear war (Geist and Lohn 2018; Boulanin ed. 2019). In a fourth scenario, a world characterized
by increasingly complex problems and powerful machines could lead humanity to gradually
relinquish control to machines of ever-greater capabilities and autonomy. The creeping
dependency of humans on AI could eventually mean that “the machines will be in effective
control” (Joy 2000). Humanity would no longer be in control of its own destiny, nor would it be
able to decrease its dependency on the intelligent machines, since “turning them off would amount
to suicide” (Joy 2000). Thus, artificial intelligence does not need to “eliminate” humanity to
constitute an existential threat; it only needs to achieve a level of intelligence and capabilities that
exceeds humanity’s intelligence and capabilities to maintain effective control. The essence of the
AI “control problem” is about “retaining absolute power over machines that are more powerful
than us” (Russell 2019, xii).
Appendix 5: Discourse Analysis of Macrosecuritization

Each entry below gives the issue, year, document type, and speaker, followed by the coded
discourse excerpt, the content analysis (code counts), and the source. Coding: [1] = universal
humanity; [2] = existential threat; [3] = extraordinary measures.
1. Nuclear Weapons | 1946 | Article | Niels Bohr (Scientist)

Discourse analysis: The possibility of releasing vast amounts of energy [2] through atomic disintegration, which means a veritable revolution of human resources [1, 2], cannot but raise in the mind of everyone [1] the question of where the advance of physical science is leading civilization [1]. While the increasing mastery of the forces of nature [2] has contributed so prolifically to human welfare [1] and holds out even greater promises, it is evident that the formidable power of destruction [2] that has come within reach of man [1] may become a mortal menace [2] unless human society can adjust [1, 3] itself to the exigencies of the situation. Civilization [1] is presented with a challenge more serious perhaps than ever before [2], and the fate of humanity [1, 3] will depend on its ability to unite in averting common dangers [1, 3] and jointly to reap the benefit from the immense opportunities which the progress of science offers… Such measures will, of course, demand the abolition of barriers [3] hitherto considered necessary to safeguard national interests but now standing in the way of common security [1] against unprecedented dangers [2]. Certainly the handling of the precarious situation [2] will demand the good will of all nations [1, 3], but it must be recognized that we [1] are dealing with what is potentially a deadly challenge to civilization itself [1, 2].

Content analysis: Universal Humanity = 13; Existential threat = 11; Extraordinary measures = 5

Source: Masters and Ways eds. (1946, ix-x).
2. Nuclear Weapons | 1968 | Speech | Lyndon B. Johnson (President)

Discourse analysis: I have asked for the privilege of addressing you this afternoon to acknowledge this momentous event in the history of nations [1], and to pledge, on behalf of the United States, our determination to make this but a first step toward ending the peril of nuclear war [2]… The resolution that you have just approved commends to the Governments of the world [1], for their speedy ratification, the treaty on the non-proliferation of nuclear weapons. It is the most important international agreement [3] in the field of disarmament since the nuclear age [2] began. It goes far to prevent [3] the spread of nuclear weapons [2]. It commits the nuclear Powers to redouble their efforts to end the nuclear arms race [3], and to achieve nuclear disarmament [3]. It will ensure the equitable sharing of the peaceful uses of nuclear energy, under effective safeguards [3], for the benefit of all nations [1]… I believe that this Treaty can lead to further measures that will inhibit the senseless continuation of the arms race [2]. I believe that it can give the world [1] time—very precious time—to protect itself against Armageddon [2, 3]… My fellow-citizens of the world [1]—in the name of our common humanity [1], let us ensure our survival [1, 3]—so that we [1] may achieve our high destiny on earth. Let us work for the ultimate self-interest of mankind [1]: for that peace in which future generations [1] may build a world [1] without fear and without want—a world that is fit for the sons of man [1].

Content analysis: Universal Humanity = 12; Existential threat = 6; Extraordinary measures = 7

Source: United States Arms Control & Disarmament Agency (1969, 433-435).
3
Nuclear
Weapons
1983
Article
Carl Sagan
(Scientist)
In summary, cold, dark, radioactivity, pyrotoxins and ultraviolet light following a
nuclear war [2] – including some scenarios only a small fraction of the world strategic
arsenals [2] – would imperil every survivor on the planet [1-2]. There is a real danger
of the extinction of humanity [1-2]. A threshold exists at which the climatic catastrophe
[2] could be triggered, very roughly around 500-2,000 strategic warheads [2]. A major
first strike [2] may be an act of national suicide, even if no retaliation occurs. Given
the magnitude of the potential loss [2], no policy declarations and no mechanical
safeguards can adequately guarantee the safety [3] of the human species [1]. No
national rivalry or ideological confrontation justifies putting the species at risk [1-2].
Accordingly, there is a critical need for safe and verifiable reductions [3] of the world
strategic inventories to below [this] threshold. At such levels, still adequate for
deterrence, at least the worst could not happen should a nuclear war break out [2-3].
Universal Humanity = 4
Existential threat = 10
Extraordinary measures = 3
Sagan (1983, 291-292)

4. Nuclear Weapons, 2017, Treaty. United Nations (IGO)
Deeply concerned about the catastrophic humanitarian consequences [2] that would
result from any use of nuclear weapons, and recognizing the consequent need to
completely eliminate [3] such weapons, which remains the only way to guarantee that
nuclear weapons are never used again [3] under any circumstances[.] Mindful of the
risks posed by the continued existence of nuclear weapons [2], including from any
nuclear-weapon detonation by accident, miscalculation or design [2], and emphasizing
that these risks concern the security of all humanity [1], and that all States [1] share the
responsibility to prevent any use of nuclear weapons[.] Cognizant that the catastrophic
consequences [2] of nuclear weapons cannot be adequately addressed [3], transcend
national borders, pose grave implications for human survival [2], the environment,
socioeconomic development, the global economy, food security and the health of
current and future generations [1], and have a disproportionate impact on women and
girls, including as a result of ionizing radiation[.] Acknowledging the ethical
imperatives for nuclear disarmament and the urgency [3] of achieving and maintaining
a nuclear-weapon-free world [3], which is a global public good [1] of the highest order,
serving both national and collective security interests [1]… Stressing the role of public
conscience [1] in the furthering of the principles of humanity [1] as evidenced by the
call for the total elimination of nuclear weapons [3].
Universal Humanity = 7
Existential threat = 5
Extraordinary measures = 6
United Nations General Assembly (2017)

5. Biological Weapons, 1969, Report. United Nations (IGO)
It is simple to appreciate the resurgence of interest in the problems of chemical and
bacteriological (biological) warfare [2]. Advances in chemical and biological science,
while contributing to the good of mankind [1], have also opened up the possibility of
exploiting the idea of chemical and bacteriological (biological) warfare weapons [2],
some of which could endanger man’s future [1-2], and the situation will remain
threatening [2] so long as a number of States proceed with their development,
perfection, production and stockpiling… All weapons of war are destructive of human
life [1], but chemical and bacteriological (biological) weapons stand in a class of their
own [2] as armaments which exercise their effects solely on living matter. The idea
that bacteriological (biological) weapons could deliberately be used to spread disease
generates a sense of horror [2]. The fact that certain chemical and bacteriological
(biological) agents are potentially unconfined in their effects [2], both in space and
time, and that their large-scale use could conceivably have deleterious and irreversible
effects [2] on the balance of nature [1] adds to the sense of insecurity and tension which
the existence of this class of weapons engenders [2]. Considerations such as these set
them into a category of their own in relation to the continuing arms race [2]… The
general conclusion of the report can thus be summed up in a few lines. Were these
weapons ever to be used on a large scale in war [2], no one could predict how enduring
the effects [2] would be and how they would affect the structure of society and the
environment in which we live [1]… It is the hope of the authors that this report will
contribute to public awareness [1] of the profoundly dangerous [2] results if these
weapons were ever used and that an aroused public [1] will demand and receive
assurances that Governments are working for the earliest effective elimination [3] of
chemical and bacteriological (biological) weapons.
Universal Humanity = 7
Existential threat = 13
Extraordinary measures = 1
United Nations Secretary General (1969)

6. Biological Weapons, 1972, Treaty. United Nations (IGO)
Determined to act with a view to achieving effective progress [3] towards general and
complete disarmament, including the prohibition and elimination of all types of
weapons of mass destruction [2-3]… Convinced of the importance and urgency [3] of
eliminating from the arsenals of States, through effective measures [3], such dangerous
weapons of mass destruction [2] as those using chemical or bacteriological (biological)
agents… Determined, for the sake of all mankind [1], to exclude completely [3] the
possibility of bacteriological (biological) agents and toxins being used as weapons
[2][.] Convinced that such use would be repugnant to the conscience of mankind [1]
and that no effort should be spared [3] to minimise this risk [2][.] Have agreed as
follows: Each State Party to this Convention undertakes never in any circumstance to
develop, produce, stockpile or otherwise acquire or retain [biological weapons] [3]…
Each State Party to this Convention undertakes to destroy, or to divert to peaceful
purposes [3], as soon as possible but not later than nine months after the entry into
force of the Convention, all agents, toxins, weapons, equipment and means of
delivery…
Universal Humanity = 2
Existential threat = 4
Extraordinary measures = 8
United Nations (1972)

7. Climate Change, 1988, Report. United Nations (IGO)
Humanity [1] is conducting an unintended, uncontrolled, globally pervasive
experiment [2] whose ultimate consequences could be second only to a global nuclear
war [2]. The Earth’s atmosphere is being changed at an unprecedented rate [2] by
pollutants resulting from human activities [1], inefficient and wasteful fossil fuel use
and the effects of rapid population growth in many regions. These changes represent a
major threat [2] to international security and are already having harmful consequences
[2] over many parts of the globe [1]. Far-reaching impacts [2] will be caused by global
warming and sea-level rise… The best predictions available indicate potentially severe
economic and social dislocation for present and future generations [1], which will
worsen international tensions and increase risk of conflicts among and within nations.
It is imperative to act now [3]…The Conference called upon governments, the United
Nations and its specialized agencies, industry, educational institutions, non-
governmental organizations and individuals to take specific actions [3] to reduce the
impending crisis [2] caused by pollution of the atmosphere. No country can tackle this
problem in isolation [3]. International cooperation in the management and monitoring
of, and research on, this shared resource is essential. The Conference called upon
governments to work with urgency [3] towards an Action Plan for the Protection of the
Atmosphere [3]… Continuing alteration of the global atmosphere threatens global
security, the world economy, and the natural environment [2]…upon which human
survival depends [2]… A coalition of reason is required, in particular, a rapid reduction
of both North-South inequalities and East-West tensions, if we are to achieve the
understanding and agreements needed to secure a sustainable future for planet Earth
and its inhabitants [13].
Universal Humanity = 5
Existential threat = 10
Extraordinary measures = 6
United Nations Environment Programme (1988)

8. Climate Change, 2017, Article. David Wallace-Wells (Journalist)
It is, I promise, worse than you think [2]. If your anxiety about global warming is
dominated by fears of sea-level rise, you are barely scratching the surface of what
terrors are possible [2], even within the lifetime of a teenager today [1]. And yet the
swelling seas – and the cities they will drown [2] – have so dominated the picture of
global warming, and so overwhelmed our capacity [1] for climate panic, that they have
occluded our perception [1] of other threats [2], many much closer at hand. Rising
oceans are bad, in fact very bad [2]; but fleeing the coastline will not be enough [3].
Indeed, absent a significant adjustment to how billions of humans conduct their lives
[3], parts of the Earth [1] will likely become close to uninhabitable [2], and other parts
horrifically inhospitable [2], as soon as the end of this century… The scientists know
that to even meet the Paris goals, by 2050, carbon emissions from energy and industry,
which are still rising, will have to fall by half each decade [3]; emissions from land use
(deforestation, cow farts, etc.) will have to zero out [3]; and we will need to have
invented technologies [3] to extract, annually, twice as much carbon from the
atmosphere as the entire planet’s plants now do. Nevertheless, by and large, the
scientists have an enormous confidence in the ingenuity of humans [1, 3] – a
confidence perhaps bolstered by their appreciation for climate change, which is, after
all, a human invention [1], too. They point to the Apollo project, the hole in the ozone
we [1] patched in the 1980s, the passing of the fear of mutually assured destruction.
Now we’ve [1] found a way to engineer our own doomsday [1-2], and surely we [1]
will find a way to engineer our way out of it [1, 3], one way or another. The planet [1]
is not used to being provoked like this [2], and climate systems designed to give
feedback over centuries or millennia prevent us [1] – even those who may be watching
closely – from fully imagining the damage done already to the planet [1]. But when we
[1] do truly see the world we’ve made [1], they say, we [1] will also find a way to make
it livable. For them, the alternative is simply unimaginable [2].
Universal Humanity = 17
Existential threat = 10
Extraordinary measures = 7
Wallace-Wells (2017)
9. Climate Change, 2019, Article. William Ripple et al. (Scientists)
Scientists have a moral obligation to clearly warn humanity of any catastrophic threat
[1-2] and to “tell it like it is.” On the basis of this obligation and the graphical indicators
presented below, we declare, with more than 11,000 scientist signatories from around
the world [1], clearly and unequivocally that planet Earth is facing a climate
emergency [1-2]… Despite 40 years of global climate negotiations, with few
exceptions, we [1] have generally conducted business as usual and have largely failed
to address this predicament [2]. The climate crisis has arrived and is accelerating [2]
faster than most scientists expected. It is more severe [2] than anticipated, threatening
natural ecosystems and the fate of humanity [1-2]. Especially worrisome are potential
irreversible climate tipping points [2] and nature’s reinforcing feedbacks (atmospheric,
marine, and terrestrial) that could lead to a catastrophic “hothouse Earth,” [2] well
beyond the control of humans [1-2]. These climate chain reactions could cause
significant disruptions [2] to ecosystems, society, and economies, potentially making
large areas of Earth uninhabitable [2]. To secure a sustainable future, we must change
how we live [1, 3]… Mitigating and adapting to climate change while honoring the
diversity of humans [1] entails major transformations [3] in the ways our global society
[1] functions and interacts with natural ecosystems… The good news is that such
transformative change [3], with social and economic justice for all [1], promises far
greater human well-being [1] than does business as usual. We believe that the prospects
will be greatest if decision-makers and all of humanity [1] promptly respond [3] to this
warning and declaration of a climate emergency [2] and act [3] to sustain life on planet
Earth, our only home [1].
Universal Humanity = 13
Existential threat = 12
Extraordinary measures = 5
Ripple et al. (2020,
8-12).
ccxcv
10
Biodiversity
Loss
2020
Report
World
Wildlife
Fund
(NGO)
At a time when the world [1] is reeling from the deepest global disruption and health
crisis of a lifetime [1-2], this year’s Living Planet Report provides unequivocal and
alarming evidence that nature is unravelling [2] and that our planet [1] is flashing red
warning signs of vital natural systems failure [2]. The Living Planet Report 2020
clearly outlines how humanity’s [1] increasing destruction of nature is having
catastrophic impacts [2] not only on wildlife populations but also on human health [1]
and all aspects of our lives [1]. This highlights that a deep cultural and systemic shift
is urgently needed [3], one that so far our civilisation [1] has failed to embrace [2]: a
transition to a society and economic system that values nature [3]… This is about
rebalancing our relationship with the planet [3] to preserve the Earth’s amazing
diversity of life [1] and enable a just, healthy and prosperous society [1] – and
ultimately to ensure our own survival [1-3]. Nature is declining globally [1] at rates
unprecedented in millions of years [2]. The way we [1] produce and consume food and
energy, and the blatant disregard for the environment entrenched in our [1] current
economic model, has pushed the natural world to its limits [2]. COVID-19 is a clear
manifestation of our broken relationship with nature [1-2]. It has highlighted the deep
interconnection between nature, human health and well-being [1], and how
unprecedented biodiversity loss threatens the health of both people and the planet [1-2].
It is time we answer nature’s SOS [2]. Not just to secure the future of tigers, rhinos,
whales, bees, trees and all the amazing diversity of life we love [1] and have the moral
duty to coexist with, but because ignoring it also puts the health, well-being and
prosperity, indeed the future, of nearly 8 billion people at stake [1-2]… World leaders
[1] must take urgent action to protect and restore nature [3] as the foundation for a
healthy society [1] and a thriving economy. We still have a chance [1, 3] to put things
right. It’s time for the world [1] to agree a New Deal for Nature and People [3],
committing to stop and reverse the loss of nature [3] by the end of this decade and
build a carbon-neutral and nature-positive economy and society [1, 3]. This is our [1]
best safeguard of human health and livelihoods [1, 3] in the long term, and to ensure a
safe future for our children and children’s children [1, 3].
Universal Humanity = 26
Existential threat = 12
Extraordinary measures = 11
Almond et al. (2020, 4-5).

11. Biodiversity Loss, 2020, Communiqué. United Nations Summit (Political leaders)
We, political leaders participating in the United Nations Summit on Biodiversity…
send a united signal to step up global ambition for biodiversity [1, 3] and to commit to
matching our collective ambition [3] for nature, climate and people [2] with the scale
of the crisis [2] at hand… We are in a state of planetary emergency [1, 2]: the
interdependent crises [2] of biodiversity loss and ecosystem degradation and climate
change – driven in large part by unsustainable production and consumption [2] –
require urgent and immediate global action [1, 3]. Science clearly shows that
biodiversity loss, land and ocean degradation, pollution, resource depletion and climate
change are accelerating at an unprecedented rate [2]. This acceleration is causing
irreversible harm to our life support systems [1-2] and aggravating poverty and
inequalities as well as hunger and malnutrition. Unless halted and reversed with
immediate effect [3], it will cause significant damage [2] to global [1] economic, social
and political resilience and stability and will render achieving the Sustainable
Development Goals impossible…Nature fundamentally underpins human health,
wellbeing and prosperity [1]… Despite ambitious global agreements and targets for
the protection, sustainable use and restoration of biodiversity, and notwithstanding
many local success stories, the global trends continue rapidly in the wrong direction.
A transformative change is needed [3]: we [1] cannot simply carry on as before [3].
This Pledge is a recognition of this crisis [2] and an expression of the need for a
profound re-commitment from World leaders [1] to take urgent action [3]… [W]e
commit ourselves not simply to words, but to meaningful action [3] and mutual
accountability to address the planetary emergency [2]. It marks a turning point [3], and
comes with an explicit recognition that we will be judged now and by future
generations [1] on our willingness and ability to meet its aims.
Universal Humanity = 9
Existential threat = 10
Extraordinary measures = 9
Leaders’ Pledge for Nature (2020)
12. Artificial Intelligence, 2014, Book. Nick Bostrom (Scholar)
This thing, the human brain [1], has some capabilities the brains of other animals lack.
It is to these distinctive capabilities that we owe our dominant position on the planet
[1]. Other animals have stronger muscles and sharper claws, but we [1] have cleverer
brains. Our [1] modest advantage in intelligence has led us [1] to develop language,
technology, and complex social organization. The advantage has compounded over
time, as each generation [1] has built on the achievements of its predecessors. If some
day we [1] build machine brains that surpass human brains in general intelligence [1-2],
then this new superintelligence could become very powerful [2]. And, as the fate of
gorillas now depends more on us humans [1] than on the gorillas themselves, so the
fate of our species [1] would depend on the actions of the machine superintelligence
[2]. We do have one advantage [1, 3]: we [1] get to build the stuff. In principle, we [1]
could build a kind of superintelligence that would protect human values [1, 3]. We [1]
would certainly have strong reason to do so. In practice, the control problem [2] – the
problem of how to control what the superintelligence would do – looks quite difficult.
It also looks like we will only get one chance [2]. Once unfriendly superintelligence
[2] exists, it would prevent us [1] from replacing it or changing its preferences. Our
fate would be sealed [1-2]… This is quite possibly the most important and most
daunting challenge humanity has ever faced [1-3]. And – whether we [1] succeed or
fail – it is probably the last challenge we will ever face [1-3].
Universal Humanity = 20
Existential threat = 9
Extraordinary measures = 4
Bostrom (2014, vii).

Explanatory Note: The referent object of a universal humanity is indicated by a [1], an existential threat is marked by a [2], and extraordinary measures for security and survival are indicated by a [3] in the table.
Appendix 6: Analysis of Macrosecuritization Cases

The table records the following fields for each case:
Background: Start Year; End Year; Issue; Historical Context.
Macrosecuritization Framework: Referent Object; Existential Threat; Extraordinary Measures.
Security Constellation: Macrosecuritizing Actor(s); Audience; Functional Security Actors.
Macrosecuritization Outcomes: Audience Acceptance (Discourse); Security Measures (Action).
Explanatory Variables – Nature of Issue: Sector; (In)Direct Harm; Scientific Uncertainty; Time Horizons. Domestic Politics: Regime Type (Great Powers); Political Party (USA). International System: Polarity; Great Powers; Major Powers; Structural Transition; Regional Divide; Hegemonic Patron. Great Power Politics: Great Power Consensus. National Interests: National Security Interest (USA); National Economic Interests (USA).
International Control of Atomic Energy
Start Year: 1942; End Year: 1946; Issue: Nuclear weapons; Historical Context: End of the Second World War / Origins of the Cold War.
Referent Object: “Civilization”; “Mankind”.
Existential Threat: The atomic bomb; “Nuclear arms race”.
Extraordinary Measures: International control over atomic energy; International Atomic Development Authority.
Macrosecuritizing Actor(s): Individuals (Bohr, Baruch); Scientists (atomic scientists); States (USA); IGOs (UN).
Audience: States (all); Great powers (USA, USSR).
Functional Security Actors: States (all); Great powers (USA, USSR).
Audience Acceptance (Discourse): Strong (4): The survival of humanity is accepted as a serious concern on the atomic bomb and danger of a “nuclear arms race”.
Security Measures (Action): Weak (1): Diplomatic negotiations on the international control of atomic energy end in failure, launching the “nuclear arms race”.
Sector: Military. (In)Direct Harm: Direct (“threat”).
Scientific Uncertainty: No: Consensus amongst atomic scientists.
Time Horizons: Short-term: ~4 years until Soviet atomic bomb.
Regime Type (Great Powers): Democracy (USA); Authoritarian (USSR).
Political Party (USA): Democrat (Truman).
Polarity: Bipolarity.
Great Powers: United States; Soviet Union.
Major Powers: United Kingdom; Germany; Japan; France; Italy; Republic of China.
Structural Transition: Yes: Multipolarity to bipolarity, with the decline of the European great powers and Japan.
Regional Divide: Yes: East-West.
Hegemonic Patron: Yes: The United States.
Great Power Consensus: No: Opposition by the Soviet Union.
National Security Interest (USA): Strong: American “atomic monopoly” and Soviet conventional superiority.
National Economic Interests (USA): Weak: Nascent nuclear energy industry.
Nuclear Proliferation
Start Year: 1961; End Year: 1967; Issue: Nuclear weapons; Historical Context: The Cold War.
Referent Object: “Humanity”; “Civilization”; “Mankind”.
Existential Threat: Nuclear war; Nuclear proliferation.
Extraordinary Measures: Treaty on Non-Proliferation of Nuclear Weapons; nuclear “safeguards” by International Atomic Energy Agency.
Macrosecuritizing Actor(s): States (Ireland; USA); IGOs (UN, Eighteen-Nation Committee on Disarmament).
Audience: States (all); Great powers (USA, USSR).
Functional Security Actors: States (all); IGOs (UNGA, UNSC, IAEA).
Audience Acceptance (Discourse): Strong (4): The survival of humanity is accepted as a serious concern for nuclear war, the risks exacerbated by nuclear proliferation.
Security Measures (Action): Strong (4): The nuclear nonproliferation regime, including the NPT and IAEA “safeguards”.
Sector: Military. (In)Direct Harm: Direct (“threat”).
Scientific Uncertainty: No: General consensus about the dangers of nuclear proliferation and nuclear war.
Time Horizons: Existing: Present and growing.
Regime Type (Great Powers): Democracy (USA); Authoritarian (USSR).
Political Party (USA): Democrat (Johnson).
Polarity: Bipolarity.
Great Powers: United States; Soviet Union.
Major Powers: United Kingdom; France; People’s Republic of China.
Structural Transition: No: Stable bipolar system.
Regional Divide: Yes: North-South.
Hegemonic Patron: Yes: The United States.
Great Power Consensus: Yes: United States and the Soviet Union (also the United Kingdom).
National Security Interest (USA): Strong: Clear American national security interest in nonproliferation.
National Economic Interests (USA): Moderate: Nuclear energy industry.
Biological Weapons
Start Year: 1968; End Year: 1972; Issue: Biological weapons; Historical Context: The Cold War.
Referent Object: “Humanity”; “Mankind”.
Existential Threat: Bacteriological (biological) weapons; “Weapons of mass destruction”.
Extraordinary Measures: International prohibition regime – the first international treaty to eliminate an entire category of “weapons of mass destruction”.
Macrosecuritizing Actor(s): States (UK, USA); IGOs (UN, UNSG, WHO).
Audience: States (all); Great powers (USA, USSR).
Functional Security Actors: States (all); IGOs (UN, WHO).
Audience Acceptance (Discourse): Moderate (3): The survival of humanity is accepted as a legitimate but not the dominant framing for biological weapons and warfare.
Security Measures (Action): Strong (4): The BWC prohibition regime establishes a strong normative “taboo” against biological weapons and leads the US to eliminate stockpiles, but does not establish a verification system – the USSR launches a secret biological weapons program.
Sector: Military; Public health. (In)Direct Harm: Direct (“threat”).
Scientific Uncertainty: Yes: Uncertainty about the potential consequences of biological warfare.
Time Horizons: Existing: Present and growing.
Regime Type (Great Powers): Democracy (USA); Authoritarian (USSR).
Political Party (USA): Republican (Nixon).
Polarity: Bipolarity.
Great Powers: United States; Soviet Union.
Major Powers: United Kingdom; France; People’s Republic of China.
Structural Transition: No: Stable bipolar system.
Regional Divide: No: No apparent regional divide (though recognition that biological weapons pose a proliferation threat in developing countries).
Hegemonic Patron: Yes: The United States (also the United Kingdom).
Great Power Consensus: Yes: United States and the Soviet Union (also the United Kingdom).
National Security Interest (USA): Weak: White House and the military conclude that biological weapons have limited strategic importance.
National Economic Interests (USA): Weak: Some resistance from the chemical industry and Army Chemical Corps.
Ozone Hole
Start Year: 1985; End Year: 1987; Issue: Ozone layer; Historical Context: End of the Cold War.
Referent Object: “Mankind”; “Earth”; “Human health and environment”.
Existential Threat: “Ozone hole”; Ozone depletion; Chlorofluorocarbons (CFCs).
Extraordinary Measures: The Vienna Convention and Montreal Protocol to protect the ozone layer by phasing out harmful chemicals, the Multilateral Fund for Implementation, and intrusive monitoring.
Macrosecuritizing Actor(s): Scientists (WMO); IGOs (UNEP, WHO); States (USA).
Audience: States (all); General public (international).
Functional Security Actors: States (all); IGOs (UN); Firms (CFC and other ozone-depleting chemical producers); Civil society (consumers).
Audience Acceptance (Discourse): Weak (1): Only minimal or infrequent recognition that ozone depletion could pose an existential threat to humanity.
Security Measures (Action): Strong (5): Rapid and decisive action is taken to neutralize the threat, uncharacteristic of multilateral diplomacy – the Montreal Protocol is the only treaty ratified by all 197 UN Member States.
Sector: Environment. (In)Direct Harm: Indirect (“risk”).
Scientific Uncertainty: Yes: General consensus about CFCs as drivers of ozone depletion, but continuing debate about the severity of the consequences for human health and the environment.
Time Horizons: Long-term.
Regime Type (Great Powers): Democracy (USA); Authoritarian (USSR).
Political Party (USA): Republican (Reagan).
Polarity: Bipolarity.
Great Powers: United States; Soviet Union.
Major Powers: Japan; Germany (West); United Kingdom; France; People’s Republic of China.
Structural Transition: No: Stable bipolar system.
Regional Divide: Yes: North-South (uneven responsibilities and consequences for Northern countries).
Hegemonic Patron: Yes: The United States.
Great Power Consensus: Yes: Initial resistance from European nations.
National Security Interest (USA): None: No connection made between the ozone layer and national security.
National Economic Interests (USA): Moderate: CFC producers and consumers; the main American producer (DuPont) developed substitutes.
Global Warming
Start Year: 1979; End Year: 1992; Issue: Climate change; Historical Context: End of the Cold War / American Hegemony.
Referent Object: “Humanity”; “Humankind”; “Mankind”; “Earth”.
Existential Threat: Global warming; Rising sea levels; Fresh water scarcity; Global food security; Extreme weather and natural disasters.
Extraordinary Measures: A comprehensive global framework convention on climate change; Transition of industrial economies to renewable energy and “sustainable development”.
Macrosecuritizing Actor(s): Individuals (Hansen); Scientists (climate scientists, IPCC); States (EU); IGOs (UNEP, WMO).
Audience: States (all); Great powers (US, USSR); General public (international).
Functional Security Actors: States (“developed economies”); Industry (fossil fuels); IGOs (UNEP, WMO, IPCC).
Audience Acceptance (Discourse): Moderate (2): Some acceptance that global warming poses an existential threat to humanity, but this frame is contested and not generally accepted – the UNFCCC only refers to the “adverse effects” and “common concern of humankind”.
Security Measures (Action): Moderate (2): The UNFCCC establishes new international obligations, but does not set binding emissions targets or create new international powers – and generally fails to stabilize greenhouse gas emissions.
Sector: Environment; Energy; Economy. (In)Direct Harm: Indirect (“risk”).
Scientific Uncertainty: Yes: Growing scientific consensus about human-driven climate change, but substantial uncertainty about timing and consequences.
Time Horizons: Long-term: Between 2050 and 2100.
Regime Type (Great Powers): Democracy (USA); Authoritarian (USSR).
Political Party (USA): Republican (Reagan; Bush).
Polarity: Bipolarity.
Great Powers: United States; Soviet Union / Russian Federation.
Major Powers: Japan; Germany (West); United Kingdom; France; People’s Republic of China.
Structural Transition: Yes: Bipolarity to unipolarity, with the decline and collapse of the Soviet Union.
Regional Divide: Yes: North-South (uneven responsibilities for North, and consequences for South); Divide between Europe and North America, with Japan, Australia, and the USSR supporting the USA.
Hegemonic Patron: No: The United States resists the UNFCCC and waters down commitments and measures.
Great Power Consensus: No: Opposition by the United States, the Soviet Union, and Japan.
National Security Interest (USA): Weak: Weak connection made to national security.
National Economic Interests (USA): Strong: Fossil fuel industry and industrial economy.
Nuclear Winter
Start Year: 1982; End Year: 1992; Issue: Nuclear weapons / Climate change; Historical Context: End of the Cold War / American Hegemony.
Referent Object: “Humanity”; “Human species”; “Life on Earth”.
Existential Threat: Nuclear war; “Nuclear winter”.
Extraordinary Measures: Nuclear arms control and disarmament (limitations and reductions in nuclear arsenals).
Macrosecuritizing Actor(s): Individuals (Reagan, Gorbachev); Scientists (biological and climate scientists); Civil society (antinuclear activists); IGOs (UN).
Audience: States (nuclear powers); General public (international).
Functional Security Actors: States (nuclear weapons states); Great powers (USA, USSR).
Audience Acceptance (Discourse): Moderate (3): The survival of humanity is accepted as a legitimate but not the dominant framing on nuclear weapons, which also emphasizes strategic stability and nuclear deterrence.
Security Measures (Action): Strong (4): The US and USSR agree to and implement dramatic reductions in their nuclear arsenals (START I & II), which mitigates but does not eliminate the existential threat of nuclear war and winter.
Sector: Military; Environment. (In)Direct Harm: Indirect (“risk”).
Scientific Uncertainty: No: Growing scientific consensus confirming nuclear winter theory – although skepticism remains in governments and the general public.
Time Horizons: Existing: Newly discovered.
Regime Type (Great Powers): Democracy (USA); Authoritarian (USSR).
Political Party (USA): Republican (Reagan; Bush).
Polarity: Bipolarity.
Great Powers: United States; Soviet Union / Russian Federation.
Major Powers: Japan; Germany; United Kingdom; France; People’s Republic of China.
Structural Transition: Yes: Bipolarity to unipolarity, with the decline and collapse of the Soviet Union.
Regional Divide: No: No apparent regional divide (although there are differences in responsibility for, and consequences of, the threat).
Hegemonic Patron: Yes: The United States (also the Soviet Union).
Great Power Consensus: Yes: The United States and the Soviet Union.
National Security Interest (USA): Strong: Nuclear deterrence is strengthened by arms control measures to stabilize and reverse the nuclear arms race.
National Economic Interests (USA): Moderate: “Military Industrial Complex”.
Prohibition of Nuclear Weapons
Start Year: 2007; End Year: 2017; Issue: Nuclear weapons; Historical Context: American Hegemony.
Referent Object: “Humanity”.
Existential Threat: Nuclear weapons; Nuclear war; Nuclear famine.
Extraordinary Measures: An international prohibition regime on nuclear weapons; “A world without nuclear weapons”.
Macrosecuritizing Actor(s): Transnational civil society (ICAN); States (small and medium powers); IGOs (UNGA).
Audience: States (small and medium states); General public (international).
Functional Security Actors: States (nuclear weapons states); IGOs (UNGA).
Audience Acceptance (Discourse): Moderate (3): The survival of humanity is accepted as a legitimate, though contested, framing for nuclear weapons – the Treaty on the Prohibition of Nuclear Weapons recognizes that nuclear war poses “grave implications for human survival”.
Security Measures (Action): Moderate (2): The Treaty on the Prohibition of Nuclear Weapons establishes new obligations for states parties and a normative challenge to nuclear weapons states, but non-participation by nuclear weapons states fails to reduce the existential threat of nuclear war.
Sector: Military. (In)Direct Harm: Direct (“threat”).
Scientific Uncertainty: No: General consensus about the dangers of nuclear war.
Time Horizons: Existing: Present and growing.
Regime Type (Great Powers): Democracy (USA).
Political Party (USA): Democratic (Obama); Republican (Trump).
Polarity: Unipolarity.
Great Powers: United States.
Major Powers: People’s Republic of China; European Union; Russia; India; Japan.
Structural Transition: No: Stable unipolar system.
Regional Divide: Yes: North-South (Southern countries favor a ban, while Northern countries reject a ban).
Hegemonic Patron: No: The United States boycotted treaty negotiations and pressured its allies to do the same.
Great Power Consensus: No: None of the nuclear weapons states – the great and major powers – are signatories or parties to the treaty.
National Security Interest (USA): Strong: American national security policy is strongly committed to nuclear deterrence.
National Economic Interests (USA): Moderate: “Military Industrial Complex”.
Artificial Intelligence
Start Year: 2014; End Year: Continuing; Issue: Artificial intelligence; Historical Context: Rise of China.
Referent Object: “Humanity”; “Humankind”.
Existential Threat: “Artificial general intelligence”; “Superintelligence”; Human-“incompatible” or “misaligned” AI.
Extraordinary Measures: Radically changing the “standard model” of AI R&D (“provably beneficial AI”); International control or prohibition regime.
Macrosecuritizing Actor(s): Individuals (Bostrom, Russell); Scientists (AI experts); NGOs (FHI; FLI; CSER).
Audience: Scientists (AI community); States (all); General public (international).
Functional Security Actors: States (USA, China, EU, Russia); Firms (tech companies); Scientists (AI).
Audience Acceptance (Discourse): Weak (1): Only minimal or infrequent recognition that AI may pose an existential threat to humanity.
Security Measures (Action): Weak (1): New international measures on AI safety and security are discussed (i.e., “trustworthy” and “human-centric AI”), but are yet to materialize – and the existential risks of AGI/ASI are largely dismissed.
Sector: Science and technology. (In)Direct Harm: Indirect (“risk”).
Scientific Uncertainty: Yes: Significant debate and uncertainty amongst AI experts about the possibility, timing, and consequences of AGI/ASI.
Time Horizons: Medium-term: ~20-50 years.
Regime Type (Great Powers): Democracy (USA); Authoritarian (China).
Political Party (USA): Democrat (Obama); Republican (Trump); Democrat (Biden).
Polarity: Unipolarity.
Great Powers: United States.
Major Powers: People’s Republic of China; European Union; Russia; India; Japan; United Kingdom.
Structural Transition: Yes: Unipolarity to bipolarity, with the rise of China as a great power.
Regional Divide: Yes: East-West (US-China technology competition); Also North-South (unequal access to technology).
Hegemonic Patron: No: The United States has refrained from taking a leadership position on AI safety and security.
Great Power Consensus: No: Neither the United States nor China has taken on international leadership on AI safety and security.
National Security Interest (USA): Strong: The US military and other great and major powers see AI as a strategic emerging technology for national security and defence.
National Economic Interests (USA): Strong: The US Government sees AI as a revolutionary emerging technology for economic competitiveness, innovation, and growth.
Climate Emergency (2017–cont.)
Existential threat: Climate change
Great power rivalry: Rise of China
Referent object: "Humanity"; "Humankind"; "Earth system"
Threat framing: The "Anthropocene"; "Climate crisis"; "Climate emergency"; "Tipping points"; "Hothouse Earth"
Extraordinary measures: A "societal transformation" of the global economy and the human-nature relationship; "carbon net zero"
Securitizing actors: Individuals (Thunberg); Scientists (climate scientists); States (some); IGOs (UN, UNEP, IPCC); Transnational civil society (Extinction Rebellion)
Audiences: States (all); General public (international)
Functional actors: States (all); Industry (fossil fuels); IGOs (UN, IPCC)
Threat acceptance: Strong (4); the survival of humanity is increasingly accepted as a serious concern from climate change, and many nations declare a "climate crisis" or "climate emergency"
Security measures: Moderate (3); new international measures are established by the Paris Agreement and many states commit to carbon neutrality by mid-century, but actual emissions reductions are incremental and uneven, falling short of what is necessary to meet the goals of the Paris Agreement
Sector: Environment; Energy; Economy
Harm: Indirect ("risk")
Scientific uncertainty: No (general scientific consensus about the causes and effects of climate change, although levels of consensus vary amongst the general public and across countries)
Time horizon: Medium to long-term; governments are concerned about medium-term effects (between 2030 and 2050), but climate change likely only becomes an existential threat beyond 2050
Regime type: Democracy (USA); Authoritarian (China)
Political party: Republican (Trump); Democrat (Biden)
Polarity: Unipolarity
Great and major powers: United States (hegemon); People's Republic of China; European Union; Russia; India; Japan; United Kingdom
Structural transition: Yes (unipolarity to bipolarity, with the rise of China as a great power)
Divides: Yes; North-South (uneven responsibilities and consequences); East-West (historical emissions of Europe and North America versus current emissions of China and India); also divides between groupings, including oil producers and small island nations
Hegemonic patron: No (the United States withdrew from the Paris Agreement under Trump and is yet to make a clear and strong commitment to climate action under the Biden administration)
Great power consensus: No (the United States and China have failed to demonstrate strong leadership on climate change; there is a big gap between climate targets and emissions reductions)
Military-security interests: Moderate; the US military has made a connection between climate change and national security as a "threat multiplier"
Economic interests: Strong; the fossil fuel industry has a strong interest in resisting climate action; the US economy is highly dependent on energy-intensive industries, such as agriculture, manufacturing, and transportation; other great and major powers are also highly dependent on fossil fuel production and consumption
Biodiversity Loss (2018–cont.)
Existential threat: Biodiversity loss
Great power rivalry: Rise of China
Referent object: "Humanity"; human health and wellbeing; "Nature"; "Biodiversity"; "Life on Earth"
Threat framing: "Biodiversity loss"; "Mass extinction"; "Planetary emergency"; decline of "life support systems"; food security; water security; human health
Extraordinary measures: A paradigm shift to transform the human-nature relationship; "whole-of-society" action to conserve, protect, and enhance biodiversity; "sustainable development"
Securitizing actors: Individuals (Attenborough); Scientists (IPBES); NGOs (WWF); States (some); IGOs (UN, UNEP); Transnational civil society (Extinction Rebellion)
Audiences: States (all); General public (international)
Functional actors: States (all); IGOs (UN)
Threat acceptance: Moderate (3); the survival of humanity is accepted as a legitimate frame for biodiversity loss, but it competes with other narratives about the loss of nature or the implications for the UN "Sustainable Development Goals"
Security measures: Moderate (2); although states have agreed to a "Post-2020 Global Biodiversity Framework" and a "2050 Vision of Living in Harmony with Nature," they have thus far failed to slow and reverse biodiversity loss, and the "extinction crisis" continues to accelerate
Sector: Environment; Natural resources
Harm: Indirect ("risk")
Scientific uncertainty: No (general scientific consensus that biodiversity loss is human-driven and poses serious dangers)
Time horizon: Short to long-term; the impacts of biodiversity loss on food and resource scarcity are already being felt, but biodiversity loss likely only becomes an existential threat beyond 2050
Regime type: Democracy (USA); Authoritarian (China)
Political party: Republican (Trump) and Democrat (Biden)
Polarity: Unipolarity
Great and major powers: United States (hegemon); People's Republic of China; European Union; Russia; India; Japan; United Kingdom
Structural transition: Yes (unipolarity to bipolarity, with the rise of China as a great power)
Divides: Yes; North-South (uneven responsibilities and consequences); also divides between national governments and indigenous peoples
Hegemonic patron: No (the United States has not taken on a leadership role; it is the only country that is not a party to the Convention on Biological Diversity)
Great power consensus: No (neither the United States nor China supports strong action on biodiversity loss; the European Union and United Kingdom are supportive)
Military-security interests: None; currently no connection is made between biodiversity loss and national security
Economic interests: Strong; many industries are implicated in biodiversity loss, including agriculture, energy, mining, and transportation
Explanatory Notes: The historical boundaries for the polarity of the international system are classified as follows: "multipolarity" between 1942 and 1944; "bipolarity" between 1945 and 1990; "unipolarity" between 1991 and 2019; and "bipolarity" from 2020 to the present.
Appendix 7: Summary of Theoretical Hypotheses and Empirical Findings

Variables: Sector; Harm; Scientific Uncertainty; Time Horizons; Regime Type; Political Party; Polarity; Structural Transitions; Hegemonic Patron; Great Power Consensus

Hypotheses:
H1: Military sector conducive to macrosecuritization; otherwise failure
H2: Direct harm conducive to macrosecuritization; otherwise failure
H3: Scientific consensus conducive to macrosecuritization; otherwise failure
H4: Short time horizons conducive to macrosecuritization; otherwise failure
H5: Democracy conducive to macrosecuritization; otherwise failure
H6: Democratic White House conducive to macrosecuritization; otherwise failure
H7: Bipolarity conducive to macrosecuritization; otherwise failure
H8: Stable international system conducive to macrosecuritization; otherwise failure
H9: Hegemonic patron conducive to macrosecuritization; otherwise failure
H10: Great power consensus conducive to macrosecuritization; otherwise failure

Findings by case:
Atomic Energy: X X
Nuclear Proliferation: X X X X X X X X X X
Biological Weapons: X X X X X X X X
Ozone Hole: X X X X X
Nuclear Winter: X X X X X X X
Global Warming: X X X X X X X
Nuclear Prohibition: X X X
Artificial Intelligence: X X X X X X X X
Climate Emergency: X X X X X X X
Biodiversity Loss: X X X X X X X
Appendix 8: Views of Key Individuals on the International Control over Atomic Energy

Stage 1: Emergence

Winston S. Churchill (Prime Minister of the United Kingdom) – National securitization: Strongly favoured an Anglo-American atomic monopoly and rejected cooperation with the Soviet Union and the international control of atomic energy.

Franklin D. Roosevelt (President of the United States) – National securitization: Favoured an Anglo-American atomic monopoly, rejected cooperation with the Soviet Union, and was skeptical of the international control of atomic energy.

Albert Einstein (Scientist) – National securitization: Warned President Roosevelt about the danger that Nazi Germany could develop the atomic bomb.

Niels Bohr (Atomic Scientist) – Macrosecuritization: Strongly advocated atomic cooperation with the Soviet Union and the international control of atomic energy.

Vannevar Bush (Director, Office of Scientific Research and Development) – Macrosecuritization: Warned against Anglo-American atomic cooperation, advocated reaching out to the Soviet Union, and proposed a system of international control.

James B. Conant (Chairman, National Defence Research Committee) – Macrosecuritization: Warned against Anglo-American atomic cooperation, advocated reaching out to the Soviet Union, and proposed a system of international control.

Henry L. Stimson (U.S. Secretary of War) – Macrosecuritization: Eventually became a strong proponent of cooperation with the Soviet Union and the international control of atomic energy.

Igor Kurchatov (Director of Soviet Atomic Research, Ioffe Institute) – National securitization: Warned of the military and economic implications of atomic energy.

G. N. Flyorov (Lieutenant, Red Army) – National securitization: Warned that American, British, and German scientists were secretly working on the atomic bomb.

Stage 2: Evolution

Harry S. Truman (President of the United States) – Macrosecuritization: Publicly advocated the creation of a system of international control over atomic energy, but also supported the American atomic monopoly and atomic diplomacy towards the Soviet Union.

James F. Byrnes (Secretary of State) – National securitization: Initially advocated "atomic diplomacy," using atomic weapons as an implicit threat to coerce the Soviets into making political concessions in the peace agreement, but eventually supported diplomatic negotiations towards the international control of atomic energy.

Leslie Groves (Major General, Commanding General of the Manhattan Project) – National securitization: Advocated maintaining the American atomic monopoly and atomic secrecy for as long as possible.

Dean Acheson (Under Secretary of State) – Macrosecuritization: Supported the international control of atomic energy and led the committee that produced the Acheson-Lilienthal Report.

J. Robert Oppenheimer (Head of Los Alamos Laboratory, Manhattan Project) – Macrosecuritization: Supported the international control of atomic energy and took the lead in developing the Acheson-Lilienthal Report.

Clement Attlee (Prime Minister of the United Kingdom) – Macrosecuritization: Called for the international control of atomic energy.

William Lyon Mackenzie King (Prime Minister of Canada) – Macrosecuritization: Called for the international control of atomic energy.

Joseph Stalin (General Secretary of the Soviet Union) – National securitization: Decided to pursue a Soviet atomic bomb, and perceived the American atomic monopoly as a serious threat to Soviet national security.

Vyacheslav Molotov (Foreign Minister of the Soviet Union) – National securitization: Adopted a stance of intransigence in diplomatic negotiations in response to the American atomic bomb.

Stage 3: Demise

Harry S. Truman (President of the United States) – National securitization: Made the decisions to maintain the American atomic monopoly and seek a vote on the Baruch Plan despite Soviet opposition.

James F. Byrnes (Secretary of State) – National securitization: Supported the decisions to maintain the American atomic monopoly and seek a vote on the Baruch Plan despite Soviet opposition.

Henry A. Wallace (Secretary of Commerce and former Vice-President of the United States) – Macrosecuritization: Publicly criticized the Baruch Plan for requiring the Soviet Union to subject itself to international control while maintaining the American atomic monopoly.

Bernard Baruch (American Representative to the Atomic Energy Commission) – Macrosecuritization: Negotiated for the international control of atomic energy, but insisted on the American terms of penalties and sanctions, implementation in stages, and abolition of the veto, and ultimately recommended a vote and "propaganda victory."

Joseph Stalin (General Secretary of the Soviet Union) – National securitization: Maintained the decision to pursue a Soviet atomic bomb and opposed the Baruch Plan.

Vyacheslav Molotov (Foreign Minister of the Soviet Union) – National securitization: Made diplomatic statements attacking the American atomic monopoly and opposed the Baruch Plan.

Andrei Gromyko (Soviet Permanent Representative to the United Nations) – Macropoliticization: Insisted on an international convention to prohibit the atomic bomb and worked to slow and stall discussions on the international control of atomic energy.

Sources: Original. Based on information from Hewlett and Anderson (1962); Lieberman (1970); Sherwin (1973); and Bernstein (1974; 1975).
Appendix 9: Timeline on the International Control over Atomic Energy, 1939–1946
Stage I: Emergence of the Atomic Threat to Humanity, August 1939–May 1945
2 August 1939: Einstein–Szilárd Letter to Roosevelt
March 1940: The Frisch–Peierls Memorandum
March 1941: Report of the MAUD Committee
28 December 1942: President Roosevelt authorizes formation of the Manhattan Project
19 August 1943: The Quebec Agreement on Anglo-American atomic cooperation
3 July 1944: The Bohr Memorandum
26 August 1944: Bohr meets with President Roosevelt
19 September 1944: The Churchill-Roosevelt Aide-Memoire rejects the idea of international control
30 September 1944: The Bush-Conant Memorandum on international control
4-11 February 1945: The Yalta Conference (Roosevelt, Churchill, and Stalin)
12 April 1945: Death of President Roosevelt
25 April 1945: Stimson briefs President Truman on the atomic bomb
May 1945: Creation of the Interim Committee
Stage II: Evolution of a Plan for International Control, June 1945–June 1946
12 June 1945: The Franck Report
16 July 1945: The Trinity Test
17 July–2 August 1945: The Potsdam Conference (Truman, Churchill, and Stalin)
24 July 1945: Truman informs Stalin about the atomic bomb
6–9 August 1945: Atomic bombs dropped on Hiroshima and Nagasaki
15 August 1945: Japan surrenders
20 August 1945: Soviet Union establishes a Special Committee to develop the atomic bomb
September 1945: The London Conference of Foreign Ministers
3 October 1945: Speech by President Truman to Congress on the atomic bomb
15 November 1945: Joint declaration on atomic energy (Truman, Attlee, and King)
16–26 December 1945: The Moscow Conference of Foreign Ministers
24 January 1946: General Assembly Resolution 1 establishes the Atomic Energy Commission
22 February 1946: Kennan’s “Long Telegram”
5 March 1946: Churchill’s “Iron Curtain” speech
16 March 1946: The Acheson-Lilienthal Report
18 March 1946: Baruch appointed as U.S. Representative to the UNAEC
7 June 1946: President Truman approves Baruch Plan
Stage III: Demise of the International Control over Atomic Energy, June–December 1946
14 June 1946: “The Baruch Plan” in the UNAEC
19 June 1946: “The Gromyko Plan” in the UNAEC
June–July 1946: The Paris Conference of Foreign Ministers
1–25 July 1946: Bikini Atoll atomic bomb tests
6 August 1946: Subcommittee No. 2 discontinues political discussion to await scientific report
3 September 1946: Scientific and Technical Committee’s draft report
29 October 1946: Molotov’s statement at the General Assembly
13 November 1946: UNAEC decides to submit a report to the Security Council
November-December 1946: The New York Conference of Foreign Ministers
5 December 1946: Baruch addresses the UNAEC
14 December 1946: General Assembly Resolution on general reductions of armaments
18–26 December 1946: Formal discussion of the “First Report”
30 December 1946: 10-0 vote on the “First Report” in the UNAEC
4 January 1947: Baruch resigns
Appendix 10: Artificial General Intelligence, Superintelligence, and Existential AI Risks

Canada – AGI/ASI: No. Existential AI risks: No.

China – AGI/ASI: No. Existential AI risks: No.

France – AGI/ASI: Yes – The future of artificial intelligence surely depends on its exposure to these different technological developments. These new applications fuel new narratives and fears based on, amongst other concepts, the omnipotence of artificial intelligence, the myth of Singularity and transhumanism. In recent years, these views have been largely endorsed and promoted by some of the most prominent actors in the AI landscape (Villani 2018, 5, 113). Existential AI risks: Yes – Far from the speculative considerations on the existential threats of AI for humanity, the [ethical] debate seems to focus on algorithms that are already present in our daily lives and that can have a major impact on our day-to-day existence (Villani 2018, 14, 113).

Germany – AGI/ASI: Yes – In highly abstract terms, AI researchers [sic.] can be assigned to two groups: "strong" and "weak" AI. "Strong" AI means that AI systems have the same intellectual capabilities as humans, or even exceed them. "Weak" AI is focused on the solution of specific problems using methods from mathematics and computer science, whereby the systems developed are capable of self-optimisation… The Federal Government is orienting its strategy to the use of AI to solve specific problems, i.e. to the "weak" approach (Federal Government of Germany 2018, 45). Existential AI risks: No.

India – AGI/ASI: Yes – a) Weak AI vs. Strong AI: Weak AI describes "simulated" thinking. That is, a system which appears to behave intelligently, but doesn't have any kind of consciousness about what it's doing… Strong AI describes "actual" thinking. That is, behaving intelligently, thinking as a human does, with a conscious, subjective mind. For example, when two humans converse, they most likely know exactly who they are, what they're doing, and why. b) Narrow AI vs. General AI: Narrow AI describes an AI that is limited to a single task or a set number of tasks… General AI describes an AI which can be used to complete a wide range of tasks in a wide range of environments. As such, it's much closer to human intelligence. c) Superintelligence: The term "superintelligence" is often used to refer to general and strong AI at the point at which it surpasses human intelligence, if it ever does. While big strides have been made in Artificial Narrow Intelligence… no one has yet claimed the first production or development of General AI. The weight of expert opinion is that we are a long way off the emergence of General AI (NITI Aayog 2018, 15; emphasis added). Existential AI risks: Ambiguous – A start has been made by OpenAI, set up by the likes of Elon Musk and Sam Altman, with a mission to discover and enact the path to safe artificial general intelligence (NITI Aayog 2018, 63; emphasis added).

Japan – AGI/ASI: No. Existential AI risks: No.

Russia – AGI/ASI: Yes – Prospective artificial intelligence techniques – techniques that are aimed at the creation of fundamentally new scientific and technical products, including those that have as their purpose the development of artificial general intelligence, aka strong artificial intelligence (Office of the President of the Russian Federation 2019, 4). Existential AI risks: Ambiguous – The creation of an artificial general intelligence (strong artificial intelligence) that is able, like a person, to solve various problems, to think, to interact, and to adapt to changing conditions is a complex scientific and technical problem, the resolution of which lies at the crossroads of different spheres of scientific knowledge – natural science, engineering, social studies, and the humanities. The resolution of this problem may lead not only to positive changes in key areas of life activities, but also to negative consequences caused by the social and technological changes that accompany the development of artificial intelligence technologies (Office of the President of the Russian Federation 2019, 56; emphasis added).

United Kingdom – AGI/ASI: No. Existential AI risks: No.

United States – AGI/ASI: Yes – General AI (sometimes called Artificial General Intelligence, or AGI) refers to a notional future AI system that exhibits apparently intelligent behavior at least as advanced as a person across the full range of cognitive tasks. A broad chasm seems to separate today's Narrow AI from the much more difficult challenge of General AI. Attempts to reach General AI by expanding Narrow AI solutions have made little headway over many decades of research. The current consensus of the private-sector expert community, with which the NSTC Committee on Technology concurs, is that General AI will not be achieved for at least decades (NSTC 2016b, 7). Existential AI risks: Yes – People have long speculated on the implications of computers becoming more intelligent than humans… In a dystopian vision of this process, these super-intelligent machines would exceed the ability of humanity to understand or control. If computers could exert control over many critical systems, the result could be havoc, with humans no longer in control of their destiny at best and extinct at worst… The NSTC Committee on Technology's assessment is that long-term concerns about super-intelligent General AI should have little impact on current policy. The policies the Federal Government should adopt in the near-to-medium term if these fears are justified are almost exactly the same policies the Federal Government should adopt if they are not justified… Although prudence dictates some attention to the possibility that harmful superintelligence might someday become possible, these concerns should not be the main driver of public policy for AI (NSTC 2016b, 8; emphasis added).

European Union – AGI/ASI: Yes – While some consider that Artificial General Intelligence, Artificial Consciousness, Artificial Moral Agents, Super-intelligence or Transformative AI can be examples of such long-term concerns (currently non-existent), many others believe these to be unrealistic (High-Level Expert Group on Artificial Intelligence 2019, 35; emphasis added). Existential AI risks: No.

G7 – AGI/ASI: No. Existential AI risks: No.

G20 – AGI/ASI: No. Existential AI risks: No.

OECD – AGI/ASI: Yes – Artificial narrow intelligence (ANI) or "applied" AI is designed to accomplish a specific problem-solving or reasoning task. This is the current state-of-the-art. The most advanced AI systems available today, such as Google's AlphaGo, are still "narrow"… Applied AI is often contrasted to a (hypothetical) AGI. In AGI, autonomous machines would become capable of general intelligent action. Like humans, they would generalise and abstract learning across different cognitive functions. AGI would have a strong associative memory and be capable of judgment and decision making. It could solve multifaceted problems, learn through reading or experience, create concepts, perceive the world and itself, invent and be creative, react to the unexpected in complex environments and anticipate (OECD 2019b, 22). Existential AI risks: Ambiguous – With respect to a potential AGI, views vary widely. Experts caution that discussions should be realistic in terms of time scales. They broadly agree that ANI will generate significant new opportunities, risks and challenges. They also agree that the possible advent of an AGI, perhaps sometime during the 21st century, would greatly amplify these consequences (OECD 2019b, 22; emphasis added).

UNESCO – AGI/ASI: No. Existential AI risks: No.
Copyright Acknowledgements
None.