
Autonomous weapon systems and military artificial intelligence (AI) applications report

Photo by U.S. Department of Defense on Flickr


This is an executive summary of our investigation into autonomous weapon systems and military artificial intelligence (AI) applications.

Read the full report

The use and proliferation of autonomous weapon systems appear likely in the near future, yet the risks of artificial intelligence (AI)-enabled warfare remain under-studied and under-funded. Autonomous weapons, and military applications of AI more broadly (such as early-warning and decision-support systems), have the potential to heighten risk factors across a range of problems, including great power war, nuclear instability, and AI safety failures. Several of these problems are potential pathways to existential and global catastrophic risks. Autonomy in weapon systems therefore affects both the long-term future of the world and the lives of billions of people today.

This report is intended to advise philanthropic donors who wish to reduce the risks from autonomous weapon systems and from the military applications of AI more broadly. We argue that a key problem of AI-enabled military systems arises from strategic risks that affect the likelihood of great power conflict, nuclear war, and risks from artificial general intelligence (AGI). These risks include the increased speed of decision-making in a world with autonomous weapons, automation bias, increased complexity leading to a higher risk of accidents and escalation, and the possibility of AI-related military competition and its implications for long-term AI safety.

Beyond “Killer Robots”: Strategic Risks

Although “killer robots” feature in the popular imagination, and some prominent organizations have taken up and promoted the cause of formal arms control through discussions at the UN, autonomous weapons remain a neglected issue for three reasons. First, the largest organizations focus mostly on humanitarian issues, leaving strategic threats relatively neglected. Second, those who study risks beyond “slaughterbots” (such as automation bias and strategic stability) are few and receive even less funding; there is both a talent shortage and room for more funding. Third, the most widely advocated solution, formal treaty-based arms control or a “killer robot ban,” is not the most tractable one. Philanthropists therefore have an opportunity to have an outsized impact in this space and reduce the long-term risks to humanity’s survival and flourishing.

Pathways to Risk

We argue that autonomous weapon systems can act as a “threat multiplier,” activating pathways for a variety of risks, as outlined in Figure 1 below. These risks — like great power conflict and thermonuclear war — could lead to the deaths of millions or even billions of people, and in the worst cases, to the extinction of humanity or the unrecoverable loss of our potential. The full report examines each of the potential risks that autonomous weapons could pose to international security and the future of humanity in greater detail.

Figure 1: The Autonomous Risk Landscape

Orange = “flashy” problems; light green = “boring” (but neglected) problems; dark green = Global Catastrophic Risks (GCRs) and existential threats. Source: author’s diagram based on literature reviewed in the full report. * = see the Great Power Conflict report.

Evaluating Interventions

In addition to outlining the risks from autonomous weapon systems and the military applications of AI, we evaluate potential interventions to mitigate these risks. We argue that philanthropists can have an outsized impact in this space by following two guiding principles when choosing interventions:

  1. Focus on strategic risks, including the effects of autonomous systems on nuclear stability.
  2. Focus on key actors, rather than multilateral inclusiveness, and prioritize those states most likely to develop and use autonomous systems, possibly starting with bilateral dialogues.

Using these guiding principles, we argue that effective philanthropists should focus not on a legally binding ban on autonomous weapons, but on researching strategic risks and on developing a set of confidence-building measures (CBMs), which have a track record of regulating militarily useful technologies. Funding work on this research and on CBMs is one of the best ways to have an impact on this important and neglected problem.


About the author


Christian Ruhl

Global Catastrophic Risks Lead

Christian Ruhl is our Global Catastrophic Risks Lead based in Philadelphia. Before joining Founders Pledge in November 2021, Christian was the Global Order Program Manager at Perry World House, the University of Pennsylvania's global affairs think tank, where he managed the research theme on “The Future of the Global Order: Power, Technology, and Governance.” Before that, Christian studied at the University of Cambridge on a Dr. Herchel Smith Fellowship, earning two master’s degrees, one in History and Philosophy of Science and one in International Relations and Politics, with dissertations on early modern submarines and Cold War nuclear strategy. Christian received his BA from Williams College in 2017.