
How philanthropists can help slow the race to dangerous AI

When corporations race to develop powerful technologies like advanced AI, they face growing incentives to prioritize speed over safety. We’re already seeing an AI race between tech companies. Even more alarmingly, a similar race could arise between countries.

The emergence of an AI racing dynamic between great power nations like the US and China could cause both countries to cut corners on testing and other safety precautions, increasing the risks of AI-related accidents and misuse. This race to the bottom could have catastrophic effects.

We’ve published multiple deep-dive research reports on cause areas relevant to this topic.

Through our research, we’ve identified potential pathways through which philanthropists can help mitigate the risks posed by an AI racing dynamic between nations. In particular, we’ve identified backchannel diplomacy between AI experts and other nongovernmental participants as a potentially high-leverage intervention. Though there are many uncertainties around how to influence international policy, we believe there’s an opportunity here for philanthropists to play a critical role in safeguarding global well-being.

This post will:

  • Explain AI racing dynamics and the consequences of a race to the bottom.
  • Discuss the potential benefits and risks of backchannel (“track 2”) diplomacy.
  • Outline the track 2 diplomacy initiatives we’ve identified as potentially high-leverage ways to help mitigate AI racing and how we’ve supported them through the Global Catastrophic Risks Fund.

The international race to develop powerful AI

Racing dynamics between great powers have emerged repeatedly throughout history. One example is the “missile gap” of the 1950s, when US experts were convinced that the Soviets had built a stockpile of intercontinental ballistic missiles (ICBMs). Fearing a Soviet first strike, the US raced to develop its own ICBM stockpile. The 1961 National Intelligence Estimate later revealed that at the start of this sprint, the Soviets actually had only four ICBMs, a small threat compared to the approximately forty the US had at the time. Ultimately, the missile race only hastened the development of more weapons of mass destruction on both sides.

Sometimes countries compete against threats that turn out to be not just exaggerated but wholly imagined. During the Cold War, the Soviets continued operating the largest and most sophisticated biological weapons program in history because they believed they were in a race with the US, falsely assuming that the Nixon administration’s public renunciation of biological weapons was just a ploy to throw them off the scent. The mere perception of being in a race drove the creation of new bioweapons, increasing the threat to civilians in both countries.

Unlike the examples above, the AI racing dynamic is not an arms race, as AI is a general-purpose technology rather than a military weapon. However, the race to develop more powerful AI has many arms-race-like dynamics, and the stakes are just as high, with potentially devastating effects for global security.

How do racing dynamics arise?

Imagine a scenario involving two nations that are both developing new AI technologies:

  • Nation A, which is seeking to deploy and use powerful technologies as quickly as possible.
  • Nation B, which is willing to move slowly to invest in careful testing and other safety precautions.

If Nation B suspects that Nation A is getting close to deploying an unsafe AI system, it might feel compelled to do whatever it takes to create an AI system capable of stopping or rivaling Nation A, even if that means cutting corners on its own safety precautions. As a result, both nations become more likely to deploy unsafe AI systems, even though Nation B is acting with good intentions.

Crucially, this scenario can arise even if both nations start out as safety-conscious actors willing to move slowly. If the two nations fail to communicate and coordinate with each other, each might perceive that the other one is racing toward deployment, which would still result in a harmful racing dynamic that leaves both actors worse off.

This situation is an example of a security dilemma: one nation’s defensive actions to increase its security provoke feelings of insecurity in another nation, causing that nation to respond in kind and ultimately creating a vicious cycle in which both end up building increasingly dangerous military capabilities.
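To make this dynamic concrete, here is a minimal game-theory sketch in Python. The payoff numbers are hypothetical values invented purely for illustration, not estimates from our research; the point is only that “race” can be each nation’s best response no matter what its rival does, even though mutual safety pays more for both.

```python
# A toy payoff matrix for the two-nation scenario above (higher = better).
# All numbers are illustrative assumptions, not empirical estimates.
PAYOFFS = {
    ("safe", "safe"): (3, 3),  # both move slowly: best joint outcome
    ("safe", "race"): (0, 4),  # the racer gains an edge; the cautious side loses out
    ("race", "safe"): (4, 0),
    ("race", "race"): (1, 1),  # both cut corners: unsafe AI, worse for everyone
}

OPTIONS = ("safe", "race")

def best_response(player: int, rival_choice: str) -> str:
    """Return the choice that maximizes `player`'s payoff,
    holding the rival's choice fixed."""
    def payoff(choice: str) -> int:
        profile = (choice, rival_choice) if player == 0 else (rival_choice, choice)
        return PAYOFFS[profile][player]
    return max(OPTIONS, key=payoff)

for rival_choice in OPTIONS:
    print(f"If the rival plays {rival_choice!r}, "
          f"Nation A's best response is {best_response(0, rival_choice)!r}")
# Prints 'race' both times: racing dominates, so two rational nations land
# on ("race", "race") even though ("safe", "safe") pays more for both.
```

Communication is what changes this game: if each side can verify that the other is committed to playing “safe,” the worst-case logic that makes racing attractive loses its force. That is the opening the diplomatic efforts discussed below try to exploit.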

What are the stakes of a race to AI?

A racing dynamic between great power nations like the US and China could drive both to pour billions of dollars into AI development over the coming decades. This would have two effects on global catastrophic risk:

  1. It shortens the timeline to the deployment of a variety of AI-enabled technologies.
  2. It makes those technologies more likely to be developed unsafely.

There are many pathways for those two effects to lead to catastrophe. Let’s start with a few examples of the risks involved in using AI for military applications. Allowing AI to assist high-level military decision-making would likely increase the risk of a nuclear war, which has already seen multiple close calls. In addition, the use of autonomous weapons systems could accelerate the tempo of military decision-making, allowing war to happen at “machine speed” instead of at human speed, which could make even non-nuclear wars deadlier and more difficult to de-escalate.

Outside of military applications, the rapid development of advanced AI also comes with a slew of potential risks. The incorporation of AI into modern society could cause or exacerbate systemic harms, such as mass unemployment, misinformation, and discrimination. In an extreme scenario, the deployment of a misaligned AI (a human-level AI system whose goals differ from its users’ intentions) could have even more disastrous consequences.

Furthermore, the development of AI acts as a risk multiplier for potentially existential risks, like biological risks. AI-powered biological design tools (BDTs) could increase the lethality of bioengineered pandemics and other weapons of mass destruction. If such technology gets developed and falls into the wrong hands, it could potentially lead to billions of deaths, especially if tension between the governments of great power nations prevents them from cooperating to combat global threats.

What does the AI race look like currently?

There’s already some evidence of technological competition simmering between the US and China. For example, the director of the US Defense Innovation Unit has stated, “We need technological advantage to prevail in this strategic competition with China [...] we’re not going fast enough.” Similarly, Xi Jinping has said China needs to “ensure that our country marches in the front ranks where it comes to theoretical research in this important area of AI, and occupies the high ground in critical and AI core technologies.”

Tensions haven’t yet reached the point of being a full racing dynamic, but it’s possible that this sense of competition will intensify in the coming decades. This is true for many reasons, such as:

  • More players create more competition. As AI capabilities proliferate and the cost of computational hardware falls, the number of states working on advanced AI will rise, which means coordination between players will only become more difficult.
  • Intractable issues will continue to spark tension. There are issues raising tension between the US and China that seem unlikely to get resolved in the near future, such as the sovereignty of Taiwan.
  • Political leaders are incentivized to signal racing. In all the great power states, political leaders are incentivized to signal to citizens that they’re leading the competition against their rivals.

Former OpenAI employee Leopold Aschenbrenner describes the race to develop advanced AI as heading toward a “trillion-dollar cluster”: private investment in AI is likely to skyrocket past $100 billion a year within the next few years, making AI a massive energy investment and revenue driver that will demand the attention of the national security state. Once it becomes clear that advanced AI is an utterly decisive military technology, the governments of major powers like China and the US may feel they have no choice but to launch their own AI efforts to prevent the other side from reaching superintelligence first, an effort that could fairly be described as an AI version of the Manhattan Project.

Despite the high stakes, there are currently few official diplomatic dialogues between the US and China that focus on AI. Earlier this year, envoys from the US and China led intergovernmental AI dialogues in Geneva for the first time. Still, these discussions are limited in scope and leave many high-stakes issues off the table, such as the intersection between AI and biosecurity.

How backchannel diplomacy could mitigate AI racing dynamics

International policy is a complex landscape, and it’s difficult to be certain about how philanthropists can make a difference. Many private philanthropists feel that because it’s difficult for them to shift international issues like US-China policy, their funding might make a bigger impact elsewhere. Though this is a reasonable stance, we believe there are still ways philanthropists can play a crucial role in the international policy ecosystem, particularly on high-consequence issues like AI racing dynamics.

One way philanthropists can directly influence these types of issues is by supporting backchannel diplomacy, also called track 2 diplomacy. In contrast to track 1 diplomacy, which refers to the official discussions and negotiations that take place between governments, track 2 diplomacy refers to unofficial exchanges between non-governmental parties, such as industry experts and retired government officials. This can take the form of workshops, dialogues, or other meetings between international participants, who are then able to feed information and ideas back to their countries’ official policymakers.

What are the potential benefits of backchannel diplomacy?

Based on our research, we believe there are good reasons to be optimistic about backchannel diplomacy as a potentially high-leverage intervention for mitigating AI racing dynamics.

We’ve researched and developed a detailed theory of change for how backchannel diplomatic efforts play a multipurpose role in the policymaking ecosystem. Track 2 dialogues can:

  • Facilitate information exchange, decreasing mutual suspicion. Participants can share information about the current state of their country’s AI development, along with ways to verify that information, such as by looking at certain supply chain trends. Both sides can then unofficially feed this information back to their governments, helping to prevent racing dynamics from emerging.
  • Provide a space for object-level problem solving. Scientists, engineers, and other people with technical expertise related to AI can work together to solve specific problems. This can include developing best practices related to AI safety, testing, or red teaming.
  • Support the development of expert communities and talent pools. In many countries, AI governance expertise is in short supply, because AI is such a complex and rapidly evolving field. Track 2 dialogues can create transnational expert communities who can help craft policy related to US-China relations and AI, or even fill talent gaps in governments.
  • Lay the foundations for track 1 diplomacy. In some cases, track 2 dialogues can serve as a proof of concept and later mature into track 1 government-to-government dialogues. One example is the Pugwash Conferences during the Cold War, which helped lay the groundwork for several arms control treaties, including the Limited Test Ban Treaty and the 1972 Anti-Ballistic Missile Treaty.
  • Hedge against political change. When leadership changes within countries and disrupts progress toward international policymaking, track 2 dialogues can preserve momentum and keep the conversation going about how to mitigate AI risks.
  • Raise issue salience for policy stakeholders. Many official diplomats and government officials don’t have the bandwidth to prioritize long-term issues like AI risks, even when those issues are higher-stakes than short-term priorities. Track 2 dialogues can create space for important issues that governments are not yet addressing.
  • Establish backchannels for crisis communication. In the event of an international crisis, such as AI-enabled military action, it’s crucial to have additional pathways for communication. Track 2 dialogues increase the “surface area” of information flow.

What are the downsides of backchannel diplomacy?

Any intervention in a high-uncertainty space comes with potential negative consequences. We’ve considered several potential downsides of funding backchannel diplomacy related to AI development, along with counterarguments and ways to mitigate them.

One risk is that participants in these dialogues might not act in good faith, for example by lying to one another or spying for their own governments; host governments might even arrest individual participants on espionage or other false charges. Still, there are ways for dialogue organizers to mitigate these risks. For example, these dialogues typically take place in third-party countries to prevent one side from having too much power. Organizers can also brief participants beforehand on best practices for sharing information, debrief them afterward on how much credence to give what they heard, demand evidence for claims made about each country’s specific AI capabilities, and triangulate those claims with information from other sources.

Another risk is that funding these dialogues could actually accelerate racing dynamics: the information they surface about rivals’ AI development could act as an information hazard, spurring both countries to pursue even stronger AI programs. However, both countries are already aware of the potential benefits of AI development, as indicated in the recent US executive order on AI, which the Chinese government has likely studied already. When it comes to racing dynamics, more information exchange is generally better than less. When a country is uncertain about its rival’s capabilities, it is incentivized to proceed as though it were competing against the most extreme worst-case scenario, i.e., one in which its rival is racing ahead. Increasing transparency narrows each country’s range of uncertainty about what that worst case could be, and thereby decreases tension between rivals.
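As a toy numeric illustration of this narrowing effect, consider the sketch below. The capability scores, the uncertainty ranges, and the plan-against-the-worst-case rule are all assumptions invented for this example, not figures from our research.

```python
# A toy model of worst-case planning under uncertainty. All numbers and
# the response rule are illustrative assumptions, not real estimates.

def required_program_size(rival_capability_range: range, margin: float = 1.2) -> float:
    """A worst-case planner sizes its own AI program against the *top* of
    its uncertainty range about the rival, plus a safety margin."""
    return max(rival_capability_range) * margin

# With no information exchange, the range is wide: the rival could be barely
# started (10) or racing far ahead (90), so the planner builds to beat 90.
print(f"Opaque rival: build to {required_program_size(range(10, 91)):.0f}")       # 108

# Verified information sharing narrows the range to 30..45; the very same
# worst-case rule now prescribes a much smaller program.
print(f"Transparent rival: build to {required_program_size(range(30, 46)):.0f}")  # 54
```

The planning rule never changes; only the uncertainty shrinks, which is why transparency alone can reduce the pressure to race.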

A third potential risk is that less racing between nations could decelerate the development of beneficial technologies that would make war less destructive, such as systems that reduce collateral damage or improve defenses against attacks. However, based on long-term trends and expert views on the topic, we believe technological competition is more likely to increase the severity of future wars, along with the risks of accidents and proliferation, so slowing development on both sides is a net benefit for global safety.

On the whole, after considering the risks, we still consider track 2 diplomacy to be a worthwhile bet for philanthropists to support.

Our work in this space

In 2023, for every $1 spent on ensuring AI systems were safe, $250 was invested in making them more powerful. That’s a huge gap that neither governments nor tech companies are incentivized to fill. When the private and public sectors fail to address an important societal issue, philanthropic funding can play a critical role.

After identifying backchannel diplomacy as a potentially high-leverage intervention, we’ve directed support toward several funding opportunities that develop international dialogues and workshops related to AI risks, namely:

  • US-China AI dialogue organized by the Brookings Institution ($100,000). The Brookings Institution hosts high-level strategic dialogues between national security technology experts from the US and China to discuss risks emerging from the use of AI military systems and develop recommendations for mitigating those risks.
  • US-China AI dialogue session on AI-bio risks organized by INHR and CACDA (€115,000). Our grant funded new INHR dialogues starting in May 2024, in which experts from the US, China, and other nations convened in Thailand to discuss the risks posed by the nexus between AI and weapons of mass destruction, such as bioengineered pathogens.
  • We’ve also recommended FAR AI, which acts as an umbrella organization for the International Dialogues on AI Safety. These dialogues, which began in October 2023, engage top AI safety researchers globally by convening Western and Chinese scientists. Their most recent gathering, in Venice, led to a statement emphasizing the need for AI safety authorities, signed by scientists from the US, China, Britain, Singapore, Canada, and elsewhere.

We still have several open questions that we’ll continue to research as we move forward, such as:

  • What is the right number of dialogues? How does redundancy trade off against inefficiency?
  • What are the highest-consequence global risks that could be effectively addressed in backchannel dialogues? Are some topics—e.g., high-containment laboratory safety in China—too politically sensitive for some countries to discuss?
  • Who should be in the room for these dialogues, and how can we ensure diplomatic efforts reflect a diversity of viewpoints? For example, we feel there is a dearth of dialogues representing right-of-center political viewpoints in the US, even though most consequential arms control treaties have been signed under Republican administrations.
  • How important is it for track 2 dialogues to “mature” into official track 1 diplomacy?
  • How can we ensure better coordination and collaboration between the different ongoing track 2 efforts?
  • How can track 2 efforts most effectively coordinate with government partners to ensure that unofficial diplomatic efforts do not accidentally “short circuit” official state-to-state negotiations?

Moving forward, we plan to continue funding track 2 diplomacy efforts around catastrophic risks. If you want to support more high-impact philanthropy to help solve critical problems like the race to develop advanced AI, consider donating to the Global Catastrophic Risks Fund.

Notes

  1. Ken Alibek, Biohazard: The Chilling True Story of the Largest Covert Biological Weapons Program in the World-Told from Inside by the Man Who Ran It.

  2. “Tech Advantage Critical to Prevail in Strategic Competition With China, DOD Official Says,” US Department of Defense, accessed December 5, 2023, https://www.defense.gov/News/News-Stories/Article/Article/2835616/tech-advantage-critical-to-prevail-in-strategic-competition-with-china-dod-offi/.

  3. “Understanding China’s AI Strategy,” accessed December 5, 2023, https://www.cnas.org/publications/reports/understanding-chinas-ai-strategy.

  4. These numbers are based on the $45 billion invested in AI as of September 29, 2023 (State of AI Report 2023, slide 109; we assume an extra $15 billion for the final quarter of 2023) and the investment in GenAI, which represents the frontier (State of AI Report 2023, slide 116; we assume an extra 25% for the final quarter of 2023). https://www.stateof.ai/


    About the author


    Hannah Yang

    Research Communicator

    Hannah joined Founders Pledge as the Research Communicator in September 2024. After earning a BA in Economics from Yale, she began her career as a corporate strategy consultant at McKinsey, and then pivoted into pursuing a creative career as a speculative fiction author and content writer. Her interest in the intersection of writing, data science, and real-world impact led her to her work at Founders Pledge.