
Center for a New American Security's project on threat detection and response for advanced AI

The Center for a New American Security (CNAS) is a bipartisan US think tank that researches and develops national security and defense policies. We recently made a $156,770 grant to CNAS through the Global Catastrophic Risks Fund to improve how the US government detects and responds to advanced AI threats. This is an active grantmaking effort to help boost global resilience to AI-related catastrophic risks.

What problem are we trying to solve?

Advanced AI systems, if developed and deployed without sufficient alignment to human interests, could pose existential threats to global safety. While much attention has been paid to preventing misaligned AI systems from being deployed, far less work has gone into minimizing damage if deployment occurs.

Currently, governments are largely underprepared for detecting and responding to misaligned AI systems. This lack of preparedness leaves global security vulnerable to potentially catastrophic risks. Specifically, key weaknesses can be found in:

  • Threat detection: identifying potentially dangerous AI capabilities as they emerge
  • Preparedness: having plans and systems in place to respond to AI incidents
  • Response: executing effective containment and mitigation strategies if an incident occurs

What do they do?

CNAS is a respected bipartisan organization that engages policymakers, experts, and the public with actionable research and analysis. They have deep expertise in emerging technologies like AI and a strong track record of conducting policy research that shapes US national security strategy.

Our grant to CNAS will catalyze a new project focused on increasing US government preparedness for dangerous AI. They will conduct in-depth research, develop actionable policy recommendations, and share their findings with policymakers. Some of the key areas they will explore include ways to:

  • Increase public visibility into frontier AI development (e.g., by enhancing whistleblower protections to promote disclosures of unsafe AI practices)
  • Strengthen public and private sector capabilities to identify and respond to early “AI warning shots” (e.g., by exploring options to establish a safety review board)
  • Enhance public-private cooperation on AI incident containment and response (e.g., by adapting existing crisis management frameworks to AI-specific scenarios)

Why do we recommend them?

The US is the most influential country in AI development: it combines the world's largest economy and substantial geopolitical weight with a dense concentration of AI investment and talent. Given this outsized influence on AI development globally, improving US preparedness for AI risks could significantly boost overall global resilience.

We’ve identified CNAS as a promising organization to make meaningful progress on developing US AI policy. CNAS has a strong reputation for producing useful research, particularly on issues related to emerging technologies like AI, and a track record of successfully engaging high-level government officials on AI policy issues. Several former CNAS staff members have gone on to senior roles in US national security departments, further expanding the organization's network of influence.

Improving governmental preparedness is a critical and relatively neglected area within AI governance. Even if not all of CNAS’s recommendations are immediately adopted, we believe this work can help develop the conversation on AI preparedness within policy circles, potentially paving the way for more effective policies in the future. By supporting CNAS in developing practical policy recommendations for AI threat detection and response, we aim to boost overall global resilience to catastrophic risks.

Notes

  1. We have previously funded CNAS's work on the idea of an "International Autonomous Incidents Agreement," and also advised Effektiv Spenden to make a grant supporting a workshop and scenario exercise.
