Center for Responsible Innovation

The Center for Responsible Innovation (CRI) is an organization working to shape the responsible development and use of AI. CRI focuses on promoting ethical innovation, conducting AI policy research, developing strong policy proposals, and educating policymakers on AI. We’ve identified CRI's work to advance responsible innovation and effective AI policy in the United States as a high-impact funding opportunity to help ensure that AI systems are developed safely.

What problem are they trying to solve?

Artificial intelligence is progressing at a remarkable pace. We estimate a high likelihood that general, highly capable AI systems will be developed in the coming decades, with a greater than 50% chance by 2050. While AI has immense potential benefits, it also poses serious risks if not developed responsibly. Advanced AI systems could be misused or behave in unpredictable ways, potentially causing catastrophic harm.

Currently, AI development is largely driven by commercial incentives that may not prioritize safety and ethics. Governments are ill-equipped to keep up with the rapid pace of AI progress and are vulnerable to regulatory capture by big tech companies. To address these risks, we urgently need advocacy that increases the likelihood that actionable AI policies are implemented.

What do they do?

The Center for Responsible Innovation conducts a broad range of nonpartisan efforts to help US policymakers develop more thoughtful governance of AI. Their work fills the critical gap between policy ideation and policy implementation, ensuring that Congress understands the concrete policy ideas that are necessary for responsible innovation. CRI conducts this work in close partnership with its sister organization, Americans for Responsible Innovation (ARI).

Their work includes:

  • Researching, analyzing, developing, and translating AI policy ideas into actionable policy proposals
  • Building coalitions to generate broader support in Congress for responsible innovation
  • Educating policymakers on key AI risks and policy options through briefings, teach-ins, and other outreach

Why do we recommend them?

Within the AI governance community, there are many groups that focus on AI policy research, but few that focus on translating ideas into actionable policies or seizing urgent policy windows when they open. We believe that effective policy development is an impactful and likely neglected pathway toward steering AI development in a positive direction.

CRI is especially well-positioned to fill this gap. One of CRI’s key strengths is that they take a “big tent” approach that reaches across the aisle in the US’s two-party political system. This coalition-building helps create broader buy-in for responsible AI policies across the political spectrum, increasing the likelihood that those policies are adopted. They also have a strong team, with leadership that includes former members of Congress and senior government officials.

Though CRI and ARI are relatively young, they have already achieved early successes, including:

  • Coordinating a letter to Congress signed by 90 tech companies and civil society organizations in support of funding for AI safety work at the National Institute of Standards and Technology (NIST)
  • Leading a letter to Congress signed by over 45 industry, civil society, nonprofit, university, trade association, and research laboratory groups that called on federal lawmakers to permanently establish the NIST AI Safety Institute
  • Providing advice and input to US policymakers on multiple AI-related policy proposals, in partnership with ARI
  • Proposing new policy approaches, such as the establishment of an AI incident reporting system
  • Holding an AI Policy Forum at which bipartisan leaders from the US Senate and House of Representatives gathered to publicly discuss the AI-related challenges facing the federal government

Shaping the trajectory of artificial intelligence may be one of the most consequential challenges of our time. While many uncertainties remain, we believe CRI's approach of building broad coalitions to advance responsible AI policies in the US is an especially promising path toward impact.