Effective Institutions Project's work on AI governance


Transformative Artificial Intelligence (AI) will profoundly affect the trajectory of human civilization and is likely to be developed and deployed within our lifetimes. We think there is a reasonable chance it could arrive much sooner, perhaps within the next ten years. As with other powerful technologies, powerful AI could cause a catastrophe if poorly managed. Advanced AI systems may be misused for destructive purposes, such as to carry out biological attacks or gain military advantage, or could produce broader structural failures, including the extreme concentration of power and wealth, the erosion of epistemic security, or the mistreatment of people and animals. Finally, we worry that sufficiently advanced AI systems could accidentally disempower, hurt, or even destroy humanity as a byproduct of operating without concern for human priorities.

The goal of the Effective Institutions Project (EIP)’s AI governance work is to influence important institutions, including big tech companies, key governments, and multilateral bodies, to take actions now that anticipate the full range of possibilities from both present-day and transformative AI, and that give society time to reach collective agreement about how to develop and deploy these technologies responsibly, safely, and beneficially. EIP expects that many of the foundational laws, regulations, and norms that will govern the development and use of artificial intelligence for decades to come will be forged over the next few years, and believes we face a critical window of opportunity to “get it right.”

What do they do?

At a high level, EIP’s goal within its AI work is to influence key institutions to act more safely and responsibly: to ensure the benefits of AI accrue to everyone, align AI with collective human preferences, and guard against worst-case scenarios.

EIP pursues this strategy via multiple avenues:

  • Creating opportunities for productive collaboration and problem-solving across different “factions” in the AI ecosystem, with the goal of building more unified coalitions behind common-sense responsible AI reforms and policies.
  • Strengthening networks and making connections between important nodes in the AI ecosystem (e.g. between major labs and safety researchers).
  • Publishing research reports (e.g. from credible insiders) arguing for institutional changes that can improve AI-related outcomes.
  • Commissioning reports that present pathways for improving the capacity of specific institutions (e.g. the US National Security Council) to respond to relevant risks and opportunities.
  • Educating funders on the AI landscape in order to grow the pie of available funding and upskill funders on the main concerns in the space.
  • Making direct recommendations to these donors for funding to promote responsible development and thoughtful governance of advanced AI.

We believe EIP has the potential to affect the decisions of major actors in AI in a positive way. Algorithmic advances, the continued leaking of open-source models, and the reduced cost of compute all gesture toward a world in which the number of potentially important institutions for the future of AI is increasing. Ensuring these institutions fully understand relevant issues in AI governance and are committed to putting society’s interests first is critical.

We also think EIP has the potential to build cross-ideological bridges within the AI ecosystem, as EIP has strong connections to a wide range of funders in AI and AI-adjacent areas. To the best of our understanding, EIP is the only organization attempting to engage with large mainstream donors and facilitate communication across existing silos in pursuit of a more effective AI funding ecosystem that takes into account the full range of possible futures in front of us.

Additional funds will be used to continue EIP’s current work in AI and increase the scope of EIP’s operations by hiring additional staff, enabling them to address more institutions and open new workstreams.

Message from the organization

The Effective Institutions Project invests in societal leadership to address major global challenges, including those from transformative AI. AI governance has been a core focus for the organization since its founding in 2021, and the majority of our organization’s resources are currently devoted to making progress on this important and timely issue. With generative AI models having approached or surpassed human abilities in a number of domains in the past year, the world’s institutions are sitting up and taking notice, but too many are still uncertain of what to do or lack sufficient urgency to act. EIP works in partnership with funders and civil society organizations across the globe to coordinate efforts, share knowledge, and drive towards coherent and broadly endorsed policy outcomes that reduce the risks of transformative AI and enable our society to flourish. Some recent and upcoming work that we’re excited about includes:

  • A collective sense-making and strategic clarity workshop for AI funders, designed and presented in partnership with Metaculus and AI Objectives Institute, that drew participants planning to distribute hundreds of millions of dollars in AI research and advocacy funding in 2024.
  • Partnering with Forward Global to program and facilitate a six-session learning series for donors interested in AI over the course of the coming year.
  • Commissioning research to understand the potential impact of the 2024 elections on US AI policy through the rest of the decade.

We are actively fundraising to sustain and expand this work in 2025 and beyond, and commitments made in the next few months will play a major role in how much we are able to staff up to meet the demand and interest we’re seeing from our network. We are tremendously grateful for Founders Pledge’s support and endorsement.

More resources