New research and recommendations on Advanced AI

An artist’s illustration of artificial intelligence. Photo by Google DeepMind on Unsplash.

This week, we are releasing new research on advanced artificial intelligence (AI), the opportunities and risks it presents, and the role philanthropy can play in its evolution. As with our previous research reports on areas such as nuclear risks and catastrophic biological risks, our report on advanced AI offers a deep dive into the topic, aimed at helping philanthropists achieve outsized impact in a complex and important field.

Our condensed report provides an overview of our research and includes a short summary. For a deeper dive, we also offer a 140-page technical report.

Action on AI from philanthropists and wider civil society is vital. Big tech companies are racing to build increasingly risky AI systems, yet governments have failed to act decisively to address these challenges. The report’s author, Tom Barnes, worked as an expert secondee on AI policy at the heart of the UK government earlier this year. His main takeaway: governments are not prepared for the dangers that advanced AI will very soon pose to all of us.

In brief, the key points from our report are:

  1. General, highly capable AI systems are likely to be developed in the next couple of decades, and could emerge within the next few years.
  2. Such AI systems will radically upend the existing order, presenting a wide range of risks, up to and including catastrophic threats.
  3. AI companies - funded by big tech - are racing to build these systems without the caution or restraint appropriate to the stakes at play.
  4. Governments are under-resourced, ill-equipped, and vulnerable to regulatory capture by big tech companies, leaving a worrying gap in our defenses against dangerous AI systems.
  5. Philanthropists can and must step in where governments and the private sector are missing the mark.
  6. We recommend special attention to funding opportunities that boost global resilience, improve government capacity, coordinate major players, and advance technical safety research.

AI recommendations

Fortunately, several highly promising organizations are tackling these challenges head-on. Alongside this report, we are sharing our recommended high-impact funding opportunities: the Centre for Long-Term Resilience, the Institute for Law and AI, the Effective Institutions Project, and FAR AI. We have recently evaluated these four promising organizations and recommend them for further funding, one for each of our four focus areas. We are in the process of evaluating more organizations and hope to release further recommendations.

Furthermore, Founders Pledge’s Global Catastrophic Risks Fund supports critical work on these issues. If you would like to make progress on a range of risks - including advanced AI - please consider donating to the Fund.


About the author

Tom Barnes

Applied Researcher

Tom joined Founders Pledge in September 2021 as a Researcher. He studied Philosophy, Politics and Economics at Warwick before interning at Rethink Priorities, where he researched a range of future-focused issues. In 2024, Tom went on secondment to the UK government as an expert in AI policy.