Centre for Long-Term Resilience
The Centre for Long-Term Resilience (CLTR) is a UK-based think tank specializing in extreme risks. Their primary focuses are (1) extreme AI risks, (2) extreme biological risks, and (3) improving government risk management capacity. CLTR conducts research and provides policy advice to help governments better manage these risks.
What problem are they trying to solve?
Advances in technology are creating risks that are potentially catastrophic if mismanaged, yet decision-makers often struggle to give these risks appropriate consideration and take action. Areas such as AI safety, biosecurity, and systemic risk management require expertise and resources that governments frequently lack.
CLTR addresses these gaps by providing governments with concrete policy recommendations, built on expertise in AI, biosecurity, and risk management. CLTR’s primary focus is the UK government, an influential global actor in building resilience to extreme AI and biological risks. Their goal is to see policies adopted that substantially reduce risks from advanced technologies and enable better responses to extreme events.
What do they do?
CLTR researches extreme risks and works directly with governments to translate findings into actionable policy recommendations. Core activities include:
- Providing policy advice to senior government officials on AI, biosecurity, and risk management. For example, on AI, CLTR provided input on the Ministry of Defence’s AI Strategy, with many of their recommendations adopted (as outlined here); on biosecurity, CLTR assisted the Cabinet Office’s Biosecurity Strategy Refresh, with their recommendations included in the UK's 2023 Biological Security Strategy
- Conducting research to generate policy proposals within CLTR’s three focus areas of AI, biosecurity, and risk management
- “Mainstreaming” extreme risks more broadly amongst politicians, think tanks, the media, and the public. For example, CLTR provided oral evidence at a session of the Joint Committee on the National Security Strategy in Parliament reviewing the UK Resilience Framework and the Integrated Review Framework; they supported the Institute for Government’s Managing Extreme Risks report; and they have written op-eds, including in Times Red Box and the Financial Times.
Why do we recommend them?
UK government policy is critical for reducing global extreme risk. The UK has significant economic and diplomatic power, and especially high influence in emerging technology and extreme risks. For instance, major AI companies such as DeepMind, OpenAI, and Anthropic have offices in London, while organizations with relevant expertise on extreme risks (such as the Centre for the Study of Existential Risk and the Centre for the Governance of AI) are based in Cambridge and Oxford, respectively.
More broadly, policies in the UK have the potential to be adopted internationally, due to the UK’s geopolitical influence (as a permanent member of the UN Security Council, the world's sixth-largest economy, and a close ally of the US). In June 2023, the UK Prime Minister confirmed that, following a discussion with the US President, the UK would host a global AI safety summit in autumn 2023 to evaluate and monitor AI's "most significant risks," including those posed by frontier systems, and that he wanted to make the UK the "home of global AI safety regulation."
CLTR has a proven track record of influencing UK policy on extreme risks. Policy changes they have contributed to include:
- In AI, the Ministry of Defence’s AI Strategy report, which explicitly mentions AI as a potential extreme risk and proposes various positive safety measures
- In Biosecurity, the recently refreshed UK Biosecurity Strategy, which relied on CLTR’s expertise and network
- In risk management, extending the time horizon of the National Security Risk Assessment from 2 to 5 years, and a new exercise to identify longer-term chronic risks and vulnerabilities to them
What would they do with more funding?
CLTR is currently a nine-person team of experts, with small policy units. They plan to expand to fifteen people by 2025, which would allow them to:
- Provide critical advice to relevant policymakers on AI, biosecurity, and risk management
- Generate research reports and policy input on AI, biosecurity, and risk management
- Continue developing a strong network with policymakers and politicians, to spot future opportunities and brief senior stakeholders on the critical importance of boosting resilience to extreme risks