
Existential Risk Executive Summary

Published on: 2019-01-01
Written by: John Halstead

Existential Risk Executive Summary and Giving Recommendation

This is a summary of our cause area report on Existential Risk. The full report can be found here, and the giving recommendations based on this research are the Center for Health Security, the Biosecurity Initiative at the Center for International Security and Cooperation, and the Center for Human-Compatible AI (CHAI).

Homo sapiens have been on Earth for 200,000 years, but human civilisation could, if things go well, survive and thrive for millions of years. This means that whatever you value – be it happiness, knowledge, creativity, or something else – there is, as long as we survive, much more of it to come. Millions of future generations could live lives involving far more happiness, knowledge, and creativity than exist today. Therefore, if our aim is to do as much good as possible, a top priority should be to survive and to protect the long-term flourishing of civilisation. For members who wish to benefit future generations, focusing on existential risk reduction – broadly protecting the long-term future – looks like a promising approach.

1. Unprecedented risks


This is an especially urgent time to focus on reducing existential risk. Homo sapiens have survived for around 200,000 years without being wiped out by natural risks such as disease, asteroids, and volcanoes, which is evidence that these pose a relatively small risk. The major risks that have emerged over the last century, however, are man-made. Following millennia of stagnation, living standards improved enormously with the Industrial Revolution and the explosion of innovation and technological discovery that followed. We gained the power to feed a growing population, to reduce child mortality, and to create technologies allowing us to travel and communicate across great distances. However, our destructive power has grown in step with our power to improve our material conditions.

Nuclear war

The most dramatic such increase came with the invention of nuclear weapons at the end of the Second World War. This marked the dawn of a new epoch in which humankind may, for the first time, have gained the ability to destroy itself. The most concerning possibility, first raised during the Cold War, is a nuclear winter, in which smoke from a nuclear war blocks out the Sun, disrupting agriculture for years. The potential severity of a nuclear winter is the subject of some controversy, but given the current split in expert opinion, it would be premature to rule it out. Nuclear war remains a risk today, and has now been joined by other emerging technological risks.

Engineered bioweapons

Developments in biotechnology promise huge benefits to human health, helping to cure genetic disease and create new medicines. But they also carry major risks. Scientists have already demonstrated the ability to create enhanced pathogens, such as a form of bird flu potentially transmissible between mammals, and to create dangerous pathogens from scratch, such as horsepox, a virus similar to smallpox. The worry is that as biotechnology capabilities increase and become more widely accessible, scientists, governments or terrorists might be able, by accident or design, to create viruses or bacteria that could kill hundreds of millions of people. Such weapons would be much harder to control than nuclear weapons because the barriers to acquiring them are likely to be considerably lower.

Artificial intelligence

Developments in artificial intelligence also promise significant benefits, such as helping to automate tasks, improving scientific research, and diagnosing disease. However, they also bring risks. Humanity’s prosperity on the planet is due to our intelligence: we are only slightly more intelligent than chimpanzees, but, as Stuart Armstrong has noted, in this slight advantage lies the difference between planetary dominance and a permanent place on the endangered species list. Most surveyed AI researchers believe that we will develop advanced human-level AI systems at some point this century, and that the chance of this happening in the next few decades is upwards of 1 in 10. In creating advanced general AI systems, we would forfeit our place as the most intelligent beings on the planet, yet we do not currently know how to ensure that AI systems are aligned with human interests.

Experience with today’s narrow AI systems has shown that it can be difficult to ensure that systems do what we want rather than what we literally specify, that they behave reliably across contexts, and that we retain meaningful oversight. In narrow domains, such failures are usually trivial, but for a highly competent general AI, especially one connected to much of our infrastructure through the internet, the risk of unintended consequences is great. Developing a highly competent general AI could also make one state unassailably powerful, and there could in turn be a “race to the bottom” as countries skimp on safety for the sake of making their AI systems more powerful.

Managing the transition to AI systems that surpass humans at all tasks is likely to be one of humanity’s most important challenges this century, because the outcome could be extremely good or extremely bad for our species.

Climate change

Burning fossil fuels has allowed us to harness huge amounts of energy for industrial production, but it also exacerbates the greenhouse effect. On current plans and policy, there is over a 10% chance of global warming in excess of 6°C. This would make the Earth unrecognisable, flooding major cities, rendering much of the tropics effectively uninhabitable, and exacerbating drought. Whether climate change could cause an existential catastrophe is unclear. In the context of reducing existential risk, however, a key consideration is that work on climate change is much less neglected than work on the other risks discussed above, so working on those risks is likely to be higher impact on the margin, even though much more effort overall should go into solving the climate problem. If you are interested in focusing on climate change, see our climate change report.

Overall risk this century

Overall, the picture for the 21st century is one of increasing prosperity, but also one of increasing risk that threatens to undo all progress. Estimating the overall level of existential risk this century is difficult, but the evidence, combined with expert surveys, suggests that the risk is plausibly greater than 1 in 100. To put this in perspective, this is roughly on a par with the average European’s chance of dying in a car crash. Given the stakes involved, we owe it to future generations to reduce the risk significantly.

2. Existential risk is a highly neglected problem

Despite the unprecedented threat, existential risk reduction is highly neglected. Future generations are the main beneficiaries of reducing existential risk, but they cannot vote, nor can they pay the current generation for protection. Existential risks are also global in scope, so no single nation enjoys all the benefits of reducing them. Consequently, global efforts to reduce existential risk have tended to be inadequate. Although hundreds of billions of dollars are spent on climate change each year, much of this spending is ineffective, and the other major risks – nuclear security, biosecurity and advanced general AI – each receive much less than $100bn in annual government spending. These three risks are also highly neglected by philanthropists.

For prospective donors, this means that the potential to find “low-hanging fruit” in this cause area is exceptionally high at present. Just as VC investors can make outsized returns in large uncrowded markets, philanthropists can have outsized impact by working on large and uncrowded problems.

3. Charity recommendations

Based on our review of the evidence, we conclude that engineered pathogens and AI are the greatest known existential risks, that nuclear security is also a major concern, and that general research into existential risk would be highly valuable given how neglected the space is.

This said, these areas are highly complex. We are therefore grateful to be able to draw on the in-depth expertise and background research of current and former staff at the Open Philanthropy Project, the world’s largest grant-maker on existential risk. The Open Philanthropy Project identifies high-impact giving opportunities, makes grants, follows the results and publishes its findings. Its main funders are Cari Tuna and Dustin Moskovitz, a co-founder of Facebook and Asana, and it also partners with other donors. While the majority of our research projects begin the charity evaluation stage by creating an extensive long-list of candidates, we deemed it most sound and efficient in this particular case to draw on the expertise of Program Officers and former Program Officers at the Open Philanthropy Project. (Disclosure: the Open Philanthropy Project has made several unrelated grants to Founders Pledge.)

Utilising expertise

Based on their research and knowledge of the field, these Program Officers and former Program Officers gave us a list of promising organisations working on the areas outlined above. An initial round of vetting filtered out some of these organisations because they lacked room for more funding, had not been the subject of an in-depth investigation by the relevant Program Officer, or were unable to answer our queries in sufficient time. As a result, our final recommendations are limited to charities working on reducing the risk from engineered pathogens and on AI safety.

In the area of engineered pathogens, we use the advice and expertise of Jaime Yassif, a former Program Officer in Biosecurity and Pandemic Preparedness at the Open Philanthropy Project. Ms Yassif was previously a Science & Technology Policy Advisor at the US Department of Defense, where she focused on oversight of the Cooperative Threat Reduction Program. During this period, she also worked on the Global Health Security Agenda at the US Department of Health and Human Services.

In the area of AI safety, we use the advice and expertise of Daniel Dewey, a Program Officer on Potential Risks from Advanced Artificial Intelligence at the Open Philanthropy Project. Mr Dewey was previously a software engineer at Google and a Research Fellow at the Future of Humanity Institute at the University of Oxford.

These recommendations are the personal judgements of these individuals only, not of the Open Philanthropy Project. They draw on significant subject-area expertise and knowledge of the overall quality of the organisations, and the organisations have received earlier grants from the Open Philanthropy Project, but their proposed future projects have not been vetted as heavily as many of Open Philanthropy’s grants. The Founders Pledge research team has carried out additional vetting of these donation recommendations.

Recommended giving opportunities

We have three recommended organisations that help to reduce existential risk, two working on biosecurity and pandemic preparedness, and one on AI safety.

The Center for Health Security at Johns Hopkins University

The Center for Health Security (CHS) is a think tank at Johns Hopkins University which carries out research on biosecurity, and advocates for improved policy in the US and internationally. According to Jaime Yassif, CHS is one of the best organisations working in the US and globally on health security, is a trusted source of advice for the US government, and has a strong track record of achieving substantial policy change and raising awareness among academics and policymakers.

Past success

CHS’ past successes include:

• CHS developed guidance for the US on responding to anthrax, smallpox, plague, and Ebola before the US government had any guidance or efforts on these issues. Its recommendations were incorporated into the US Hospital Preparedness Program, as well as US programmes and national strategies on biosurveillance, medical countermeasure development, and pandemic planning and response.

• CHS has also run a number of highly influential scenario planning exercises for disease outbreaks, one of which – Dark Winter – played a major role in encouraging the US government to stockpile smallpox vaccine.

• CHS has also helped to raise awareness in academia about the risk of engineered pathogens, for example through its heavy involvement in a special issue of the journal Health Security on global catastrophic biological risks.

Future projects

With additional funding, CHS could carry out a number of highly promising future projects, including:

• Deterring Biological Attacks by Denying the Anonymity of an Attacker

CHS plans to build on recent preliminary work showing that deep learning, an AI method, can be used to identify the creator of an engineered bioweapon. The prospect of attribution could be a major deterrent for malicious actors. CHS plans to analyse the feasibility of algorithm-based attribution and to identify the steps needed to realise this approach.

• Illuminating Biological Dark Matter: Pandemic Preparedness & Widespread Diagnostic Testing for Infectious Diseases

CHS plans to execute a 12-month project to promote the widespread adoption of cutting-edge diagnostic technologies in the US. In recent years, several sophisticated diagnostic technologies have entered the market. These would allow us to better understand the nature of disease outbreaks, facilitate targeted therapies and improve the US’ ability to manage antibiotics responsibly. CHS’ diagnostics project will deliver a roadmap outlining the major barriers to adoption of advanced diagnostics in the US and the means for overcoming them, and will spread relevant lessons to other countries.

Funding needs

In the next year, for these and other projects, CHS could productively absorb an additional $2.2m in funding. We recommend unrestricted funding to give CHS maximum flexibility to achieve their aims.

The Biosecurity Initiative at the Center for International Security and Cooperation, Stanford

The Biosecurity Initiative at the Center for International Security and Cooperation (CISAC) is a research centre at Stanford that carries out policy research and industry outreach to reduce the risk of natural and engineered pandemics. According to Jaime Yassif, CISAC’s in-house biosecurity experts, David Relman and Megan Palmer, are both thought leaders in the field, and CISAC collaborates with other faculty at Stanford, who have deep technical knowledge and biosecurity expertise. Due to the quality of its existing staff, its ties to bioscience departments at Stanford and its location in Silicon Valley – a biotech industry hub – CISAC has a comparative advantage in working on the technical and governance aspects of biosecurity.

Past success

The Biosecurity Initiative has contributed world-leading research on emerging issues at the intersection of pandemic risk and biotechnology. One of the Initiative’s main aims thus far has been to leverage its place in a major biotech hub by exploring how to foster a culture of safety and responsibility in biotechnology research. For example, in 2012 and 2015, Dr Palmer and Dr Endy (also of CISAC) ran the Synthetic Biology Leadership Excellence Accelerator Program (LEAP), an international part-time fellowship programme in responsible biotechnology leadership. These programmes selected early- and mid-career practitioners from academia, industry and policy to work with a network of mentors. LEAP alumni have gone on to place significant emphasis on policy and security in their careers.

Future plans

With additional funding, The Biosecurity Initiative plans to carry out a number of promising projects in the future, including:

• Substantial support for CISAC lead researchers

Additional funding would allow CISAC to initiate or expand projects such as Professor Relman’s work on technology horizon scanning, which provides an overview of the latest advances in science most relevant to catastrophic biological risks, and Dr Steve Luby’s research on the conditions that would be required for a major biological catastrophe to occur.

• Biosecurity Bootcamp

The Biosecurity Initiative aims to pilot a short course or workshop on the biotechnology and biosecurity landscape. This course would target influential individuals with limited exposure to the emerging issues associated with biotechnology, such as congressional staff, industry leaders, media, funders, intelligence analysts and others.

• Biosecurity Fellows

The Initiative wishes to recruit post-doctoral fellows and visiting scholars to spend one to two years carrying out policy-relevant scholarship in biosecurity. CISAC has a strong track record of supporting security experts across a range of disciplines, among them Condoleezza Rice and former Secretary of Defense William J. Perry.

Funding needs

The Biosecurity Initiative could productively absorb $2m per year for the next three years. We recommend unrestricted funding to give the Biosecurity Initiative maximum flexibility to achieve their aims.

The Center for Human-Compatible AI, University of California, Berkeley

The Center for Human-Compatible AI (CHAI) is an academic research centre at the University of California, Berkeley that carries out technical research and public advocacy aimed at ensuring the safety of AI systems, as well as building a field of AI researchers concerned about AI safety. CHAI is led by Professor Stuart Russell, author of Artificial Intelligence: A Modern Approach, one of the leading AI textbooks, and one of the most articulate and respected voices in the debate about the risks of advanced AI systems. Given that the field of AI safety is somewhat controversial, especially in relation to existential risk, CHAI’s ability to afford the topic academic legitimacy is likely to be especially valuable. CHAI faculty members and Principal Investigators generally have a very strong reputation in the field.

Past success

Although CHAI has only been operating since 2016, it has already made substantial progress. In that time, CHAI faculty, affiliates and students have published dozens of papers on technical AI safety. (A full list of papers can be found here.) One of its most potentially significant achievements is the introduction of Cooperative Inverse Reinforcement Learning, together with the proof that it can yield provably beneficial AI systems that defer to humans. This paper and two other papers by CHAI researchers have been accepted at NIPS, a major conference on machine learning, making them some of the best-received AI safety papers among machine learning researchers.

CHAI researchers have also had a major role in influencing public opinion. CHAI’s work in this area has included:

• Invitations to advise the governments of numerous countries.

• Dozens of talks, including Professor Stuart Russell’s popular TED talk on ‘3 principles for creating safer AI’, and talks at the World Economic Forum in Davos, as well as the Nobel Week Dialogues in Stockholm and Tokyo.

• A variety of media articles, including “Yes, We Are Worried About the Existential Risk of Superintelligence” in MIT Technology Review.

Future plans

Additional funding would allow CHAI to execute a number of new projects, including:

• Developing a safe and powerful AI assistant

CHAI would like to begin a research project to build a safe online personal assistant, with the aim of testing CHAI’s research ideas in an actual system that interacts with humans and has access to the user’s credit card and email. Such a system would help to demonstrate to the broader AI community the importance of provably beneficial AI systems, and would be a good way to better understand the difficulty of preference alignment in a relatively constrained, simplified environment.

• New CHAI branches

CHAI is exploring the possibility of expanding to other universities, including Princeton and MIT, and has already had conversations with potential Principal Investigators at these institutions.

Funding needs

Over the next three years, CHAI could absorb an additional ~$3m for these and other projects. We recommend unrestricted funding to give CHAI maximum flexibility to achieve their aims.

John Halstead

Author

John joined Founders Pledge in 2017 from a background in policy think tanks and academia. He has a doctorate in political philosophy from Oxford and taught philosophy at the Blavatnik School of Government in Oxford. Following that, he moved to the Global Priorities Project, working as a researcher on global catastrophic risk.