Podcasts

How I Give: Existential risk with John Halstead

Published on
2020-07-24
Hosted by
Philip Kasumu

From Founder to Philanthropist

How I Give is a Founders Pledge podcast about entrepreneurship, philanthropy and social change. We talk to founders, philanthropists and social innovators about what drives them, how they are rethinking social impact, and how we can do philanthropy better.

Episode 5

Existential risk with John Halstead, Head of Applied Research at Founders Pledge

The COVID-19 pandemic is showing us that preparing for – and proactively addressing – risks that perhaps seem far-fetched and unlikely is more crucial than ever. For all the devastating consequences of the current crisis, there are greater risks out there that threaten humanity’s long-term prospects. In this episode of How I Give, we talk to John Halstead, Head of Applied Research at Founders Pledge, about existential risk and why philanthropists should focus on safeguarding the welfare of future generations.

John joined Founders Pledge in 2017 from a background in policy think tanks and academia. He has a doctorate in political philosophy from Oxford and taught philosophy at the Blavatnik School of Government in Oxford. Following that, he moved to the Global Priorities Project, working as a researcher on global catastrophic risk.

Listen on Spotify
Listen on Apple Podcasts

Post-show readings:

  • Take a look at John's 2019 cause area report on existential risk and how to mitigate it.
  • Delve into existential risk and the future of humanity by reading The Precipice by Toby Ord.
  • Read John's latest blog post about how we can compare the health costs and economic costs of lockdown.
  • Contact us: podcasts@founderspledge.com

    Transcript

    [00:00:00] Philip Kasumu: Welcome to How I Give, a Founders Pledge podcast about entrepreneurship, philanthropy and social change. My name is Philip Kasumu and I head up Growth across Europe for Founders Pledge. On this podcast, we talk to founders, philanthropists and social innovators about what drives them, how they're rethinking social impact and how we can do philanthropy better.

    The coronavirus pandemic is showing us that preparing for, and proactively addressing, risks that perhaps seem far-fetched and unlikely is more crucial than ever. And for all the devastating consequences of the current crisis, there are worse risks out there that threaten humanity's long-term prospects. In this episode of How I Give, I talked to John Halstead, head of research here at Founders Pledge, about existential risk and why philanthropists should focus on safeguarding the welfare of future generations.

    But before we get into this incredible conversation, here's a few words from our founder.

    [00:01:00] David Goldberg: Hey everybody. Thanks for tuning into the How I Give podcast. I'm David Goldberg and I'm the founder and CEO of Founders Pledge, a nonprofit on a mission to help entrepreneurs do immense good.

    We're a global community of more than 1,400 entrepreneurs finding solutions to the world's most pressing challenges, just like the one we face today. Since COVID-19 brought the world to a standstill, Founders Pledge has mobilised quickly and decisively. We've partnered with charities, frontline responders and research institutes at the forefront of fighting this pandemic and directed millions of dollars from our members to high-impact interventions that are working to stop the spread of the virus, mitigate the social and economic costs, and prepare us for future pandemics. It's been so inspiring to see our members take action when we need them most. The Founders Pledge model is simple: on joining, every member commits a portion of their personal exit proceeds to charitable causes. We offer the simplest path to impact for successful entrepreneurs, providing end-to-end giving infrastructure, [00:02:00] pioneering research, and a world-class network of experts, all at zero cost.

    If you're an entrepreneur with a Series B or C+ funded startup, and you'd like to be a part of this incredible community of entrepreneurs, we'd really love to hear from you. You can email us at podcasts@founderspledge.com. Again, that's podcasts@founderspledge.com.

    Thanks, and I hope you enjoy listening.

    Philip Kasumu: So, John, thank you for coming on the show.

    John Halstead: Thanks very much for having me.

    Philip Kasumu: No worries. So, John, before we start talking about existential risk and how to reduce that, tell me a little bit about yourself. So who are you and what's your role at Founders Pledge?

    John Halstead: Yeah, I'm a retired philosopher, I suppose. So I spent a while trying to be an academic philosopher and I taught philosophy at Oxford for a bit and then got slightly disenchanted, and the job [00:03:00] market is very tough. And so I decided to leave and hopefully try and make a bigger difference with my career. So after that, I went to a think tank called the Global Priorities Project and we researched major global risks, a happy topic, I suppose. And then I came to Founders Pledge about three years ago, and I'm now Head of Applied Research at Founders Pledge. So I help to set our research agenda and oversee the quality of the research that we do.

    Philip Kasumu: Nice. Nice. And we're gonna delve into a bit about, you know, your expertise in your subject area. So you are the resident expert on existential risk and reduction, or existential risk reduction, rather, so for us normal people or people who don't come from, I guess, the nonprofit [00:04:00] world or the scientific world, what does that exactly mean?

    John Halstead: Yeah, an existential risk is typically defined as being one of two things. It's either something, an event in which everyone dies. So, you know, like a huge asteroid hitting the earth and then all humanity and all sentient life dying, or it could be something that damages the long-term prospects of future flourishing, I suppose, for the species. So, you know, if we just never scientifically develop beyond where we are now and so we just lose out on a lot of potential or, we just get into this equilibrium where there's just different nation states fighting and we never, we never kind of rise to where we could go, which is the you know, maybe, curing disease and curing poverty and making sure people live kind of happy and fulfilled lives and then maybe exploring the [00:05:00] stars after that many centuries into the future.

    So that's it really, it's all centered around the idea of our long-run potential as a species, I guess.

    Philip Kasumu: In your experience, this might be a bit of a leading question, what kind of existential risks have we as humans experienced? I mean the most notable one of course seems to be COVID-19, right?

    John Halstead: Yeah, so I suppose an existential risk would be defined as a very extreme event. So, you know, kind of by definition, we haven't really experienced them yet, but we have had to deal with risks so far, and I suppose prior to the 20th century we were mainly dealing with natural risks, so things like volcanoes and asteroids and pandemic diseases and things like that. And then in the 20th century [00:06:00] things changed a bit following the Industrial Revolution and we gained a lot more power to influence our environment, and that meant improving our material condition, but it also introduced some new risks. So we discovered nuclear weapons, we discovered various techniques in biotechnology.

    Some experts think that developments in artificial intelligence could pose a risk to us in the future. Like this is looking many, many decades into the future. And then of course there's also climate change. So I suppose you could think of the discovery of nuclear weapons as really the dawn of a new epoch of human civilisation, where we kind of gained the ability to permanently destroy ourselves, or at least come close.

    Philip Kasumu: Yeah, the AI existential risk has been gaining a lot of traction over the last few years. I mean, I'm constantly bombarded with articles [00:07:00] about, you know, major institutions taking this really seriously. I guess, in your experience, could we get your opinion on AI and how you think about AI in terms of existential risk? And if it's a real risk, right, like, if it's something that we should really be considering?

    John Halstead: Yeah. I think my view has changed a bit on how much of a risk it is. So I think the common story for why it's a big risk is that our dominance on the planet is due to our intelligence and the fact that we're more intelligent than other rival species.

    So, you know, we're more intelligent than chimps. And in that slight difference in intelligence lies, you know, complete planetary domination, versus, you know, chimps who have a permanent place on the endangered species list. So the thought is that if we invent some [00:08:00] machine that's more intelligent than us, then there's lots of risks involved in that for us as a species and for other species. And then the next thought is that machine learning researchers seem to think that, you know, it's not impossible that we'll build an AI system that's smarter than humans, i.e. more competent at achieving its goals in all domains.

    So AIs are better than humans at chess and image recognition and poker and things like that. And the thought is that that list of things would just come to comprise all the main important relevant tasks. And then at that point it's smarter than us, and then it basically, its preferences determine what happens on earth.

    I suppose the question you have is whether now is the right [00:09:00] time to put money into it, or whether it will be solved anyway, because, you know, there are fewer than 10 really big AI developers who maybe stand a prospect of developing an artificial intelligence that's comparable to humans, and they're going to have incentives to make sure that the systems are safe for humans and for the ends they want to achieve. But the counter argument is that there's this view that maybe we get to human-level intelligence at some point in the next 20 years, and then things take off really fast, so we need to start thinking about how to make sure these systems are safe right now, and this is a really important problem. And we're at this kind of hinge of history, where what we do now is going to have this huge effect on how the [00:10:00] rest of the human story unfolds. There are different views on that. I'm probably leaning towards us not being in that position, but I definitely think that some resources should go towards thinking about the AI safety question.

    Philip Kasumu: Absolutely. And I guess that leads to my next question, which is: AI aside, why should we be focused on and invest in existential risk prevention? And why is it the right time to do so?

    John Halstead: Yeah. These major risk events are necessarily very rare, so they're not very salient to the general public.

    You look at things like COVID-19 or the Spanish flu. It seemed like we had good reasons to think that we would be hit with a pandemic of that magnitude, but we still weren't prepared because it's not the kind of thing that's in the news. It's not very salient to voters. It's not very salient to politicians. Politicians aren't going to gain much from trying to deal with those problems. So it's not really on people's radar. And then if you [00:11:00] do talk about it, you sort of end up in the bucket of people talking about how the sky's going to fall, so it's easy to ignore those people.

    So there's that. And then there's the fact that it's a, it's a global public good. So the classic problem with climate change is that, you know, the UK reduces its emissions, but the benefits of those emissions reductions just accrue to everyone on earth. So the UK only captures a small portion of the benefits of emissions reductions but bears all the costs. So that's a reason to think that the UK is going to under-invest in reducing emissions and certainly, every other country is going to look at it in the same way. So nothing's going to happen on climate change. And that kind of model describes quite well where we are today.

    The other reason to think that they'll be kind of unduly neglected is that most of the benefits accrue to future generations. So with climate change, we think most [00:12:00] of the damage from climate change is probably going to come towards the end of the century or after 2100.

    So again, future generations can't vote. Politicians don't really get any political points for protecting future generations. They tend to want to be sensitive to the wishes of current voters. So, current voters will tend to under-invest for the most part in things that benefit future generations.
    So that's another reason to think these problems will be neglected. And then, kind of building on that, a lot of the concern with existential risks comes out of this ethical idea, which is gaining traction in some circles, known as long-termism. And it's the observation that, you know, we've so far experienced only a very small fraction of the human story. And, like, whatever [00:13:00] you care about, whether that's artistic enjoyment, or human happiness or fulfillment, or time with friends, there's a lot more to come in the future, because we have thousands and thousands and maybe millions of generations to come, provided that we survive. So that means it's very important that we actually get through this potential time of peril with AI and nuclear war and that kind of thing, and ensure that our descendants can actually enjoy the things which they could if we were to survive. So those are the main arguments for focusing on existential risk.

    Philip Kasumu: And I guess you alluded to some of them, but what would you say some of the counter arguments are? I mean, I know you mentioned, for example, that politicians want to focus on more pressing issues where they have control and can influence and appease current voters, right? But what are some of the other kind of counter arguments for investing in [00:14:00] existential risk?

    John Halstead: Yeah. So I think there's two broad camps. One would take a long-term point of view and would say, I agree that what matters most is the very long-term effects of our actions, but I don't think that these risks are actually that big, and I don't think that spending money on AI safety now is a good way to spend money because it just doesn't have any impact. So what we should instead do, if we care about the very long term, is invest our money for the very long term.

    You then get the kind of cumulative, exponential growth that you get with investing in the stock market, and you've got a lot more to invest in the future when the time is right, basically. So to that end, Founders Pledge is setting up, or considering setting up, a long-term investment fund where people can say, I'm going to put £1 million into this donor-advised fund and then that's [00:15:00] going to be invested in the stock market, but we'll only spend out the money every 100 years or so. So we're trying to find once-in-a-century opportunities. So that's one counter argument to spending money on the risks today.

    Another one is that you might reasonably want to focus on problems that affect the current generation. So there's a common view in moral philosophy, and an intuition that's kind of widely shared among non-philosophers, that people who don't exist yet don't matter. On that view, climate change is bad because it harms people existing today, but not because it harms future generations, because they don't exist yet.

    I think. Yeah. There's sort of disagreement within the Founders Pledge research team about that, and lots of people find it an intuitively [00:16:00] implausible position, but it's one of these ideas that's the subject of a lot of disagreement in moral philosophy. I'm kind of on the side of the long-term focused people.

    So I think that the interests of future generations do matter. And you can see this clearly, I think most clearly when you think about, well, what about if, you know, we have a choice about whether to bring future generations into existence, but they all live terrible lives of suffering. To me that does seem like a reason that should bear on our decisions. And I think we should try and avoid causing lots of suffering in the future, even if it's for future people, but it's obviously difficult to resolve that topic now. But those are the two main arguments I think, against focusing on existential risks.

    Philip Kasumu: Yeah. And it's definitely something that we can't really debate to what extent, via this show, because you know, [00:17:00] that's an ongoing debate, right? Like how do we pay it forward or resolve issues today, you know, are we going to be able to experience what a better future looks like ever? So it's like, how do we ensure that even the work that we think we're doing now is going to actually be impactful, right?

    John Halstead: Yeah, that's right. I mean, it's very hard to know what effect you're having on the future. Like there's some things that seem really robustly good, like tracking how many asteroids there are and knowing whether there's one that's heading straight for us, it seems like a good idea, for example, but then there's stuff where it's like really hard to know what the sign of the effect is. Like, is it even good or bad for the very long term? You know, if we're thinking thousands and thousands of years into the future, how do we think about, you know, maybe in some world climate change is actually good for the very long term for some odd reason. And then we just don't, you know, we don't have [00:18:00] lots of evidence to go on there. So we're really using like quite weak signals about, well, it seems kind of like destabilising things is generally bad because that leads to political conflict. So we sort of want to avoid things that destabilise the balance between great powers and things like that, so that's a reason to act on climate change or to avoid things like COVID that cause lots of tension and stop economic activity. But yeah, it's like, it's really hard to know what to do.

    And then there's this like emotional argument, which is that there's lots of people dying of hunger or of easily preventable illnesses today. And then why are we focusing on future generations? But this is one of those tough choices you have to make. If you're gonna, if you're thinking about how to do the most good.

    Philip Kasumu: Totally, totally. And I guess like, based on your framework, how do these different existential risks compare to each other? Like which ones should we prioritise? I mean, I know you [00:19:00] mentioned, you know, climate change, we've spoken about AI, but how do we decide which ones to tackle?

    John Halstead: Yeah, it's a good question, and a very hard one to answer. I think there is a sort of general view among people who study the topic; there are research institutes at Oxford and Cambridge, and Open Philanthropy, who fund Founders Pledge, have done lots of research into this. And the general view seems to be that the greatest risks that we face are AI and biosecurity and pandemic preparedness. And then there are also risks from nuclear war and from climate change, but they seem smaller. Quantifying it is really hard. Toby Ord recently wrote a book called The Precipice, which I did a bit of research for.

    He's a researcher at the Future of Humanity Institute in Oxford and he tried to put probabilities on how [00:20:00] likely we are to get taken out this century. He had the per-century risk from AI at, I think, one in 10, and the risk from a pandemic similar to what we're seeing now, but more severe, at one in 30 for this century. And he had climate change much, much lower than that, at one in 1,000, but still worth paying attention to. It's hard to say, I think, is the answer, and I suppose I would broadly agree that it would be good to put money into AI and biosecurity. It's a bit tricky now 'cause of COVID, because there's lots of money and attention flowing into pandemic preparedness.

    So it's a bit hard to know what difference your money's gonna make on the margin. But [00:21:00] yeah, I think the risk is that all the money that's going into biosecurity will focus on natural diseases, so things like COVID, which appears to have come from bats or a civet or a pangolin, like something in the animal kingdom. But due to new developments in biotechnology, it's becoming possible to engineer viruses to have more dangerous properties. And, you know, states have actually done that on a massive scale, and terrorist groups have tried to use such pathogens. So I think there is this emerging risk from engineered pathogens. I would probably put them top, and then in a similar ballpark, AI, and then maybe slightly lower, climate change.

    Philip Kasumu: I guess [00:22:00] in regards to engineered pathogens, you know, maybe this is a silly question, but what's the purpose of an organisation or a lab practicing this when it can lead to cases like what's happening now with COVID? Like, what's the necessity of testing and practicing and running these trials when they could cause a global pandemic?

    John Halstead: Yeah. So I think there's two justifications. One is that it allows us to carry out research into vaccines and things like that. So there's this controversial gain-of-function research, where some scientists successfully mutated a version of bird flu and made it potentially transmissible between humans.

    So they made this version of bird flu that was transmissible between [00:23:00] ferrets, and that's maybe a good model for something that could pass from human to human. As it is, bird flu mercifully virtually doesn't pass from human to human. It does pass from birds to humans, and when it does, the fatality rate is very high; estimates vary, but it's between 1% and 60%. So if it were anything in that range and it was transmissible between humans, it would be extremely bad. And there was this big debate in the scientific community about whether these people should be doing this research, given the risk of it leaking from the lab, because these labs aren't infallible; the foot and mouth outbreak in the UK was due to a release from a lab, for example. So lab safety probably isn't what we want it to be, given the magnitude of the risk from these pathogens.

    [00:24:00] So yeah, there's research purposes, and then there are kind of debates about that; I think most scientists are probably on the side that that research wasn't worth the risk. And then another rationale is just academic freedom. Like, in a liberal society, scientists should be able to research what they like, and, you know, that's another rationale. But then there are also state programs, where states have tried to create and engineer bioweapons.

    So after the Soviet Union had signed the Biological Weapons Convention, they ran a huge biological weapons program where they, you know, made loads of things like smallpox and anthrax. And I suppose the thought guiding them was that maybe this would be useful in a war or in some sort of conflict [00:25:00] situation. So there's that risk to be concerned about.

    And then there are terrorist groups, and the worry is that as the technology gets better, it becomes more accessible to, you know, untrained or less well-trained, less skilled people. So there was a group called Aum Shinrikyo, who were a kind of doomsday cult in Japan in the eighties and nineties, and they did the sarin attacks on the Tokyo subway, and actually they had a fairly advanced sort of chemical and biological weapons program. Their explicit aim was to bring about the extinction of humanity. Thankfully, they didn't succeed, but they did carry out various biological attacks. So the concern is that maybe there'd be another crazy group in the future who would try and do something similar, and that the technology would develop to such an extent that it would be easier for them to do a lot [00:26:00] of damage. So that's the concern. It's a bit like, you know, imagine if it was really, really easy to make nuclear weapons; that seems to be potentially the direction in which things could go with bioweapons, unless we take action to regulate the spread of the technology.

    Philip Kasumu: Got it. Right. And I guess what can entrepreneurs and philanthropists do to kind of help reduce existential risk? If anything?

    John Halstead: Yeah, I mean, this is the big uncertainty. We have recommendations of charities working on existential risk; we have a few recommendations at the moment. We have the Center for Health Security, who've been quite prominent in the response to COVID; they've provided lots of maps of the outbreak and they're a trusted source of advice to the US government, as much good as that can do at the moment. And then there's the Nuclear Threat Initiative. They have a biosecurity program led by people who were quite senior officials [00:27:00] in the Obama administration working on biosecurity.

    And then on the AI side, we have the Center for Human-Compatible AI, which is led by Stuart Russell, who wrote one of the leading AI textbooks. And they're working on raising awareness of this potential issue of AI risk and AI safety, and they're also doing technical research, like, how can we actually make sure the AI systems are safe? So those are the recommendations we have for donors at the moment. So for people who are thinking about what they can do with their money to help the long term and reduce existential risks, that seems like a good option.

    We also have recommendations in climate change. We have the Clean Air Task Force, who work in the US on clean energy innovation and clean air, and the Coalition for Rainforest Nations, who advocate for increased money for protecting the rainforest, which should [00:28:00] help with climate change. And we're actively exploring other climate charities at the moment and should have a few new recommendations in the next few months.
    So I think we're good people to speak to if people want to have an impact on existential risk.

    Philip Kasumu: Yeah. And what are some examples, I know you just mentioned a few just now, but what are some examples of funding opportunities that you are personally excited about in the space and why?

    John Halstead: Yeah. I suppose before COVID hit, I probably would have said biosecurity would be my top pick, but given that COVID's hit and there's lots of money flowing in now from lots of donors into the area, it's a bit unclear where you can have an impact on the margin. Prior to COVID there weren't that many philanthropists working on pandemic preparedness; like, Gates were kind of getting ready [00:29:00] for an outbreak to come and Open Philanthropy had a project on it, but there weren't that many philanthropists working on it, and now that's all changed. So it's a bit unclear what difference you can make in that space.

    So, I dunno, if I had to give now I would probably give to the Center for Human-Compatible AI from an existential risk point of view. And my main rationale there would be raising awareness about the issues. I suppose I think that's more important than doing the actual technical research: just getting people in the government and other AI researchers to consider that this is probably something we need to pay attention to maybe in the next 20 years, and we need to [00:30:00] start thinking about this seriously. And I think the Center for Human-Compatible AI are a good way to raise awareness about the issue. That being said, I don't know, I'm kind of personally persuaded by the arguments for giving later. And if you want to read more about that, there's an article by Philip Trammell called Patient Philanthropy, and he makes the case for investing and giving later. So that's the argument for the 100-year investment.

    Philip Kasumu: Got it. Got it. And you mentioned global pandemics earlier: would you consider COVID-19 an existential risk? If not, why not?

    John Halstead: It probably doesn't fall into that bracket as it's typically defined. So the costs, although terrible in humanitarian and economic terms, don't seem like they would threaten the [00:31:00] long-term flourishing of humanity; they wouldn't threaten how well we're going to do past 2100. We have faced quite severe pandemics in the past. So there was the 1918 flu pandemic that killed, you know, estimates vary, but between 20 million and 100 million people, and the Black Death, which was a naturally occurring pathogen, killed about a third of the population of Europe, and that just didn't seem to have that much effect on the trajectory of European civilisation; there are economists who argue that it kind of helped a bit by reducing population growth. I'm not completely sure whether I agree with that, but I think it just shows how big a pandemic would have to be to threaten the long-term future of humanity.

    When you're looking at something like COVID, if it were left to run its course, it's got a [00:32:00] case fatality rate of something like 1.6%, and then maybe, estimates vary of when we reach herd immunity, but between 25% and 70% of people get it. So it's not coming close to killing more than 10% of the world population, which is when you're starting to think, oh, maybe this could affect how things go for the long term. And probably even more extreme than that, you're looking at something killing half of the global population, and then it's really unclear how we'd respond to that and whether political institutions would recover.

    So yeah, with existential risks, you're talking about very extreme things, and that's not to diminish the significance of these smaller events, but just to carve off this little section of risks that are really severe.

    Philip Kasumu: So I guess in your [00:33:00] opinion then, do you think COVID-19 and the way we've dealt with it as a nation and a global community, has it been dealt with properly or, you know, based on your expertise looking at existential risks and pandemics, does this not warrant the amount of caution that's been taken?

    John Halstead: I think there are other reasons to focus on things aside from existential risks; you know, existential risk isn't everything, and we should consider short-term humanitarian effects. And then we want to try and minimise the costs to the current generation, and that means balancing the economic costs with the health costs. I think, you know, it's a bit of a bland answer, but some countries have done better than others at dealing with it. And I have a blog post on the Founders Pledge website about how we could go [00:34:00] about comparing the health costs and the economic costs.

    I think broadly delaying lockdown seems to be robustly bad; you want to do it earlier in the course of the outbreak, and I think the UK government now accepts that they were a bit late to the party on that one. And then there's the testing: there have been big delays in testing in certain countries. So I think we just weren't very well prepared really, even though we were warned. I mean, in my last job, I remember reading the UK National Risk Register from 2015, and it said there was between a 1 in 2 and a 1 in 20 chance over the next five years of a pandemic killing hundreds of thousands of people. And I remember reading that and going, wow, that's really high. And then, you know, we wrote these reports about biosecurity and nothing really happened. I think [00:35:00] in spite of the fact that we kind of knew what was coming, the response hasn't been particularly good in most countries really.

    Philip Kasumu: I guess, has the COVID-19 crisis yielded new opportunities for existential risk reduction?

    John Halstead: Yeah. I mean, one silver lining even from COVID might be that the world starts to pay more attention to these biological risks. As I said before, prior to this, not many philanthropists were working on pandemic preparedness, whereas now I think lots of the big philanthropists will start looking at it and start thinking, how can we actually prepare for the next one, and what if there's something that's worse than COVID? It doesn't seem impossible that we'd have something with a case fatality rate of 10% that was more infectious, especially with these developments in biotechnology that we're seeing. So yeah, I [00:36:00] do think and hope that this will act as a bit of a nudge, and get us to think, well, we're losing trillions of dollars in economic value here; maybe we should actually try putting some money into preparing for this sort of thing.

    And, yeah, so I'm hopeful of that. I hope it will also make people realise that when people talk about these rare risks, they can actually happen sometimes; it's not just catastrophism. And I think that's something we need to be aware of for AI risk and for climate change. It's easy to become kind of numb to warnings about climate change, but, you know, the projections of how bad it could get are quite bad really; on current policy, there's potentially more than a 1% chance of 6 degrees Celsius of warming, and potentially much higher than that.
    And [00:37:00] it's easy to become numb to that, but we do actually need to engage in some forward planning and be more proactive, because you just can't be reactive in this context. You need to be ready, and the world wasn't ready for COVID. I'm hoping it'll be ready for other events that happen.

    Philip Kasumu: Absolutely. Absolutely. And finally, are you optimistic or pessimistic about the future prospects of humanity? That feels like a really heavy question.

    John Halstead: I go different ways on it. I suppose I was very pessimistic up until a few years ago. And the big factor is I'm not sure how the world is going to respond to warning shots like COVID or other warning shots that we get.

    Like if we get a warning shot from climate change, or maybe some sort of nuclear [00:38:00] accident with nuclear weapons or something like that, I don't know how the world will respond. It might be that everyone comes together and puts lots of effort into risk reduction, and in that case, maybe we'll make it through this, maybe we'll make it through the century. I mean, some experts put the risk of being taken out at something like the roll of a die. Toby Ord says that in The Precipice, and people at the Future of Humanity Institute in Oxford take this quite pessimistic view of it. But I suppose I'm maybe more optimistic that once these things become a problem, more resources will move into them and we'll start trying to reduce the risk. But it would be hard to put it below 1% per century. I think if you just add [00:39:00] up all the risks we face, it does seem tricky to do that.

    Philip Kasumu: That's good. No, yeah, totally. This was great, John, thanks so much for walking us through your process, your thoughts and your commentary on existential risks. Hopefully people will listen back to this and maybe reconsider how they think about donating, you know, whether it's not just a case of addressing things that are obvious to the eye now, but thinking about what could happen in the future, and sometimes in the near future. Well, yeah, who knows? I mean, hopefully there's not another pandemic, but, you know, you mentioned the top 10 AI scientists, and who knows what they're coming up with these days. But yeah, if people wanted to get in contact with you or learn a bit more about this, what's the best way [00:40:00] to read more of your research? Or maybe, if they want to interview you, what's the best way to get in contact?

    John Halstead: Yeah, well, just send me an email at john@founderspledge.com. And we also have reports on climate change and existential risk on the Founders Pledge website; that, I suppose, would be the place to go.

    Philip Kasumu: Great. Awesome. Thanks so much, John.

    John Halstead: Thanks, Philip.

    Philip Kasumu: Thanks again, John, for a great conversation. And if you'd like to learn more about the work that we do in existential risk, then please visit our website at www.founderspledge.com. You can also find the link to the book that John mentioned, The Precipice, in the notes to this episode. And as always guys, thank you so much for listening.

    And if you haven't already, please leave us a review on the Apple Podcasts app or anywhere you listen to your favourite podcasts. Thanks again, and see you next week. [00:41:00]

Philip Kasumu

Host

Philip joined Founders Pledge as Growth Lead for Europe in January 2020. He has spent the last few years building health and wellness tech products in both London and New York. He's extremely passionate about tackling the global issues of obesity and diabetes, in particular the impact of fizzy drinks. In addition to his passion for health and wellness reform, Philip truly does drink the startup Kool-Aid and regularly nerds out on his weekly podcast, Startup Handmedowns, where he sits down with successful founders.

Philip studied Accounting and Management at the University of Essex.