What kind of approach is the right one to take to such an endeavour? Surely there is only one answer: a conservative approach. One that prioritises good judgment, caution and prudence; one that values avoiding negative outcomes well above achieving positive ones.
Really interesting read!
Would you agree that an underlying assumption of conservatism is that continuing ‘business as usual’ is the safe option?
In the case of bioterrorism and AI safety, the assumption is that we’re on course for disasters that result in billions of deaths unless we do something radical to change course.
Whether you agree about the risks of bioterrorism and AGI shouldn’t be about a general vibe you pick up of “science fiction scenario(s)” or being on “the crazy train to absurd beliefs”. I think it should be about engaging with those arguments on the object level. Sam Harris / Rob Reid’s podcast and Robert Miles’ YouTube channel are great ways in if you’re interested.
That’s an interesting point. There’s a lot of thinking about how we judge the output of experts in other fields (and I’m not an expert in that), but I’ll give you my thoughts. In short, I’m not sure you can engage with all the arguments on the object level. Couple of reasons:
(1) There are lots of people who know more about X than I do. If they are trying to fool me about X, they can; and if they are honestly wrong about X then I’ve got no chance. If some quantum physicist explains how setting up a quantum computer could trigger a chain reaction that could end human life, I’ve got no chance of delving into the details of quantum theory to disprove that. I’ve got to go with … not just vibes, exactly, but a kind of human approach to the numbers of people who believe things on both sides of the argument, how plausible they are and so on. That’s the way I deal with Flat Earth, Creationism and Global Warming arguments: there are guys out there who know much more than me, but I just don’t bother looking at their arguments.
(2) People love catastrophes and apocalypses! Those guys who keep moving the Doomsday Clock so that we are 2 seconds to midnight or whatever; the guys who thought the Cold War was bound to end in a nuclear holocaust; all the sects who have thought the world is going to end and gathered together to await the Rapture or the aliens or whatever — there are just too many examples of prophets predicting disaster. So I think it’s fair to discount anyone who says the End is Nigh. On the other hand, the civilisation we have behind us has got us to this state, which is not perfect, but involves billions of decently-fed people living long-ish lives, mostly in peace. There’s a risk (a much less exciting risk that people don’t get so excited about) that if you make radical changes to that then you’ll make things much worse.
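To put a rough number on that intuition, here is a minimal Bayesian sketch in Python (every probability in it is invented purely for illustration, not an estimate): if doom gets predicted often when no doom is coming, one more prediction should barely move you.

```python
# Minimal Bayes-rule sketch: how much should one prophecy of doom move us?
# All numbers below are invented for illustration only.

prior_doom = 0.001          # prior probability that doom really is coming
p_pred_given_doom = 0.9     # chance doom gets predicted when it is real
p_pred_given_safe = 0.1     # false-alarm rate: doom gets predicted anyway

# Bayes' rule: P(doom | a prediction of doom)
evidence = (p_pred_given_doom * prior_doom
            + p_pred_given_safe * (1 - prior_doom))
posterior = p_pred_given_doom * prior_doom / evidence

print(f"P(doom | one prediction) = {posterior:.4f}")  # about 0.0089
```

Even granting prophets a 90% hit rate on real doom, a 10% false-alarm rate combined with a low prior leaves the posterior below 1%. That is the quantitative core of “discount the prophets”, though the conclusion is only as good as the invented inputs.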
I appreciate the thoughtful reply.
(1) I don’t think that the engineered pandemics argument is of the same type as the Flat Earther or Creationist arguments. And it’s not the kind of argument that requires a PhD in biochemistry to follow either. But I guess from your point of view there’s no reason to trust me on that? I’m not sure where to go from there.
You wrote: “I’ve got to go with … not just vibes, exactly, but a kind of human approach to the numbers of people who believe things on both sides of the argument, how plausible they are and so on.”
Maybe one question is: why do you think engineered pandemics are implausible?
(2) I agree that you should start from a position of skepticism when people say the End is Nigh. But I don’t think that skepticism should amount to a complete prohibition on considering those arguments.
And the fact that previous predictions have proven overblown is a pattern worth paying attention to (although as an aside: I think people were right to worry during the Cold War — we really did come close to full nuclear exchange on more than one occasion! The fact that we got through it unscathed doesn’t mean they were wrong to worry. If somebody played Russian Roulette and survived, you shouldn’t conclude “look, Russian Roulette is completely safe.”). Where I think the pattern of overblown predictions of doom has a risk of breaking down is when you introduce dangerous new technologies. I don’t expect technology to remain roughly at current levels. I expect technology to be very different in 25, 50, 100 years’ time. Previous centuries have been relatively stable because no new dangerous technologies were invented (nuclear weapons aside). You can’t extrapolate that pattern into the future if the future contains, for example, easily available machines that can print Covid-19 but with 10x transmissibility and a 50% mortality rate. Part of my brain wants to say “We will rise to the challenge! Some hero will emerge at the last moment and save the day”, but then I remember the universe runs on science and not movie plot lines.
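To see why survival is such weak evidence, here is a minimal sketch in Python (assuming only the standard six-chamber setup, re-spun each round): survival is the most likely observation even though the game kills one player in six per round.

```python
# Survivorship sketch: surviving Russian Roulette is weak evidence of safety.
# Assumes a six-chamber revolver with one bullet, re-spun before each round.

p_survive_round = 5 / 6  # one bullet, six chambers

for rounds in (1, 2, 3):
    p_alive = p_survive_round ** rounds
    print(f"P(still alive after {rounds} round(s)) = {p_alive:.2f}")

# 0.83, 0.69, 0.58: most short games end in survival, so the observation
# "I survived" says almost nothing about whether playing was safe.
```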
Thank you for your comments.
I wouldn’t say that I believe engineered pandemics or AI misalignment or whatever are implausible. It’s simply that I think I’ll get a better handle on whether they are real threats by seeing if there’s a consensus view among respected experts that these things are dangerous than if I try to dive into the details myself. Nuclear weapons are a good example because everyone did agree that they were dangerous and even during the Cold War the superpowers co-operated to try to reduce the risks (hotline, arms treaties), albeit after a shaky start, as you say.
I also agree with you that there is no prohibition on considering really bad but unlikely outcomes. In fact, I think this is one of the good things EA has done – to encourage us to look seriously at the difference between very, very bad threats and disastrous, civilisation-destroying threats. The sort of thing I have in mind is: “let’s leave some coal in the ground in case we need to re-do the Industrial Revolution”. Also, things like seed banks. These kinds of ‘insurance policies’ seem like really sensible – and also really conservative – things to think about. That’s the kind of ‘expect the best, prepare for the worst’ conservatism that I fully endorse. Just like I recommend you get life insurance if your family depend on your income, although I have no reason to think you won’t live to a ripe old age. Whatever the chances of an asteroid strike or nuclear war or an engineered pandemic are, I fully support having some defences against them and/or building capacity to come back afterwards.
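The insurance analogy is just expected value. Here is a minimal sketch (all figures are invented placeholders, not estimates): even a one-in-a-thousand catastrophe can justify a cheap ‘insurance policy’ when the stakes are large enough.

```python
# Expected-value sketch of the 'insurance policy' argument.
# Every figure below is an invented placeholder, not an estimate.

p_catastrophe = 0.001            # chance of the disaster over some period
loss_if_unprepared = 1_000_000   # harm, in arbitrary units, with no insurance
cost_of_insurance = 100          # seed banks, asteroid watch, and so on

expected_loss_avoided = p_catastrophe * loss_if_unprepared  # = 1000.0

# The policy is worth buying when the expected loss avoided exceeds its cost.
print(expected_loss_avoided > cost_of_insurance)  # True: 1000 > 100
```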
I suppose I’d put it this way: I’m a fan of looking out for asteroids, thinking about how they could be deflected and preparing a spacecraft that can shoot them down. But I wouldn’t suggest we all move underground right now – and abandon our current civilisation – just to reduce the risk. I’m exaggerating for effect, but I hope you see my point.