In this spirit, here are some x-risk sceptical thoughts:
You could reasonably think human extinction this century is very unlikely. One way to reach this conclusion is simply to work through the most plausible causes of human extinction, and reach low odds for each. Vasco Grilo does this for (great power) conflict and nuclear winter, John Halstead suggests extinction risk from extreme climate change is very low here, and the background rate of extinction from natural sources can be bounded by (among other things) observing how long humans have already been around for. That leaves extinction risk from AI and (AI-enabled) engineered pandemics, where discussion is more scattered and inconclusive. Here and here are some reasons for scepticism about AI existential risk.
Even if the arguments for AI x-risk are sound, it’s not clear that they are arguments for expecting literal human extinction rather than outcomes like ‘takeover’ or ‘disempowerment’. It’s hard to see why AI takeover would lead to smouldering ruins, rather than to continued activity and ‘life’, just a version not guided by humans or their values.
So “existential catastrophe” probably shouldn’t just mean “human extinction”. But then it is surprisingly slippery as a concept. Existential risk is the risk of existential catastrophe, but it’s difficult to give a neat and intuitive definition of “existential catastrophe” such that “minimise existential catastrophe” is a very strong guide for how to do good. Hilary Greaves discusses candidate definitions here.
From (1), you might think that if x-risk reduction this century should be a near-top priority, then most of its importance comes from mitigating non-extinction catastrophes, like irreversible dystopias. But few current efforts are explicitly framed as ways to avoid dystopian outcomes, and it’s less clear how to do that, other than by preventing AI disempowerment or takeover, assuming those outcomes count as dystopian.
But then isn’t x-risk work basically just about AI, and maybe also biorisk? Shouldn’t specific arguments for those risks and ways to prevent them therefore matter more than more abstract arguments for the value of mitigating existential risks in general?
Many strategies to mitigate x-risks trade off uncomfortably against other goods. Of course they require money and talent, but it’s hard to argue the world is spending too much on e.g. preventing engineered pandemics. But (to give a random example), mitigating x-risk from AI might require strong AI control measures. If we also end up thinking things like AI autonomy matter, that could be an uncomfortable (if worthwhile) price to pay.
It’s not obvious that efforts to improve prospects for the long-run future should focus on preventing unrecoverable disasters. There is a strong preemptive argument for this: roughly, that humans are likely to recover from less severe disasters, and so retain most of their prospects (minus the cost of recovering, which is assumed to be small relative to humanity’s entire future). The picture here is one on which the value of the future is roughly bimodal — either we mess up irrecoverably and achieve close to zero of our potential, or we reach roughly our full potential. But that bimodal picture isn’t obviously true. It might be comparably important to find ways to turn a mediocre-by-default future into a really great future, for instance.
A related picture that “existential catastrophe” suggests is that the causes of losing all our potential are fast and discrete events (bangs) rather than gradual processes (whimpers). But why are bangs more likely than whimpers? (See e.g. “you get what you measure” here).
Arguments for prioritising x-risk mitigation often involve mistakes, like strong ‘time of perils’ assumptions and apples-to-oranges comparisons. A naive case for prioritising x-risk mitigation might go like this: “reducing x-risk this century by 1 percentage point is worth one percentage point of the expected value of the entire future conditional on no existential catastrophes. And the entire future is huge, it’s something like 10^x lives. So reducing x-risk by even a tiny fraction, say y%, this century saves 10^(x−2)·y (a huge number of) lives in expectation. The same resources going to any work directed at saving lives within this century cannot save such a huge number of lives in expectation, even if it saved 10 billion people.” (The arithmetic is illustrated with concrete numbers after this list.) This is too naive for a couple of reasons:
This assumes this century is the only time when an existential catastrophe could occur. Better would be “the expected value of the entire future conditional on no existential catastrophe this century”, which could be much lower.
This compares long-run effects with short-run effects without attempting to evaluate the long-run effects of interventions not deliberately targeted at reducing existential catastrophe this century.
Naive analysis of the value of reducing existential catastrophe also doesn’t account for ‘which world gets saved’. This feels especially relevant when assessing the value of preventing human extinction, where you might expect that the worlds in which extinction-preventing interventions succeed are far less valuable than the expected value of the world conditional on no extinction (since narrowly avoiding extinction is bad news about the value of the rest of the future). Vasco Grilo explores this line of thinking here, and I suggest some extra thoughts here.
The fact that some existential problems (e.g. AI alignment) seem, on our best guess, just about solvable with an extra push from x-risk motivated people doesn’t itself say much about the chance that x-risk motivated people make the difference in solving those problems (if we’re very uncertain about how difficult the problems are). Here are some thoughts about that.
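To make the arithmetic in the naive case above concrete, here is a worked example with purely illustrative numbers (they are not estimates from this post): suppose the expected value of the entire future, conditional on no existential catastrophe, is 10^20 lives (so x = 20), and suppose an intervention reduces x-risk this century by 0.01 percentage points (y = 0.01). The naive calculation then gives 10^(x−2)·y = 10^18 × 0.01 = 10^16 lives saved in expectation, vastly more than any intervention aimed at saving lives within this century could claim. The two caveats under that point are about why this multiplication is not the right comparison.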
These thoughts make me hesitant about confidently acting as if x-risk is overwhelmingly important, even compared to other potential ways to improve the long-run future, or other framings on the importance of helping navigate the transition to very powerful AI.
But I still think existential risk matters greatly as an action-guiding idea. I like this snippet from the FAQ page for The Precipice —
But for most purposes there is no need to debate which of these noble tasks is the most important—the key point is just that safeguarding humanity’s longterm potential is up there among the very most important priorities of our time.
[Edited a bit for clarity after posting]
Tooting my own trumpet, I did a lot of work on improving the question x-riskers are asking in this sequence.
I certainly think these are all good to express (and I could reply to them, though I won’t right now). But also, they’re all still pretty crisp/explicit. Which is good! But I wouldn’t want people to think that sceptical thoughts have to get to this level of crispness before they can deserve attention.
Agree.
By the way, I’m curious which of these points give you personally the greatest hesitance in endorsing a focus on x-risk, or something.
I endorse many (more) people focusing on x-risk and it is a motivation and focus of mine; I don’t endorse “we should act confidently as if x-risk is the overwhelmingly most important thing”.
Honestly, I think the explicitness of my points misrepresents what it really feels like to form a view on this, which is to engage with lots of arguments and see what my gut says at the end. My gut is moved by the idea of existential risk reduction as a central priority, and it feels uncomfortable being fanatical about it and suggesting others do the same. But it struggles to credit particular reasons for that.
To actually answer the question: (6), (5), and (8) stand out, and feel connected.
Great points, Fin!
On nuclear winter, besides my crosspost for Bean’s analysis linked above, I looked more in-depth into the famine deaths and extinction risk (arriving at an annual extinction risk of 5.93*10^-12). I also got an astronomically low annual extinction risk from asteroids and comets (2.20*10^-14) and volcanoes (3.38*10^-14).
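For scale, and assuming those annual risks are roughly constant and independent across years, an annual extinction risk of 5.93*10^-12 implies a risk over a century of about 1 - (1 - 5.93*10^-12)^100 ≈ 5.9*10^-10, still astronomically low.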
I think this study also implies an astronomically low extinction risk from climate change.
In the naive calculation in the post, I believe y is not supposed to be in the exponent.
Relatedly, on comparing long-run effects with short-run effects:
Why Charities Usually Don’t Differ Astronomically in Expected Cost-Effectiveness.
Saving lives in normal times is better to improve the longterm future than doing so in catastrophes?
Thanks Vasco!