One thing I think the piece glosses over is that “surviving” is framed as surviving this century—but in longtermist terms, that’s not enough. What we really care about is existential security: a persistent, long-term reduction in existential risk. If we don’t achieve that, then we’re still on track to eventually go extinct and miss out on a huge amount of future value.
Existential security is a much harder target than just getting through the 21st century. Reframing survival in this way likely changes the calculus—we may not be at all near the “ceiling for survival” if survival means existential security.
Thanks!
A few comments:
1/
I’m comparing Surviving (as I define it) and Flourishing. But if long-term existential risk is high, that equally decreases the value of increasing Surviving and the value of increasing Flourishing. So how much long-term existential risk there is doesn’t affect that comparison.
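To spell this out with a toy model (my notation, not from the post): write the expected value of the future as

$$ V = S \cdot L \cdot F, $$

where $S$ is the probability of Surviving this century, $L$ is the probability of long-term survival conditional on that, and $F$ is how well things go (Flourishing). Then

$$ \frac{\partial V / \partial S}{\partial V / \partial F} = \frac{L F}{L S} = \frac{F}{S}, $$

so the long-term factor $L$ cancels out of the comparison between marginal improvements to Surviving and to Flourishing.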
2/
But maybe efforts to reduce long-term existential risk are even better than work on either Surviving or Flourishing?
In the supplement, I assume (for simplicity) that we just can’t affect long-term existential risk.
But I also think that, if there are ways to affect it, that’ll seem more like interventions to increase Flourishing than to increase Survival. (E.g. making human decision-making wiser, more competent, and more reflective).
3/
I think that, in expectation at least, if we survive the next century then the future is very long.
The best discussion of reasons for that I know of is Carl Shulman’s comment here:
“It’s quite likely the extinction/existential catastrophe rate approaches zero within a few centuries if civilization survives, because:
Riches and technology make us comprehensively immune to natural disasters.
Cheap ubiquitous detection, barriers, and sterilization make civilization immune to biothreats.
Advanced tech makes neutral parties immune to the effects of nuclear winter.
Local cheap production makes for small supply chains that can regrow from disruption as industry becomes more like information goods.
Space colonization creates robustness against local disruption.
Aligned AI blocks threats from misaligned AI (and many other things).
Advanced technology enables stable policies (e.g. the same AI police systems enforce treaties banning WMD war for billions of years), and the world is likely to wind up in some stable situation (bouncing around until it does).
If we’re more than 50% likely to get to that kind of robust state, which I think is true, and I believe Toby does as well, then the life expectancy of civilization is very long, almost as long on a log scale as with 100%.
Your argument depends on 99%+++ credence that such safe stable states won’t be attained, which is doubtful for 50% credence, and quite implausible at that level. A classic paper by the climate economist Martin Weitzman shows that the average discount rate over long periods is set by the lowest plausible rate (as the possibilities of high rates drop out after a short period and you get a constant factor penalty for the probability of low discount rates, not exponential decay).”
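To put the “almost as long on a log scale” point in concrete terms, here is a toy calculation (the numbers are illustrative assumptions of mine, not Shulman’s):

```python
import math

# Toy model: with probability p_secure civilization reaches a robust state and
# lasts ~10**9 years; otherwise it faces a constant ~0.2%/yr hazard and lasts
# ~1/0.002 = 500 years in expectation. (Both figures are illustrative.)
secure_lifespan = 1e9
risky_lifespan = 1 / 0.002

for p_secure in (1.0, 0.5, 0.1):
    expected = p_secure * secure_lifespan + (1 - p_secure) * risky_lifespan
    print(f"P(secure)={p_secure:.1f}: E[lifespan] ~ 10^{math.log10(expected):.2f} years")

# P(secure)=1.0 -> 10^9.00, P(secure)=0.5 -> 10^8.70, P(secure)=0.1 -> 10^8.00
# Moving from 100% to 50% credence in reaching a secure state costs only
# ~0.3 orders of magnitude of expected lifespan, not an exponential penalty.
```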
Thanks for your replies!
IMO the mathematical argument for spreading out to other planets and eventually stars is a far stronger source of existential security than increasing hard-to-pin-down properties like ‘wisdom’.
If different settlements’ survival were independent, and each settlement’s probability of being wiped out over a given period is p, then with n settlements the probability that all of them are wiped out over that period is p^n. You have to assume an extremely high level of dependence, or of ongoing per-settlement risk, for that not to approach 0 rapidly as n grows.
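As a minimal sketch of that independence point (the per-period risk here is just an illustrative placeholder, not an estimate):

```python
# Probability that ALL settlements are wiped out in a given period, if each
# independently faces per-period extinction risk p. (p is a placeholder.)
p = 0.1

for n in range(1, 6):
    print(f"n={n} settlements: P(all wiped out) = {p**n:.6f}")

# n=1: 0.100000, n=2: 0.010000, n=3: 0.001000, n=4: 0.000100, n=5: 0.000010
# The probability of losing everything falls off geometrically in the number
# of independent settlements.
```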
To put numbers on this: typical estimates put existential risk at about 0.2% per year.† On that assumption, to give ourselves a better-than-evens chance of surviving, say, 100,000 years, we’d need to cut the annual risk by a factor of roughly 300, i.e. in some sense become hundreds of times wiser than we are now. I can’t imagine what that could even mean, unless it simply involves extreme authoritarian control of the population.
Compare that to an admittedly hypernaive model with some interdependence between offworld settlements, in which having N settlements changes the per-year risk of going extinct from p to p^sqrt(N). Now for N >= 4 we have a greater than 50% chance of surviving 100,000 years, and for N = 5 it’s already more than 91% likely that we survive that long. This is somewhat optimistic in assuming that any settlements destroyed (short of all of them) are immediately rebuilt, but extremely pessimistic in assuming that N stays fixed, i.e. a world in which we somehow settle 1-3 other colonies and then stop expanding entirely.
† (1-[0.19 probability of extinction by the end of the century, as given])**(1/[92 years of the century left at the time of the prediction]) = 0.9977 probability of survival per year
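For what it’s worth, here is a short script that reproduces the figures above under the stated assumptions (the p**sqrt(N) rule is just the hypernaive interdependence model described, nothing more principled):

```python
import math

YEARS = 100_000
p = 0.002  # ~0.2% per-year extinction risk used in the text

# The footnote: per-year survival implied by a 0.19 chance of extinction by
# the end of the century, spread over the 92 remaining years.
annual_survival = (1 - 0.19) ** (1 / 92)
print(f"per-year survival ~ {annual_survival:.4f}")  # ~0.9977

# How low would annual risk have to go for a better-than-evens chance of
# lasting 100,000 years? (The "how much wiser we'd need to become" factor.)
required_risk = 1 - 0.5 ** (1 / YEARS)
print(f"required annual risk <= {required_risk:.1e} "
      f"(a ~{p / required_risk:.0f}x reduction)")  # ~7e-06, roughly a 290x cut

# Hypernaive interdependence model: N settlements change the per-year
# extinction risk from p to p**sqrt(N).
for n in range(2, 6):
    risk_n = p ** math.sqrt(n)
    print(f"N={n}: P(survive {YEARS:,} years) = {(1 - risk_n) ** YEARS:.2f}")
# N=2: 0.00, N=3: 0.12, N=4: 0.67, N=5: 0.91
```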
This isn’t necessarily to argue against increasing flourishing being the better option; just that the above is an example of a robust strategy for affecting long-term x-risk that doesn’t seem much like increasing flourishing.
I think a common assumption is that if you can survive superintelligent AI, then the AI can figure out how to provide existential safety. So all you need to do is survive AI.
(“Surviving AI” means not just aligning AI, but also ensuring fair governance—making sure AI doesn’t enable a permanent dictatorship or whatever.)
(FWIW I think this assumption is probably correct.)