In the supplement, I assume (for simplicity) that we just can’t affect long-term existential risk.
But I also think that, if there are ways to affect it, that’ll seem more like interventions to increase Flourishing than to increase Survival. (E.g. making human decision-making wiser, more competent, and more reflective).
IMO the mathematical argument for spreading out to other planets and eventually stars is a far stronger source of existential security than increasing hard-to-pin-down properties like ‘wisdom’.
If different settlements’ survival were independent, and each settlement’s probability of being destroyed over some period is p, then with n settlements the probability that all of them are destroyed over that period is p^n. You have to assume an extremely high level of dependence, or of ongoing per-settlement risk, for that not to approach 0 rapidly as n grows.
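As a minimal sketch of that independence claim (the 0.2%-per-year figure is the one used below; the settlement counts are just illustrative):

```python
# If each of n settlements independently faces extinction risk p over some
# period, the chance that all of them are lost in that period is p**n.
p = 0.002  # illustrative per-period (here: per-year) extinction risk

for n in range(1, 6):
    print(f"n = {n}: P(every settlement lost) = {p ** n:.2e}")
# The joint probability shrinks geometrically with n, so it approaches 0
# very quickly unless the settlements' fates are strongly correlated.
```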
To give an example, typical estimates put existential risk at about 0.2% per year.† On that assumption, to give ourselves a better-than-evens chance of surviving, say, 100,000 years, we’d need to cut the per-year risk by a factor of roughly 300 (in some sense becoming hundreds of times ‘wiser’ than we are now). I can’t imagine what that could even mean, unless it simply involves extreme authoritarian control of the population.
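To spell out that back-of-the-envelope calculation (the 0.2% per-year risk and the 100,000-year horizon are the figures from the text; the rest is arithmetic):

```python
years = 100_000
current_risk = 0.002  # ~0.2% extinction risk per year (see footnote)

# Largest per-year risk q such that (1 - q)**years is still at least 0.5.
max_tolerable_risk = 1 - 0.5 ** (1 / years)
print(f"Maximum tolerable per-year risk: {max_tolerable_risk:.2e}")            # ~6.9e-06
print(f"Required risk reduction: ~{current_risk / max_tolerable_risk:.0f}x")   # ~289x
```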
Compare this to an admittedly hyper-naive model in which we assume some interdependence between offworld settlements, such that N settlements behave like only sqrt(N) fully independent ones: the per-year risk of losing all of them at once is p^sqrt(N) rather than p^N. Now for N >= 4 we have a greater than 50% chance of surviving 100,000 years, and for N = 5 it’s already more than 91% likely that we survive that long. This is somewhat optimistic in assuming that any settlements short of the full set are immediately rebuilt after being destroyed, but extremely pessimistic in assuming that a civilisation which has already settled a few other worlds (N >= 2) then stops expanding entirely.
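A sketch of that hyper-naive model (reading the interdependence assumption as: N settlements count as only sqrt(N) fully independent ones, which is the reading that reproduces the figures above):

```python
import math

p = 0.002        # per-year extinction risk with a single settlement
years = 100_000

# N interdependent settlements are treated as sqrt(N) independent ones, so the
# per-year probability of losing all of them at once is p ** sqrt(N).
for n in range(1, 7):
    yearly_risk = p ** math.sqrt(n)
    survival = (1 - yearly_risk) ** years
    print(f"N = {n}: P(surviving {years:,} years) = {survival:.3f}")
```

Under this reading the survival probability over 100,000 years is roughly 0.12 at N = 3, 0.67 at N = 4, and 0.91 at N = 5.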
† (1 - [0.19 probability of extinction by the end of the century])**(1/[92 years of the century left at the time of the prediction]) ≈ 0.9977 probability of survival per year
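And the footnote’s conversion, for completeness (inputs as given there: a 19% chance of extinction by the end of the century, with 92 years remaining):

```python
# Constant per-year survival probability implied by a 19% chance of
# extinction over the remaining 92 years of the century.
survival_per_year = (1 - 0.19) ** (1 / 92)
print(f"{survival_per_year:.4f}")  # ~0.9977, i.e. ~0.23% extinction risk per year
```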
This isn’t necessarily an argument that increasing flourishing isn’t the better option; just that the above is an example of a robust long-term-x-risk-affecting strategy that doesn’t seem much like increasing flourishing.