Thanks for the extensive reply! Thoughts in order:
“I would also note that #3 could be much worse than #2 if #3 entails spreading wild animal suffering.”
I think this is fair, though if we’re not fixing that issue then it seems problematic for any pro-longtermism view, since it implies the ideal outcome is probably destroying the biosphere. Fwiw I also find it hard to imagine humans populating the universe with anything resembling ‘wild animals’, given the level of control we’d have in such scenarios and our incentives to exert it. That’s not to say we couldn’t wind up with something much worse, though (planetwide factory farms, or some digital fear-driven economy adjacent to Hanson’s Age of Em).
“I’m having a hard time wrapping my head around what the ‘1 unit of extinction’ equation is supposed to represent.”
It’s whatever the cost, in expected future value, of extinction today would be. The cost can be negative if wild-animal suffering proliferates, and some trajectory changes could have a negative cost of more than 1 UoE if they make the potential future more than twice as good, and vice versa (a positive cost of more than 1 UoE if they flip the expected value of the future from positive to negative).
But in most cases I think its use is to describe non-extinction catastrophes as having a cost C such that 0 < C < 1 UoE.
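To make the unit concrete, here’s a minimal sketch of how I’m using it, with entirely made-up expected-value numbers purely for illustration (none of these figures come from the post, and the sketch assumes the future is net-positive so that extinction is costly):

```python
# Illustrative only: expected-future-value figures on an arbitrary scale,
# assuming (for this sketch) that the future is net-positive, so extinction is costly.

ev_baseline = 100.0     # expected value of the future with no catastrophe
ev_extinction = 0.0     # expected value of the future given extinction today

one_uoe = ev_baseline - ev_extinction   # 1 "unit of extinction" = cost of extinction now
# (if wild-animal suffering made the future net-negative, this quantity would be negative)

def cost_in_uoe(ev_after_event: float) -> float:
    """Cost of an event, measured in units of extinction (UoE)."""
    return (ev_baseline - ev_after_event) / one_uoe

print(cost_in_uoe(40.0))    # a non-extinction catastrophe: 0.6 UoE, i.e. 0 < C < 1
print(cost_in_uoe(250.0))   # trajectory change making the future >2x as good: -1.5 UoE
print(cost_in_uoe(-30.0))   # trajectory change flipping the future to net-negative: +1.3 UoE
```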
“the parable of the apple tree is more about P(recovery) than it is about P(flourishing|recovery)”
Good point. I might write a v2 of this essay at some stage, and I’ll try and think of a way to fix that if so.
“Resources get used up, so getting back to a level of technology the 2nd time is harder than the 1st time.” ... ”A higher probability of catastrophe means there’s a higher chance that civilization keeps getting set back by catastrophes without ever expanding to the stars.”
I’m not sure I follow your confusion here, unless it’s a restatement of what you wrote in the previous bullet. The latter statement, if I understand it accurately, is closer to my primary thesis. The first statement could be true if:
a) Recovery is hard; or
b) Developing technology beyond ‘recovery’ is hard
I don’t have a strong view on a), except that it worries me that so many people who’ve looked into it think it could be very hard, yet x-riskers still seem to write it off as trivial on long timelines without much argument.
b) is roughly a subset of my thesis, though one could believe the main source of increased friction would come when society runs out of technological information from previous civilisations.
I’m not sure if I’m clearing anything up here...
“we might still have a greater expected loss of value from those catastrophes”—This seems unlikely to me, but I’d like to see some explicit modeling.
So would I, though modelling it sensibly is extremely hard. My previous sequence’s model was too simple to capture this question, despite probably being too complicated for what most people would consider practical use. To answer the question of comparative value loss, you need to look at at least the following (a rough toy sketch of how a few of these fit together follows the list):
Risk per year of non-AI catastrophes of various magnitudes
Difficulty of recovery from other catastrophes
Difficulty of flourishing given recovery from other catastrophes
Risk per year of AI catastrophes of various magnitudes
Effect of AI-catastrophe risk reduction on other catastrophes. E.g. does benign AI basically lock in a secure future, or would we retain the capacity and willingness to launch powerful weapons at each other?
How likely is it that the AI outcome is largely predetermined, such that developing benign AI once would be strong evidence that, if society subsequently collapsed and developed it again, it would be benign again?
The long-term nature of AI catastrophic risk. Is it a one-and-done problem if it goes well? Or does making a non-omnicidal AI just give us some breathing space until we create its successor, at which point we have to solve the problem all over again?
Effect of other-catastrophe risk reduction on AI catastrophe. E.g. does reducing global nuclear arsenals meaningfully reduce the risk that AI goes horribly wrong by accident? Or do we think most of the threat is from AI that deliberately plans our destruction, and is smart enough not to need existing weaponry?
The long-term moral status of AI. Is a world where it replaces us as good as or better, on reasonable value systems, than a world where we stick around?
Expected changes to human-descendant values given flourishing after other catastrophes
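As the toy sketch promised above, here is one very rough way a few of these pieces could be combined. Every probability is a placeholder I’ve invented for illustration, not an output of my old model, and the sketch only uses the first few considerations while ignoring all the interaction effects further down the list:

```python
# Toy comparison of expected value loss (in UoE) from AI vs non-AI catastrophes
# over some fixed period. Every number below is an invented placeholder.

p_ai_extinction = 0.05            # chance an AI catastrophe is terminal (placeholder)
p_other_collapse = 0.25           # chance of a non-AI collapse (placeholder)
p_recovery = 0.8                  # chance of recovery from such a collapse
p_flourish_given_recovery = 0.7   # chance a recovered civilisation flourishes
p_flourish_baseline = 0.8         # chance of flourishing with no collapse at all

# Treat "flourishing" as the only valuable outcome, so 1 UoE = the baseline
# chance of flourishing. A collapse's cost is the drop in that chance, normalised.
loss_other = p_other_collapse * (
    p_flourish_baseline - p_recovery * p_flourish_given_recovery
) / p_flourish_baseline

loss_ai = p_ai_extinction * 1.0   # extinction costs exactly 1 UoE by definition

print(f"non-AI expected loss: {loss_other:.3f} UoE")   # 0.075
print(f"AI expected loss:     {loss_ai:.3f} UoE")      # 0.050
```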
My old model didn’t have much to say about anything beyond the first three of these considerations.
Though if we return to the much simpler model and handwave a bit: if we suppose that annual non-extinction catastrophic risk is between 1 and 2%, then the cumulative risk over 10-20 years is roughly between 20 and 35%. If we also suppose that the chance of flourishing after collapse drops by 10% or more, that puts it in the realm of ‘a substantially bigger threat than the more conservative AI x-riskers consider AI to be, but substantially smaller than the most pessimistic views of AI x-risk’ (a rough sketch of the arithmetic is below).
It could be somewhat more important if the chances of flourishing after collapse drop by substantially more than that (as I think they do), and much more important if we could reduce catastrophic risk in ways that persist beyond the 10-20-year period (e.g. by moving towards stable global governance, or at least substantially reducing nuclear arsenals).
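Writing that handwave out explicitly (the sketch referred to above): the 1-2% annual risks and the 10% flourishing drop are the figures from the paragraph, while the 20-year horizon, the independence assumption, and treating the baseline chance of flourishing as roughly 1 are my own simplifications:

```python
# Back-of-envelope: cumulative non-extinction catastrophic risk and expected loss in UoE.

def cumulative_risk(annual_risk: float, years: int) -> float:
    """Chance of at least one catastrophe over the horizon, assuming independent years."""
    return 1 - (1 - annual_risk) ** years

flourish_drop = 0.10   # assumed fall in P(flourishing) given a collapse (per the paragraph)

for annual in (0.01, 0.02):
    p_collapse = cumulative_risk(annual, 20)
    # Expected loss in UoE, treating baseline P(flourishing) as ~1 for simplicity.
    expected_loss = p_collapse * flourish_drop
    print(f"annual {annual:.0%}: 20-year risk {p_collapse:.0%}, "
          f"expected loss ~{expected_loss:.3f} UoE")
```

On those assumptions the expected loss comes out to a couple of percent of a unit of extinction, which is the rough quantity I have in mind when comparing against the more conservative and more pessimistic AI x-risk views above.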