It is an interesting idea. If I remember correctly, something slightly similar was explored in the context of GCR by Seth Baum. The case there: if the recovery time after a global catastrophic event is relatively short (compared to the timescale of background extinction risk), a violent civilization destroying itself before it reaches technology that would allow it to drive the whole species extinct may be a better outcome.
As a quick guess, I’d predict that careful analysis of this style of consideration will lead to more emphasis on risks from AI and less on nuclear. Within AI safety, it would likely lead to less emphasis on approaches which run the risk of creating “almost aligned” AI—the adjustments would be somewhat similar to what the negative utilitarian camp advocates, just much less extreme.
I’m slightly skeptical that studying this consideration in more depth than a few days is effective on the margin. The reason is that such a study will run into questions like “should we expect a society recovering after near-extinction from a bio-attack to be in a better or worse position to develop grand futures?”, or into attempts to estimate what a future in which AI risks were only narrowly avoided could look like.