Regarding the section on estimating the probability of AI extinction, I think a useful framing is to focus on disjunctive scenarios rather than conjunctive ones. If we imagine a highly detailed scenario in which a single artificial intelligence goes rogue, then of course that specific story will seem unlikely, since every added detail lowers its probability.
However, my guess is that AI will gradually become more capable and integrated into the world economy, and there won’t be a discrete point at which we can say “now the AI was invented.” Over the broad course of history, we have witnessed numerous instances of populations displacing other populations, e.g. species displacements in ecosystems, and human populations displacing other human populations. If we think of AI as displacing humanity from its seat of power in this abstract way, then an AI takeover no longer seems implausible, and indeed I find it quite likely in the long run.