Thanks for writing this—even though I’ve been familiar with AI x-risk for a while, it didn’t really hit me on an emotional level that dying from misaligned AI would happen to me too, and not just “humanity” in the abstract. This post changed that.
Might eventually be useful to have one of these that accounts for biorisk too, although biorisk "timelines" aren't as straightforward to estimate as the date humanity builds the first AGI.