I think it’s worth bringing in the idea of an “endgame” here, defined as “a state in which existential risk from AI is negligible either indefinitely or for long enough that humanity can carefully plan its future”.
Some waypoints are endgames, some aren't, and some may be treated as an endgame by one strategy but not by another.