I think 2 and 3 are the wrong way to think about the question. Was humankind “motivated to conquer” the dodo? Or did we just have a better use for its habitat, and its extinction was just a whoopsie in the process?
When I say “motivated to”, I don’t mean that it would be its primary motivation. I mean that it has motivations that, at some point, would lead to it having “perform actions that would kill all of humanity” as a sub-goal. And in order to get to the point where we were dodos to it, it would have to disempower humanity somehow.
Would you prefer the following restatement, with each step conditional on the previous one:
1. At least one AGI is built in our lifetimes.
2. At least one of these AGIs has motivations that include “disempower humanity” as a sub-goal.
3. At least one of these disempowerment attempts is successful.
4. And then either:
4a: The process of disempowering humanity involves wiping out all of humanity, or
4b: After successfully disempowering humanity with some of humanity still intact, the AI ends up wiping out the rest of humanity anyway.