It would be useful to have a term along the lines of “outcome lock-in” to describe situations where the future is out of human hands.
That said, this is more of a spectrum than a dichotomy. As we outsource more decisions to AI, outcomes become more locked in and, as you note, we may never completely eliminate the human in the loop.
Nonetheless, this seems like a useful concept for thinking about what the future might look like.
I think the word “lock-in” can be confusing here. I usually think of “lock-in” as worrying about a future where things stop improving, or a particular value system or set of goals gets permanent supremacy. If this is what we mean, then I don’t think “the future is out of human hands” is a sufficient condition for lock-in, because the future could continue to be dynamic or uncertain, or keep getting better or worse, with AIs facing new and unique challenges and rising to them or failing to rise to them. Whatever story humans have set in motion is “locked in” in the sense that we can no longer influence it, but not in the sense that a stable state of affairs will necessarily persist for those who exist in it. Maybe it’s clearer to think of humans being “locked out” here, while AIs continue to have influence.
I think there’s maybe a useful distinction to make between future-out-of-human-hands (what this post was about, where human incompetence no longer matters) and future-out-of-human-control (where humans can no longer in any meaningful sense choose what happens).