I think the word “lock-in” can be confusing here. I usually think of “lock-in” as worrying about a future where things stop improving, or where a particular value system or set of goals gains permanent supremacy. If this is what we mean, then I don’t think “the future is out of human hands” is sufficient for lock-in, because the future could continue to be dynamic, uncertain, or getting better or worse, with AIs facing new and unique challenges and rising to them or failing to rise to them. Whatever story humans have set in motion is “locked in” in the sense that we can no longer influence it, but not in the sense that a stable state of affairs will necessarily persist for those who exist in it. Maybe it’s clearer to think of humans being “locked out” here, while AIs continue to have influence.