I mean, there’s an extremely narrow range of final goals for which flesh-and-blood humans are physically optimal infrastructure. Human arms can carry materials, human brains can solve problems, etc.; but if something is keeping us around just for that purpose, and not out of any concern for our welfare, then we’ll inevitably be phased out.
(And in practice, I don’t think AI will ever sit at a capability level where it’s strong enough to take control, but not strong enough to benefit, in expectation, from phasing humans out.)
I think the right takeaway is very clearly “don’t build AGI that has no concern for human welfare”, not “try to be like gut bacteria (or talking ants) to a misaligned AGI”.
>extremely narrow range of final goals for which flesh-and-blood humans are physically optimal
Not so fast there. Currently, AI can’t do anything without depending on humans. I have yet to hear an explanation of how the AI rids itself of this dependence.
Or maybe both analogies are correct? Then the question is how we can be like gut bacteria to the AI rather than like ants.
Or maybe analogies just add more confusion and we should go back to first principles xd