To steelman the other side, I would point to 16th-century New World encounters between Europeans and Native Americans. This seems like a case where the Europeans’ technological advantage made conquest more attractive than comparative-advantage trade.
Their high productivity made it easy for them to accumulate wealth lawfully (e.g., buying large tracts of land for small quantities of manufactured goods), yet they still often chose to take land by conquest rather than trade.
Maybe transaction frictions were higher there than they would be with AIs, since we’d share a language and could use AI tools to communicate.
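(For concreteness, here is a toy sketch of the comparative-advantage logic being appealed to above. The production numbers are invented for illustration; the point is just that even a party with an absolute advantage in everything gains from trade, which is what makes the historical choice of conquest over trade notable.)

```python
# Toy comparative-advantage arithmetic (illustrative numbers only).
# Party A (high-tech) and party B (low-tech) each produce two goods.
# Labor cost in hours per unit:
#            food   tools
#   A:        1       1
#   B:        4      12

a_cost = {"food": 1, "tools": 1}
b_cost = {"food": 4, "tools": 12}

# Opportunity cost of one unit of food, measured in tools forgone.
a_opp_food = a_cost["food"] / a_cost["tools"]   # A forgoes 1.0 tool per food
b_opp_food = b_cost["food"] / b_cost["tools"]   # B forgoes ~0.33 tools per food

# A has an absolute advantage in both goods, but B's opportunity cost of
# food is lower, so B has the comparative advantage in food.
assert b_opp_food < a_opp_food

# Any terms of trade strictly between the two opportunity costs leave
# BOTH parties better off than producing everything themselves.
print(f"Mutually beneficial price range: ({b_opp_food:.2f}, {a_opp_food:.2f}) "
      f"tools per unit of food")
```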
But what makes you think this can be a long-term solution if the needs and capabilities of the involved parties diverge strongly, as in human-vs-AI scenarios?
I agree that trading can probably work for a few years, maybe decades, but if the AIs want something different from us in the long run, what would stop them from getting it?
I don’t see a way around value alignment in the strict sense (ironically, this could also involve AIs aligning our values to theirs, much as we have aligned dogs to ours).
I agree with this and I’m glad you wrote it.