Iirc, one problem is that there are multiple ways to trade in positive-sum ways, but they don’t mix. So to agree on something, you first have to agree on the method you want to use to agree, and some parties may be at an advantage under one method while others are at an advantage under another.
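To make that concrete, here’s a minimal sketch (my own toy example with made-up utility functions, not something from the literature I’m referencing) of two standard bargaining rules picking different agreements for the same positive-sum situation, so the choice of rule is itself something to fight over:

```python
# Toy example: split a pie of size 1. Player A's utility is linear, player B's
# is concave (sqrt), and the disagreement point is (0, 0) for both.
# Two standard bargaining solutions then recommend different splits.

import math

def utilities(x: float) -> tuple[float, float]:
    """Utilities (u_A, u_B) when A receives a share x of the pie."""
    return x, math.sqrt(1.0 - x)

grid = [i / 10000 for i in range(10001)]

# Nash bargaining solution: maximize the product of gains over disagreement.
nash_x = max(grid, key=lambda x: utilities(x)[0] * utilities(x)[1])

# Kalai-Smorodinsky solution: equalize each player's gain relative to the best
# it could possibly get (both ideal utilities are 1 here), i.e. find the
# efficient point where u_A = u_B.
ks_x = min(grid, key=lambda x: abs(utilities(x)[0] - utilities(x)[1]))

print(f"Nash solution:              A gets {nash_x:.3f} of the pie")
print(f"Kalai-Smorodinsky solution: A gets {ks_x:.3f} of the pie")
# A prefers the Nash rule, B prefers the Kalai-Smorodinsky rule -- so even with
# a guaranteed positive-sum deal, the choice of method is itself contested.
```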
More empirically, there have been plenty of situations in which groups of smart humans, after long deliberation, have made bad decisions because they thought another party was bluffing, because they thought they could get away with a bluff, because their intended bluff got out of control, and so on.
Tobias Baumann has thought a bit about whether perfectly rational, all-knowing superintelligences might still fail to realize certain gains from trade. I don’t think he arrived at a strong conclusion even in that idealized case. (Idealized models of AIs don’t ring true to me and are at best helpful for establishing hypothetical limits of sorts, I think.) But in practice even superintelligences will have some uncertainty over whether another is lying, is concealing something, might not actually have something it claims to have, etc. Such imperfect knowledge of each other has historically led to a lot of unnecessary bloodshed.
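Here’s a quick, hedged Monte Carlo sketch of that failure mode (my own illustration with made-up parameters, and only a one-sided best response rather than a full equilibrium analysis): when strength is private information and weak parties sometimes bluff, the defender’s best response to its *beliefs* can be to call every big demand, which means every genuinely strong challenger ends in costly conflict.

```python
# Toy model: a challenger's strength is private. Weak challengers bluff with
# some probability; the defender only sees the demand and best-responds to its
# posterior beliefs. All numbers are made up for illustration.

import random

random.seed(0)

P_STRONG = 0.3     # prior probability the challenger is actually strong
BLUFF_RATE = 0.5   # probability a weak challenger bluffs with a big demand
STATUS_QUO = 0.2   # defender's payoff if it concedes to the demand
KEEP = 0.8         # defender's payoff if a bluff is called and collapses
WAR_PAYOFF = -0.1  # defender's payoff if it resists a truly strong challenger

# Defender's posterior that a big demand comes from a strong challenger.
p_demand = P_STRONG + (1 - P_STRONG) * BLUFF_RATE
p_strong_given_demand = P_STRONG / p_demand

# Expected value of resisting vs. conceding, given that posterior.
ev_resist = (1 - p_strong_given_demand) * KEEP + p_strong_given_demand * WAR_PAYOFF
defender_resists = ev_resist > STATUS_QUO

wars = 0
trials = 100_000
for _ in range(trials):
    strong = random.random() < P_STRONG
    demands = strong or random.random() < BLUFF_RATE
    if demands and defender_resists and strong:
        wars += 1  # the demand was real, so calling it leads to conflict

print(f"Defender resists demands: {defender_resists} (EV {ev_resist:.2f} vs {STATUS_QUO})")
print(f"Costly conflict in {wars / trials:.1%} of interactions")
```

With these (arbitrary) numbers the defender rationally resists, and roughly 30% of interactions end in conflict, even though a peaceful split that both sides prefer always exists.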
Another source of problems is behavior in single-shot vs. iterated games. An AI might be forced into a situation where it has to allow a smaller s-risk to prevent a greater s-risk.
Folks at CLR have done a ton of research into all the various failure modes, and it’s not at all clear to me which constellations of attitudes minimize or maximize s-risk. I’ve been hypothesizing that the Tit-for-Tat-heavy European culture may (if learned by AIs) lead to fewer but worse suffering catastrophes, whereas the more “Pavlovian” (in the game-theory sense) cultures of South Korea or Australia (iirc?) may cause more but smaller catastrophes.
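To give a flavor of the game-theoretic contrast (my own quick toy simulation, not anything from CLR’s work, and not evidence for the cultural claim): in a *noisy* iterated prisoner’s dilemma, Pavlov (win-stay-lose-shift) pairs re-coordinate within a couple of rounds after a random error, while Tit-for-Tat pairs can echo a single error back and forth for a long time.

```python
# Self-play of Tit-for-Tat and Pavlov (win-stay-lose-shift) in a noisy iterated
# prisoner's dilemma. The noise rate and round count are arbitrary choices.

import random

C, D = "C", "D"
NOISE = 0.02      # probability a chosen move is flipped by mistake
ROUNDS = 100_000

def tit_for_tat(my_last: str, their_last: str) -> str:
    return their_last  # copy the opponent's previous move

def pavlov(my_last: str, their_last: str) -> str:
    # Win-stay-lose-shift: cooperate iff both played the same move last round.
    return C if my_last == their_last else D

def noisy(move: str) -> str:
    return (D if move == C else C) if random.random() < NOISE else move

def self_play(strategy) -> tuple[float, int]:
    a_last, b_last = C, C
    defect_rounds, streak, worst_streak = 0, 0, 0
    for _ in range(ROUNDS):
        a = noisy(strategy(a_last, b_last))
        b = noisy(strategy(b_last, a_last))
        if D in (a, b):
            defect_rounds += 1
            streak += 1
            worst_streak = max(worst_streak, streak)
        else:
            streak = 0
        a_last, b_last = a, b
    return defect_rounds / ROUNDS, worst_streak

random.seed(1)
for name, strat in [("Tit-for-Tat", tit_for_tat), ("Pavlov", pavlov)]:
    rate, worst = self_play(strat)
    print(f"{name:12s} rounds with defection: {rate:.1%}, longest bad streak: {worst}")
```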
But that cultural comparison is just speculation, as vague as it sounds. My takeaway is rather that any multipolar scenario will lead to tons of small and large bargaining failures, and some of those may involve extreme suffering on an unprecedented scale.
This article, for example, makes that case.