I think it’s closer to 2, and the clearer term to use is probably “superrational cooperator,” though I suppose that’s probably what’s meant by “superrationalist”? Unclear. “Superrational cooperator” is clearer about (1) knowing about superrationality and (2) wanting to reap the gains from trade from it. Condition 2 can fail because people use CDT, or because they have very local or easily satisfied values and don’t care about distant or additional stuff.
So just as in all the thought experiments where EDT gets richer than CDT, your own behavior is the only evidence you have about what others are likely to predict about you. The multiverse part probably smooths that out a bit: your own behavior gives you evidence about the fraction of agents in the multiverse that cooperate with you, and so about whether your gains from trade are increasing or decreasing.
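A toy sketch of that evidential bookkeeping in Python; all the credences and payoffs are made up for illustration, and names like `expected_gains` and `defection_bonus` are mine, not anyone’s actual model:

```python
# Toy EDT-style model: your own choice is evidence about the fraction of
# cooperators in the multiverse, which shifts the expected gains from trade.
# All numbers here are invented purely for illustration.

def expected_gains(my_action: str) -> float:
    """Expected gains from trade given my own action, EDT-style."""
    # Hypothetical credences: P(multiverse cooperation fraction | my action).
    # Cooperating is evidence that agents like me cooperate, so it shifts
    # probability mass toward high-cooperation worlds.
    if my_action == "cooperate":
        credence = {0.2: 0.1, 0.5: 0.3, 0.8: 0.6}  # fraction -> probability
    else:  # defect
        credence = {0.2: 0.6, 0.5: 0.3, 0.8: 0.1}

    gains_per_cooperator = 1.0  # stylized payoff per cooperating partner
    defection_bonus = 0.2 if my_action == "defect" else 0.0  # local gain

    return sum(p * frac * gains_per_cooperator
               for frac, p in credence.items()) + defection_bonus

for action in ("cooperate", "defect"):
    print(action, round(expected_gains(action), 3))
# Under these made-up numbers, cooperating wins (0.65 vs. 0.55) because it
# is evidence that you live in a high-cooperation multiverse.
```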
I think it would be “hard” to occupy that Goldilocks zone where you maximize the number of agents who wrongly believe that you’ll cooperate while you actually defect: you’d have to simultaneously believe that you’re the sort of agent that cooperates while actually defecting, which should itself be evidence that you’re wrong about which reference class you’re likely to be put in. There may be agents like that out there, but even so, they won’t have control over it. The way this will probably get factored in is that superrational cooperators will assign a slightly lower cooperation incidence to reference classes of agents that are empirically very likely to cooperate without being physically forced to: membership in such a class makes defection more profitable, right up to the point where it changes the assumptions others make about the class and so destroys the effect that enabled it in the first place. That could mean that for any reference class of agents who are able to defect, cooperation “densities” above 99% or so get rapidly less likely.
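To make that last claim concrete, here’s a minimal sketch of the self-limiting dynamic. The functional form (defection rate growing with the trust placed in the class) and the temptation constants are entirely made up; this is only meant to illustrate why the stable density stays strictly below 100%:

```python
# Toy model of the self-limiting effect: the more a reference class is
# trusted to cooperate, the more profitable covert defection becomes,
# which caps the stable cooperation density below 100%.

def equilibrium_density(temptation: float, steps: int = 200) -> float:
    """Damped fixed-point iteration for the stable cooperation density.

    Assumed model: defection_rate = temptation * d / (1 - d), and the
    density that emerges is d = 1 - defection_rate(d).
    """
    d = 0.5
    for _ in range(steps):
        rate = temptation * d / max(1.0 - d, 1e-9)
        d = 0.5 * d + 0.5 * max(0.0, 1.0 - rate)  # damping avoids oscillation
    return d

for temptation in (0.05, 0.01, 0.001):
    print(f"temptation={temptation}: density≈{equilibrium_density(temptation):.3f}")
# Lower temptation pushes the equilibrium toward 1 (0.800, 0.905, 0.969 here),
# but any nonzero temptation keeps the stable density strictly below 100%.
```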
But really, I think, the winning strategy for anyone at all interested in distant gains from trade is to be a very simple, clear kind of superrational cooperator, because that maximizes the chance that others will cooperate with that sort of agent. All that “trying to be clever” (and “being the sort of agent that tries to be clever”) probably costs so many gains from trade right away that you’d have to value the distant gains from trade very low relative to your local stuff for it to make economic sense, and at that point you can probably forget about the gains from trade anyway, because others will predict that too. I think David Althaus and Johannes Treutlein have thought about this from the perspective of different value systems, but I don’t know of any published artifacts from that.
We can have a chat some time, gladly! But it’s been a while since I’ve done all this, so I’m a bit slow. ^.^′