1. Simulations are not the most efficient way for A and B to reach an agreement. Rather, writing out arguments or formal proofs about each other is much more computationally efficient, because nested arguments naturally avoid stack overflows in a way that nested simulations do not. In short, each of A and B can write out an argument about the other that self-validates without infinite recursion. There are several ways to do this, such as using Löb’s Theorem-like constructions (as in this 2019 JSL paper), or, even more simply and efficiently, using Payor’s Lemma (as in this 2023 LessWrong post).
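For anyone who wants the shape of that second route, here is a sketch of Payor’s Lemma and its short proof in provability-logic notation; the cooperation application afterwards is my paraphrase of one version of the construction, not necessarily the post’s exact formulation:

```latex
% Payor's Lemma: if  ⊢ □(□x → x) → x,  then  ⊢ x.
% Proof sketch: uses only necessitation and the distribution axiom (K),
% with no appeal to Löb's Theorem itself.
\begin{align*}
1.\;& \vdash x \to (\Box x \to x)                  && \text{propositional tautology} \\
2.\;& \vdash \Box\bigl(x \to (\Box x \to x)\bigr)  && \text{necessitation on 1} \\
3.\;& \vdash \Box x \to \Box(\Box x \to x)         && \text{distribution (K) on 2} \\
4.\;& \vdash \Box x \to x                          && \text{3 chained with the hypothesis} \\
5.\;& \vdash \Box(\Box x \to x)                    && \text{necessitation on 4} \\
6.\;& \vdash x                                     && \text{hypothesis applied to 5}
\end{align*}
```

If each agent’s criterion is, roughly, “cooperate when $\Box(\Box x \to x)$ holds”, where $x$ stands for “both cooperate”, then $\vdash x \leftrightarrow \Box(\Box x \to x)$, the hypothesis of the lemma is met, and $\vdash x$ follows: a constant-depth argument where nested simulation would regress.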
Very interesting post, thanks for writing this!
I’m wondering to what extent this is exactly the same as Evidential Cooperation in Large Worlds, where you don’t need simulations because you cooperate only with the agents that are decision-entangled with you (i.e., those you can prove will cooperate if you cooperate). While not needing simulation is an advantage, the big limitation of Evidential Cooperation in Large Worlds is that the set of agents you can cooperate with is fairly small (since they need to be decision-entangled with you).
The whole point of nesting simulations (and of classic acausal trade) is to create some form of artificial/“indirect” decision-entanglement with agents who would otherwise not be entangled with you, by creating a channel of “communication” that lets each player see what the other is actually playing, so that something like a tit-for-tat strategy becomes possible. Without those simulations, you’re limited to the agents you can prove will necessarily cooperate if you cooperate (with no way to verify/coordinate via mutual simulation). (Although one might argue that you can hardly simulate agents you can’t prove anything about, or that aren’t at least close to decision-entangled with you, anyway.)
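To make the “mutual simulation as a communication channel” picture concrete, here is a toy sketch (all names and the depth cutoff are my own illustrative choices, nothing from the post): naive mutual simulation regresses forever, and a recursion budget is the crude patch that the proof-based handshake above renders unnecessary.

```python
# Toy model: two bots that decide by simulating each other.
import sys

def naive_bot(opponent):
    # Decide by simulating the opponent deciding about us.
    # naive_bot(naive_bot) recurses forever: A simulates B simulating A...
    return "C" if opponent(naive_bot) == "C" else "D"

def budgeted_bot(opponent, depth=3):
    # Crude stand-in for grounded reasoning: stop simulating at some
    # arbitrary depth and fall back to a default action.
    if depth == 0:
        return "C"  # optimistic default at the recursion floor
    return "C" if opponent(budgeted_bot, depth - 1) == "C" else "D"

if __name__ == "__main__":
    sys.setrecursionlimit(100)  # keep the inevitable blowup small
    try:
        naive_bot(naive_bot)
    except RecursionError:
        print("naive mutual simulation: RecursionError (infinite regress)")
    print("budgeted mutual simulation:", budgeted_bot(budgeted_bot))
```

The depth cutoff “works” here only because both bots happen to default to cooperation at the floor; the Löb/Payor approach replaces that arbitrary choice with a self-validating argument.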
So is your idea basically Evidential Cooperation in Large Worlds explained in another way, or is it something in between that and classic acausal trade?