Ok, so I thought about this more and want to double down on my Objection 1:
Consider the following three scenarios for clarity:
Scenario 1: Two identical, self-interested agents play a prisoner’s dilemma in their respective rooms, light years apart. These two agents are straight out of an Econ 101 lecture, and they know they are identical and self-interested. Okie dokie. So we get the “usual” defect/defect single-shot result. Note that we can make these agents identical down to the last molecule and quantum effect, but it doesn’t matter. I think we all accept that we get the defect/defect result.
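To make the Econ 101 logic concrete, here’s a minimal sketch (the payoff numbers are my own illustrative choice; only their ordering matters) showing that defection strictly dominates for a self-interested agent, whatever the other player does:

```python
# Illustrative prisoner's dilemma payoffs for the row player.
# The specific numbers are assumptions; only the ordering matters:
# temptation > reward > punishment > sucker.
PAYOFF = {
    ("C", "C"): 3,  # reward for mutual cooperation
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation to defect
    ("D", "D"): 1,  # punishment for mutual defection
}

def best_response(opponent_action):
    """A self-interested (CDT-style) agent: maximize own payoff,
    holding the opponent's action fixed."""
    return max("CD", key=lambda a: PAYOFF[(a, opponent_action)])

# Defect is the best response to either opponent action...
assert best_response("C") == "D"
assert best_response("D") == "D"
# ...so two identical such agents both defect.
```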
Scenario 2: We have your process, or Omega, create two identical agents: molecularly identical, identical down to quantum effects, etc. Again, they know they are identical and self-interested. Now, again, they play the game in their respective rooms, light years apart. Once I point out that nothing has changed from Scenario 1, I think you would agree we get the defect/defect result.
Scenario 3: We have your process, or Omega, create one primary agent, and then create a puppet or slave of this primary agent that will do exactly what the primary agent does (and we put them in the two rooms with whiteboards, etc.). Now, it’s going to seem counterintuitive how this puppeting works across the light years, with no causation or information passing between the agents. What’s going on is that, just as in Newcomb’s boxing thingy, Omega is exercising extraordinary agency or foresight, probably over both agent and copy: e.g. it has foreseen what the primary agent will do and imposes that on the puppet.
Ok. Now, in Scenario 3, your story about getting the cooperate result does work, because there is true mirroring, and the primary agent can trust that the puppet will do exactly as they do.
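A sketch of the Scenario 3 logic (payoff numbers again my own illustrative choice): because the puppet’s action is, by stipulation, whatever the primary agent’s action is, the primary agent is effectively choosing the joint outcome, and cooperation wins:

```python
# Same illustrative prisoner's dilemma payoffs (assumed numbers).
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def primary_choice():
    """Scenario 3: the puppet copies the primary agent's action,
    so the primary agent only ever faces the diagonal outcomes
    (C, C) or (D, D), never the off-diagonal ones."""
    return max("CD", key=lambda a: PAYOFF[(a, a)])

# Mutual cooperation (3) beats mutual defection (1).
assert primary_choice() == "C"
```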
However, I think the story in your post merely creates Scenario 2, and the mirroring doesn’t go through. There is no puppeting effect, no Omega effect; that effect is what gives Scenario 3 its bite. Without it, the puppeting doesn’t go through, because Scenario 2 is just Scenario 1 again.
Another way of seeing this: imagine the agent in your post’s story doing something horrific, almost unthinkable, like committing genocide or stroking a cat backwards. The fact that both the agent and the copy are able to do the horrific act, and the fact that they would mirror each other, is not adequate for the act to actually happen. Both agents still need to do it, to choose it.
You get your result by rounding this off. You point out how tempting cooperation looks, which is indeed true, and indeed human subjects would probably cooperate in this situation. But that’s not causality or control.
As a side note, I think this “Omega effect”, this control or agency, is the root of the Newcomb’s box paradox thing. Basically, CDTers refuse the idea that they are in Omega’s inner loop, in Omega’s mind’s eye, as they eye the $1,000 box, and they think they can grab both boxes without consequence. But this rejects the premise of the whole story and doesn’t take Omega’s agency seriously (which is indeed extraordinary and maybe very hard to imagine). This makes Newcomb’s paradox really uninteresting.
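To sketch why the premise matters, here are the standard Newcomb payoffs ($1,000,000 in the opaque box iff Omega predicted one-boxing; $1,000 always in the transparent box):

```python
# Standard Newcomb payoffs: the opaque box holds $1,000,000 iff
# Omega predicted one-boxing; the transparent box always holds $1,000.
def payout(choice, prediction):
    opaque = 1_000_000 if prediction == "one-box" else 0
    transparent = 1_000
    return opaque + (transparent if choice == "two-box" else 0)

# Taking Omega's foresight seriously: the prediction matches the choice.
assert payout("one-box", "one-box") == 1_000_000
assert payout("two-box", "two-box") == 1_000
# The CDT move: treat the prediction as already fixed and grab both boxes,
# which only pays if Omega can be wrong.
assert payout("two-box", "one-box") == 1_001_000
```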
Also, I read all this Newcomb stuff over the last 24 hours, so I might be wrong.