I’ve explored very similar ideas before, for instance in this simulation based on the Iterated Prisoner’s Dilemma but with Death, Asymmetric Power, and Aggressor Reputation. Long story short, the cooperative strategies do generally outlast the aggressive ones in the long run. It’s also an idea I’ve tried to discuss before (albeit less rigorously) in The Alpha Omega Theorem and Superrational Signalling. The first of those was from 2017 and got downvoted to oblivion, while the second was probably too long-winded and got mostly ignored.
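To give a concrete flavour of the kind of simulation I mean, here is a minimal sketch in Python. The specific payoffs, upkeep cost, power bonus, and reputation threshold are all illustrative assumptions, not the parameters of the original simulation: agents play Prisoner’s Dilemma rounds against random partners, accumulate payoff as “power”, die when their power runs out, and build a public aggressor reputation that reciprocators retaliate against.

```python
# Illustrative sketch only -- not the original simulation's code or parameters.
import random

COOPERATE, DEFECT = "C", "D"
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

class Agent:
    def __init__(self, name, strategy, power):
        self.name, self.strategy, self.power = name, strategy, power
        self.aggressions = 0  # publicly visible count of unprovoked defections

    def alive(self):
        return self.power > 0

def reciprocator(me, other):
    """Cooperate by default, but defect against agents with an aggressor reputation."""
    return DEFECT if other.aggressions > 2 else COOPERATE

def aggressor(me, other):
    """Defect unconditionally."""
    return DEFECT

def play_round(a, b):
    move_a, move_b = a.strategy(a, b), b.strategy(b, a)
    pay_a, pay_b = PAYOFF[(move_a, move_b)]
    # Asymmetric power: a stronger defector extracts a little extra from a weaker opponent.
    if move_a == DEFECT and a.power > b.power:
        pay_a += 1
    if move_b == DEFECT and b.power > a.power:
        pay_b += 1
    # Death: everyone pays an upkeep cost each round, and power <= 0 removes you from play.
    a.power += pay_a - 2
    b.power += pay_b - 2
    # Aggressor reputation: defecting against a cooperator is publicly recorded.
    if move_a == DEFECT and move_b == COOPERATE:
        a.aggressions += 1
    if move_b == DEFECT and move_a == COOPERATE:
        b.aggressions += 1

def run(rounds=2000, seed=0):
    random.seed(seed)
    agents = ([Agent(f"coop{i}", reciprocator, power=random.randint(5, 20)) for i in range(10)] +
              [Agent(f"aggr{i}", aggressor, power=random.randint(5, 30)) for i in range(10)])
    for _ in range(rounds):
        survivors = [a for a in agents if a.alive()]
        if len(survivors) < 2:
            break
        play_round(*random.sample(survivors, 2))
    return [a for a in agents if a.alive()]

if __name__ == "__main__":
    for agent in sorted(run(), key=lambda a: a.power, reverse=True):
        print(f"{agent.name:8s} power={agent.power:4d} aggressions={agent.aggressions}")
```

Whether the cooperators actually end up dominating depends heavily on these arbitrary parameters; the sketch is only meant to show how death, asymmetric power, and aggressor reputation can fit together in one loop.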
A number of other people, such as James Miller, A.V. Turchin, and Ryo, have had similar ideas that can broadly be categorized under Bostrom’s concept of Anthropic Capture, or Game Theoretic Alignment, or possibly as a subset of Agent Foundations. These ideas are mostly not taken very seriously by the broader LW and EA communities, so I’d be prepared for a similar reception.