I agree that altruistic sentiments are a confounder in the prisoner’s dilemma. Yudkowsky (who would cooperate against a copy) makes a similar point in The True Prisoner’s Dilemma, and there are many psychology studies showing that humans cooperate with each other in the PD even in cases where I think each of them individually shouldn’t. (Cf. section 6.4 of the MSR paper.)
But I don’t think that altruistic sentiments are the primary reason why some philosophers and other sophisticated people tend to favor cooperation in the prisoner’s dilemma against a copy. As you may know, Newcomb’s problem is decision-theoretically similar to the PD against a copy. In contrast to the PD, however, it doesn’t seem to evoke any altruistic sentiments. And yet, many people prefer EDT’s recommendations in Newcomb’s problem. Thus, the “altruism error theory” of cooperation in the PD is not particularly convincing.
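To make the decision-theoretic point concrete, here is a minimal sketch of the expected-utility calculations in Newcomb’s problem. The payoffs and predictor accuracy are my own illustrative assumptions, not from the discussion above; the point is only that EDT and CDT come apart with no altruism anywhere in the setup.

```python
# Newcomb's problem with assumed payoffs: opaque box A contains $1,000,000
# iff the predictor foresaw one-boxing; transparent box B contains $1,000.
ACCURACY = 0.99          # assumed predictor accuracy (hypothetical value)
BOX_A = 1_000_000
BOX_B = 1_000

# EDT conditions on the chosen action as evidence about the prediction.
edt_one_box = ACCURACY * BOX_A
edt_two_box = (1 - ACCURACY) * BOX_A + BOX_B

# CDT treats the box contents as causally fixed: for any credence p that
# box A is already full, two-boxing adds BOX_B and so dominates.
p = 0.5                  # arbitrary credence that box A is full
cdt_one_box = p * BOX_A
cdt_two_box = p * BOX_A + BOX_B

print(edt_one_box > edt_two_box)   # EDT favors one-boxing
print(cdt_two_box > cdt_one_box)   # CDT favors two-boxing
```

Nothing in this calculation involves caring about anyone else’s payoff, which is why Newcomb’s problem isolates the decision-theoretic issue from the altruistic confounder.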
I don’t see much evidence in favor of the “wishful thinking” hypothesis. It, too, seems to fail in non-multiverse problems like Newcomb’s problem. Also, it’s easy to come up with lots of incorrect theories about how any particular view results from biased epistemics, so I have quite low credence in any such hypothesis that isn’t backed up by evidence.
“before I’m willing to throw out causality”
Of course, causal eliminativism (or skepticism) is one motivation for one-boxing in Newcomb’s problem, but subscribing to eliminativism is not necessary to do so.
For example, in Evidence, Decision and Causality, Arif Ahmed argues that causality is irrelevant for decision making. (The book starts with: “Causality is a pointless superstition. These days it would take more than one book to persuade anyone of that. This book focuses on the ‘pointless’ bit, not the ‘superstition’ bit. I take for granted that there are causal relations and ask what doing so is good for. More narrowly still, I ask whether causal belief plays a special role in decision.”) Alternatively, one could even accept the use of causal relationships for informing one’s decisions and still endorse one-boxing. See, e.g., Yudkowsky, 2010; Fisher, n.d.; Spohn, 2012; or this talk by Ilya Shpitser.
A few of the points made in this piece are similar to the points I make here: https://casparoesterheld.com/2017/06/25/complications-in-evaluating-neglectedness/
For example, the linked piece also argues that returns may diminish in a variety of different ways. In particular, it also argues that returns diminish more slowly if the problem is big, and that clustered-value problems only produce benefits once the whole problem is solved.