I agree that altruistic sentiments are a confounder in the prisoner’s dilemma. Yudkowsky (who would cooperate against a copy) makes a similar point in The True Prisoner’s Dilemma, and there are lots of psychology studies showing that humans cooperate with each other in the PD in cases where I think they (that is, each individually) shouldn’t. (Cf. section 6.4 of the MSR paper.)
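To make the "shouldn't" concrete, here is a minimal sketch (my own illustration, not from the comment; the payoff numbers are assumed, standard PD values) of why the PD against an exact copy differs from the ordinary PD:

```python
# (my_move, their_move) -> my payoff; standard PD ordering (assumed numbers)
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

# Against an independent opponent, defection dominates:
for their_move in ("C", "D"):
    assert PAYOFF[("D", their_move)] > PAYOFF[("C", their_move)]

# Against an exact copy, the copy's move necessarily mirrors yours,
# so only the diagonal outcomes are attainable:
assert PAYOFF[("C", "C")] > PAYOFF[("D", "D")]  # mutual cooperation wins
```

Altruistic sentiment is one route to cooperating here, but the diagonal argument goes through without any concern for the other player's payoff.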
But I don’t think that altruistic sentiments are the primary reason why some philosophers and other sophisticated people tend to favor cooperation in the prisoner’s dilemma against a copy. As you may know, Newcomb’s problem is decision-theoretically similar to the PD against a copy. In contrast to the PD, however, it doesn’t seem to evoke any altruistic sentiments. And yet, many people prefer EDT’s recommendations in Newcomb’s problem. Thus, the “altruism error theory” of cooperation in the PD is not particularly convincing.
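The structural similarity can be sketched as follows (my own illustration, not from the comment; the $1,000,000/$1,000 payoffs and the 0.99 predictor accuracy are assumed, standard textbook values):

```python
ACCURACY = 0.99          # assumed predictor accuracy
OPAQUE_FULL = 1_000_000  # opaque box contents if one-boxing was predicted
TRANSPARENT = 1_000      # transparent box, always present

def edt_value(action):
    """EDT conditions on the action: choosing it is evidence about the prediction."""
    if action == "one-box":
        return ACCURACY * OPAQUE_FULL
    return (1 - ACCURACY) * OPAQUE_FULL + TRANSPARENT  # two-box

def cdt_value(action, p_predicted_one_box):
    """CDT holds the prediction fixed: the action cannot cause the box contents."""
    base = p_predicted_one_box * OPAQUE_FULL
    return base if action == "one-box" else base + TRANSPARENT

# EDT favors one-boxing; CDT favors two-boxing for any fixed credence p.
print(edt_value("one-box"), edt_value("two-box"))
print(cdt_value("one-box", 0.5), cdt_value("two-box", 0.5))
```

Replacing "predictor" with "copy" and "one-box/two-box" with "cooperate/defect" gives the PD against a copy: in both cases, no altruism toward anyone is needed for the EDT-style recommendation.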
I don’t see much evidence in favor of the “wishful thinking” hypothesis. It, too, seems to fail in non-multiverse problems like Newcomb’s problem. Also, it’s easy to come up with lots of incorrect theories about how any particular view results from biased epistemics, so I have quite low credence in any such hypothesis that isn’t backed up by evidence.
> before I’m willing to throw out causality
Of course, causal eliminativism (or skepticism) is one motivation for one-boxing in Newcomb’s problem, but subscribing to eliminativism is not necessary for doing so.
For example, in Evidence, Decision and Causality, Arif Ahmed argues that causality is irrelevant for decision making. (The book starts with: “Causality is a pointless superstition. These days it would take more than one book to persuade anyone of that. This book focuses on the ‘pointless’ bit, not the ‘superstition’ bit. I take for granted that there are causal relations and ask what doing so is good for. More narrowly still, I ask whether causal belief plays a special role in decision.”) Alternatively, one could even grant that causal relationships should inform one’s decisions and still endorse one-boxing. See, e.g., Yudkowsky, 2010; Fisher, n.d.; Spohn, 2012; or this talk by Ilya Shpitser.
Probably you’re already aware of this, but the APA’s Goldwater rule seems relevant. Roughly, it states that psychiatrists should not offer a professional opinion about a public figure’s mental health unless they have conducted an examination of that person.
From the perspective of this article, this rule is problematic when applied to politicians and harmful traits. (This is similar to how the right to confidentiality has the Duty to Warn exception.) A quick Google Scholar search gives a couple of articles since 2016 that basically make this point. For example, see Lilienfeld et al. (2018): The Goldwater Rule: Perspectives From, and Implications for, Psychological Science.
Of course, the other important (more empirical than ethical) question regarding the Goldwater rule is whether “conducting an examination” is a necessary prerequisite for gaining insight into a person’s alleged pathology. Lilienfeld et al. also address this issue at length.