I know a lot of people through a shared interest in truth-seeking and epistemics. I also know a lot of people through a shared interest in trying to do good in the world.
I think I would have naively expected that the people who care less about the world would be better at having good epistemics. For example, people who care a lot about particular causes might end up getting really mindkilled by politics, or might end up strongly affiliated with groups that have false beliefs as part of their tribal identity.
But I don’t think that this prediction is true: I think that I see a weak positive correlation between how altruistic people are and how good their epistemics seem.
----
I think the main reason for this is that striving for accurate beliefs is unpleasant and unrewarding. In particular, having accurate beliefs involves doing things like trying actively to step outside the current frame you’re using, and looking for ways you might be wrong, and maintaining constant vigilance against disagreeing with people because they’re annoying and stupid.
Altruists often seem to me to do better than people who value epistemics terminally; I think this is because valuing epistemics instrumentally has some attractive properties compared to valuing it terminally. One reason it is better is that you're less likely to stop being rational when it stops being fun. For example, I find many animal rights activists very annoying, and if I didn't feel tied to them by virtue of our shared interest in the welfare of animals, I'd be tempted to sneer at them.
Another reason is that if you're an altruist, you find yourself interested in various subjects that aren't the ones you would have learned about for fun—you have fewer opportunities to only ever think in the ways you think by default. I think it might be healthy that altruists are forced by the world to learn subjects that are further from their predispositions.
----
I think it's indeed true that altruistic people sometimes end up mindkilled. But truth-seeking enthusiasts seem to get mindkilled at around the same rate. One major mechanism here is that truth-seekers often start to really hate opinions that they regularly hear bad arguments for, and they end up rationalizing their way into dumb contrarian takes.
I think it’s common for altruists to avoid saying unpopular true things because they don’t want to get in trouble; I think that this isn’t actually that bad for epistemics.
----
I think that EAs would have much worse epistemics if EA wasn’t pretty strongly tied to the rationalist community; I’d be pretty worried about weakening those ties. I think my claim here is that being altruistic seems to make you overall a bit better at using rationality techniques, instead of it making you substantially worse.
I tried searching the literature a bit, as I’m sure that there are studies on the relation between rationality and altruistic behavior. The most relevant paper I found (from about 20 minutes of search and reading) is The cognitive basis of social behavior (2015). It seems to agree with your hypothesis. From the abstract:
> Applying a dual-process framework to the study of social preferences, we show in two studies that individuals with a more reflective/deliberative cognitive style, as measured by scores on the Cognitive Reflection Test (CRT), are more likely to make choices consistent with “mild” altruism in simple non-strategic decisions. Such choices increase social welfare by increasing the other person’s payoff at very low or no cost for the individual. The choices of less reflective individuals (i.e. those who rely more heavily on intuition), on the other hand, are more likely to be associated with either egalitarian or spiteful motives. We also identify a negative link between reflection and choices characterized by “strong” altruism, but this result holds only in Study 2. Moreover, we provide evidence that the relationship between social preferences and CRT scores is not driven by general intelligence. We discuss how our results can reconcile some previous conflicting findings on the cognitive basis of social behavior.
Also relevant is This Review (2016) by Rand:
> Does cooperating require the inhibition of selfish urges? Or does “rational” self-interest constrain cooperative impulses? I investigated the role of intuition and deliberation in cooperation by meta-analyzing 67 studies in which cognitive-processing manipulations were applied to economic cooperation games. My meta-analysis was guided by the social heuristics hypothesis, which proposes that intuition favors behavior that typically maximizes payoffs, whereas deliberation favors behavior that maximizes one’s payoff in the current situation. Therefore, this theory predicts that deliberation will undermine pure cooperation (i.e., cooperation in settings where there are few future consequences for one’s actions, such that cooperating is not in one’s self-interest) but not strategic cooperation (i.e., cooperation in settings where cooperating can maximize one’s payoff). As predicted, the meta-analysis revealed 17.3% more pure cooperation when intuition was promoted over deliberation, but no significant difference in strategic cooperation between more intuitive and more deliberative conditions.
And This Paper (2016) on Belief in Altruism and Rationality claims that
> However, contra our predictions, cognitive reflection was not significantly negatively correlated with belief in altruism (r(285) = .04, p = .52, 95% CI [-.08, .15]).
Here, “belief in altruism” is a measure of how much people believe that others act out of care or compassion for others, as opposed to self-interest.
Note: I think that this might be a delicate subject in EA, and it might be useful to be more careful about alienating people. I definitely agree that better epistemics is very important to the EA community and to doing good generally, that the ties to the rationalist community probably played (and still play) a very important role, and in fact that it is sometimes useful to think of EA as rationality applied to altruism. However, many amazing altruistic people have a totally different view of what good epistemics would look like (never mind the question of whether they are right), and many people already involved in the EA community seem to have a negative view of (at least some aspects of) the rationality community, both of which call for a kinder and more appreciative conversation.
In this shortform post, the most obvious point where I think this becomes a problem is the example:

> For example, I find many animal rights activists very annoying, and if I didn’t feel tied to them by virtue of our shared interest in the welfare of animals, I’d be tempted to sneer at them.
This is supposed to be an example of a case where people stop behaving rationally because rationality stops being fun. You could instead have used any number of abstract or personal examples where people in their day-to-day work don't take the time to think something through, don't seek negative feedback, or don't update their actions when they notice they've updated their beliefs.