Beware frictions from altruistic value differences
I believe value differences pose some underappreciated challenges in large-scale altruistic efforts. My aim in this post is to outline what I see as the main such challenges, and to present a few psychological reasons why we should expect these challenges to be significant and difficult to overcome.[1]
To clarify, I am not making a case against value differences per se, much less a case against vigorous debate over values (I believe that such debate is healthy and desirable). Instead, my aim is to highlight some of the challenges and pitfalls associated with value differences, in the hope that we can better mitigate them. After all, value differences are sure to persist among people who are trying to help others, and hence a critical issue is how well — or how poorly — we handle these differences.
Examples of challenges posed by value differences among altruists
A key challenge posed by value differences, in my view, is that they can make us prone to tribal or otherwise antagonistic dynamics that are suboptimal by the lights of our own moral values. Such values-related frictions may in turn lead to the following pitfalls and failure modes:
Failing to achieve moral aims that are already widely shared, such as avoiding worst-case outcomes (cf. “Common ground for longtermists”).
Failing to make mutually beneficial moral trades and compromises when possible (in ways that do not introduce problematic behavior such as dishonesty or censorship).
Failing to update on arguments, whether empirical or values-related, because the arguments are made by those who, to our minds, seem to belong to the “other side”.[2]
Some people committing harmful acts out of spite or primitive tribal instincts. (The sections below give some sense as to why this might happen.)[3]
Of course, some of the failure modes listed above can have causes other than values- and coalition-related frictions. Yet such frictions, when poorly handled, are probably still a key risk factor for these failure modes.
Reasons to expect values-related frictions to be significant
The following are some reasons to expect values-related frictions to be both common and quite difficult to handle by default.
Harmful actions based on different moral beliefs may be judged more harshly than intentional harm
One set of relevant findings comes from a 2016 anthropological study that examined the moral judgments of people across ten different cultures, eight of them traditional small-scale societies (Barrett et al., 2016).
The study asked people how they would evaluate a harmful act in light of a range of potentially extenuating circumstances, such as different moral beliefs, a mistake of fact, or self-defense. While there was significant variation in people’s moral judgments across cultures, there was unanimous agreement that committing a harmful act based on different moral beliefs was not an extenuating circumstance. Indeed, on average across cultures, committing a harmful act based on different moral beliefs was considered worse than committing the harmful act intentionally (see Barrett et al., 2016, fig. 5). [Edit: The particular moral belief used in the study was that “striking a weak person to toughen him up is praiseworthy”.]
It is unclear whether this pattern in moral judgment necessarily applies to all or even most kinds of acts inspired by different moral beliefs. Yet these results still tentatively suggest that we may be inclined to see value differences as a uniquely aggravating factor in our moral judgments of people’s actions — as something that tends to inspire harsher judgments rather than understanding. (See also Delton et al., 2020.)
Hot cognition about values-related beliefs, alliances, and opponents
Another relevant finding is that our minds appear to reflexively process moral and political groups and issues in ways that are strongly emotionally charged — an instance of “hot cognition”. Specifically, we appear to affectively process our own groups and beliefs in a favorable light while similarly processing the “outgroup” and their beliefs in an unfavorable light. And what is striking about this affectively charged processing is that it appears to be swift and automatic, occurring prior to conscious thought, which suggests that we are mostly unaware that it happens (Lodge & Taber, 2005; see also Kunda, 1990; Haidt, 2001).
These findings give us reason to expect that our reflexive processing of those who hold different altruistic values will tend to be affectively charged in ways that we are not aware of, and in ways that are not so easily changed (cf. Lodge & Taber, 2005, p. 476).
Coalitional instincts
A related reason to expect values-driven tensions to be significant and difficult to avoid is that the human mind plausibly has strong coalitional instincts, i.e. instincts for carving the world into, and smoothly navigating among, competing coalitions (Tooby & Cosmides, 2010; Pinsof et al., 2023).[4]
As John Tooby notes, these instincts may dispose us to blindly flatter and protect our own groups while misrepresenting and attacking other groups and coalitions. He likewise suggests that our coalitional instincts may push our public discourse less toward substance and more toward displaying loyalty to our own groups (see also Hannon, 2021).
In general, it seems that “team victory” is a strong yet often hidden motive in human behavior. And these coalitional instincts and “team victory” motives arguably further highlight the psychological challenges posed by value differences, not least since value differences often serve as the defining features of contrasting coalitions.[5]
Concrete suggestions for mitigating the risks of values-related frictions
Below are a few suggestions for how one might address the challenges and risks associated with values-related frictions. More suggestions are welcome.[6]
Acknowledging good-faith intentions and attempts to help others
It seems helpful to remind ourselves that altruists whose values differ from our own are generally acting in good faith, trying to help others based on what they sincerely believe to be the best or most plausible views.
Keeping in mind shared goals and potential gains from compromise
Another helpful strategy may be to keep in mind the shared goals and important points of agreement that we have with our fellow altruists — e.g. an emphasis on impartiality, a focus on the welfare of sentient beings, and broad agreement on the importance of avoiding the very worst future outcomes.
Likewise, it may help to think of the positive-sum gains that people with different values can achieve by cooperating. After all, contrary to what our intuitions might suggest, it is quite conceivable that some of our greatest counterfactual gains lie in cooperation with agents who hold different values from our own — e.g. by steering clear of “fights” and instead collaborating to expand our Pareto frontier (cf. Hanson on “Expand vs Fight”). It would be tragic to lose out on such gains because we unwittingly navigate more by our coalitional instincts and identities than by impartial impact.
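To make such gains concrete, consider a stylized illustration (the numbers, causes, and setup are purely hypothetical): suppose altruist A values each unit of progress on cause X at 10 and is indifferent to cause Y, while B is the reverse. Suppose further that, owing to differing skills or opportunities, each is better placed to advance the other’s favored cause, producing three units there versus two units on their own. Working separately on their own causes, each secures 2 × 10 = 20 by their own lights; trading efforts, with A advancing Y while B advances X, each secures 3 × 10 = 30. Both come out ahead by their own values, without either having to abandon those values.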
Making an effort to become aware of, and to actively reduce, the tendency to engage in reflexive ingroup liking and promotion
We should expect to be prone to ingroup liking and ingroup promotion to a somewhat excessive degree (relative to what our impartial values would recommend). Given this, it may be helpful to become more aware of these reflexive tendencies, and to try to reduce them through deliberate “system-2” reasoning that is cautiously skeptical of our most immediate coalitional drives and intuitions, in effect adding a cooling element to our hot cognition.
Validating the difficulty of the situation
Finally, it may be helpful to take a step back and to validate how eminently understandable it is that strong reactions can emerge in the context of altruistic value differences.
After all, beyond the psychological reasons reviewed above, it is worth remembering that there is often a lot of identity on the line when value differences come up among altruists. Indeed, it is not only identity that is on the line, but also individual and collective priorities, plans, visions, and so on.
These are all quite foundational elements that touch virtually every level of our cognitive and emotional processing. So when all these elements become condensed into a single conversation with a person who appears to disagree with us on just about all of these consequential issues, while our minds are under the influence of a fair dose of coalition-driven hot cognition, it is no wonder that things start to feel tense and challenging.
Validating the full magnitude of this challenge might help lower the temperature, and in turn open the door to more fruitful engagements and collaborations going forward.
Footnotes

[1] By “value differences”, I mean differences in underlying axiological and moral views relating to altruism. I don’t have in mind anything that involves, say, hateful values or overt failures of moral character. Such moral failures are obviously worth being acutely aware of, too, but mostly for other reasons than the ones I explore here.
[2] By analogy to how discriminatory hiring practices can cause economic inefficiencies, it seems plausible that values- and coalition-driven antagonisms can likewise cause “epistemic inefficiencies” (cf. Simler’s “Crony beliefs”).
[3] That is, not only can values-driven antagonisms prevent us from capitalizing on potential gains, but they may in the worst case lead some people to actively sabotage and undermine just about everyone’s moral aims, including the reflective moral aims of the emotion-driven actors themselves.
[4] This point is closely related to the previous point, in that our hot cognition often reflects or manifests our coalitional instincts. For what it’s worth, I believe that the concepts of coalitional instincts and (coalition-driven) hot cognition are two of the most powerful concepts for understanding human behavior in the realms of politics and morality.
[5] Of course, values are by no means the only such coalition-defining feature. Other examples may include shared geographical location, long-term familiarity (e.g. with certain individuals or groups), and empirical beliefs. Indeed, it is my impression that empirical beliefs can be about as intense a source of coalitional identity and frictions as value differences can, even when we primarily hold the beliefs in question for epistemic rather than signaling reasons.
[6] To be clear, I am not denying that there are also significant benefits to adversarial debate and discussion. But it still seems reasonable to make an effort to maximize the benefits while minimizing the risks.
Great post! In addition to biases that increase antagonism, there are also biases that reduce antagonism. For example, the fact that most EAs see each other as friends can blind us to the fact that we may in fact be quite opposed on some important questions. Plausibly this is a good thing, because friendship is a form of cooperation that tends to work in the real world. But I think friendship does make us less likely to notice or worry about large value differences.
As an example, it’s plausible to me that the EA movement overall somewhat increases expected suffering in the far future, though there’s huge uncertainty about that. Because EAs tend to be friends with one another and admire each other’s intellectual contributions, most negative-utilitarian EAs don’t worry much about this fact and don’t seem to, e.g., try to avoid promoting EA to new people out of concern that doing so may be net bad. It’s much easier to just get along with your friends and not rock the boat, especially when people with values opposed to yours are the “cool kids” in EA. Overall, I think this friendliness is good, and it would be worse if EAs with different values spent more time trying to fight each other. I myself don’t worry much about helping the EA movement, in part because it seems more cooperative not to worry about it too much. But I think it’s sensible to at least check every once in a while that you’re not massively harming your own values or being taken advantage of.
I think a lot of this comes down to one’s personality. If you’re extremely agreeable and conflict-averse, you probably shouldn’t update even more in that direction from Magnus’s article. Meanwhile, if you tend to get into fights a lot, you probably should lower your temperature, as Magnus suggests.
Post summary (feel free to suggest edits!):
Differing values create risks of uncooperative behavior within the EA community, such as failing to update on good arguments because they come from the “other side”, failing to achieve common moral aims (e.g. avoiding worst-case outcomes), failing to compromise, or committing harmful acts out of spite or tribalism.
The author suggests mitigating these risks by assuming good intent, looking for positive-sum compromises, actively noticing and reducing our tendency to promote / like our ingroup more, and validating that the situation is challenging and it’s normal to feel some tension.
(If you’d like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)
Nice post, I mostly agree.
It’s worth noting that the specific differing moral belief used in the study was that “striking a weak person to toughen him up is praiseworthy”, which seems quite different from e.g. a utilitarianism/deontology divide. That view may just seem completely implausible to most people, and therefore not at all extenuating. Other moral views may be more plausible, and so you’d be judged less harshly for acting according to them. I’m speculating here, of course.
Yes, this strikes me as an important point. It’s a bit like how ideologically-motivated hate crimes are (I think correctly) regarded as worse than comparable “intentional” (but non-ideologically-motivated) violence, perhaps in part because it raises the risks of systematic harms.
Many moral differences are innocuous, but some really aren’t. For an extreme example: the “true believer” Nazi is in some ways worse than the cowardly citizen who goes along with the regime out of fear and self-interest. But that’s very different from everyday “value disagreements” which tend to involve values that we recognize as (at least to some extent) worthy of respect, even if we judge them ultimately mistaken.
Thanks for highlighting that. :)
I agree that this is relevant and I probably should have included it in the post (I’ve now made an edit). It was part of the reason that I wrote “it is unclear whether this pattern in moral judgment necessarily applies to all or even most kinds of acts inspired by different moral beliefs”. But I still find it somewhat striking that such actions seemed to be considered as bad as, or even slightly worse than, intentional harm. But I guess subjects could also understand “intentional harm” in a variety of ways. In any case, I think it’s important to reiterate that this study is in itself just suggestive evidence that value differences may be psychologically fraught.
This is truly a sensitive subject. I just read this post twice in a row. Thank you! It would be great to see a few real-life experiences mentioned in the text too.
Great post. I’d love to see an entire post on this:
Acknowledging good-faith intentions and attempts to help others
Maybe have it pinned to the top of every page. :-)