Beware frictions from altruistic value differences

I believe value differences pose some underappreciated challenges in large-scale altruistic efforts. My aim in this post is to outline what I see as the main such challenges, and to present a few psychological reasons why we should expect these challenges to be significant and difficult to overcome.[1]

To clarify, my aim in this post is not to make a case against value differences per se, much less a case against vigorous debate over values (I believe that such debate is healthy and desirable). Instead, my aim is to highlight some of the challenges and pitfalls that are associated with value differences, in the hope that we can better mitigate these pitfalls. After all, value differences are sure to persist among people who are trying to help others, and hence a critical issue is how well — or how poorly — we are going to handle these differences.

Examples of challenges posed by value differences among altruists

A key challenge posed by value differences, in my view, is that they can make us prone to tribal or otherwise antagonistic dynamics that are suboptimal by the lights of our own moral values. Such values-related frictions may in turn lead to the following pitfalls and failure modes:

  • Failing to achieve moral aims that are already widely shared, such as avoiding worst-case outcomes (cf. “Common ground for longtermists”).

  • Failing to make mutually beneficial moral trades and compromises when possible (in ways that do not introduce problematic behavior such as dishonesty or censorship).

  • Failing to update on arguments, whether they be empirical or values-related, because the arguments are made by those who, to our minds, seem like they belong to the “other side”.[2]

  • Some people committing harmful acts out of spite or primitive tribal instincts. (The sections below give some sense as to why this might happen.)[3]

Of course, some of the failure modes listed above can have other causes beyond values- and coalition-related frictions. Yet such frictions, when poorly handled, are probably still a key risk factor for these failure modes.

The following are some reasons to expect values-related frictions to be both common and quite difficult to handle by default.

Harmful actions based on different moral beliefs may be judged more harshly than intentional harm

One set of findings that seems relevant comes from a 2016 anthropological study that examined the moral judgments of people across ten different cultures, eight of which were traditional small-scale societies (Barrett et al., 2016).

The study specifically asked people how they would evaluate a harmful act in light of a range of potentially extenuating circumstances, such as different moral beliefs, a mistake of fact, or self-defense. While there was significant variation in people’s moral judgments across cultures, there was nevertheless unanimous agreement that committing a harmful act based on different moral beliefs was not an extenuating circumstance. Indeed, on average across cultures, committing a harmful act based on different moral beliefs was considered worse than committing the harmful act intentionally (see Barrett et al., 2016, fig. 5). [Edit: The particular moral belief used in the study was that “striking a weak person to toughen him up is praiseworthy”.]

It is unclear whether this pattern in moral judgment necessarily applies to all or even most kinds of acts inspired by different moral beliefs. Yet these results still tentatively suggest that we may be inclined to see value differences as a uniquely aggravating factor in our moral judgments of people’s actions — as something that tends to inspire harsher judgments rather than understanding. (See also Delton et al., 2020.)

Another relevant finding is that our minds appear to reflexively process moral and political groups and issues in ways that are strongly emotionally charged — an instance of “hot cognition”. Specifically, we appear to affectively process our own groups and beliefs in a favorable light while processing the “outgroup” and their beliefs in an unfavorable light. And what is striking about this affectively charged processing is that it appears to be swift and automatic, occurring prior to conscious thought, which suggests that we are mostly unaware that it happens (Lodge & Taber, 2005; see also Kunda, 1990; Haidt, 2001).

These findings give us reason to expect that our reflexive processing of those who hold different altruistic values will tend to be affectively charged in ways that we are not aware of, and in ways that are not so easily changed (cf. Lodge & Taber, 2005, p. 476).

Coalitional instincts

A related reason to expect values-driven tensions to be significant and difficult to avoid is that the human mind plausibly has strong coalitional instincts, i.e. instincts for carving the world into, and smoothly navigating among, competing coalitions (Tooby & Cosmides, 2010; Pinsof et al., 2023).[4]

As John Tooby notes, these instincts may dispose us to blindly flatter and protect our own groups while misrepresenting and attacking other groups and coalitions. He likewise suggests that our coalitional instincts may push our public discourse less toward substance and more toward displaying loyalty to our own groups (see also Hannon, 2021).

In general, it seems that “team victory” is a strong yet often hidden motive in human behavior. And these coalitional instincts and “team victory” motives arguably further highlight the psychological challenges posed by value differences, not least since value differences often serve as the defining features of contrasting coalitions.[5]

Potential remedies

Below are a few suggestions for how one might address the challenges and risks associated with values-related frictions. More suggestions are welcome.[6]

Acknowledging good-faith intentions and attempts to help others

It seems helpful to remind ourselves that altruists who have different values from ourselves are generally acting in good faith, and are trying to help others based on what they sincerely believe to be the best or most plausible views.

Keeping in mind shared goals and potential gains from compromise

Another helpful strategy may be to keep in mind the shared goals and the important points of agreement that we have with our fellow altruists — e.g. a strong emphasis on impartiality, a strong focus on sentient welfare, a wide agreement on the importance of avoiding the very worst future outcomes, etc.

Likewise, it might be helpful to think of the positive-sum gains that people with different values may achieve by cooperating. After all, contrary to what our intuitions might suggest, it is quite conceivable that some of our greatest counterfactual gains can be found in the realm of cooperation with agents who hold different values from ourselves — e.g. by steering clear of “fights” and by instead collaborating to expand our Pareto frontier (cf. Hanson on “Expand vs Fight”). It would be tragic to lose out on such gains due to unwittingly navigating more by our coalitional instincts and identities than by impartial impact.

Becoming aware of, and actively reducing, reflexive ingroup liking and promotion

We should expect to be prone to ingroup liking and ingroup promotion to a somewhat excessive degree (relative to what our impartial values would recommend). It may therefore be helpful to become more aware of these reflexive tendencies, and to try to reduce them through deliberate “system-2” reasoning that is cautiously skeptical of our most immediate coalitional drives and intuitions, in effect adding a cooling element to our hot cognition.

Validating the difficulty of the situation

Finally, it may be helpful to take a step back and to validate how eminently understandable it is that strong reactions can emerge in the context of altruistic value differences.

After all, beyond the psychological reasons reviewed above, it is worth remembering that there is often a lot of identity on the line when value differences come up among altruists. Indeed, it is not only identity that is on the line, but also individual and collective priorities, plans, visions, and so on.

These are all quite foundational elements that touch virtually every level of our cognitive and emotional processing. And when all these elements effectively become condensed into a single conversation with a person who appears to have significant disagreements with us on just about all of these consequential issues, while our minds are under the influence of a fair dose of coalition-driven hot cognition, it is no wonder that things start to feel a little tense and challenging.

Validating the full magnitude of this challenge might help lower the temperature, and in turn open the door to more fruitful engagements and collaborations going forward.


  1. ^

    By “value differences”, I mean differences in underlying axiological and moral views relating to altruism. I don’t have in mind anything that involves, say, hateful values or overt failures of moral character. Such moral failures are obviously worth being acutely aware of, too, but mostly for other reasons than the ones I explore here.

  2. ^

    By analogy to how discriminatory hiring practices can cause economic inefficiencies, it seems plausible that values- and coalition-driven antagonisms can likewise cause “epistemic inefficiencies” (cf. Simler’s “Crony beliefs”).

  3. ^

    That is, not only can values-driven antagonisms prevent us from capitalizing on potential gains, but they may in the worst case lead some people to actively sabotage and undermine just about everyone’s moral aims, including the reflective moral aims of the emotion-driven actors themselves.

  4. ^

    This point is closely related to the previous point, in that our hot cognition often reflects or manifests our coalitional instincts. For what it’s worth, I believe that the concepts of coalitional instincts and (coalition-driven) hot cognition are two of the most powerful concepts for understanding human behavior in the realms of politics and morality.

  5. ^

    Of course, values are by no means the only such coalition-defining feature. Other examples may include shared geographical location, long-term familiarity (e.g. with certain individuals or groups), and empirical beliefs. Indeed, it is my impression that empirical beliefs can be about as intense a source of coalitional identity and frictions as can value differences, even when we primarily hold the beliefs in question for epistemic rather than signaling reasons.

  6. ^

    To be clear, I am not denying that there are also significant benefits to adversarial debate and discussion. But it still seems reasonable to make an effort to maximize the benefits while minimizing the risks.