A couple of recent posts by other academics put me in mind of my old take on reactive vs goal-directed ethics. First, Setiya writes, in On Being Reactive:
Philosophers often write as if means-end reason were the factory setting for human agency… It’s not my experience and I doubt it’s yours… [Arational action] pervades our interaction with others. We are often guided by emotion, not beliefs about the best means to our ends. Instrumental reason is not a default possession but a hard-won aspiration.
I think this is at least as true of much moral action as it is of the rest of our lives. The perennial complaint motivating effective altruism is that most people don’t bother to think enough about how to do good. Many give to a charity when asked, without any apparent concern for whether a better alternative is available. (And many others, of course, aren’t willing to donate at all—even as they claim to care about the bad outcomes they could easily avert.)
Being at all strategic or goal-directed in one’s moral efforts seems incredibly rare, which is part of what makes effective altruism so non-trivial (alongside how unusual it is to feel any non-trivial degree of genuinely impartial concern—extending even to non-human animals and to distant future generations). Many moralists have lamented others’ lack of altruism. The distinctive lament of EAs is that good intentions are not enough—most people are also missing instrumental rationality.
This brings me to Robin Hanson’s question, Why Don’t Gamers Win at Life?:
We humans inherit many unconscious habits and strategies, from both DNA and culture. We have many (often “sacred”) norms saying to execute these habits “authentically”, without much conscious or strategic reflection. (“Feel the force, Luke.”) Having rules be implicit makes it easier to follow these norms, and typical life social relations are complex and opaque enough to also make this easier.
Good gamers then have two options: defy these norms to consciously calculate life as a game, or follow the usual norm to not play life as a game.
This suggests a novel explanation of why some people hate effective altruism. EA is all about making ethics explicit, insofar as is possible. (I don’t think it’s always possible. Longtermist longshots obviously depend on judgment calls, not just simple calculations. Even GiveWell uses its cost-effectiveness models as just one consideration among many. That’s all good and reasonable. Both still differ strikingly from folks who refuse to consider numbers at all.)
Notoriously, EA appeals disproportionately to nerdy analytic thinkers—i.e., the sorts of people who are good at board games. Others may be generally suspicious of this style of thinking, or specifically hostile to replacing implicit norms with explicit ones. One can hypothesize obvious cynical reasons that could motivate such hostility. What I’m curious to consider now is: do you think there are principled reasons to think that the more “explicit” ethics of effective altruists is actually a bad thing? Or should we take this causal explanation to be, in effect, a debunking explanation of why many people are unreasonably opposed to EA (and to goal-directed ethics more generally)?
Thoughts welcome.
I think part of the concern is that when you try to make ethics explicit you are very likely to miss something, or a lot of things, in the ‘rules’ you explicitly lay down. Some people will take the rules as gospel, and then there will also be a risk of Goodharting.
In most games there are soft rules beyond the explicit ones: features that are not strictly part of the game and are very hard to define, such as good sportsmanship, but that are really a core part of the game and of why it is appreciated. Many viewers don’t enjoy it when a player does something that is technically allowed but merely exploits a loophole in the explicit rules, against the spirit of the game, or that misses the point of the game. (An example from non-human game players is the reinforcement-learning speedboat that stopped racing and drove around in circles to maximise its reward. We like it as an example of reinforcement learning gone wrong, but it’s not what we actually want to watch in a race.) People who stick only to the exactly explicit laws tend to be missing something, or to be social pariahs who take advantage of the fact that not all rules are, or can be, written down.
Yeah, that seems right as a potential ‘failure mode’ for explicit ethics taken to extremes. But of course it needs to be weighed against the potential failures of implicit ethics, like providing cover for not actually doing any good.
We discuss this in our preprint.
We find that people evaluate those who deliberate about their donations less positively (e.g., as less moral and less desirable as social partners) than those who make their donations based on an empathic response. A possible explanation of this response is that people take these different approaches as signals of the other person’s character:
I think this suggests that individuals may have good reasons for their negative evaluations: people who deliberate about the cost-effectiveness of their aid may be less likely than those who aid out of an empathic response to help in the kinds of typical cases people normally care about (e.g., they may be less likely to help the person themselves, or someone close to them, if they are in need). But, of course, this doesn’t show that deliberators are worse, all things considered, so I think this remains quite viable as a debunking explanation.
Interesting, thanks for the link! I agree that being a useful social ally and doing what’s morally best can come apart, and that people are often (lamentably) more interested in the former.