Explicit Ethics
A couple of recent posts by other academics put me in mind of my old take on reactive vs goal-directed ethics. First, Setiya writes, in On Being Reactive:

Philosophers often write as if means-end reason were the factory setting for human agency… It’s not my experience and I doubt it’s yours… [Arational action] pervades our interaction with others. We are often guided by emotion, not beliefs about the best means to our ends. Instrumental reason is not a default possession but a hard-won aspiration.

I think this is at least as true of much moral action as it is of the rest of our lives. The perennial complaint motivating effective altruism is that most people don’t bother to think enough about how to do good. Many give to a charity when asked, without any apparent concern for whether a better alternative was available. (And many others, of course, aren’t willing to donate at all—even as they claim to care about the bad outcomes they could easily avert.)

Being at all strategic or goal-directed in one’s moral efforts seems incredibly rare, which is part of what makes effective altruism so non-trivial (alongside how unusual it is to be open to any non-trivial degree of genuinely impartial concern—extending even to non-human animals and to distant future generations). Many moralists have lamented others’ lack of altruism. The distinctive lament of EAs is that good intentions are not enough—most people are also missing instrumental rationality.

This brings me to Robin Hanson’s question, Why Don’t Gamers Win at Life?:

We humans inherit many unconscious habits and strategies, from both DNA and culture. We have many (often “sacred”) norms saying to execute these habits “authentically”, without much conscious or strategic reflection. (“Feel the force, Luke.”) Having rules be implicit makes it easier to follow these norms, and typical life social relations are complex and opaque enough to also make this easier.

Good gamers then have two options: defy these norms to consciously calculate life as a game, or follow the usual norm to not play life as a game.

This suggests a novel explanation of why some people hate effective altruism. EA is all about making ethics explicit, insofar as that is possible. (I don’t think it’s always possible. Longtermist longshots obviously depend on judgment calls and not just simple calculations. Even GiveWell just uses its cost-effectiveness models as one consideration among many. That’s all good and reasonable. But both still differ strikingly from those who refuse to consider numbers at all.)

Is ethics relevantly like a game?

Notoriously, EA appeals disproportionately to nerdy analytic thinkers—i.e., the sorts of people who are good at board games. Others may be generally suspicious of this style of thinking, or specifically hostile to replacing implicit norms with explicit ones. One can hypothesize obvious cynical reasons that could motivate such hostility. What I’m curious to consider now is: are there principled reasons to think that the more “explicit” ethics of effective altruists is actually a bad thing? Or should we take this causal explanation to be, in effect, a debunking explanation of why many people are unreasonably opposed to EA (and to goal-directed ethics more generally)?

Thoughts welcome.