I think EA will make it through stronger

status: kind of rambly, but I wanted to get this out there in case it helps

This week’s events triggered some soul-searching in me: I found myself wondering whether effective altruism even makes sense as a coherent thing anymore.

The reason I thought EA might break up or dissolve was something like this: EA mostly attracted naive maximizer types (“do the Most Good, using reasoning”), but it’s now obvious that maximizing goodness doesn’t work in practice. We have a really clear example of someone trying to do that and failing (SBF, if you attribute pure motives to him), as well as a lot of recent quotes from EA luminaries saying that you shouldn’t do it. I didn’t see what else holds us together besides the maximizing thing.

But I was kind of ignoring the reasoning thing! I thought about it, and I think we can get by with a minimal change. The framing I like is “Do good, using REASONING”. With capital letters :)

I think deleting “the most” is a change we should have made a long time ago; few prominent people in EA were claiming to do the most good anyway. And EA at its core is about reasoning: reasoning carefully, using evidence; thinking about first-order and second-order effects; comparing the options in front of you; argument and debate. The simpler phrasing of the new mission is meant to make reasoning stand out.

If this direction is adopted, I have the following hopes:

  • that EA will become a “bigger tent,” accepting of more types of people doing more types of good things in the world and reasoning about them. For example, we’ll welcome anyone who is trying to do good and is open to talking through the ‘why’ behind what they are doing

  • that naive utilitarian maximizers will go away or be a bit more humble :)

  • that people will put more emphasis on developing and trusting their own reasoning processes, and will rely less on the reasoning of others when making big decisions in their lives.

  • that cause prioritization will get less emphasis, especially career cause prioritization (I think the maximizing thingy regularly causes people to make bad career decisions)

(Some color on the final one: I’ve had a blog post brewing for a long time against strong career cause prio, but I haven’t managed to write it up in a convincing way. For example, I think AI is a bad career direction for a lot of people, but young EAs are convinced to try it anyway because AI is held up as the priority path and they’re told they’ll have so much more impact if they make it. This seems bad for lots of reasons, which I’ll try to write up in a post if I can ever figure out how to articulate them.)

Anyway, I think that if the above hopes pan out, the community will come out stronger. And though I am normally loath to argue about optics, I do think this change would counter most of the arguments against EA that you regularly see in the news media (such as that EA is about dangerous maximizing, or that it’s only for elites, or that young people’s careers are affected in unstable/chaotic ways when they encounter EA).