I think EA will make it through stronger
status: kind of rambly but I wanted to get this out there in case it helps
This week’s events triggered some soul-searching in me, wondering whether effective altruism even makes sense as a coherent thing anymore.
The reason I thought EA might break up or dissolve was something like this: EA mostly attracted naive maximizer-types (“do the Most Good, using reasoning”), but now it’s obvious that maximizing goodness doesn’t work in practice. We have a really clear example of where trying to do that fails (SBF, if you attribute pure motives to him), as well as a lot of recent quotes from EA luminaries saying that you shouldn’t do it. I didn’t see what else holds us together besides the maximizing thing.
But I was kind of ignoring the reasoning thing! I thought about it, and I think that we can make minimal changes: The framing I like is “Do good, using REASONING”. With capital letters :)
I think deleting “the most” is a change we should have made a long time ago; few of the important people in EA were claiming that they were doing the most good anyway. And EA at its core is about reasoning: reasoning carefully, using evidence; thinking about first-order and second-order effects; comparing options in front of you; argument and debate. The simpler phrasing of this new mission is intended to make reasoning stand out.
If this direction is adopted, I have the following hopes:
- that EA will become a “bigger tent,” accepting of more types of people doing more types of good things in the world and reasoning about them. E.g., we’ll welcome anyone who is trying to do good and is open to talking through the ‘why’ behind what they are doing
- that naive utilitarian maximizers will go away or be a bit more humble :)
- that people will put more emphasis on developing and relying on their own reasoning processes, and rely less on the reasoning of others when making big decisions in their lives
- that cause prioritization will get less emphasis, especially career cause prioritization (I think the maximizing thingy regularly causes people to make bad career decisions)
(Some color on the final one: I’ve had a blog post brewing for a long time against strong career cause prio but haven’t really managed to write it up in a convincing way. e.g., I think AI is a bad career direction for a lot of people, but young EAs are convinced to try it anyway because AI is held up as the priority path and they’ll have so much more impact if they make it. This seems bad for lots of reasons which I will try to write up in a post if I can ever figure out how to articulate them.)
Anyway, I think the above hopes, if they pan out, will make the community stronger. And, though I am normally loath to argue about optics, I do think this change would counter most of the arguments that you regularly see in news media against EA principles (such as that EA is about dangerous maximizing, or that it’s only for elites, or that young people’s careers are affected in unstable/chaotic ways when they encounter EA).
Thanks for writing this! I want to push back a bit. There’s a big middle ground between (i) naive, unconstrained welfare maximization and (ii) putting little to no emphasis on how much good one does. I think “do good, using reasoning” is somewhat too quick to jump to (ii) while passing over intermediate options, like:
- “Do lots of good, using reasoning” (roughly as in this post)
- “Be a good citizen, while ambitiously working towards a better world” (as in this post)
- “Maximize good under constraints, or with constraints incorporated into the notion of goodness”
There are lots of people out there (e.g. many researchers, policy professionals, entrepreneurs) who do good using reasoning; this community’s concern for scope seems rare, important, and totally compatible with integrity. Given the large amounts of good you’ve done, I’d guess you’re sympathetic to considering scope. Still, it seems important enough to include in the tagline.
Also, a nitpick:
This feels a bit fast. The fact that the example needs a (dubious) “if” clause means it’s not a really clear example, and maximizing goodness is compatible with constraints if we incorporate the constraints into our notion of goodness (indeed, any behavior can be thought of as maximizing some notion of goodness).
(Made minor edits.)
Thanks for writing!
To be clear, I don’t think we as a community should be scope insensitive. But here’s the FAQ I would write about this...
Q: Does EA mean I should only work on the most important cause areas?
No! Being in EA means you choose to do good with your life, and to think about those choices. We hope that you’ll choose to improve your life / career / donations in more-altruistic ways, and we might talk with you and discover ideas for making your altruistic life even better.
Q: Does EA mean I should do or support [crazy thing X] to improve the world?
Probably not: if it sounds crazy to you, trust your reasoning! However, EA is a big umbrella and we’re nurturing lots of weird ideas; some ideas that seem crazy to one person might make sense to another. We’re committed to reasoning about ideas that might actually help the world even if they sound absurd at first. Contribute to this reasoning process and you might well make a big impact.
Q: Does EA’s “big umbrella” mean that I should avoid criticizing people for not reaching their potential or doing as much good as I think they could do?
This is very nuanced! You’ll see lots of internal feedback and criticism in EA spaces. We do have a norm against loudly critiquing people’s plans, unsolicited, for not doing enough good, but this is overridden when a) the person has asked for feedback first, or b) the critic has a deep and nuanced understanding of the existing plan, as well as a strong relationship with the recipient of the feedback. Our advice, if you see something you want to critique, is to ask whether they want feedback before offering it.
Q: What about widely-recommended canonical public posts listing EA priorities, implicitly condemning anything that’s not on the priority list?
...yeah this feels like a big part of the problem to me. I think it makes sense to write up a standard disclaimer for such posts, saying “there’s lots of good things not on this list” (GiveWell had something like this for a while I think?) but I don’t know if it is enough.
Q: So is EA scope sensitive or not?
We are definitely scope sensitive. One of the best ways that reasoning can help figure out how to make the world better is by comparing different things, putting numbers on stuff, and/or figuring out other reasons why path A is better than path B.
I like this comment, but also genuinely think that this Q&A would indicate that EA had lost a lot of what I think makes it valuable, and I would likely be much less interested in being engaged.
Can you say a bit more about what you think EA has lost that makes it valuable?
Useful input. Can you give a bit more color on your feelings? In particular, is this a disagreement with the core direction being proposed, or just with something I wrote down that seems off? (If the latter, I wrote this quickly to give a gist, so I’m not surprised. If the former, I’m more surprised and interested in what I’m missing.)
I am not fully sure, and it’s a bit late. Here are some thoughts that came to mind on thinking more about this:
I think I do personally believe that if you actually think hard about impact, few things matter, and also that the world is confusing and lots of stuff turns out to be net-negative (for example, if you take AI X-risk seriously, a lot of work that previously seemed good because it accelerated technological progress now suddenly looks quite bad).
And so, I don’t even know whether a community that just broadly encourages people to do things that seem ambitious and good ends up net-positive for the world, since the world does indeed strike me as the kind of place that has lots of crucial considerations that suddenly invert the sign on various things, and I am primarily excited about EA as a place that can collectively orient towards those crucial considerations and create incentives and systems that align with those crucial considerations.
I am also separately excited about a community that just helps people reason better, but indeed one of the key things I would try to get across in such a community is how contingent the goodness of various actions is, and that the world is confusing and heavy-tailed. That makes for a world where you really have to make the right decisions, or you might very well end up causing great harm, or missing out on extremely great benefits.
Useful perspective. (I’m excited about this debate because I think you’re wrong, but feel free to stop responding anytime obviously! You’ve already helped me a ton, to clarify my thoughts on this.)
First, what I agree with: I am excited by your last paragraph—my ideal EA community also helps people reason better, and the topics you listed definitely seem like part of the ‘curriculum’. I only think it needs to be introduced gently, and with low expectations (e.g. in my envisioned EA world, the ~bottom 75% of engaged EAs will probably not change their careers).
I even agree with this:
I have two main disagreements:
1. Most stuff that seems good is good
2. Siphoning people into AI without supporting the ones left behind creates a hollow, inauthentic movement
Most stuff that seems good is good
You wrote:
I don’t really agree with this, but I don’t really expect to make much progress in a debate. Do I read this correctly as you also being generally against ‘progress studies’? I have a pretty low prior on someone thinking they are working on something useful/innovative/altruistic, putting a lot of thought and effort behind it, and it ending up net negative.
Siphoning people into AI
A thing that I perceive is: EA is an important onramp into AI safety work. (How does this work? EA is broadly acceptable, incontrovertible; it gets a lot of people talking about it positively. Then the EA onboarding process is shaped to attract and identify people who are good at working on weird ideas, and it pushes those people into AI.)
(To be clear, I may be misinterpreting you—you didn’t say this explicitly but I kind of get it from the “orient towards those crucial considerations” thing and so I’m addressing it directly.)
This is an ok thing to do on its own, and I think it’s a valid reason for the community to exist. But not the only one! I don’t think it will work in the long run unless the community can exist on its own, independently of being a feeder into important projects. It has worked for a while with the recruitment process staying “under the radar”; I expect this to stop working for various reasons.
One major point of the changes I’m proposing is to make that feeder role more explicit, and just one optional way that people can engage with EA.
There are going to be a lot of suggestions for how to improve EA. For the sake of organisation, it would be nice if you chose a title that says something specific about the nature of the improvement, e.g. “Suggested change: broaden to doing good using reasoning”. The current title is really only one possible consequence of that change.
I believe there is a real opportunity to come out stronger if EA demonstrates it has learned a prioritization lesson by doing its part to make the financial-fraud victims whole before returning to its regular program. Correct me if I’m wrong, but if the EA community returned to victims an amount totaling twice what it received from FTX, would that be a historical first?
(And I don’t expect unconditional return of FTX grant money when it’s not legally required. Search your own heart; if you are not guilty of contempt for the greater fools that FTX thought it was profiting off of, I see no moral obligation.)
As for reasoning, I’d like to call attention to the first paragraph of https://www.lesswrong.com/posts/BZ6XaCwN4QGgH9CxF/the-kelly-criterion?commentId=3JEaWCYG2B5ocBJg7 , especially the last sentence.
Do you mean that FTX grantees should attempt to make the victims whole by paying the amount they received from FTX back to the estate, or that “EA” at large—so organizations and people with no relation to FTX, but who consider themselves “EA”—should do so?