I am a bit nervous to actually post something on this forum (although it is just a simple question, not really an opinion or an analysis).
Context
I have been engaging with EA content for a while now: I have read most of the foundational posts and the handbook, attended several in-person events, started donating, and begun taking EA considerations into account for my future career choices. I was completely and utterly convinced by the principles of EA very early on. However, I also happen to almost perfectly fit the stereotype of people who join EA the way I did: white male, medium-length hair, groomed beard, academic background with a side of tech skills, ambition… (At least that's what many EA people look like in France. To quote my girlfriend glancing over my shoulder as I started a Zoom meeting: "Oh, five copies of you".) I could not help but wonder why so many people who shared my initial motivations and ideas failed to stick around. I asked around and tried to reach out both to highly invested people and to people who left or kept some distance from the community. One of the big reasons was disagreement with the "conclusions" offered by the community: the choice of causes, as well as the dismissal of some topics that were important to the newcomers.
Issue
Here is the object of my question: people who agree with EA's principles genuinely think that it is important to lay out carefully what is important and relevant, to evaluate what we think should be prioritized, and then to act upon it. However, people who actually take the time to go through this process are very rare… Most of the people I know discovered the principles, started the reasoning process, and ended up convinced that they would reach the same conclusions as the community, in a kind of "yeah, that seems right" cognitive-load-saving shortcut.
Question
It seems to me that people who do not think that EA principles "seem right" from the beginning will have a much harder time being included in the community. I do think that individual people respect the time it takes to integrate new knowledge and shift one's beliefs. However, some communication does not happen in 1-1s or informal chats: it goes through the choice of curated content on this forum, through responses that are sometimes tougher than they ought to be (especially on questions that approach the thin line between people who genuinely want to understand and people criticizing EA blindly), and through implicit signals such as the relative uniformity of the backgrounds people come from. As a result, I wonder whether EA as a group appears far more object-level focused than we may want it to, at least to people who are not yet convinced that the principles would lead them to the same object-level conclusions. If I had to sum it up in one question: do we, as a community, sometimes lean more toward unconsciously advocating specific outcomes rather than encouraging people to discover their own conclusions through the EA framework?
(Feel free to tackle and challenge every aspect of the question, the context, or my views, in both form and content! Please be gentler if you want to criticize the motivation, or the person who posted :) As stated above, I hesitated for a long time before gathering enough courage to post for the first time.)
Kudos for bringing this up, I think it's an important area!
There's a lot to this question.
I think that many prestigious/important EAs have come to similar conclusions. If you've come to think that X is important, it can seem very reasonable to focus on promoting X and working with people to improve it.
You'll see some discussions of "growing the tent"; this can often mean "partnering with groups that agree with the conclusions, not necessarily with the principles".
One question here is something like: "How effective is it to spend dedicated effort on explorations that follow the EA principles, instead of just optimizing for the best-considered conclusions?" Arguably, it would take more dedicated effort to really highlight this. I think we just don't have all that much work in this area right now, compared to more object-level work.
Another factor seems to have been that FTX stained the reputation of EA and hurt CEA; after that, there was a period where there seemed to be less attention on EA itself, and more on specific causes like AI safety.
In terms of "What should the EA community do?", I'd flag that a lot of the decisions are really made by funders and high-level leaders. It's not super clear to me how much agency the "EA community" has in ways that aren't very aligned with these groups.
All that said, I think it's easy for us to generally be positive towards people who take the principles in ways that don't match the specific current conclusions.
I personally am on the side that thinks that current conclusions are probably overconfident and lacking in some very important considerations.
Thanks for the answer, and for splitting the issue into several parts; it really makes some things clearer in my mind!
I'll keep thinking about it (and take a look at your posts; you seem to have spent quite some time thinking about meta EA, and I realize there might be a lot of past discussions to catch up on before I start looking for a solution by myself!)
Can you give specifics? Any crucial considerations that EA is not considering or under-weighting?
I agree with you that EA often implicitly endorses conclusions, and that this can be pernicious and sometimes confusing to newcomers. Here's a really interesting debate on whether biodiversity loss should be an EA cause area, for example.
A lot of forms of global utilitarianism do seem to tend to converge on the "big 3" cause areas of Global Health & Development, Animal Welfare, and Global Catastrophic Risks. If you generally value things like "saving lives" or "reducing suffering", you'll usually end up at one of these (and most people seem to decide between them based on risk tolerance, assumptions about non-human moral value, or tractability, rather than outcome values). Under this perspective, it could be reasonable to dismiss cause areas that don't fit into this value framework.
But this highlights where I think part of the problem lies, which is that value systems outside of this can also be good targets for effective altruism. If you value biodiversity for its own sake, it's not unreasonable to ask "how can we save the greatest number of valuable species from going extinct?". Or you might be a utilitarian, but only interested in a highly specific outcome, and ask "how can I prevent the most deaths from suicide?". Or "how can I prevent the most suffering in my country?", which you might not even do for value-system reasons, but because you have tax credits to maximise!
I wish EA were more open to this, especially as a movement that recognises the value of moral uncertainty. IMHO, some people in that biodiversity loss thread are a bit too dismissive, and I think we've probably lost some valuable partners because of it! But I understand the appeal of wanting easy answers and not spending too much time overthinking your value system (I feel the same!).
Thanks for the link! The person who posted may not have been a newcomer to EA, but it is a perfect example of the kind of thread that I was thinking may repel newbies, or slightly discourage them from even asking.
I really agree with what you say; there really is something to dig into there.
Re: agency of the community itself, I've been trying to get to this "pure" form of EA in my university group, and to be honest, it felt extremely hard.
- People who want to learn about EA often feel confused and suspicious until you get to object-level examples. "Ok, impactful career, but concretely, where would that get me? Can you give me an example?" I've faced real resistance when trying to stay abstract.
- It's hard to keep people's attention without talking about object-level examples, even when teaching abstract concepts. It's even harder once you get to the "projects" phase of the year.
- People anchor hard on some specific object-level examples after that. "Oh, EA? The malaria thing?" (despite my go-to examples including things as diverse as shrimp welfare and pandemic preparedness).
- When it's not an object-level example, it's usually "utilitarianism" or "Peter Singer", which often act as thought-stoppers and have an "eek" vibe for many people.
- People who care about non-typical causes actually have a hard time finding data and making estimates.
- In addition to that, the agency needed to really make estimates is hard to build up. One member I knew thought the most impactful career option he had was potentially working on nuclear fusion. I suggested he work out its Importance-Tractability-Neglectedness (even rough OOMs) to compare it to another option he had, as well as to more traditional ones. I can't remember him giving any numbers even months later. When he simply said he felt sure about the difference, I didn't feel comfortable arguing about the robustness of his justification. It's a tough balance to strike between respecting preferences and probing reasons.
- A lot of it comes down to career 1:1s. Completing the ~8 or so parts is already demanding. You have to provide estimates that are nowhere to be found if your center of interest is "niche" in EA. You then have to find academic and professional opportunities, as well as contacts, that are not referenced anywhere in the EA community (I had to reach back to the big brother of a primary-school friend I had lost track of to find a fusion engineer he could talk to!). If you need funding, even if your idea is promising, you need excellent communication skills to write a convincing blog post, plausibly enough research skills to get non-air-plucked estimates for an ITN / cost-effectiveness analysis, and a desire to go to EAGs and convince people who could just not care. Moreover, a lot of people expressly limit themselves to their own country or continent. It's often easier to stick to the usual topics (I get calls for applications for AIS fellowships almost every month; of course, I have never had any about niche topics).
- Another point about career 1:1s: the initial list of options to compare is hard to negotiate. Some people will neglect non-EA options, others will neglect EA options, and I have had issues with artificially adding options to help them make a real comparison.
- Yet another point: some people barely have the time to come to a few sessions. It's hard to get them to actually rely on methodological tools they haven't learned about in order to compare their options during career 1:1s.
- A good way to cope with all of this is to encourage students to start things themselves, that is, to create an org rather than join one. But not everyone has the necessary motivation for this.
I'm still happy to have started the year with epistemics, rationality, ethics and meta-ethics, and to have run other sessions on intervention and policy evaluation, suffering and consciousness, and population ethics. I didn't desperately need to have sessions on GHD / Animal Welfare / AI Safety, though they're definitely "in demand".
I'm glad you mustered the courage to post this! I think it's a great post.
I agree that, in practice, people advocating for effective altruism can implicitly argue for the set of popular EA causes (and they do this quite often?), which could repel people with useful insight. Additionally, it seems to be the case that people in the EA community can be dismissive of newcomers' cause prioritization (or their arguments for causes that are less popular in EA). Again, this could repel people from EA.
I have a couple of hypotheses for these observations. (I don't think either is a sufficient explanation, but they're both plausibly contributing factors.)
First, people might feel compelled to make EA less "abstract" by trying to provide concrete examples of how people in the EA community are "trying to do the most good they can", possibly giving the impression that the causes, instead of the principles, are most characteristic of EA.
Second, people may subconsciously be more dismissive of new cause proposals because they've invested time/money into causes that are currently popular in the EA community. It's psychologically easier to reject a new cause prioritization proposal than to accept it and thereby feel as though your resources have not been used with optimal effectiveness.
Thanks for those insights! I had not really thought about "why" the situation might be as it is, having focused instead on "what" it entails. I'm really glad I posted; I feel like my understanding of the topic has progressed as much in 24 hours as it had since the beginning.