At the core of my project is the idea that people can disagree and still cooperate.
I agree with you that the people who currently control the talent infrastructure that flows from CG, i.e., CEA and 80k, have for the most part become uninterested in views on cause prio that don’t buy into the TAI hypothesis. They are not, however, completely uninterested. As you say, they invite people working on global health, animals, and other causes to EAG; they support groups which discuss and invite speakers on these topics.
I understand that this is not much support in material terms compared to AI, and that this does stack the deck against non-AI causes for people making EA career choices. The question is what you want to do with it.
For my part, I am choosing to leverage that small amount of support to strengthen free-ranging discourse about how to do the most good. The bar may be higher for non-AI projects and people to get CG funding as they emerge from this discourse, but I don’t think it is insurmountable. Further, I hope people who engage with my project will use it as a launching pad to reach out to non-CG funders and non-“EA” collaborators for their non-AI projects.
Both of these will be more challenging, but I personally resolve to support people doing what they endorse doing based on how thoughtful and ambitious they are, where I measure neither of those things by how much they agree with me or my funders in substance. It’ll be tough to keep bias out of that, but idk, liberalism conquered the world last century, maybe it’ll do it again this century.
At the core of my project is the idea that people can disagree and still cooperate.
I think this is the crux of the issue. A lot of EAs working on non-AI projects don’t even disagree that AI is the most important issue of our time. The problem is relevance/interest. Many non-AI-safety people have already spent years building careers in other areas. We have PhDs, social networks, and jobs that aren’t oriented around AI safety—in many cases because we were explicitly following EA principles/ideas/advice from a decade ago—and don’t plan to start from scratch. (I agree that the average college student encountering EA today should focus on issues related to AI safety.)
It used to be the case that engaging with the EA community was a good way to facilitate doing important work in non-AI areas, but this is becoming less true; if the average EA has the bandwidth/resources to go to two conferences a year, the global health EA may find it higher yield to go to global health conferences, the GPR EA may find it higher yield to go to philosophy/econ conferences, and so on. Non-EA global health people (etc) are also thinking about how to do the most good within the field of global health, and global health EAs may benefit more professionally (and socially?) from engaging directly with them. It doesn’t seem like a great use of our limited time (for us, or for the AI safety people, probably) to professionally network with EAs working on things totally unrelated to what we’re working on.
Anyways, I’m not sure I fully understand what your proposal is, but I’m just trying to articulate what I see as a fundamental barrier to getting those of us who don’t do AI safety work to be more actively engaged in EA: many of us agree that AI is the most important issue of our time, but that doesn’t mean it makes sense for us to re-focus our careers on AI, given our existing backgrounds and skills. Correspondingly, it makes less and less sense for us to spend our professional time engaging with a community that is focused on AI safety. (I think there’s perhaps a better case to be made that it would be socially fulfilling to do this; I don’t feel socially compatible with the average AI safety EA, but maybe others do.)
I agree that the average college student encountering EA today should focus on issues related to AI safety
I broadly nodded along to your OP, but strong disagree here. There are tonnes of people working in AI safety, to the extent that it’s already hypercompetitive and the marginal value of one more person getting in the long queue for such a job seems very low.
Meanwhile I continue to find the case for AI safety, at least as envisioned by EA doomers, highly speculative. That’s not to say it shouldn’t get any attention, but there’s a far better-evidenced path from e.g. ‘nuclear bombs or major pandemics cause the fall of civilisation’ than from ‘LLMs cause the fall of civilisation’.
And if you’re sufficiently pessimistic on the doomer narrative, we’re all screwed and there’s likely at least as much EV in short-term improvement of the lives of existing beings as in fighting an impossible struggle to prevent AGI from ever being developed. So there’s a credence window in which AI safety as top priority belongs. That window might be reasonably wide, but I don’t think it’s anywhere near wide enough to justify abandoning all other causes.
I didn’t mean this to be that deep; I meant (1) the average college student EA (i.e., many EAs should still pursue other kinds of careers) and (2) AI safety broadly construed (to include issues related to biorisk, policy, and many issues unrelated to x-risk). I don’t know much about how competitive jobs are throughout this space, but at least in some spheres (eg, academic philosophy) there is growing interest in AI, so much so that it’d be prudent for a philosophy PhD student to work on issues related to AI solely to get a job (i.e., bracketing any interest in EA/having a socially valuable career). I assume that’s true in at least some other spheres as well (policy?), and while I could see that changing in the next few years, it feels like the entire job market will change a lot in the next few years, such that I doubt the advice “don’t go into AI safety because it’s oversaturated; do X instead” will be reliable advice for most X.
We might just fully agree. I don’t think engaging with EA ever offered career-long professional benefits to people specializing in specific cause areas that outweighed those of cause-specific conferences (but please come teach community builders/members/young people about your work at EAG).
I think EA has always been for:
1. figuring out where you want to specialize, and
2. building/maintaining your knowledge and motivation around the world’s needs generally.
The first is professionally relevant early in your career (or for generalists looking to make a lateral move), but not so much later. The second is personal/social/intellectual, and perhaps a broad way that a specialist can give back by helping people working on the first thing.
If it is settled that AI is the thing to do, maybe point one has become irrelevant. I dispute this,[1] but less strongly than I would for point two, which I think has strong independent value.
It may also be helpful context that I personally am not an expected utility maximizer. I’m doing my project because I want people to engage with EA arguments and then do what they want to do with them, as opposed to doing what I want them to do in a more superficial sense.
For example, it may just be critical to understand what else might be ITN in the world to understand why AI is important, or to think clearly about what its implications for welfare are. If those other problems aren’t really in the room or fully explored, it’s easy to miss crucial considerations. Similarly, what do we mean by “AI” and “settled”? Lots of EA epistemology can help here. Relatedly, the moral context of everything happening in the world can provide motivation that might otherwise be lacking.
Hm interesting. One reaction I have is that in-person communities have different functions, and it might be worth specifying more precisely what function you envision in-person EA communities having in 2026 (and how this has maybe changed?). Here are some different models I could see; 2 and 3 seem more promising to me than 1 or 4:
1. EA as a professional community (like a professional society with professional conferences). This is historically what most in-person EA events have been, but as I’ve argued, I think this kind of in-person community makes less and less sense (though it probably continues to make sense for large sub-groups of EAs, like AI safety EAs, GPR EAs, and so on).
2. EA as a moral/spiritual community (like Unitarian Universalism). I suspect some people will bristle at the word “spiritual,” but I think what you’ve said about motivation is true/important, and EA would do well to lean into this. As a kid, I always liked religious services—despite not believing in God—because I enjoyed the music, (some of) the sermons/stories, and the quiet meditation. It would be culty to lean too hard in the direction of an “EA service,” but it could be cool to design social events that explicitly try to get at this (i.e., leave people feeling hopeful/reflective/recharged, rather than doom-y). I suspect a lot of EAs—including myself, lol—would eye roll at the concept though, so it could be hard to get off the ground.
3. EA as a social community organized around a shared interest (like a debate team). Debaters don’t formally debate with each other when they socialize (ie, in a structured way), but the things that make them like debate also make them socially compatible. Similarly, maybe we could think of EAs as actually practicing their EA separately, but uniting over the things that make them like EA. I suspect this is a fairly promising model.
4. EA as a social community organized around a shared activity (like a hiking club). A hiking club exists so people can hike together, and, as my dad often notes after spending the day with his, its members may not be that socially compatible in other ways. I could see EA being like this too—maybe I don’t vibe with the AI safety people, but we could have interesting/fun convos about EA? I’m also not sure this works though.
Good breakdown. I agree on 2 & 3 being promising too. One of the first event models I came up with for my project was an EA reading + a sermon or constructive debate related to the reading. It’s not cultish if there are no rites/titles/statements of faith/garb/iconography.
There is already an option for people who want to hang out with the people who are currently attracted to effective altruism. It is called “the rationalist community.”
I have much more positive feelings about EAs than about rationalists, and I think this is quite normal for people who came to EA from outside rationalism. I mean, I actually liked the vast majority of rationalists I’ve met a lot (when I worked in a rationalist office in Prague, it had a lovely culture), but I think only about half of rationalists like EA as an idea, and my suspicion is that “dislikes EA” amongst rationalists correlates fairly heavily with “has political views that make me uncomfortable”.