I don’t think it’s possible to create a stable EA composed of people who (1) “believe in the tools more than the conclusions” and (2) treat AI safety as a settled conclusion.
I think the evolution of Forethought is perhaps emblematic of some of the problems with 2026 EA. Before Forethought became “a research nonprofit focused on how to navigate the transition to a world with superintelligent AI systems,” it was the Forethought Foundation for Global Priorities Research, which aimed “to promote academic work that addresses the question of how to use our scarce resources to improve the world by as much as possible.” FFGPR ran an annual fellowship in which dozens of doctoral students from a range of fields participated, funding them to spend a month together in Oxford thinking through their GPR projects.
New Forethought seems to: (1) focus on a narrower range of projects (those related to navigating the transition to a world with superintelligent AI systems) and (2) not explicitly fund or prioritize community building (e.g., the research supported by Forethought seems to come from Forethought employees, many of whom are based at Oxford, rather than from academics based around the world).
These changes parallel an issue that affects the EA community more generally: talented people who don’t work on issues related to AI—the very people best equipped to help EA course-correct—are less and less likely to be brought in. Several things have contributed to this: funding for non-AI projects has dried up; for non-AI-safety EAs, EAGs increasingly consist of conversations spent discussing one’s work with people who can rarely help (or, worse, look down on it); and it is difficult to build connections and friendships with people whose fundamental beliefs diverge more and more from one’s own. In short, the average non-AI-safety EA gains less—professionally, socially, and otherwise—from being actively engaged in the EA community in 2026 than they did in 2022. As one of these people, I want to be clear that my values and goals haven’t changed—I still want to use evidence and reason to do a lot of good—but it has ceased to feel like being part of the EA community facilitates this.
EA is a philosophy trying to find the most effective ways to help others, and a social movement that aims to put those ideas into practice; the social movement follows from the philosophy. So if EA has answered the question of how to most effectively help others—work on issues related to AI safety—then why should those of us who don’t focus on these issues be involved in this social movement?
When someone poses the theoretical question of whether we still need EA, supposing the answer to EA is AI safety and an AI safety community already exists, people tend to point to all of the non-AI-safety EA things that still exist (“look how much money is still going to GiveWell,” or “Coefficient Giving devotes a lot of resources to animal welfare,” and so on). But this isn’t an answer. And on a personal level, the question has already de facto been answered: as EA orgs like Forethought (and 80k, and others) increasingly shift their focus from GPR to AI safety, the fact that CG devotes resources to animal welfare legislation isn’t very relevant to the experience I have when I go to EAGs, or read the Forum, or try to have conversations with AI safety researchers, or peruse 80k’s recent episodes, or cease to see grant opportunities relevant to my projects.
The orientation you want is: “what kind of person do I want to be?” I suspect that to even ask the question immediately pushes you towards some tentative answers. You want to do good. You want to help others. You want to be fair minded about that, maybe even impartial. You want to do more good rather than less. You want to believe true things and understand the world.

All of these things are still true about me. But EA doesn’t just aim to do good; it aims to do the most good. And EA is no longer agnostic about the answer to the question of how to do the most good. The loss of that agnosticism has been—perhaps rightly—accompanied by a change in the structure of the EA community. This seems like a bullet new EA may just have to bite.
What are you basing this on? I think the opposite is going on. Some datapoints that come to mind:
Coefficient Giving more than doubled their funding for GiveWell for 2026, adding $175M on top of the existing $100M. They also started two new funds
GiveWell’s funding from non-Coefficient Giving donors is also increasing
Founders Pledge went from $25M money moved in 2022 → $80M in 2023 → $140M in 2024, and other major funders are emerging
Giving Green influences >$17M/year in climate donations, and recently started research into biodiversity projects
The EA Animal Welfare fund raised >$10M/y last year and is now targeting $20M/y
https://jobs.probablygood.org/ has 148 roles published in the last 4 days, only 10 of which are explicitly categorized as AI safety (although a few more involve AI)
Charity Entrepreneurship is launching more and more charities per year, and AIM as a whole has more programs
Yes, I overstated this a bit (“has dried up”), but I kind of think we’re both right. On a large scale, orgs like GiveWell are still getting a lot of funding. But on an individual level, the funding environment feels really different to me than it did five years ago, when there were more fellowship and grant and award opportunities than I could possibly apply to. It does not feel like that today.
orgs like GiveWell are still getting a lot of funding

It’s not just that these orgs are still getting a lot of funding:
their funding is significantly increasing
there are many more of them
many of them are making more and more varied grants themselves, e.g., GiveWell making two <$100k grants in 2026 (which it didn’t do five years ago) and Founders Pledge launching its brand-new Catalytic Impact Fund
there were more fellowship and grant and award opportunities than I could possibly apply to. It does not feel like that today.

I’m surprised by this; I think there’s a ton today. I’m not following this space actively but, besides the >100 job openings and >3 AIM programs mentioned above, here are some off the top of my head:
High Impact Professionals Impact Accelerator Program
CEA bootcamp (which as far as I know is not mainly about AI)
School for Moral Ambition fellowships and circles
Magnify Mentoring mentee applications (I think it now accepts more people than WANBAM did five years ago, but can’t quickly find numbers. I see it got $371k from Coefficient Giving in August 2025, and its revenue seems to be increasing)
Animal Advocacy Careers course and career advising
Their Job Board has 21 job openings from last week
You can also have a look at the most recent posts tagged “opportunities to take action” and the EA opportunities board; there’s lots of non-AI stuff, enough to overwhelm newcomers as much as EA in 2021 did, and likely way more than EA in 2017.
Also, in general, if Coefficient Giving and others are making more grants to more things, it likely means that there are more opportunities.
At the core of my project is the idea that people can disagree and still cooperate.
I agree with you that the people who currently control the talent infrastructure that flows from CG, i.e. CEA and 80k, have for the most part become uninterested in views on cause prio that don’t buy into the TAI hypothesis. They are not, however, completely uninterested. As you say, they invite people working on global health, animals, and other causes to EAG; they support groups which discuss and invite speakers on these topics.
I understand that this is not much support in material terms compared to AI, and this does stack the deck against non-AI causes for people making EA career choices. The question is what you want to do with it.
For my part, I am choosing to leverage that small amount of support to strengthen free-ranging discourse about how to do the most good. The bar may be higher for non-AI projects and people to get CG funding as they emerge from this discourse, but I don’t think it is insurmountable. Further, I hope people who engage with my project will use it as a launching pad to reach out to non-CG funders and non-“EA” collaborators for their non-AI projects.
Both of these will be more challenging, but I personally resolve to support people doing what they endorse doing based on how thoughtful and ambitious they are, where I measure neither of those things in terms of how much they agree with me or my funders in substance. It’ll be tough re bias, but idk, liberalism conquered the world last century, maybe it’ll do it again this century.
At the core of my project is the idea that people can disagree and still cooperate.

I think this is the crux of the issue. A lot of EAs working on non-AI projects don’t even disagree that AI is the most important issue of our time. The problem is relevance/interest. Many non-AI-safety people have already spent years building careers in other areas. We have PhDs, social networks, and jobs that aren’t oriented around AI safety—in many cases because we were explicitly following EA principles/ideas/advice from a decade ago—and don’t plan to start from scratch. (I agree that the average college student encountering EA today should focus on issues related to AI safety.)
It used to be the case that engaging with the EA community was a good way to facilitate doing important work in non-AI areas, but this is becoming less true; if the average EA has the bandwidth/resources to go to two conferences a year, the global health EA may find it higher yield to go to global health conferences, the GPR EA may find it higher yield to go to philosophy/econ conferences, and so on. Non-EA global health people (etc) are also thinking about how to do the most good within the field of global health, and global health EAs may benefit more professionally (and socially?) from engaging directly with them. It doesn’t seem like a great use of our limited time (for us, or for the AI safety people, probably) to professionally network with EAs working on things totally unrelated to what we’re working on.
Anyways, I’m not sure I fully understand what your proposal is, but I’m just trying to articulate what I see as a fundamental barrier to getting those of us who don’t do AI safety work to be more actively engaged in EA: many of us agree that AI is the most important issue of our time, but that doesn’t mean it makes sense for us to re-focus our careers on AI, given our existing backgrounds and skills. Correspondingly, it makes less and less sense for us to spend our professional time engaging with a community that is focused on AI safety. (I think there’s perhaps a better case to be made that it would be socially fulfilling to do this; I don’t feel socially compatible with the average AI safety EA, but maybe others do.)
I agree that the average college student encountering EA today should focus on issues related to AI safety

I broadly nodded along to your OP, but strong disagree here. There are tonnes of people working in AI safety, to the extent that it’s already hypercompetitive and the marginal value of one more person getting in the long queue for such a job seems very low.
Meanwhile I continue to find the case for AI safety, at least as envisioned by EA doomers, highly speculative. That’s not to say it shouldn’t get any attention, but there’s a far better-evidenced path from e.g. ‘nuclear bombs or major pandemics cause the fall of civilisation’ than from ‘LLMs cause the fall of civilisation’.
And if you’re sufficiently pessimistic on the doomer narrative, we’re all screwed and there’s likely at least as much EV in short-term improvement of the lives of existing beings as in fighting an impossible struggle to prevent AGI from ever being developed. So there’s a credence window in which AI safety as top priority belongs. That window might be reasonably wide, but I don’t think it’s anywhere near wide enough to justify abandoning all other causes.
I didn’t mean this to be that deep; I meant (1) the average college student EA (i.e., many EAs should still pursue other kinds of careers) and (2) AI safety broadly construed (to include issues related to biorisk, policy, and many issues unrelated to x-risk). I don’t know much about how competitive jobs are throughout this space, but at least in some spheres (eg, academic philosophy) there is growing interest in AI, so much so that it’d be prudent for a philosophy PhD student to work on issues related to AI solely to get a job (i.e., bracketing any interest in EA/having a socially valuable career). I assume that’s true in at least some other spheres as well (policy?), and while I could see that changing in the next few years, it feels like the entire job market will change a lot in the next few years, such that I doubt the advice “don’t go into AI safety because it’s oversaturated; do X instead” will be reliable advice for most X.
We might just fully agree. I don’t think EA ever offered career-long professional benefits to people specializing in specific cause areas that outweigh those of cause-specific conferences (but please come teach community builders/members/young people about your work at EAG).
I think EA has always been for:
figuring out where you want to specialize, and
building/maintaining your knowledge and motivation around the world’s needs generally
The first is professionally relevant early in your career (or for generalists looking to lateral), but not so much later. The second is personal/social/intellectual, and perhaps a broad way that a specialist can give back by helping people working on the first thing.
If it is settled that AI is the thing to do, maybe point one has become irrelevant. I dispute this,[1] but less so than point two, which I think has strong independent value.
It may also be helpful context that I personally am not an expected utility maximizer. I’m doing my project because I want people to engage with EA arguments and then do what they want to do with them, as opposed to doing what I want them to do in a more superficial sense.
For example, it may just be critical to understand what else in the world might be ITN in order to understand why AI is important, or to think clearly about what its implications for welfare are. If those other problems aren’t really in the room or fully explored, it’s easy to miss crucial considerations. Similarly, what do we mean by “AI” and “settled”? Lots of EA epistemology can help here. Relatedly, the moral context of everything happening in the world can provide motivation that might otherwise be lacking.
Hm interesting. One reaction I have is that in-person communities have different functions, and it might be worth specifying more precisely what function you envision in-person EA communities having in 2026 (and how this has maybe changed?). Here are some different models I could see; 2 and 3 seem more promising to me than 1 or 4:
1. EA as a professional community (like a professional society with professional conferences). This is historically what most in-person EA events have been, but as I’ve argued, I think this kind of in-person community makes less and less sense (though it probably continues to make sense for large sub-groups of EAs, like AI safety EAs, GPR EAs, and so on).
2. EA as a moral/spiritual community (like Unitarian Universalism). I suspect some people will bristle at the word “spiritual,” but I think what you’ve said about motivation is true/important, and EA would do well to lean into this. As a kid, I always liked religious services—despite not believing in God—because I enjoyed the music, (some of) the sermons/stories, and the quiet meditation. It would be culty to lean too hard in the direction of an “EA service,” but it could be cool to design social events that explicitly try to get at this (i.e., leave people feeling hopeful/reflective/recharged, rather than doom-y). I suspect a lot of EAs—including myself, lol—would eye-roll at the concept though, so it could be hard to get off the ground.
3. EA as a social community organized around a shared interest (like a debate team). Debaters don’t formally debate with each other when they socialize (ie, in a structured way), but the things that make them like debate also make them socially compatible. Similarly, maybe we could think of EAs as actually practicing their EA separately, but uniting over the things that make them like EA. I suspect this is a fairly promising model.
4. EA as a social community organized around a shared activity (like a hiking club). A hiking club exists so people can hike together, and its members, as my dad often notes after spending the day with his, may not be that socially compatible in other ways. I could see EA being like this too—maybe I don’t vibe with the AI safety people, but we could have interesting/fun convos about EA? I’m also not sure this works though.
Good breakdown. I agree on 2 & 3 being promising too. One of the first event models I came up with for my project was an EA reading plus a sermon or constructive debate related to the reading. It’s not cultish if there are no rites/titles/statements of faith/garb/iconography.
“There is already an option for people who want to hang out with the people who are currently attracted to effective altruism. It is called “the rationalist community.””
I have much more positive feelings about EAs than rationalists, and I think this is quite normal for people who came to EA from outside rationalism. I mean, I actually liked the vast majority of rationalists I’ve met a lot (when I worked in a rationalist office in Prague, it had a lovely culture), but I think only about half of rationalists like EA as an idea, and my suspicion is that “dislikes EA” amongst rationalists correlates fairly heavily with “has political views that make me uncomfortable”.
“EA is no longer agnostic about the answer to the question of how to do the most good.” I’m interested in this assertion—to what extent does it make sense to say that “EA” (the movement? Key organizations?) has taken a firm stance on the question of how the most good can be done? Is there pretty clear evidence of that, in your opinion? These questions matter to me quite a bit as someone who at present thinks that EA as a movement should provide epistemic tools and a community for people working on many important causes, AI safety amongst them.
At least the 80k pivot to a narrow focus on AI seems to back this point.
I no longer tell people thinking about careers in an EA way to go to 80,000 Hours. I tell them to go to Probably Good, which has taken over the foundational generalist career guidance work.
80k has narrowed itself into increasing irrelevance to broader-tent EA. I understand that there are reasons why they believe a specialist AI safety careers navigator is a better use of their time.