It would be nice to know what you are basing these diagrams on, other than intuition. If you are very active on the forum and mainly focused on AI, of course that is going to be your intuition. Here are the dangers I see in relying on this intuition:
It’s a self-reinforcing dynamic: people deep into AI, or newly converted to it, are much more likely to think that EA revolves essentially around AI, while people outside of AI might think ‘Oh, so that’s what the community is about now’ and no longer feel like they belong here. Someone who just lurks and sees that the forum is now almost exclusively filled with posts on AI will conclude that EA is definitely about longtermism.
Funding is also a huge signal. With OpenPhil funding essentially AI and other longtermist projects, it is easy for someone struggling to find a job (yes, we have a few talented people who are sought after everywhere, but that is not the case for the majority, even among highly educated EAs) to think opportunistically and switch to AI out of necessity rather than conviction; see the MacAskill quote very relevantly cited by someone in the comments.
And finally, the message given by people at the top. If CEA focuses heavily on AI career switches and treats other career switches as neutral, of course community builders will focus on AI people. That means, in practice, more men with a STEM background (we do have excellent women in visible and prestigious AI jobs, it’s true, but unless we consciously find concrete ways to bring women into the field, it will be difficult to maintain this, and so far it is not a priority), since the men-to-women ratio in STEM is still very much not in women’s favor. The community might thus become even more masculine and even more STEM-heavy (with the exception of philosophers and policy-makers, but funding for such jobs is still scarce). I know this isn’t a problem for some here, as many of the posts about diversity and their comments attest, but for those who do see the problem with narrowing down even further, the point stands. And it is simply risky to focus on helping people switch to AI if, in the end, the number of jobs doesn’t grow as expected.
So all the ingredients are there for EA to turn into an almost exclusively AI community, but as D. Nash said, differentiating between the two might actually be more fruitful.
Also, I’m not sure I want to look back in five years and realize that what made the strength of EA (a highly diverse community in terms of interests and areas of impact, and a measurable impact in the world; I might be very wrong here, but so far measuring the impact of all these new AI orgs is difficult, as we clearly lack data and it is a new field) has simply disappeared. Being seen as nerdy and elitist (because, let’s face it, that is how EA is seen in the mainstream) is fine as long as we have concrete impact to show for it, but if we become an exclusively technical community that is all about ML and AI governance, it will be even more difficult to get traction outside of EA (and we might want to care about that, as explained in a recent post on AI advocacy).
I know I’m going against the grain here, but I like to think that the whole ‘EA is open to criticism’ thing is not a thing of the past. And I truly think these points need to be addressed, rather than drowned out by the new enthusiasm for AI. And in case it needs saying: I do realize how important AI is, and how impactful working on it can be. I just think that going all-AI is not enough, and that many here tend to forget the other dynamics and factors at play because of AI’s takeover of the community.