I mostly just want to join the chorus of people welcoming you here and repudiate the negative reaction that a very reasonable question is getting. It’s worth adding 3 things:
The demographic balance is improving over time, with new EAs in 2023 and 2024 being substantially more diverse.
Anecdotally, the EA forum skews whiter, more male, and more Bay Area. I personally feel that the forum is increasingly out of touch with ‘EA on the ground’, especially in cause areas such as global health. EAs doing direct implementation work very rarely post here, unfortunately. Don’t take any reaction here as representative of broader EA.
Anecdotally (and I believe I’ve seen some stats to back this up), some cause areas are more diverse than others. Global health and animal welfare, specifically, seem to be substantially more diverse.
Anecdotally, the EA forum skews [...] more Bay Area.

For what it’s worth, this is not my impression at all. Bay Area EAs (e.g. me) mostly consider the EA Forum to be very unrepresentative of their perspective, to the extent that it’s very rarely worthwhile to post here (which is why they often post on LessWrong instead).
In what way do you find it unrepresentative? Just curious because I am unfamiliar with the dynamics here.
There’s a social and professional community of Bay Area EAs who work on issues related to transformative AI. People in this cluster tend to have median timelines to transformative AI of 5 to 15 years, tend to think that AI takeover is 5-70% likely, and tend to think that we should be fairly cosmopolitan in our altruism.
People in this cluster mostly don’t post on the EA Forum for a variety of reasons:
Many users here don’t seem very well-informed.
Lots of users here disagree with me on some of the opinions about AI that I stated above. Obviously it’s totally reasonable for people to disagree on those points, at least before they’ve heard the arguments for them. But it usually doesn’t feel worth my time to argue about those points here. I want to spend much more of my time discussing the implications of these basic beliefs than arguing about how probable they are. LessWrong is a much better place for this.
The culture here seems pretty toxic. I don’t really feel welcome here. I expect people to treat me with hostility as a result of being moderately influential as an AI safety researcher and executive.
To be clear, I think it’s a shame that the EA Forum isn’t a better place for people like me to post and comment.
You can check for yourself that Bay Area EAs don’t really want to post here: look up prominent Bay Area EAs and note that they commented here far more often several years ago than they do today.