From another tweet on the same thread with @nonmayorpete:
“Some people get a bad “i can be more virtuous by being smarter haha” impression. it also has a rep for being very utilitarian and putting a too much weight into world-ending AI risk.
this is the first time i’ve seen negative vibes about it being money-only though”
I’m sharing this as an example, not because it’s a criticism I’ve encountered before. This is the first time I’ve heard the “being virtuous by being smarter” criticism at all, and while I’ve heard the “putting too much weight into world-ending AI risk” criticism before, that was always from people within the community, not from someone outside of it.
But I’m not from SF or the U.S., so I’m not really exposed to people who hold these low-resolution definitions of effective altruism. Here in the Philippines, we thankfully don’t have any negative, low-resolution versions of EA circulating yet.
I think SF, in general, is not representative, because far more people there (non-EAs included) are aware of EA, AI risk, etc.
The EA community in SF is also different from most other EA communities, including other U.S. ones, because of its overlap with the SF tech scene and the rationality community.
So although I haven’t encountered that criticism from non-EAs much myself, I think it’s a plausible low-resolution impression someone in SF might form if they only hear about EA in passing.