Many practitioners strike me as dogmatic and closed-minded. They maintain a short internal whitelist of things that count as ‘EA’—e.g., working at an EA-branded organization, or working directly on AI safety. If an activity isn’t on the whitelist, the dogmatic (and sometimes wrong) conclusion is that it must not be highly effective. I think that EA-associated organizations and AI safety are great, but they’re not the only approaches that could make a monumental difference. If you find yourself instinctively disagreeing, then you might be in the group I’m talking about. :)
People’s natural response should instead be something like: ‘Hmm, at first blush this doesn’t seem effective to me, and I have a strong prior that most things aren’t effective, but maybe there’s something here I don’t understand yet. Let’s see if I can figure out what it is.’
Level of personal involvement in effective altruism: medium-high. But I wouldn’t be proud to identify myself as EA.
I wish to register my emphatic partial agreement with this one, though I do still identify as EA, and have also talked with many people who are quite curious about, and interested in getting value from, new perspectives.
Anonymous #27: