These are largely anecdotal, and are NOT endorsements of all listed critiques, just an acknowledgement that they exist, and may contribute to negative shifts in EA’s public image. This skews towards left-leaning views, and isn’t representative of all critiques out there, just a selection from people I’ve talked to and commentary I’ve seen / what’s front of mind due to recent conversations.
FTX events are clearly a net negative to EA’s reputation from the outside. This was probably a larger reputational hit to longtermism than to animal welfare or GHW (though not necessarily a larger harm : benefit ratio). But even before this, a lot of left-leaning folks viewed EA’s ties to crypto with skepticism (this is usually around whether crypto is net positive for humanity, not the extent to which crypto is a sound investment).
EA generally is subject to critiques around measuring impact through a utilitarian lens, both from those who deem the value of lives “above” measurement and from those who think EA undervalues non-utilitarian moral views or person-affecting views. There are also general criticisms of it being an insufficiently diverse place (usually something like: too white / too Western / too male / too elitist) for a movement that cares about doing the most good it can.
EA global health + wellbeing / development is subject to critiques around top-down Western aid (e.g., Easterly, Deaton), and general critiques around the merits of randomista development. Some conclusions are seen as unintuitive, e.g. by those who think donating to local causes or your local community is preferable because of some moral obligation to those closer to you, or to those responsible for your success in some way.
Within the animal welfare space, there’s discussion around whether continued involvement with EA and its principles is a good thing, for similar reasons (lacking diversity, too top-down, too utilitarian) - these voices largely come from the more left-leaning / social justice inclined (e.g. placing value on intersectionality). Accordingly, some within the farmed animal advocacy space also think involvement with EA is contributing to a splintering within the FAAM movement. I’m not sure why this seems more relevant in FAAM than in GHW, but some possibilities are that EA funders are a more important player in the animal space than in the GHD space, and that FAAM members are generally more left-leaning and see a larger divide between social justice approaches and EA’s utilitarian EV-maximising approaches. Some conclusions are seen as unintuitive, e.g. shrimp welfare (“wait, you guys actually care about shrimp?”) or wild animal suffering.
Longtermism is subject to critiques from those uncomfortable with valuing the future at the cost of people today, with valuing artificial sentience more than humans, with a perceived reliance on EV-maximising views, with “tech bros” valuing science fiction ideas over real suffering and justifying such spending as “saving the world”, and with the extent to which the future being preserved actually involves all of humanity, rather than just a version of humanity that a limited subculture cares about. Unintuitive conclusions may involve anything from thinking about the future more than 1,000 years from now, to futures outside of the solar system, to artificial sentience. The general critiques around diversity and a lean towards tech fixes instead of systemic approaches are perhaps most pronounced in longtermism, perhaps in part due to AI safety being a large focus of longtermism, and in part due to associations with the Bay Area. The general critiques around utilitarianism are perhaps also most pronounced in longtermism, and the media attention around WWOTF probably made more people engage with longtermism and its critiques. On recent EA involvement in politics (one pushback against the charge of favouring tech fixes over systemic approaches), the Flynn campaign was seen by some left-leaning outsiders as a negative update on EA’s ability to engage in this space.
Outside of cause area considerations, some people get the impression that EA leans young and unprofessional, is too skeptical to defer to existing expertise / too eager to defer to a smart generalist to first-principles their way through a complicated problem, is a community that is too closely knit and subject to nepotism, or unfairly favours “EA insiders” or “EA alignment”. Other people think EA is too fervent with outreach, and consider university and high school messaging, or even 80,000 Hours, akin to cult recruitment. In a similar vein, some think that the EA movement is too morally demanding, which may lead to burnout, or that it insufficiently values individuals’ flourishing. Some others think that EA lacks direction, isn’t well steered, or has an inconsistent theory of change.