Thanks for writing this—I was planning to, but as usual my Forum post reach exceeded my grasp. I also just found it to be a bad piece, tbh, and DeBoer to be quite nasty (see Freddie's direct "response", for example).
But what I find most astounding about this, and about the wave of EA critiques in the wider blogosphere (including big media) over the last few months, is how often they make big claims that seem obviously false, or provide no evidence to back up their sweeping assertions. Take this example:
If you’d like a more widely-held EA belief that amounts to angels dancing on the head of a pin, you could consider effective altruism’s turn to an obsessive focus on “longtermism,” in theory an embrace of future lives over present ones and in practice a fixation on the potential dangers of apocalyptic artificial intelligence.
Like, this paragraph isn't argued for. It just asserts, without evidence, that EA focuses on longtermism, that this focus is obsessive, that in theory it leads one to embrace future lives over present ones, and that in practice it leads to a fixation on AI. Even if you think he's right, you have to provide some goddamn evidence.
Then later:
Still, utilitarianism has always been subject to simple hypotheticals that demonstrate its moral failure. Utilitarianism insists...
What follows, I’m afraid, is not a thorough literature review of Meta/Normative/Applied Ethics, or intellectual histories of arguments for Utilitarianism, or as you (and Richard Chappell) point out the difference between Utilitarianism, Consequentialism, and Beneficentrism.
Or later:
I will, however, continue to oppose the tendentious insistence that any charitable dollars for the arts, culture, and beauty are misspent.
I have no idea what claim this is responding to, or why it's supposed to be representative of EA.
All in all, Freddie seems to mix simply false empirical claims (that in practice EAs mostly do X, or that trying to do good impartially is super common behaviour for everyone around the world) with bare appeals to his own moral intuition (this set of stuff Y that EA does is good, but you don't need EA for it, and all this other stuff is just self-evidently ridiculous and wrong).
Funnily enough, it's an example of EA criticism being a shell game, rather than EA. In the comments on DeBoer's article and in Scott's reply, people are having tons of arguments about EA, but very few are litigating the merits of DeBoer's actual claims. It reminds me of the inconsistency between Silicon Valley e/acc critiques of EA and the more academic/leftist critiques: the former calls us communists and the latter useful idiots of the exploitative capitalist class. We can't be both!
Anyway, just wanted to add some ammunition to your already good post. Keep pushing back on rubbish critiques like this where you can, Omnizoid—I'll back you up where I can.