Thanks for writing this; I was planning to, but as usual my Forum post reach exceeded my grasp. I also just found it to be a bad piece, to be honest, and DeBoer to be quite nasty (see Freddie's direct "response" for example).
But what I find most astounding about this, and about the wave of EA critiques in the wider blogosphere (including big media) over the last few months, is how often they make big claims that seem obviously false, or provide no evidence to back up their sweeping assertions. Take this example:
If you'd like a more widely-held EA belief that amounts to angels dancing on the head of a pin, you could consider effective altruism's turn to an obsessive focus on "longtermism," in theory an embrace of future lives over present ones and in practice a fixation on the potential dangers of apocalyptic artificial intelligence.
This paragraph isn't argued for. It simply asserts, without evidence, that EA focuses on longtermism, that this focus is obsessive, that in theory it leads one to embrace future lives over present ones, and that in practice it leads to a fixation on AI. Even if you think he's right, you have to provide some goddamn evidence.
Then later:
Still, utilitarianism has always been subject to simple hypotheticals that demonstrate its moral failure. Utilitarianism insists...
What follows, I'm afraid, is not a thorough literature review of Meta/Normative/Applied Ethics, or an intellectual history of arguments for Utilitarianism, or, as you (and Richard Chappell) point out, any acknowledgement of the difference between Utilitarianism, Consequentialism, and Beneficentrism.
Or later:
I will, however, continue to oppose the tendentious insistence that any charitable dollars for the arts, culture, and beauty are misspent.
I have no idea what claim this is responding to, or why it's supposed to be representative of EA.
All in all, Freddie seems to mix simply false empirical claims (that in practice EAs mostly do X, or that trying to do good impartially is already super common behaviour for everyone around the world) with appeals to his own moral intuition (this set of stuff Y that EA does is good, but you don't need EA for it, and all this other stuff is just self-evidently ridiculous and wrong).
Funnily enough, it's an example of EA criticism being a shell game, rather than EA itself. In the comments on DeBoer's article and in Scott's reply, people are having tons of arguments about EA, but very few are litigating the merits of DeBoer's actual claims. It reminds me of the inconsistency between Silicon Valley e/acc critiques of EA and the more academic/leftist critiques: the former calls us communists, the latter useful idiots of the exploitative capitalist class. We can't be both!
Anyway, I just wanted to add some ammunition to your already good post. Keep pushing back on rubbish critiques like this where you can, Omnizoid; I'll back you up where I can.