Thanks for sharing this! I'm not too optimistic, though, as the editors' introduction on the OUP blog doesn't inspire confidence. E.g., they write:
"To step inside the utilitarian frame is to accept that values that count as 'good' can be abstractly quantified. Its methods leave it incapable of addressing historically sedimented structural injustices and intergenerational injuries, since these aren't the sorts of things that can be quantified by EA-style metrics."
This seems conceptually confused. Any kind of injustice or injury could, in principle, be associated with an estimated welfare cost. (Unless it was literally harmless, but that's surely not what they intend.)
I know there have been past methodological critiques of the particular vein of classic (GiveWell-style) EA that was addressed to aid skeptics and focused on just the most robustly evidenced global health interventions. But obviously there's nothing in utilitarianism (or EA more broadly) that rules out making use of more speculative evidence and expected-value reasoning.
It sounds to me like their real complaint is something like: How dare EA/utilitarianism prioritize other things over my pet causes, just because there's no reason to think that my pet causes are optimal?
E.g.: "To grasp how disastrously an apparently altruistic movement has run off course, consider that… covering the costs of caring for survivors of industrial animal farming in sanctuaries is seen as a bad use of funds."
Note that they don't even attempt to offer reasons for thinking that animal sanctuaries are a better use of funds than existing EA priorities. Indeed, they don't seem to acknowledge the reality of tradeoffs at all. It's just supposed to be obvious that refusing funding to them and their allies is "grievous harm".
Hopefully some of the papers in the volume will offer some actual arguments that are worth engaging with.
This comment reads to me as unnecessarily adversarial and as a strawman of the authors' position.
"It sounds to me like their real complaint is something like: How dare EA/utilitarianism prioritize other things over my pet causes, just because there's no reason to think that my pet causes are optimal?"
I think a more likely explanation of the authors' position includes cruxes like:
- disagreeing with the assumption of maximization (and underlying assumptions about the aggregation of utility), such that arguments about optimality are not relevant
- moral partiality, e.g. a view that people have special obligations towards those in their local community
- weighting (in)justice much more strongly than the average EA, such that the correction of (certain) historical wrongs is a very high priority
- disagreements about the (non-consequentialist) badness of e.g. foreign philanthropic interventions
Your description of their position may very well be compatible with mine; they do write with a somewhat disparaging tone, and I expect to strongly disagree with many of the book's arguments (including for some of the reasons you point out). However, it doesn't feel like you're engaging with their position in good faith.
Additionally, EA comprises a lot of nuanced ideas (e.g. distinguishing "classic (GiveWell-style) EA" from other strains of EA), and there isn't a canonical description of those ideas (though the EA Handbook does a decent job). While they might be obvious to community members, many of those nuances, counterarguments to naive objections, etc. aren't in easy-to-find descriptions of EA. While in an ideal world all critics would pass their subjects' ITT (Ideological Turing Test), I'm wary of creating too high a bar for how much people need to understand EA ideas before they feel able to criticize them.
I'm responding to academic work by (at least some) professional academics, published by the top academic press. The appropriate norms for professional academic criticism are not the same as for (say) making a newcomer feel welcome on the forum. It is (IMO) absolutely appropriate to clearly state one's opinion when academic work is of low quality, and to explain why, as I did in my comment.
You're certainly welcome to form a different opinion of their work. But you shouldn't accuse me of "bad faith" just because I assessed their work more negatively than you do. It's my honest opinion, and I offered supporting reasons for it.
IMO, this would be a worse forum if people weren't allowed to clearly express their honest opinion of shoddy academic work, including (when textually supported) reasons for thinking that their targets were engaging in motivated reasoning.
Finally, I should clarify that I was not addressing the question of whether someone could construct a valuable steelman of the authors' positions. Many have offered critiques along the lines you suggest, and you could certainly attribute those to the authors to make them sound more reasonable. But in that case you might as well skip this text and go straight to the critiques that have been better expressed elsewhere. What I was assessing was the value of this particular text. And, as I said, what I've seen so far strikes me as low quality. Hopefully some of the included essays by other authors are better.
Just to be clear: you are assessing the quality of the text based on the one-page editors' introduction and what you believe the authors will write, without having actually read it?
I'm assessing the text that's currently available, yes. I think my original comment was perfectly clear on that. I hope the book itself is better than the editors' introduction would indicate, but it's not unreasonable to assess what they've shared so far.
The comment I replied to sounds like you're critiquing the main academic work rather than a description of it, so I wanted to check whether you had read an advance copy or something.
"How dare EA/utilitarianism prioritize other things"
"I think a more likely explanation of the authors' position includes cruxes like: ..."
Speaking generally, it does seem like EA critics often equivocate between these two positions: for example, saying EA is bad for diverting money from soup kitchens to bednets, while not being willing to say that money should be diverted the other way. IMO, a focus on philosophical issues like utilitarianism can compound the equivocation by implying more specific disagreements without really defending them.
(I don't have any opinions about this book in particular.)