Critics like Srinivasan and Crary pretty explicitly combine a political stance with criticism of EA’s “utilitarian” foundations.
Yes, they’re hostile to utilitarianism and to some extent agent-neutrality in general, but the account of “EA principles” you give earlier in the paper is much broader.
Effective altruism is sometimes confused with utilitarianism. It shares with utilitarianism the innocuous claim that, all else equal, it’s better to do more good than less. But EA does not entail utilitarianism’s more controversial claims. It does not entail hegemonic impartial maximizing: the EA project may just be one among many in a well-rounded life …
I’ve elsewhere described the underlying philosophy of effective altruism as “beneficentrism”—or “utilitarianism minus the controversial bits”—that is, “the view that promoting the general welfare is deeply important, and should be amongst one’s central life projects.” Utilitarians take beneficentrism to be exhaustive of fundamental morality, but others may freely combine it with other virtues, prerogatives, or pro tanto duties.
Critics like Crary and Srinivasan (and this particular virtue-ethicsy line of critique should not be conflated with “political” critique in general) are not interested in discussing “EA principles” in this sense. When they say something like “I object to EA principles” they’re objecting to what they judge to be the actual principles animating EA discourse, not the ones the community “officially” endorses.
They might be wrong about what those principles are—personally my impression is that EA is very strongly committed to consequentialism but in practice not all that utilitarian—but it’s an at least partially empirical question, not something that can be resolved in the abstract.