Critics like Srinivasan, Crary, etc., pretty explicitly combine a political stance with criticism of EA’s “utilitarian” foundations, so I’m not sure what’s uncharitable about this? If they said something like, “EA has great principles, but we think the current orgs aren’t doing a great job of implementing their own principles”, that would be very different from what they actually say! (It would also mean I didn’t need to address them in this paper, since I’m purely concerned with evaluating EA principles, not orgs etc.)
But I guess it wouldn’t hurt to flag the broader point that one could think current EA orgs are messing up in various ways while agreeing with the broader principles and wishing well for future iterations of EA (that better achieve their stated goals). Are there any other specific changes to my paper that you’d recommend here?
Critics like Srinivasan, Crary, etc., pretty explicitly combine a political stance with criticism of EA’s “utilitarian” foundations
Yes, they’re hostile to utilitarianism and to some extent agent-neutrality in general, but the account of “EA principles” you give earlier in the paper is much broader.
Effective altruism is sometimes confused with utilitarianism. It shares with utilitarianism the innocuous claim that, all else equal, it’s better to do more good than less. But EA does not entail utilitarianism’s more controversial claims. It does not entail hegemonic impartial maximizing: the EA project may just be one among many in a well-rounded life …
I’ve elsewhere described the underlying philosophy of effective altruism as “beneficentrism”—or “utilitarianism minus the controversial bits”—that is, “the view that promoting the general welfare is deeply important, and should be amongst one’s central life projects.” Utilitarians take beneficentrism to be exhaustive of fundamental morality, but others may freely combine it with other virtues, prerogatives, or pro tanto duties.
Critics like Crary and Srinivasan (and this particular virtue-ethicsy line of critique should not be conflated with “political” critique in general) are not interested in discussing “EA principles” in this sense. When they say something like “I object to EA principles” they’re objecting to what they judge to be the actual principles animating EA discourse, not the ones the community “officially” endorses.
They might be wrong about what those principles are—personally my impression is that EA is very strongly committed to consequentialism but in practice not all that utilitarian—but it’s an at least partially empirical question, not something that can be resolved in the abstract.
Haven’t read the draft, just this comment thread, but it seems to me the quoted section is somewhat unclear and that clearing it up might reduce the commenter’s concerns.
You write here about interpreting some objections so that they become “empirical disagreements”. But I don’t see you saying exactly what the disagreement is. The claim explicitly stated is that “true claims might be used to ill effect in the world”—but that’s obviously not something you (or EAs generally) disagree with.
Then you suggest that people on the anti-EA side of the disagreement are “discouraging people from trying to do good effectively.” That may be a true description of their behavior, but it could also be read as attributing to them seemingly evil positions they wouldn’t actually take (like opposing whatever political reforms they in fact support, on the grounds that those reforms would help people too effectively). That’s presumably a misreading of what you’ve written, but it’s a misreading made possible by the fact that the disagreement at hand hasn’t been explicitly articulated.