In this paper, I’ve argued that there are no good intellectual critiques of effective altruist principles. We should all agree that the latter are straightforwardly correct. But it’s always possible that true claims might be used to ill effect in the world. Many objections to effective altruism, such as the charge that it provides “moral cover” to the wealthy, may best be understood in these political terms.
I don’t think philosophers have any special expertise in adjudicating such empirical disagreements, so I will not attempt to do so here. I’ll just note two general reasons for being wary of such politicized objections to moral claims.
First, I think we should have a strong default presumption in favour of truth and transparency. While it’s always conceivable that esotericism or “noble lies” could be justified, we should generally be very skeptical that lying about morality would actually be for the best. In this particular case, it seems especially implausible that discouraging people from trying to do good effectively is a good idea. I can’t rule it out—it’s a logical possibility—but it sure would be surprising. So there’s a high bar for allowing political judgments to override intellectual ones.
This is pretty uncharitable. Someone somewhere has probably sincerely argued for claiming that helping people is bad, on the grounds that doing so helps people, but “political” critics of EA are critics of EA, the particular subculture/professional network/cluster of organizations that exists right now, not “EA principles”. This is somewhat obscured by the fact that the loudest and best-networked ones come from “low decoupling” intellectual cultures, and often don’t take talk of principles qua principles seriously enough to bother indicating that they’re talking about something else—but it’s not obscure to them, and they’re not going to give you any partial credit here.
Critics like Srinivasan, Crary, etc., pretty explicitly combine a political stance with criticism of EA’s “utilitarian” foundations, so I’m not sure what’s uncharitable about this? If they said something like, “EA has great principles, but we think the current orgs aren’t doing a great job of implementing their own principles”, that would be very different from what they actually say! (It would also mean I didn’t need to address them in this paper, since I’m purely concerned with evaluating EA principles, not orgs etc.)
But I guess it wouldn’t hurt to flag the point that one could think current EA orgs are messing up in various ways while agreeing with the broader principles and wishing well for future iterations of EA (ones that better achieve their stated goals). Are there any other specific changes to my paper that you’d recommend here?
Critics like Srinivasan, Crary, etc., pretty explicitly combine a political stance with criticism of EA’s “utilitarian” foundations
Yes, they’re hostile to utilitarianism and to some extent agent-neutrality in general, but the account of “EA principles” you give earlier in the paper is much broader.
Effective altruism is sometimes confused with utilitarianism. It shares with utilitarianism the innocuous claim that, all else equal, it’s better to do more good than less. But EA does not entail utilitarianism’s more controversial claims. It does not entail hegemonic impartial maximizing: the EA project may just be one among many in a well-rounded life …
I’ve elsewhere described the underlying philosophy of effective altruism as “beneficentrism”—or “utilitarianism minus the controversial bits”—that is, “the view that promoting the general welfare is deeply important, and should be amongst one’s central life projects.” Utilitarians take beneficentrism to be exhaustive of fundamental morality, but others may freely combine it with other virtues, prerogatives, or pro tanto duties.
Critics like Crary and Srinivasan (and this particular virtue-ethicsy line of critique should not be conflated with “political” critique in general) are not interested in discussing “EA principles” in this sense. When they say something like “I object to EA principles” they’re objecting to what they judge to be the actual principles animating EA discourse, not the ones the community “officially” endorses.
They might be wrong about what those principles are—personally my impression is that EA is very strongly committed to consequentialism but in practice not all that utilitarian—but it’s an at least partially empirical question, not something that can be resolved in the abstract.
Haven’t read the draft, just this comment thread, but it seems to me the quoted section is somewhat unclear and that clearing it up might reduce the commenter’s concerns.
You write here about interpreting some objections so that they become “empirical disagreements”. But I don’t see you saying exactly what the disagreement is. The claim explicitly stated is that “true claims might be used to ill effect in the world”—but that’s obviously not something you (or EAs generally) disagree with.
Then you suggest that people on the anti-EA side of the disagreement are “discouraging people from trying to do good effectively.” That may be a true description of their behavior, but it can also be read as including seemingly evil things they wouldn’t actually do (like opposing whatever political reforms they actually support, on the basis that those reforms would help people too well). That’s presumably a misinterpretation of what you’ve written, but the interpretation is facilitated by the fact that the disagreement at hand hasn’t been explicitly articulated.