Critics like Srinivasan, Crary, etc., pretty explicitly combine a political stance with criticism of EA's "utilitarian" foundations, so I'm not sure what's uncharitable about this? If they said something like, "EA has great principles, but we think the current orgs aren't doing a great job of implementing their own principles", that would be very different from what they actually say! (It would also mean I didn't need to address them in this paper, since I'm purely concerned with evaluating EA principles, not orgs etc.)
But I guess it wouldn't hurt to flag the broader point that one could think current EA orgs are messing up in various ways while agreeing with the broader principles and wishing well for future iterations of EA (that better achieve their stated goals). Are there any other specific changes to my paper that you'd recommend here?
Critics like Srinivasan, Crary, etc., pretty explicitly combine a political stance with criticism of EA's "utilitarian" foundations
Yes, they're hostile to utilitarianism and to some extent agent-neutrality in general, but the account of "EA principles" you give earlier in the paper is much broader.
Effective altruism is sometimes confused with utilitarianism. It shares with utilitarianism the innocuous claim that, all else equal, it's better to do more good than less. But EA does not entail utilitarianism's more controversial claims. It does not entail hegemonic impartial maximizing: the EA project may just be one among many in a well-rounded life ...
I've elsewhere described the underlying philosophy of effective altruism as "beneficentrism", or "utilitarianism minus the controversial bits", that is, "the view that promoting the general welfare is deeply important, and should be amongst one's central life projects." Utilitarians take beneficentrism to be exhaustive of fundamental morality, but others may freely combine it with other virtues, prerogatives, or pro tanto duties.
Critics like Crary and Srinivasan (and this particular virtue-ethicsy line of critique should not be conflated with "political" critique in general) are not interested in discussing "EA principles" in this sense. When they say something like "I object to EA principles" they're objecting to what they judge to be the actual principles animating EA discourse, not the ones the community "officially" endorses.
They might be wrong about what those principles are (personally, my impression is that EA is very strongly committed to consequentialism but in practice not all that utilitarian), but it's an at least partially empirical question, not something that can be resolved in the abstract.
Haven't read the draft, just this comment thread, but it seems to me the quoted section is somewhat unclear and that clearing it up might reduce the commenter's concerns.
You write here about interpreting some objections so that they become "empirical disagreements". But I don't see you saying exactly what the disagreement is. The claim explicitly stated is that "true claims might be used to ill effect in the world", but that's obviously not something you (or EAs generally) disagree with.
Then you suggest that people on the anti-EA side of the disagreement are "discouraging people from trying to do good effectively," which may be a true description of their behavior, but may also be interpreted to include seemingly evil things that they wouldn't actually do (like opposing whatever political reforms they actually support, on the basis that they would help people too well). That's presumably a misinterpretation of what you've written, but that interpretation is facilitated by the fact that the disagreement at hand hasn't been explicitly articulated.