For example, the following passages seem to use these terms as though they must imply consequentialism:
I don't understand why you think this, sorry. "Accounting for all our most significant impacts on all moral patients" doesn't imply consequentialism. Indeed, I've deliberately avoided saying unawareness is a problem for "consequentialists", precisely because non-consequentialists can still take net consequences across the cosmos to be the reason for their preferred intervention. My target audience practically never appeals to distributive impartiality, or impartial application of deontological principles, when justifying EA interventions (and I would be surprised if many people would use the word "altruism" for either of those things). I suppose I could have said "impartial beneficence", but that's not as standard.
Those claims seem to assume that all the alternatives are wholly implausible (including consequentialist views that involve weaker or time-adjusted forms of impartiality). But that would be a very strong claim.
Can you say more about why you think it's very strong? It's standard within EA to dismiss (e.g.) pure time discounting as deeply morally implausible/arbitrary, and I concur with that near-consensus.[1] (Even if we do allow for views like this, we face the problem that different discount rates will often give opposite verdicts, and it's arbitrary how much meta-normative weight we put on each discount rate.) And I don't expect a sizable fraction of my target audience to appeal to the views you mention as the reasons why they work on EA causes. If you think otherwise, I'm curious for pointers to evidence of this.
Some EAs are sympathetic to discounting in ways that are meant to avoid infinite ethics problems. But I explained in footnote 4 that such views are also vulnerable to cluelessness.
To clarify, what I object to here is not a claim like "very strong consequence-focused impartiality is most plausible all things considered", or "alternative views also have serious problems". What I push back against is what I see as an implied brittleness of the general project of effective altruism (broadly construed), along the lines of "it's either very strong consequence-focused impartiality or total bust" when it comes to working on EA causes/pursuing impartial altruism in some form.
On the first point, you're right, I should have phrased this differently: it's not that those passages imply that impartiality entails consequentialism ("an act is right iff it brings about the best consequences"). What I should have said is that they seem to imply that impartiality at a minimum entails strong forms of consequence-focused impartiality, i.e. the impartiality component of (certain forms of) consequentialism ("impartiality entails that we account for all moral patients, and all the most significant impacts"). My point was that that's not the case; there are forms of impartiality that don't: both weaker consequence-focused notions of impartiality and more rule-based notions of impartiality (etc.), and these can be relevant to, and potentially help guide, ethics in general and altruism in particular.
Can you say more about why you think it's very strong?
I think it's an extremely strong claim both because there's a broad set of alternative views that could potentially justify varieties of impartial altruism and work on EA causes, other than very strong forms of consequence-focused impartiality that require us to account for ~all consequences till the end of time. And the claim isn't just that all those alternative views are somewhat implausible, but that they are all wholly implausible (as seems implied by their exclusion and dismissal in passages like "impartial altruism would lose action-guiding force").
One could perhaps make a strong case for that claim, and maybe most readers on the EA Forum endorse that strong claim. But I think it's an extremely strong claim nevertheless.