For example, the following passages seem to use these terms as though they must imply consequentialism.
I don’t understand why you think this, sorry. “Accounting for all our most significant impacts on all moral patients” doesn’t imply consequentialism. Indeed I’ve deliberately avoided saying unawareness is a problem for “consequentialists”, precisely because non-consequentialists can still take net consequences across the cosmos to be the reason for their preferred intervention. My target audience practically never appeals to distributive impartiality, or impartial application of deontological principles, when justifying EA interventions (and I would be surprised if many people would use the word “altruism” for either of those things). I suppose I could have said “impartial beneficence”, but that’s not as standard.
Those claims seem to assume that all the alternatives are wholly implausible (including consequentialist views that involve weaker or time-adjusted forms of impartiality). But that would be a very strong claim.
Can you say more why you think it’s very strong? It’s standard within EA to dismiss (e.g.) pure time discounting as deeply morally implausible/arbitrary, and I concur with that near-consensus.[1] (Even if we do allow for views like this, we face the problem that different discount rates will often give opposite verdicts, and it’s arbitrary how much meta-normative weight we put on each discount rate.) And I don’t expect a sizable fraction of my target audience to appeal to the views you mention as the reasons why they work on EA causes. If you think otherwise, I’d be curious for pointers to evidence of this.
Some EAs are sympathetic to discounting in ways that are meant to avoid infinite ethics problems. But I explained in footnote 4 that such views are also vulnerable to cluelessness.
To clarify, what I object to here is not a claim like “very strong consequence-focused impartiality is most plausible all things considered”, or “alternative views also have serious problems”. What I push back against is what I see as an implied brittleness of the general project of effective altruism (broadly construed), along the lines of “it’s either very strong consequence-focused impartiality or total bust” when it comes to working on EA causes/pursuing impartial altruism in some form.
On the first point, you’re right, I should have phrased this differently: it’s not that those passages imply that impartiality entails consequentialism (“an act is right iff it brings about the best consequences”). What I should have said is that they seem to imply that impartiality at a minimum entails strong forms of consequence-focused impartiality, i.e. the impartiality component of (certain forms of) consequentialism (“impartiality entails that we account for all moral patients, and all the most significant impacts”). My point was that that’s not the case: there are forms of impartiality that don’t entail this, both weaker consequence-focused notions of impartiality and more rule-based notions of impartiality (etc.), and these can be relevant to, and potentially help guide, ethics in general and altruism in particular.
Can you say more why you think it’s very strong?
I think it’s an extremely strong claim for two reasons. First, there’s a broad set of alternative views that could potentially justify varieties of impartial altruism and work on EA causes, other than very strong forms of consequence-focused impartiality that require us to account for ~all consequences till the end of time. Second, the claim isn’t just that all those alternative views are somewhat implausible, but that they are all wholly implausible (as seems implied by their exclusion and dismissal in passages like “impartial altruism would lose action-guiding force”).
One could perhaps make a good case for that claim, and maybe most readers on the EA Forum endorse it. But it’s an extremely strong claim nevertheless.