At a conceptual level, I think it’s worth clarifying that “impartiality” and “impartial altruism” do not imply consequentialism. For example, the following passages seem to use these terms as though they must imply consequentialism. [Edit: Rather, these passages seem to use the terms as though “impartiality” and the like must be focused on consequences.]
impartiality entails that we account for all moral patients, and all the most significant impacts we could have on them. …
Perhaps it’s simply indeterminate whether any act has better expected consequences than the alternatives. If so, impartial altruism would lose action-guiding force — not because of an exact balance among all strategies, but because of widespread indeterminacy.
Yet there are forms of impartiality and impartial altruism that are not consequentialist in nature [or focused on consequences]. For example, one can be a deontologist who applies the same principles impartially toward everyone (e.g. be an impartial judge in court, treat everyone you meet with the same respect and standards). Such impartiality does not require us to account for all the future impacts we could have on all beings. Likewise, one can be impartially altruistic in a distributive sense — e.g. distributing a given resource equally among reachable recipients — which again does not entail that we account for all future impacts.
I don’t think this is merely a conceptual point. For example, most academic philosophers, including academic moral philosophers, are not consequentialists, and I believe many of them would disagree strongly with the claim that impartiality and impartial altruism imply consequentialism.[1] Similarly, while most people responding to the EA survey of 2019 leaned toward consequentialism, it seems that around 20 percent of them leaned toward non-consequentialism, and presumably many of them would also disagree with the above-mentioned claim.
Furthermore, as hinted in another comment, I think this point matters because it seems implied a number of times in this sequence that if we can’t ground altruism in very strong forms of consequentialist impartiality, then we have no reason for being altruists and impartial altruism cannot guide us (e.g. “if my arguments hold up, our reason to work on EA causes is undermined”; “impartial altruism would lose action-guiding force”). Those claims seem to assume that all the alternatives are wholly implausible (including consequentialist views that involve weaker or time-adjusted forms of impartiality). But that would be a very strong claim.
They’d probably also take issue with defining an “impartial perspective” as one that is consequentialist: “one that gives moral weight to all consequences, no matter how distant”. That seems to define away other kinds of impartial perspectives.
For example, the following passages seem to use these terms as though they must imply consequentialism
I don’t understand why you think this, sorry. “Accounting for all our most significant impacts on all moral patients” doesn’t imply consequentialism. Indeed, I’ve deliberately avoided saying unawareness is a problem for “consequentialists”, precisely because non-consequentialists can still take net consequences across the cosmos to be the reason for their preferred intervention. My target audience practically never appeals to distributive impartiality, or impartial application of deontological principles, when justifying EA interventions (and I would be surprised if many people would use the word “altruism” for either of those things). I suppose I could have said “impartial beneficence”, but that’s not as standard.
Those claims seem to assume that all the alternatives are wholly implausible (including consequentialist views that involve weaker or time-adjusted forms of impartiality). But that would be a very strong claim.
Can you say more why you think it’s very strong? It’s standard within EA to dismiss (e.g.) pure time discounting as deeply morally implausible/arbitrary, and I concur with that near-consensus.[1] (Even if we do allow for views like this, we face the problem that different discount rates will often give opposite verdicts, and it’s arbitrary how much meta-normative weight we put on each discount rate.) And I don’t expect a sizable fraction of my target audience to appeal to the views you mention as the reasons why they work on EA causes. If you think otherwise, I’d be curious to see pointers to evidence of this.
Some EAs are sympathetic to discounting in ways that are meant to avoid infinite ethics problems. But I explained in footnote 4 that such views are also vulnerable to cluelessness.
To clarify, what I object to here is not a claim like “very strong consequence-focused impartiality is most plausible all things considered”, or “alternative views also have serious problems”. What I push back against is what I see as an implied brittleness of the general project of effective altruism (broadly construed), along the lines of “it’s either very strong consequence-focused impartiality or total bust” when it comes to working on EA causes/pursuing impartial altruism in some form.
On the first point, you’re right, I should have phrased this differently: it’s not that those passages imply that impartiality entails consequentialism (“an act is right iff it brings about the best consequences”). What I should have said is that they seem to imply that impartiality at a minimum entails strong forms of consequence-focused impartiality, i.e. the impartiality component of (certain forms of) consequentialism (“impartiality entails that we account for all moral patients, and all the most significant impacts”). My point was that that’s not the case; there are forms of impartiality that don’t entail this, including both weaker consequence-focused notions of impartiality and more rule-based notions of impartiality, and these can be relevant to, and potentially help guide, ethics in general and altruism in particular.
Can you say more why you think it’s very strong?
I think it’s an extremely strong claim for two reasons. First, there’s a broad set of alternative views, other than very strong forms of consequence-focused impartiality that require us to account for ~all consequences till the end of time, that could potentially justify varieties of impartial altruism and work on EA causes. Second, the claim isn’t just that all those alternative views are somewhat implausible, but that they are all wholly implausible (as seems implied by their exclusion and dismissal in passages like “impartial altruism would lose action-guiding force”).
One could perhaps make a strong case for that claim, and maybe most readers on the EA Forum endorse that strong claim. But I think it’s an extremely strong claim nevertheless.