Why would sentient beings’ interests matter less intrinsically when those beings are more distant or harder to precisely foresee?
I agree with that sentiment :) But I don’t think one is committed to saying that distant beings’ interests matter less intrinsically if one “practically cares/focuses” disproportionately on beings who are in some sense closer to us (e.g. as a kind of mid-level normative principle or stance). The latter view might simply reflect the fact that we inhabit a particular place in time and space, and that we can plausibly better help beings in our vicinity (e.g. the next few thousand years) than those who might exist very far away (e.g. beyond a trillion years from now), without there being any sharp cut-off in our potential to help them.
FWIW, I don’t think it’s ad hoc or unmotivated. As an extreme example, one might consider a planet with sentient life that theoretically lies just inside our future light cone from time t_now, such that if we travelled out there today at the theoretical maximum speed, then we, or meaningful signals, could reach them just before cosmic expansion makes any further reach impossible. In theory, we could influence them, and in some sense merely wagging a finger right now has a theoretical influence on them. Yet it nevertheless seems to me quite defensible to practically disregard (or near-totally disregard, à la asymptotic discount) these effects given how remote they are (assuming a CDT framework).
Perhaps such a position can be viewed through the lens of an “applicability domain”: to a first approximation, the ideal of total impartiality is plausibly “practically morally applicable” across all of Earth and on, and somewhat beyond, our usual timescales. And we are right to strongly endorse it at this unusually large scale (i.e. unusual relative to prevailing values). But it also seems plausible that its applicability gradually breaks down as we approach extreme values.
Indeed, bracketing off “infinite ethics shenanigans” could be seen as an implicit acknowledgment of such a de-facto breakdown or boundary in the practical scope of impartiality. After all, there is a non-zero probability of an infinite future with sentient life, even if that’s not what our current cosmological models suggest (cf. Schwitzgebel’s Washout Argument Against Longtermism). Thus, if we prevent infinite outcomes from dominating everything, we have already set some kind of practical boundary (even if it’s a boundary of asymptotic convergence toward zero across an in-theory infinite scope). If so, the question is to clarify the nature and scope of that practical boundary, not whether there is one.
One might then say that infinite ethics considerations indeed count as an additional, perhaps even devastating, challenge to any form of impartial altruism. But in that case, the core objection reduces to a fairly familiar objection about problems with infinities. If we instead assume that infinities can be set aside or practically limited, then it seems we have already, de facto, assumed some practical boundary.
In theory, we could influence them, and in some sense merely wagging a finger right now has a theoretical influence on them. Yet it nevertheless seems to me quite defensible to practically disregard (or near-totally disregard, à la asymptotic discount) these effects given how remote they are
Sorry, I’m having a hard time understanding why you think this is defensible. One view you might be gesturing at is:
1. If a given effect is not too remote, then we can model actions A and B’s causal connections to that effect with relatively high precision — enough to justify the claim that A is more/less likely to result in the effect than B.
2. If the effect is highly remote, we can’t do this. (Or, alternatively, we should treat A and B as precisely equally likely to result in the effect.)
3. Therefore, we can only systematically make a difference to effects of type (1). So only those effects are practically relevant.
But this reasoning doesn’t seem to hold up for the same reasons I’ve given in my critiques of Option 3 and Symmetry. So I’m not sure what your actual view is yet. Can you please clarify? (Or, if the above is your view, I can try to unpack why my critiques of Option 3 and Symmetry apply just as well here.)
(I unfortunately don’t have time to engage with the rest of this comment, just want to clarify the following:)
Indeed, bracketing off “infinite ethics shenanigans” could be seen as an implicit acknowledgment of such a de-facto breakdown or boundary in the practical scope of impartiality.
Sorry this wasn’t clear — I in fact don’t think we’re justified in ignoring infinite ethics. In the footnote you’re quoting, I was simply erring on the side of being generous to the non-clueless view, to make things easier to follow. So my core objection doesn’t reduce to “problems with infinities”; rather, I object to ignoring considerations that dominate our impact for no particular reason other than practical expedience. :) (ETA: Which isn’t to say we need to solve infinite ethics to be justified in anything.)
I was simply erring on the side of being generous to the non-clueless view
Right, I suspected that — hence the remark about infinite ethics considerations counting as an additional problem, beyond what’s addressed here. My point was that the non-clueless view addressed here (the finite case) already implicitly entails scope limitations, so if one embraces that view, the question seems to be what the limitation (or discounting) in scope is, not whether there is one.