cluelessness about some effects (like those in the far future) doesn’t override the obligations given to us by the benefits we’re not clueless about, such as the immediate benefits of our donations to the global poor
What makes you think that? Are you embracing a non-consequentialist or non-impartial view to come to that conclusion? Or do you think it’s justified under impartial consequentialism?
I have mixed feelings about this. So, there are basically two reasons why bracketing isn’t orthodox impartial consequentialism:
1. My choice between A and B isn’t exactly determined by whether I think A is “better” than B. See Jesse’s discussion in this part of the appendix.
2. Even if we could interpret bracketing as a betterness ranking, the notion of “betterness” here requires assigning a weight of zero to consequences that I don’t think are precisely equally good under A vs. B.
I do think both of these are reasons to give bracketing less weight in my decision-making than I give to standard non-consequentialist views.[1]
However:
1. It’s still clearly consequentialist in the sense that we’re making our choice based only on the consequences, and in a scope-sensitive manner. I don’t think standard non-consequentialist views get you the conclusion that you should donate to AMF rather than MAWF, unless they’re defined such that they suffer from cluelessness too.
2. There’s an impartial reason why we “ignore” the consequences at some locations of value in our decision-making: namely, that those consequences don’t favor one action over the other. (I think the same is true if we don’t use the “locations of value” framework, but instead something more like what Jesse sketches here, though that’s harder to make precise.)
E.g., compare (i) “A reduces x more units of disutility than B within the maximal bracket-set I′, but I’m clueless about A vs. B when looking outside the maximal bracket-set” with (ii) “A reduces x more units of disutility than B within I′, and A and B are equally good in expectation when looking outside the maximal bracket-set.” I find (i) a somewhat compelling reason to do A, but it doesn’t seem to generate as overwhelming a moral duty as the kind of reason given by (ii).
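To make the zero-weighting move concrete, here’s a minimal sketch in Python, under my own illustrative assumptions: each location of value is encoded as a list of expected utility differences (A minus B), one per probability function in an imprecise representor, and a location gets weight zero exactly when the representor disagrees about the sign of the difference there. The names (`determinate`, `bracketed_score`) and this encoding are hypothetical, not the formalism from the post or appendix.

```python
# Toy illustration of bracketing, under the assumptions stated above.
# Each location of value is a list of expected utility differences
# (A minus B), one per probability function in an imprecise representor.

def determinate(diffs):
    """True iff every credence in the representor agrees on the sign of
    the A-minus-B difference at this location (so the comparison there
    is determinate rather than a matter of cluelessness)."""
    return all(d >= 0 for d in diffs) or all(d <= 0 for d in diffs)

def bracketed_score(locations):
    """Aggregate A vs. B over determinate locations only, counting the
    smallest-magnitude difference the representor licenses at each one;
    indeterminate locations are bracketed out (weight zero)."""
    return sum(
        min(diffs, key=abs)
        for diffs in locations
        if determinate(diffs)
    )

# Case (i) above: A beats B near-term on every credence in the
# representor, but the far-future comparison flips sign across it,
# so the far-future location drops out of the comparison.
near_term = [100.0, 120.0]   # A reduces at least 100 more units of disutility
far_future = [-1e9, 1e9]     # clueless: sign varies across the representor
print(bracketed_score([near_term, far_future]))  # 100.0 -> favors A
```

On this encoding, case (ii) would instead make every entry of `far_future` zero, so the far-future location still contributes nothing, but for a different reason: the consequences there are determinately equal, rather than bracketed out for indeterminacy.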
I’ll read those. Can I ask regarding this: