I feel this post is just saying you can solve the problem of cluelessness by ignoring that it exists, even though you know it still does. It just doesn’t seem like a satisfactory response to me.
Wouldn’t the better response be to find things we aren’t clueless about, perhaps because we think the indirect effects are smaller in expected magnitude than the direct effects? I think this is probably the case with elevating the moral status of digital minds (for example).
It sounds like you reject this kind of thinking:
> cluelessness about some effects (like those in the far future) doesn’t override the obligations given to us by the benefits we’re not clueless about, such as the immediate benefits of our donations to the global poor
I don’t think that’s unreasonable. Personally, I strongly have the intuition expressed in that quote, though I’m definitely not certain that I’ll endorse it on reflection.
> Wouldn’t the better response be to find things we aren’t clueless about
The background assumption in this post is that there are no such interventions.
> We start from a place of cluelessness about the effects of our actions on aggregate, cosmos-wide value. Our uncertainty is so deep that we can’t even say whether we expect one action to be better than, worse than, or just as good as another, in terms of its effects on aggregate utility. (See Section 2 of the paper and resources here for arguments as to why we ought to regard ourselves as such.)
> cluelessness about some effects (like those in the far future) doesn’t override the obligations given to us by the benefits we’re not clueless about, such as the immediate benefits of our donations to the global poor
I do reject this thinking because it seems to imply either:
1. Embracing non-consequentialist views: I don’t have zero credence in deontology or virtue ethics, but to just ignore far-future effects I feel I would have to have very low credence in consequentialism, given the expected vastness of the future.
2. Rejecting impartiality: For example, saying that effects closer in time are inherently worth more than those farther away. For me, utility is utility regardless of who enjoys it or when.
> The background assumption in this post is that there are no such interventions.
There’s certainly a lot of stuff out there I still need to read (thanks for sharing the resources), but I tend to agree with Hilary Greaves that the way to avoid cluelessness is to target interventions whose intended long-run impact dominates plausible unintended effects.
For example, I don’t think I am clueless about the value of spreading concern for digital sentience (in a thoughtful way). The intended effect is to materially reduce the probability of vast future suffering in scenarios that I assign non-trivial probability. Plausible negative effects (for example, people feeling preached to about something they see as stupid, leading to an even worse outcome) seem like they can either be mitigated or just don’t compete overall with the possibility that we would be alerting society to a potentially devastating moral catastrophe. I’m not saying I’m certain it would go well (there is always ex-ante uncertainty), but I don’t feel clueless about whether it’s worth doing.
And if we are helplessly clueless about everything, then I honestly think the altruistic exercise is doomed and we should just go and enjoy ourselves.
I’d recommend specifically checking out here and here for why we should expect unintended effects (of ambiguous sign) to dominate any intervention’s impact on total cosmos-wide welfare by default. The whole cosmos is very, very weird. (Heck, ASI takeoff on Earth alone seems liable to be very weird.) Given the arguments I’ve linked, I think anyone proposing that a particular intervention is an exception to this default should spell out much more clearly why they think that’s the case.
I’ll read those. Can I ask regarding this:
> cluelessness about some effects (like those in the far future) doesn’t override the obligations given to us by the benefits we’re not clueless about, such as the immediate benefits of our donations to the global poor
What makes you think that? Are you embracing a non-consequentialist or non-impartial view to come to that conclusion? Or do you think it’s justified under impartial consequentialism?
I have mixed feelings about this. So, there are basically two reasons why bracketing isn’t orthodox impartial consequentialism:
1. My choice between A and B isn’t exactly determined by whether I think A is “better” than B. See Jesse’s discussion in this part of the appendix.
2. Even if we could interpret bracketing as a betterness ranking, the notion of “betterness” here requires assigning a weight of zero to consequences that I don’t think are precisely equally good under A vs. B.
I do think both of these are reasons to give less weight to bracketing in my decision-making than I give to standard non-consequentialist views.[1]
However:
1. It’s still clearly consequentialist in the sense that, well, we’re making our choice based only on the consequences, and in a scope-sensitive manner. I don’t think standard non-consequentialist views get you the conclusion that you should donate to AMF rather than MAWF, unless they’re defined such that they suffer from cluelessness too.
2. There’s an impartial reason why we “ignore” the consequences at some locations of value in our decision-making, namely, that those consequences don’t favor one action over the other. (I think the same is true if we don’t use the “locations of value” framework, but instead something more like what Jesse sketches here, though that’s harder to make precise.)
E.g. compare (i) “A reduces x more units of disutility than B within the maximal bracket-set I’, but I’m clueless about A vs. B when looking outside the maximal bracket-set”, with (ii) “A reduces x more units of disutility than B within I’, and A and B are equally good in expectation when looking outside the maximal bracket-set.” I find (i) to be a somewhat compelling reason to do A, but it doesn’t feel like as overwhelming a moral duty as the kind of reason given by (ii).
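To make that contrast concrete, here’s a minimal toy sketch (my own illustration with made-up numbers and function names, not anything from the post or paper). “Inside” stands for the consequences within the maximal bracket-set I’, “outside” for everything else:

```python
# Toy sketch of the contrast between cases (i) and (ii) above.
# Illustrative only: the numbers and names here are made up.

def favored_action(inside_diff, outside_diff):
    """Return which of A or B the comparison favors.

    inside_diff: how many more units of disutility A reduces than B
        inside the maximal bracket-set I' (positive favors A).
    outside_diff: the expected difference outside I', or None if we are
        clueless there (no determinate expectation either way).
    """
    if outside_diff is None:
        # Case (i): the outside doesn't favor either action, so the
        # bracketed comparison treats the inside difference as the
        # operative reason.
        total = inside_diff
    else:
        # Case (ii): a determinate expectation outside, so sum as usual.
        total = inside_diff + outside_diff
    if total > 0:
        return "A"
    if total < 0:
        return "B"
    return "no preference"

x = 10  # A reduces 10 more units of disutility than B inside I'
print(favored_action(x, None))  # case (i): clueless outside -> "A"
print(favored_action(x, 0.0))   # case (ii): equally good outside -> "A"
```

Both cases end up favoring A here; the difference I’m pointing at is how strong that reason feels, not which action it picks.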