I agree that there may be cases of “complex” (i.e. non-symmetric) cluelessness that are nevertheless resiliently uncertain, as you point out.
My interpretation of @Gregory_Lewis’ view was that rather than looking mainly at whether the cluelessness is “simple” or “complex”, we should look for the important cases of cluelessness where we can make some progress. These will all be “complex”, but not all “complex” cases are tractable.
I really like this framing, because it feels more useful for making decisions. The thing that lets us safely ignore a case of “simple” cluelessness isn’t the symmetry in itself, but the intractability of making progress. I think I agree with the conclusion that we ought to be prioritising the difficult task of better understanding the long-run consequences of our actions, in the ways that are tractable.
> I really like this framing, because it feels more useful for making decisions. The thing that lets us safely ignore a case of “simple” cluelessness isn’t the symmetry in itself, but the intractability of making progress.
If by “ignore” you just mean not trying to make progress on the specific problem, I agree.
I don’t think we should ignore cases of complex cluelessness where progress is intractable, though, in a more general sense. With simple cluelessness, parts of your subjective distributions of outcomes are identical and independent of the parts that are predictably different, so you can ignore those identical parts, with some common additivity/separability assumptions. With complex cluelessness, they are not identical, and you may not know how to weigh them.
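A rough way to put that cancellation point (my own formulation, assuming value is additive over the shared and differing parts): write $V(A) = C_A + D_A$ and $V(B) = C_B + D_B$, where the $C$s are the parts with identical distributions and the $D$s are the predictably different parts. Then

$$\mathbb{E}[V(A)] - \mathbb{E}[V(B)] = \big(\mathbb{E}[C_A] - \mathbb{E}[C_B]\big) + \big(\mathbb{E}[D_A] - \mathbb{E}[D_B]\big) = \mathbb{E}[D_A] - \mathbb{E}[D_B],$$

since $\mathbb{E}[C_A] = \mathbb{E}[C_B]$ by symmetry. Under complex cluelessness there is no such shared part to cancel, so this move isn’t available.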
Just ignoring them completely (effectively assuming they balance out) can lead to inconsistency and assume away some effects/considerations entirely. For example, suppose you’re deciding whether to donate to A or B, and you have complex cluelessness about which is better. You may also have complex cluelessness between donating to B and donating twice as much to A. If you assume things balance out in cases of complex cluelessness, you’re assuming A ~ B and B ~ 2A, so by transitivity A ~ 2A, and we should be indifferent between donating to A and donating twice as much to A.
But complex cluelessness isn’t transitive like this. We could think 2A is actually better than A, despite having complex cluelessness about comparing each of them to B. We might think A does good compared to doing nothing, and 2A does twice as much good, but just be unable to compare either to B. We can’t have A ~ B, B ~ 2A, 2A > A and transitivity all at once.
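To make the non-transitivity concrete, here’s a toy sketch (my own made-up numbers, not anything from the thread) in which each option’s value is an interval of expected values across a set of distributions, and “incomparable” means the intervals overlap:

```python
# Toy model of complex cluelessness as interval-valued expected value across a
# credal set (a set of probability distributions). Two options are incomparable
# ("~") when neither is better on every distribution in the set.

def compare(x, y):
    """x, y are (low, high) intervals of expected value across the credal set."""
    if x[0] > y[1]:
        return ">"   # x is better on every distribution
    if x[1] < y[0]:
        return "<"   # y is better on every distribution
    return "~"       # intervals overlap: incomparable / clueless

A  = (1.0, 1.0)   # donating to A: value 1 on every distribution (toy numbers)
A2 = (2.0, 2.0)   # donating twice as much to A: value 2 on every distribution
B  = (0.0, 3.0)   # donating to B: value anywhere from 0 to 3 across the set

print(compare(A, B))    # "~"  A ~ B
print(compare(A2, B))   # "~"  B ~ 2A
print(compare(A2, A))   # ">"  yet 2A > A
```

So treating incomparability as indifference and then chaining it with transitivity is what generates the contradiction; the incomparability relation itself needn’t be transitive.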
I suspect this kind of thing could infect almost all of your decisions.
> I think I agree with the conclusion that we ought to be prioritising the difficult task of better understanding the long-run consequences of our actions, in the ways that are tractable.
I agree with this, but also think we should be thinking more about how to deal with resilient complex cluelessness, i.e. come up with procedures for decision making under deep (and moral) uncertainty.
I may be missing the thread, but the ‘ignoring’ I’d have in mind for resilient cluelessness would be straight-ticket precision, which shouldn’t be intransitive (or have issues with the principle of indifference).
E.g. say I’m sure I can make no progress on the moral weight of chickens versus humans in moral calculation: maybe I’m confident there’s no fact of the matter, or that interpreting the empirical basis is beyond our capabilities forevermore, or whatever else.
Yet (I urge) I should still make a precise assignment (which is not obliged to be indifferent/symmetrical), and I can still be in reflective equilibrium between these assignments even if I’m resiliently uncertain.
My impression is that assigning precise credences may often just assume away the issue without addressing it, since the assignment can seem more or less arbitrary. The larger the range you would get if you entertained multiple distributions, the more arbitrary it is to just pick one (although using this to argue for multiple distributions seems circular). Or, just compare your choice of precise distribution with your peers’, maybe specifically those with similar information: the more variance or the wider the range, the more arbitrary it is to just pick one, and the more what you do depends on the particulars of priors you never chose for rational reasons over the alternatives.
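To illustrate the worry with the chicken example above (toy numbers of my own, not anyone’s actual estimates): if the recommended action flips across the range of precise weights you could defensibly have picked, then the particular weight you happen to settle on is doing the decisive work.

```python
# Toy sketch: across a set of "defensible" precise moral weights for a chicken
# relative to a human, the cost-effectiveness comparison flips, so the choice
# of a single precise weight determines the decision. All numbers are made up.

credal_set = [0.001, 0.01, 0.1, 0.3]   # candidate moral weights (chicken : human)

human_benefit   = 1.0    # humans helped per dollar by intervention H (toy units)
chicken_benefit = 20.0   # chickens helped per dollar by intervention C (toy units)

for w in credal_set:
    best = "C" if chicken_benefit * w > human_benefit else "H"
    print(f"moral weight {w}: fund {best}")
# weights 0.001 and 0.01 recommend H; weights 0.1 and 0.3 recommend C.
```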
Maybe this arbitrariness doesn’t actually matter, but I think that deserves a separate argument before we settle forever on a decision procedure that is not at all sensitive to it. (Of course, we can settle tentatively on one, and be willing to adopt one that is sensitive to it later if it seems better.)