The probability of success in some project may be correlated with value conditional on success in many domains, not just ones involving deference, and we typically don’t think that gets in the way of using probabilities in the usual way, no? If you’re wondering whether some corner of something sticking out of the ground is a box of treasure or a huge boulder, maybe you think that the probability you can excavate it is higher if it’s the box of treasure, and that there’s only any value to doing so if it is. The expected value of trying to excavate is P(treasure) * P(success|treasure) * value of treasure. All the probabilities are “all-things-considered”.
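The expected-value calculation above can be sketched numerically. The numbers here are my own illustrative placeholders, not anything from the comment:

```python
# Hypothetical numbers for the treasure/boulder example (illustrative only).
p_treasure = 0.3                 # all-things-considered prior that the object is treasure
p_success_given_treasure = 0.8   # chance excavation succeeds, given it is treasure
value_of_treasure = 1000.0       # payoff if excavated successfully

# EV(excavate) = P(treasure) * P(success | treasure) * value of treasure
ev = p_treasure * p_success_given_treasure * value_of_treasure
print(ev)  # 240.0
```

The correlation between success probability and value lives entirely inside the conditional term P(success|treasure), which is why it causes no trouble for the usual expected-value machinery.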
I respect you a lot, both as a thinker and as a friend, so I really am sorry if this reply seems dismissive. But I think there’s a sort of “LessWrong decision theory black hole” that makes people a bit crazy in ways that are obvious from the outside, and this comment thread isn’t the place to adjudicate all that. I trust that most readers who aren’t in the hole will not see your example as a demonstration that you shouldn’t use all-things-considered probabilities when making decisions, so I won’t press the point beyond this comment.
> I think there’s a sort of “LessWrong decision theory black hole” that makes people a bit crazy in ways that are obvious from the outside, and this comment thread isn’t the place to adjudicate all that.
From my perspective it’s the opposite: epistemic modesty is an incredibly strong skeptical argument (a type of argument that often gets people very confused), extreme forms of which have been popular in EA despite leading to conclusions which conflict strongly with common sense (like “in most cases, one should pay scarcely any attention to what you find the most persuasive view on an issue”).
In practice, fortunately, even people who endorse strong epistemic modesty don’t actually implement it, and thereby manage to still do useful things. But I haven’t yet seen any supporters of epistemic modesty provide a principled way of deciding when to act on their own judgment, in defiance of the conclusions of (a large majority of) the 8 billion other people on earth.
By contrast, I think that focusing on policies rather than all-things-considered credences (which is the thing I was gesturing at with my toy example) basically dissolves the problem. I don’t expect that you believe me about this, since I haven’t yet written this argument up clearly (although I hope to do so soon). But in some sense I’m not claiming anything new here: I think that an individual’s all-things-considered deferential credences aren’t very useful for almost the exact same reason that it’s not very useful to take a group of people and aggregate their beliefs into a single set of “all-people-considered” credences when trying to get them to make a group decision (at least not using naive methods; doing it using prediction markets is more reasonable).
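The aggregation analogy can be made concrete with a toy sketch. The names and credences below are my own made-up illustration of the point, not the author's example:

```python
# Toy illustration (made-up numbers): pooling a group's credences by naive
# averaging discards exactly the structure a good group decision process needs.
agreed = {"alice": 0.5, "bob": 0.5}      # genuine shared uncertainty
polarized = {"carol": 0.9, "dave": 0.1}  # confident disagreement

def naive_pool(credences):
    """Collapse everyone's credence into one 'all-people-considered' number."""
    return sum(credences.values()) / len(credences)

# Both groups pool to 0.5, even though a mechanism like a prediction market
# (which lets confident disagreement express itself) would treat them very
# differently.
print(naive_pool(agreed), naive_pool(polarized))  # 0.5 0.5
```

The same loss occurs when one individual averages their own view with others' views into a single deferential credence: the resulting number no longer records who believed what, or how confidently.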