I don’t follow. I get that acting on low-probability scenarios can let you get in on neglected opportunities, but you don’t want to actually get the probabilities wrong, right?
I reject the idea that all-things-considered probabilities are “right” and inside-view probabilities are “wrong”, because you should very rarely be using all-things-considered probabilities when making decisions, for reasons of simple arithmetic (as per my example). Tell me what you want to use the probability for and I’ll tell you what type of probability you should be using.
You might say: look, even if you never actually use all-things-considered probabilities in the real world, at least in theory they’re still normatively ideal. But I reject that too—see the Anthropic Decision Theory paper for why.
On a separate note: I currently don’t think that epistemic deference as a concept makes sense, because defying a consensus has two effects that are often roughly the same size: it means you’re more likely to be wrong, and it means you’re creating more value if right.
I don’t fully follow this explanation, but if defying a consensus really has two effects of roughly the same size, doesn’t that suggest you could choose any consensus-defying action, since the EV is the same regardless? The increased likelihood of your being wrong would be ~cancelled out by the increased value of being right.
Also, the “value if right” doesn’t seem likely to be modulated only by the extent to which you are defying the consensus.
Example: If you are flying a plane and considering a new way of landing that goes against what 99% of pilots think is reasonable, the “value if right” might be much smaller than the negative effects of the “value if wrong”. It’s also not clear to me that if you instead take a landing approach that goes against what 99.9% of pilots think is reasonable, you will 10x your “value if right” compared to the 99% action.
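To make the arithmetic behind that “10x” concrete, here is a minimal sketch with made-up numbers; only the 99% / 99.9% figures come from the example above, and everything else is hypothetical:

```python
# Minimal sketch of the arithmetic implied by "the two effects cancel out".
# Only the 99% / 99.9% figures come from the pilot example; the rest is made up.

p_right_vs_99_percent = 0.01    # chance you're right when 99% of pilots disagree
p_right_vs_999_percent = 0.001  # chance you're right when 99.9% of pilots disagree

value_if_right_99 = 100.0  # hypothetical value of the 99%-defying landing, if it works

# For the expected value to stay the same, the 99.9%-defying landing would need
# roughly 10x the value-if-right (ignoring the downside if you're wrong):
required_value_999 = value_if_right_99 * (p_right_vs_99_percent / p_right_vs_999_percent)
print(required_value_999)  # 1000.0 -- the "10x" being questioned above
```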
The probability of success in some project may be correlated with value conditional on success in many domains, not just ones involving deference, and we typically don’t think that gets in the way of using probabilities in the usual way, no? If you’re wondering whether some corner of something sticking out of the ground is a box of treasure or a huge boulder, maybe you think that the probability you can excavate it is higher if it’s the box of treasure, and that there’s only any value to doing so if it is. The expected value of trying to excavate is P(treasure) * P(success|treasure) * value of treasure. All the probabilities are “all-things-considered”.
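To spell that calculation out with made-up numbers (a minimal sketch; none of these figures are from the comment above):

```python
# Expected value of trying to excavate, using "all-things-considered" probabilities.
# All numbers are hypothetical, purely to illustrate the formula above.

p_treasure = 0.2                  # credence that the thing is a box of treasure
p_success_given_treasure = 0.8    # chance excavation succeeds, given it's treasure
value_of_treasure = 1000.0        # payoff if you successfully dig up the treasure

# If it's a boulder, excavating is worth nothing, so the EV reduces to:
expected_value = p_treasure * p_success_given_treasure * value_of_treasure
print(expected_value)  # 160.0
```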
I respect you a lot, both as a thinker and as a friend, so I really am sorry if this reply seems dismissive. But I think there’s a sort of “LessWrong decision theory black hole” that makes people a bit crazy in ways that are obvious from the outside, and this comment thread isn’t the place to adjudicate all that. I trust that most readers who aren’t in the hole will not see your example as a demonstration that you shouldn’t use all-things-considered probabilities when making decisions, so I won’t press the point beyond this comment.
I think there’s a sort of “LessWrong decision theory black hole” that makes people a bit crazy in ways that are obvious from the outside, and this comment thread isn’t the place to adjudicate all that.
From my perspective it’s the opposite: epistemic modesty is an incredibly strong skeptical argument (a type of argument that often gets people very confused), extreme forms of which have been popular in EA despite leading to conclusions which conflict strongly with common sense (like “in most cases, one should pay scarcely any attention to what you find the most persuasive view on an issue”).
In practice, fortunately, even people who endorse strong epistemic modesty don’t actually implement it, and thereby manage to still do useful things. But I haven’t yet seen any supporters of epistemic modesty provide a principled way of deciding when to act on their own judgment, in defiance of the conclusions of (a large majority of) the 8 billion other people on earth.
By contrast, I think that focusing on policies rather than all-things-considered credences (which is the thing I was gesturing at with my toy example) basically dissolves the problem. I don’t expect that you believe me about this, since I haven’t yet written this argument up clearly (although I hope to do so soon). But in some sense I’m not claiming anything new here: I think that an individual’s all-things-considered deferential credences aren’t very useful for almost the exact same reason that it’s not very useful to take a group of people and aggregate their beliefs into a single set of “all-people-considered” credences when trying to get them to make a group decision (at least not using naive methods; doing it using prediction markets is more reasonable).
That said, thanks for sharing the Anthropic Decision Theory paper! I’ll check it out.