Principle 3: Our explicit, subjective credences are approximately accurate enough, most of the time, even in crazy domains, for it to be worth treating those credences as a salient input into action.
I think for me at least, and I’d guess for other people, what makes explicit subjective credences worth using is this: we have to make prioritisation decisions (decisions about how to act) anyway, and we’re going to make them using some kind of fuzzy, approximate expected value reasoning, so making our probabilities explicit should improve that reasoning.
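As a toy sketch of what "making our probabilities explicit" buys you in expected value reasoning (the numbers here are purely hypothetical, chosen for illustration): once a credence and a stake are written down, the comparison between two options becomes something you can actually check and debate, rather than a vague intuition.

```python
def expected_value(probability, payoff):
    """Expected value of a single outcome: probability times payoff."""
    return probability * payoff

# Hypothetical credences and stakes, for illustration only.
ev_unlikely_high_stakes = expected_value(0.01, 1000)  # 1% chance, big payoff
ev_likely_low_stakes = expected_value(0.5, 10)        # 50% chance, small payoff

# With the numbers explicit, the prioritisation question has a checkable answer:
# the "very unlikely" option can still dominate.
print(ev_unlikely_high_stakes)  # 10.0
print(ev_likely_low_stakes)     # 5.0
```

The point of the exercise is not that these numbers are right, but that writing them down exposes exactly which credence or stake someone would have to dispute to reach a different conclusion.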
E.g. why do some others not work on reducing risks of catastrophe from AI? It seems to be at least partly because they think it’s very, very unlikely that such a catastrophe could happen. EAs are more likely to think it’s helpful to ask themselves “how unlikely do I really think it is?” and then reason with the result.
The Jadagul post is good pushback on that, but I do think making credences explicit puts “rational pressure” on one’s beliefs in a way that is often productive. I’d guess that without naming the probabilities explicitly, the people in that story would still have similar (and similarly inconsistent) beliefs.