Another complication here is that a lot of arguments are arguments about the expected value of some variable: the argument that we should take some action is implicitly an argument that the expected utility of taking that action is greater than that of the action we would have taken otherwise.
And it’s not clear what a % credence means when it comes to an estimate of an expected value, since expected values aren’t random variables. For example, if I think we ought to work on AI-risk over Global Public Health because I think there is a 1% chance of an AI intervention saving trillions of lives, it’s not clear what it would mean to put another % confidence on top of that already probabilistically derived expected utility: I’ve already incorporated the 99% chance of failure into my case for working on AI-risk. Certainly it’s good to acknowledge that chance of failure, but it doesn’t say anything about the epistemic status of my argument.
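To make the structure of that concrete, here’s a minimal sketch with entirely made-up numbers (the 1% success probability and both payoff figures are assumptions for illustration, not estimates I’m defending): the 99% chance of failure is already inside the expected value, so there’s nothing obvious left for an extra % credence over the EV to refer to.

```python
# Toy illustration (all numbers hypothetical): the chance of failure is
# already folded into the expected-value estimate itself.

p_success = 0.01                 # assumed chance the AI intervention works
lives_if_success = 1e12          # assumed lives saved if it does
ev_ai = p_success * lives_if_success   # expected lives saved = 1e10

ev_gph = 5e9                     # assumed expected lives saved by Global Public Health work

print(f"EV(AI intervention):      {ev_ai:.2e}")
print(f"EV(Global Public Health): {ev_gph:.2e}")
print("Work on AI-risk" if ev_ai > ev_gph else "Work on Global Public Health")
```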
I think reporting % credences serves a purpose more similar to reporting effect sizes than to reporting an epistemic status: they’re something for you to average together to get a quick & dirty estimate of what the consensus is.
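A toy sketch of that use, with made-up credences from hypothetical forecasters:

```python
# Hypothetical credences several people report on the same claim.
credences = [0.60, 0.75, 0.55, 0.80, 0.65]

# Quick & dirty consensus estimate: just the unweighted average.
consensus = sum(credences) / len(credences)
print(f"Rough consensus credence: {consensus:.2f}")   # 0.67
```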
Anyway, re: what to do when the argument is about an expected value: I think the best practice is to point out the known unknowns that you think are the most likely ways your argument might be shown to be false, e.g., “I think we should work on AI over Global Public Health, but my case depends on fast takeoff being true, I’m only 60% confident that it is, and I think we can get better information about which takeoff scenario is more likely.”
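Here’s a sketch of why flagging that particular known unknown is informative (all the numbers, including the conditional expected values, are assumptions for illustration): decomposing the expected value by takeoff scenario makes explicit which credence the conclusion hinges on, and how far it would have to move before the recommendation flips.

```python
# Toy decomposition of the expected value by the known unknown (hypothetical numbers).

p_fast = 0.60          # my stated credence in fast takeoff
ev_ai_if_fast = 2e10   # assumed EV of AI work given fast takeoff
ev_ai_if_slow = 1e9    # assumed EV of AI work given slow takeoff
ev_gph = 5e9           # assumed EV of Global Public Health work

ev_ai = p_fast * ev_ai_if_fast + (1 - p_fast) * ev_ai_if_slow
print(f"EV(AI | my credences): {ev_ai:.2e}")   # 1.24e10, so AI wins for now

# The crux: how low would p_fast have to go before the recommendation flips?
p_fast_breakeven = (ev_gph - ev_ai_if_slow) / (ev_ai_if_fast - ev_ai_if_slow)
print(f"Recommendation flips below p_fast of about {p_fast_breakeven:.2f}")  # ~0.21
```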
In the case where the biggest known unknowns are what priors you should have before seeing a piece of evidence, this basically reduces to your strength-of-evidence/epistemic-shift proposal. But I think that when we’re talking about our epistemic status, it’s generally more useful to concentrate on how our beliefs might be changed in the future, and how, qualitatively, other people might go about changing our minds, than on how they’ve changed in the past.
(It seems the correct “Bayesian” way to do the above, if you really wanted to report your beliefs using numbers, would be to take your priors about the information you’ll receive at each time $t$ in the future, encode the structure of your uncertainty about what you’ll know at each point in time as a filtration $\mathcal{F}_t \subseteq \mathcal{F}$ of your event space, and then report your uncertainty about the trajectory of your future beliefs about $X$ as the martingale process $Y_t = \mathbb{E}[X \mid \mathcal{F}_t]$.
Needless to say, this is a pretty unwieldy and impractical way to report your epistemic status.)
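For what it’s worth, here’s a toy sketch of what that would amount to (the coin-flip model, the uniform prior, and the 20-step horizon are all assumptions chosen purely for illustration): $X$ is the unknown bias of a coin, the filtration is “the flips seen so far”, and each run traces out one sample path of the martingale $Y_t = \mathbb{E}[X \mid \mathcal{F}_t]$. Reporting your epistemic status this way would mean characterizing the whole distribution over such trajectories, not just a single number.

```python
# Toy simulation of the martingale of future posterior means (model assumed for illustration).
import random

random.seed(0)

true_bias = random.random()   # nature draws X from my uniform prior
alpha, beta = 1.0, 1.0        # Beta(1, 1) prior on the bias

trajectory = []
for t in range(20):
    flip = random.random() < true_bias   # the information arriving at time t
    if flip:
        alpha += 1
    else:
        beta += 1
    trajectory.append(alpha / (alpha + beta))   # Y_t = E[X | F_t], the posterior mean

print([round(y, 2) for y in trajectory])
```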