This is in essence the claim of the epistemic critique of strong longtermism. Notice that this way of framing the epistemic concern does not involve rejecting the Bayesian mindset or the usefulness of expected value theory. Instead, it involves recognizing that to maximize expected value, we might in some cases want to not rely on expected value calculations.
Hmm. I get what you mean. To make the best decision that I can, I might not use expected value calculations to compare alternative actions when, for example, there’s no feedback on my action, or the probabilities involved are so low that my own cognitive limitations make them hard to estimate.
An outside view applies heuristics (for example, the heuristic “don’t do EV when a subjective probability is below 0.0001%”) to my decision of whether to use EV calculations, but it doesn’t calculate EV. I would consider such a heuristic a belief.
Belief: “Subjective probabilities below 0.0001% attached to values in an EV calculation are subject to scope insensitivity and their associated EV calculations contain errors.”
Rule: “If an EV calculation and comparison effort relies on a probability below 0.0001%, then cease the whole effort.” (This rule is sketched in code after the three formulations.)
Bayesian: “As a subjective probability in an EV calculation drops below 0.0001%, the probability that it is actually unknown rises past 80%.”
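To make the rule version concrete, here is a minimal sketch in Python. Everything in it (the `Option` and `Outcome` structures, the `best_option_by_ev` name, and the floor of 0.0001% written as 1e-6) is my own illustration rather than anything from the argument above: the guard simply refuses to rank options by EV when any probability in the comparison falls below the threshold, leaving the decision to some other method, such as the outside view.

```python
from dataclasses import dataclass
from typing import List, Optional

# Threshold from the heuristic above: 0.0001% expressed as a proportion.
PROBABILITY_FLOOR = 1e-6


@dataclass
class Outcome:
    probability: float  # subjective probability of this outcome occurring
    value: float        # value assigned to the outcome if it occurs


@dataclass
class Option:
    name: str
    outcomes: List[Outcome]


def expected_value(option: Option) -> float:
    """Ordinary EV: the sum of probability-weighted values."""
    return sum(o.probability * o.value for o in option.outcomes)


def best_option_by_ev(options: List[Option]) -> Optional[Option]:
    """Apply the rule: if any probability anywhere in the comparison is
    below the floor, cease the whole effort (return None) instead of
    trusting the EV ranking."""
    for option in options:
        if any(o.probability < PROBABILITY_FLOOR for o in option.outcomes):
            return None
    return max(options, key=expected_value)
```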
I see this issue of the EV calculation’s subjective probability being so small as similar to the visual distinction between tiny and very tiny. You might be able to see something, but it’s too small to tell whether it’s 1/2 the size of something bigger, or 1/10. All you know is that you can barely see it, and that you couldn’t tell it apart from something 10X its size.
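The same resolution problem can be put numerically. Here is a rough sketch with invented numbers: if all you can say is that the probability lies somewhere in a wide band below your resolution, the resulting EV spans orders of magnitude, which is usually enough slack to flip a comparison against an alternative action.

```python
# Invented numbers for illustration: a large payoff whose probability you
# can only bound, not estimate, because it sits below your "resolution".
payoff = 10_000_000
p_low, p_high = 1e-7, 1e-5   # all you can say is "somewhere in here"

ev_low = p_low * payoff      # 1.0
ev_high = p_high * payoff    # 100.0

# The EV is "somewhere between 1 and 100": two orders of magnitude of
# uncertainty, typically enough to reverse a ranking against an
# alternative whose EV you can actually pin down.
print(ev_low, ev_high)
```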
The real question for me is whether the Bayesian formulation is meaningful. Is there another formulation a Bayesian could make that is better suited, involving priors, different assertions, probabilities, and so on?
I tried imagining how this might go if it were me answering the question.
Me:
Well, when I think someone’s wrong, I pull out a subjective probability that lets me communicate that. I like 80% because it makes people think of the 80/20 rule and then they think I’m really smart and believe what I’m telling them. I could list a higher probability, but they would actually quibble with me about it, and I don’t want that. Also, at that percentage, I’m not fully disagreeing with them. They like that.
I say stuff like, “I estimate an 80% probability that you’re wrong,” when I think they’re wrong.
And here’s the problem with the “put some money behind that probability” thing. I really think they’re wrong, but I also know that this is a situation in which verifying the truth to both sides’ satisfaction is tricky, and because there’s money riding on it, all kinds of distortion are likely to occur. It might actually be impossible to verify the truth. Me and the other side both know that.
That’s the perfect time to use a probability and really make it sound carefully considered, like I really believe it’s 80%, and not 98%.
It’s like knowing when to say “dibs” or “jinx”. You have got to understand context.
I’m joking. I don’t deceive like that. However, how do you qualify a Bayesian estimate as legitimate or not, in general?
You have gone partway here, rejecting EV calculations in some circumstances. You have also said that you still believe in probability estimates and expected value theory, and are instead just careful about when to use them.
So do you, or can you, use expected value calculations or subjective probabilities to decide when to use either?