We can make subjective probability estimates, but if a probability estimate does not flow out of a clearly articulated model of the world, its believability is suspect.
I don’t see how this implies that the expected value isn’t the right answer. Also, what exactly do you mean by “believability”? It’s a subjective probability estimate.
Greaves is saying that real-world agents don’t assign precise probabilities to outcomes; instead they consider multiple possible probabilities for each outcome (taken together, these candidate probability assignments make up the agent’s “representor”). Because an agent holds multiple probabilities for each outcome, and has no way to arbitrate between them, it cannot use a straightforward expected value calculation to determine the best outcome.
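To make the representor idea concrete, here is a toy sketch (all numbers invented, not taken from Greaves) of why a set of probabilities blocks a single expected-value ranking:

```python
def expected_value(p_rain, utilities):
    """Expected utility of an act under one precise probability of rain."""
    return p_rain * utilities["rain"] + (1 - p_rain) * utilities["no_rain"]

# The agent entertains several probabilities of rain rather than a single one.
representor = [0.1, 0.4, 0.7]

acts = {
    "take_umbrella":  {"rain": 5, "no_rain": -1},
    "leave_umbrella": {"rain": -10, "no_rain": 2},
}

# Each act gets a *range* of expected values, one per member of the representor.
for act, utilities in acts.items():
    evs = [expected_value(p, utilities) for p in representor]
    print(f"{act}: EV from {min(evs):.1f} to {max(evs):.1f}")
```

At p = 0.1 leaving the umbrella has the higher expected value, while at p = 0.7 taking it does, so nothing inside the representor itself arbitrates between the two acts; that is the sense in which a straightforward expected value calculation fails to determine the best option.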
I don’t hold multiple probabilities in this way. Sure some agents do, but presumably those agents aren’t doing things correctly. Maybe the right answer here is “don’t be confused about the nature of probability.”
The next time you encounter someone making a subjective probability estimate, ask “how did you arrive at that number?” The answer will frequently be along the lines of “it seems about right” or “I would be surprised if it were higher.” Answers like this indicate that the estimator doesn’t have visibility into the process by which they’re arriving at their estimate.
There are lots of claims we make on the basis of intuition. Do you believe that all such claims are poor, or is probability some kind of special case? It would help to be clearer about your point: what kind of visibility do we need, and why is it important?
Whenever we make a probability estimate that doesn’t flow from a clear world-model, the believability of that estimate is questionable.
This statement is kind of nonsensical with a subjective Bayesian model of probability; the estimate is your belief. If you don’t have that model, then sure a probability estimate could be described as likely to be wrong, but it’s still not clear why that would prevent us from saying that a probability estimate is the best we can do.
And if we attempt to reconcile multiple probability estimates into a single best-guess, the believability of that best-guess is questionable because our method of reconciling multiple estimates into a single value is opaque.
The way of reconciling multiple estimates is to treat them as evidence and update via Bayes’ Theorem, or to weight them by their probability of being correct and average them using a standard expected value calculation. If you simply take issue with the fact that real-world agents don’t do this formally, I don’t see what the argument is. We already have a philosophical answer, so naturally the right thing to do is for real-world agents to approximate it as well as they can.
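Both reconciliation moves can be written down in a few lines. This is a minimal sketch with invented numbers; the reliability weights and the likelihood ratio are hypothetical, not a general recipe:

```python
def linear_pool(estimates, weights):
    """Weight each estimate by its assumed reliability, then average."""
    return sum(p * w for p, w in zip(estimates, weights)) / sum(weights)

def bayes_update(prior, likelihood_ratio):
    """Treat a report as evidence: multiply prior odds by a likelihood ratio."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

# Three subjective estimates of the same event, with unequal credibility.
pooled = linear_pool([0.30, 0.50, 0.80], weights=[0.5, 0.3, 0.2])  # ≈ 0.46

# Or: start from a 0.5 prior and treat an expert's verdict as evidence that
# is three times likelier given the event than given its absence.
updated = bayes_update(prior=0.5, likelihood_ratio=3.0)  # 0.75
```

Either way the output is a single number, which is the point of contention above: the formal machinery exists, and the question is whether real-world agents can approximate it.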
“Approximate it as well as they can” implies a standard beyond the subjective Bayesian framework by which subjective estimates are compared. Outside of the subjective Bayesian framework seems to be where the difficulty lies.
I agree with what Jesse stated above: “I am saying that I would like this epistemic state to be grounded in empirical reality via good models of the world. This goes beyond subjective expected utility theory. As does what you have said about robustness and being well or poorly supported by evidence.”
A standard like “how accurately does this estimate predict the future state of the world?” is what we seem to use when comparing the quality (believability) of subjective estimates.
I think the difficulty is that it is very hard to assess the accuracy of subjective estimates about complicated real-world events, where many of the causal inputs of the event are unknown and the impacts of the event occur over a long time horizon.
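When forecasts do resolve, the “how accurately does this estimate predict the future state of the world?” standard can be operationalized with a proper scoring rule; the forecasts and outcomes below are invented for illustration:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilities and 0/1 outcomes; lower is better."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes      = [1, 0, 1, 1, 0]              # what actually happened
hedged        = [0.8, 0.2, 0.7, 0.9, 0.3]    # forecaster A
overconfident = [1.0, 0.0, 1.0, 0.0, 1.0]    # forecaster B

print(brier_score(hedged, outcomes))         # ≈ 0.054
print(brier_score(overconfident, outcomes))  # ≈ 0.4 (two total misses dominate)
```

The difficulty noted above is exactly that for long-horizon events the `outcomes` column never becomes available, so this scoring step can’t be run.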
“Approximate it as well as they can” implies a standard beyond the subjective Bayesian framework by which subjective estimates are compared.
How does it imply that? A Bayesian agent updates its beliefs to approximate the real world as well as it can. That’s just regular Bayesian updating, whether you are a subjectivist or not.
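For concreteness, the “regular Bayesian updating” referred to here can be as simple as a conjugate Beta-Bernoulli update over an unknown frequency (the prior and observation counts are invented):

```python
def update_beta(alpha, beta, successes, failures):
    """Conjugate update: each observed success raises alpha, each failure raises beta."""
    return alpha + successes, beta + failures

alpha, beta = 1, 1  # uniform prior over the unknown rate
alpha, beta = update_beta(alpha, beta, successes=7, failures=3)

posterior_mean = alpha / (alpha + beta)  # 8/12 ≈ 0.667
```

The update rule is the same whether the prior is read subjectively or not; the disagreement above is about how the agent judges that its posterior is tracking the world, not about the mechanics of updating.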
I think the difficulty is that it is very hard to assess the accuracy of subjective estimates about complicated real-world events, where many of the causal inputs of the event are unknown and the impacts of the event occur over a long time horizon.
I don’t see what this has to do with subjective estimates. If we talk about estimates in objective and/or frequentist terms, it’s equally difficult to observe the long term unfolding of the scenario. Switching away from subjective estimates won’t make you better at determining which estimates are correct or not.
I don’t have a fully articulated view here, but I think the problem lies in how the agent assesses how its approximations are doing (i.e., the procedure an agent uses to judge whether a given update makes its model of the world more or less accurate).
Agreed. I think the difficulty applies to both types of estimates (sorry for being imprecise above).