The way to reconcile multiple estimates is to treat them as evidence and update via Bayes’ Theorem, or to weight them by their probability of being correct and average them in a standard expected-value calculation. If you simply take issue with the fact that real-world agents don’t do this formally, I don’t see what the argument is. We already have a philosophical answer, so naturally the right thing to do is for real-world agents to approximate it as well as they can.
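As a toy sketch of the second option, here is what probability-weighted averaging of estimates (a linear opinion pool) looks like in code. All the numbers are illustrative, not from the discussion above.

```python
# Toy sketch: reconcile several probability estimates by weighting each
# one by our credence that its source is correct, then averaging.
# This is a linear opinion pool; all numbers are illustrative.

estimates = [0.9, 0.6, 0.3]   # three sources' probabilities for some event
weights   = [0.5, 0.3, 0.2]   # our credence in each source; sums to 1

pooled = sum(w * p for w, p in zip(weights, estimates))
print(pooled)  # 0.69
```

The weights here play the role of "probability of being correct"; choosing them well is exactly the hard part the rest of the thread is about.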
“Approximate it as well as they can” implies a standard beyond the subjective Bayesian framework by which subjective estimates are compared. The difficulty seems to lie outside the subjective Bayesian framework.
I agree with what Jesse stated above: “I am saying that I would like this epistemic state to be grounded in empirical reality via good models of the world. This goes beyond subjective expected utility theory. As does what you have said about robustness and being well or poorly supported by evidence.”
A standard like “how accurately does this estimate predict the future state of the world?” is what we seem to use when comparing the quality (believability) of subjective estimates.
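A standard like this can be made concrete with a proper scoring rule. Here is a minimal sketch using the Brier score (squared error between a probability forecast and a 0/1 outcome); the forecasts and outcome are illustrative.

```python
# Toy sketch: compare two subjective estimates against what actually
# happened, using the Brier score (lower is better). Numbers are illustrative.

def brier(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and a 0/1 outcome."""
    return (forecast - outcome) ** 2

outcome = 1                           # the event occurred
print(round(brier(0.8, outcome), 2))  # 0.04 -- this estimate scores better
print(round(brier(0.4, outcome), 2))  # 0.36
```

The catch, as the next paragraph notes, is that this only works once the outcome is observed, which may take a long time for complicated real-world events.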
I think the difficulty is that it is very hard to assess the accuracy of subjective estimates about complicated real-world events, where many of the causal inputs of the event are unknown & the impacts of the event occur over a long time horizon.
“Approximate it as well as they can” implies a standard beyond the subjective Bayesian framework by which subjective estimates are compared.
How does it imply that? A Bayesian agent updates its beliefs to approximate the real world as well as it can. That’s just regular Bayesian updating, whether you are subjective or not.
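For concreteness, a single round of the regular Bayesian updating referred to here looks like this; the prior and likelihoods are illustrative.

```python
# Toy sketch of one Bayesian update: a prior belief in hypothesis H is
# revised after observing evidence E. All probabilities are illustrative.

p_h = 0.5              # prior P(H)
p_e_given_h = 0.8      # likelihood P(E | H)
p_e_given_not_h = 0.2  # likelihood P(E | not H)

# Total probability of the evidence, then Bayes' theorem.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
posterior = p_e_given_h * p_h / p_e
print(round(posterior, 3))  # 0.8
```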
I think the difficulty is that it is very hard to assess the accuracy of subjective estimates about complicated real-world events, where many of the causal inputs of the event are unknown & the impacts of the event occur over a long time horizon.
I don’t see what this has to do with subjective estimates. If we talk about estimates in objective and/or frequentist terms, it’s equally difficult to observe the long-term unfolding of the scenario. Switching away from subjective estimates won’t make you better at determining which estimates are correct or not.
I don’t have a fully articulated view here, but I think the problem lies with how the agent assesses how its approximations are doing (i.e., the procedure an agent uses to assess whether an update makes its model of the world more or less accurate).
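One candidate for such a procedure, sketched under illustrative assumptions: the agent keeps a running score of the forecasts it made before each outcome resolved, and checks whether the running average improves as it updates.

```python
# Toy sketch of self-assessment: track the log score of each forecast made
# before its outcome resolved. If the mean score trends toward 0, the
# agent's updates are producing a more accurate model. Data is illustrative.
import math

forecasts = [0.5, 0.6, 0.7, 0.85]  # probability assigned before each event resolved
outcomes  = [1, 1, 0, 1]           # what actually happened (1 = occurred)

# Log score: log P(what happened); always <= 0, and closer to 0 is better.
log_scores = [math.log(p if o == 1 else 1 - p)
              for p, o in zip(forecasts, outcomes)]
print(sum(log_scores) / len(log_scores))
```

This doesn't escape the difficulty above: for complicated long-horizon events, the outcomes needed to compute the scores arrive slowly or never.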
Agreed. I think the difficulty applies to both types of estimates (sorry for being imprecise above).