Answers like this indicate that the estimator doesn’t have visibility into the process by which they’re arriving at their estimate.
This business with multiple possible probabilities sounds like you are partway through reinventing Bayesian model uncertainty. Seems like “representor” corresponds to “posterior distribution over possible models”. From a Bayesian perspective, you can solve this problem by using the full posterior for inference, and summing out the model.
“It is better to be approximately right than to be precisely wrong.”—Warren Buffett
“Anything you need to quantify can be measured in some way that is superior to not measuring it at all.”—Gilb’s Law
I don’t follow. What does it mean to use “the full posterior for inference,” in this context?
A couple examples would help me.
I think this is the jargon: https://en.wikipedia.org/wiki/Posterior_predictive_distribution
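To make the suggestion concrete, here is a minimal numerical sketch of what "summing out the model" looks like. The models, numbers, and names are purely illustrative, not drawn from the thread:

```python
# Toy example of Bayesian model averaging: two candidate models of a coin's
# bias, with a posterior distribution over which model is correct.
# "Summing out the model" collapses this into one predictive probability.

models = {"fair": 0.5, "biased": 0.8}     # P(heads | model) -- illustrative
posterior = {"fair": 0.7, "biased": 0.3}  # P(model | data)  -- illustrative

# Posterior predictive: P(heads) = sum over m of P(heads | m) * P(m | data)
p_heads = sum(models[m] * posterior[m] for m in models)
print(p_heads)  # 0.5*0.7 + 0.8*0.3 = 0.59
```

Instead of picking one model (or keeping a set of candidate probabilities side by side), the weighted sum yields a single number usable in an expected-value calculation.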
Sorry, I’m not sure what the official jargon for the thing I’m trying to refer to is. In the limit of trying to be more accessible, I’m basically teaching a class in Bayesian statistics, and that’s not something I’m qualified to do. (I don’t even remember the jargon!) But the point is there are theoretically well-developed methods for talking about these issues, and maybe you shouldn’t reinvent the wheel. Also, I’m almost certain they work fine with expected value.
Hm, I feel sorta strange about this exchange.
Here’s a toy model of the story in my head:
Does that seem strange to you, too? I’m not trying to be unfair here.
Basically it seems strange that you know that Bayesian statistics addresses this issue, but it’s not easy to give examples of how.
Do you think it’s an acceptable conversational move for me to give you pointers to a literature which I believe addresses issues you’re working on even if I don’t have a deep familiarity with that literature?
I think it’s acceptable, but being “acceptable” feels like a pretty low bar.
Basically I don’t think it’s rude, or arguing in bad faith, or anything like that. But not being able to give a specific reference when we dig into one of your claims lowers my credence in that claim.
For what it’s worth, this is Greaves’ terminology, not mine.