A footnote on why I think this scenario (an expert, or a prediction market median, is much stronger than you, but you have a strong inside view) is uncommon in practice:
I think the ideal example in my head for showcasing what you describe goes something like this:
An expert, expert consensus, or prediction market median that I strongly respect (as predictors) has high probability on X.
I strongly believe not-X (or equivalently, I assign very low probability to X).
I have strong inside views for why I believe not-X.
X is the answer to a well-operationalized question with a specific definition that everybody agrees on.
I learned about the expert view very soon after it was made.
I do not think there is new information that the experts are not updating on
The question resolves in the near future, in a domain where I have both inside-view and outside-view confidence in our relative track records (in either direction).
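As a toy sketch of the underlying tension (my own illustration, not something from the scenario above): one common way to operationalize "mostly defer to the expert, but let a strong inside view move you a little" is weighted log-odds pooling. All the probabilities and the 0.8 deference weight below are made-up numbers for illustration.

```python
import math

def logodds(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def pool(p_expert, p_inside, w_expert=0.8):
    """Weighted log-odds pooling of an expert forecast and an inside view.

    The 0.8 weight on the expert is an arbitrary illustrative choice,
    not a recommendation.
    """
    lo = w_expert * logodds(p_expert) + (1 - w_expert) * logodds(p_inside)
    return 1 / (1 + math.exp(-lo))

# Expert says 90% on X, my inside view says 10%:
# the pooled forecast lands well above 50%, i.e. mostly deferring.
print(pool(0.9, 0.1))
```

The point of the sketch: under heavy deference, even a strongly contrarian inside view only drags the forecast modestly away from the expert's number, which is one reason genuine high-stakes disagreements of this shape are rare.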
I basically think that there are very few examples of situations like this, for various reasons:
For starters, I don’t think I have very strong inside views on a lot of questions.
Though sometimes my outside views look something like "this simple model predicts roughly X, and the outside view is that this class of simple models outpredicts both experts and my own more complicated models."
E.g., 20 countries have curves that look like this, and I don't have enough Bayesian evidence that this particular country's progression will be different.
There are also weird outside views on people's speech acts. For example, "our country will be different" is, on a meta-level, something that people from many countries believe, so it conveys almost no information.
These outsideish views can of course be wrong (for example I was wrong about Japan and plausibly Pakistan).
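The "20 countries have curves that look like this" style of outside view can be sketched as a bare reference-class forecast: predict the new case from the distribution observed across the reference class, rather than from a bespoke inside-view model. The numbers below are invented for illustration.

```python
import statistics

# Hypothetical weekly growth factors observed across a reference
# class of countries (made-up data for illustration).
reference_growth = [1.9, 2.1, 2.0, 1.8, 2.2, 2.0, 1.95, 2.05]

def outside_view_forecast(current_cases):
    """Base-rate forecast: apply the reference class's median growth
    factor, deliberately ignoring country-specific inside-view detail."""
    g = statistics.median(reference_growth)
    return current_cases * g

print(outside_view_forecast(1000))  # -> 2000.0
```

The simplicity is the point: the claim being tested is that this class of crude model, applied uniformly, outpredicts both experts and more complicated bespoke models.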
Unfortunately, what is and isn’t a good outside view is often easy to self-hack by accident.
Note that outside view doesn’t necessarily look like expert deference.
Usually, if there are experts or other aggregators whose opinions as forecasters I strongly respect, I will just defer to them and not think that much myself.
For example, I'm deferring serious thinking about the 2020 election to 538.com, because I basically think they've "got this."
I mostly select easier or relatively neglected domains to forecast on, with "difficulty" roughly defined as "the market already looks basically efficient."
E.g., I stay away from financial and election forecasts.
A lot of the time, when experts say something that I think is wildly wrong and I dig into it further, it turns out they said it Y days/weeks ago, and I’ve already heard contradictory evidence that updated my internal picture since (and presumably the experts as well).
A caveat to all this is that I'm probably not as good at deferring to the right experts as many EA Forum users. Perhaps if I were better at it ("it" being identifying and deeply interpreting the right experts), I would feel differently.