Say an expert (or a prediction market median) is much stronger than you, but you have a strong inside view. What's your thought process for validating it? What's your thought process if you choose to defer?
I know this isn't the answer you want, but I think the short answer here is that I really don't know, because I don't think this situation is common, so I don't have a good reference class/list of case studies to describe how I'd react in this situation.
If this were to happen often for a specific reference class of questions (where some people just very obviously do better than me for those questions), I imagine I'd quickly get out of the predictions business for those questions, and start predicting on other things instead.
As a forecaster, I'm mostly philosophically opposed to updating strongly (arguably at all) based on other people's predictions. If I updated strongly, I worry that this would cause information cascades.
However, if I were in a different role, e.g., making action-relevant decisions myself, or "representing forecasters" to decision-makers, I might try to present a broader community view, or highlight specific experts.
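The information-cascade worry above can be illustrated with a toy simulation (the numbers and the update rule are entirely made up for illustration, not anything from this answer): if each forecaster reports a blend of their private signal and the running median of earlier public reports, one early outlier keeps echoing through everyone else's forecasts.

```python
import statistics

def cascade(private_signals, deference):
    """Each forecaster reports a weighted blend of their own private
    signal and the median of all earlier public reports (toy rule)."""
    reports = []
    for signal in private_signals:
        if reports and deference > 0:
            public = statistics.median(reports)
            reports.append((1 - deference) * signal + deference * public)
        else:
            reports.append(signal)
    return reports

# Nine forecasters privately estimate ~30%, but the first report is 90%.
signals = [0.9] + [0.3] * 9

independent = cascade(signals, deference=0.0)   # everyone ignores others
deferential = cascade(signals, deference=0.8)   # everyone updates strongly

# With no deference, the final median recovers the crowd's private view (0.3);
# with heavy deference, the early outlier drags every later report upward.
print(statistics.median(independent))
print(statistics.median(deferential))
```

The point of the sketch is only that strong updating on public numbers destroys the independence that makes aggregates informative; the specific blend rule is one arbitrary choice among many.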
Past work on this includes comments on Greg Lewis's excellent EA Forum article on epistemic modesty, Scott Sumner on why the US Fed should use market notions of monetary policy rather than what the chairperson of the Fed believes, and notions of public vs. private uses of reason by Immanuel Kant.
I also raised this question on Metaculus.
Footnote on why this scenario ("an expert (or a prediction market median) is much stronger than you, but you have a strong inside view") is in practice uncommon:
I think the ideal example in my head for showcasing what you describe goes something like this:
An expert, expert consensus, or prediction market median that I respect strongly (as predictors) assigns high probability to X.
I strongly believe not-X (or, equivalently, assign very low probability to X).
I have strong inside views for why I believe not-X.
X is the answer to a well-operationalized question with a specific definition that everybody agrees on.
I learned about the expert view very soon after it was formed.
I do not think there is new information that the experts are not updating on.
This question's answer will resolve in the near future, in a context where I have both inside-view and outside-view confidence in our relative track records (in either direction).
I basically think that there are very few examples of situations like this, for various reasons:
For starters, I don't think I have very strong inside views on a lot of questions.
Though sometimes the outside views look something like "this simple model predicts stuff around X, and the outside view is that this class of simple models outpredicts both experts and my own more complicated models."
E.g., 20 countries have curves that look like this, and I don't have enough Bayesian evidence that this particular country's progression will be different.
There are also weird outside views on people's speech acts; for example, "our country will be different" is, on a meta-level, something that people from many countries believe, and this conveys almost no information.
These outside-ish views can of course be wrong (for example, I was wrong about Japan and plausibly Pakistan).
Unfortunately, what is and isn't a good outside view is often easy to self-hack by accident.
Note that the outside view doesn't necessarily look like expert deference.
Usually, if there are experts or other aggregations whose opinions as forecasters I strongly respect, I will just defer to them and not think that much myself.
For example, I'm deferring serious thinking about the 2020 election because I basically think 538.com has "got this."
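One simple way to make "partial deference" concrete (a standard aggregation trick, not a method this answer endorses; the probabilities and the weight below are hypothetical) is a weighted average in log-odds space between your inside view and the expert's number:

```python
import math

def logit(p: float) -> float:
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def inv_logit(x: float) -> float:
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))

def pooled(p_inside: float, p_expert: float, w: float) -> float:
    """Weighted log-odds average; w is the weight placed on the expert."""
    return inv_logit((1 - w) * logit(p_inside) + w * logit(p_expert))

# My inside view says 10%; the expert/market says 70%.
print(pooled(0.10, 0.70, 0.0))  # w=0: stays at the inside view
print(pooled(0.10, 0.70, 0.5))  # w=0.5: lands in between (in log-odds)
print(pooled(0.10, 0.70, 1.0))  # w=1: full deference to the expert
```

The weight w is exactly the free parameter the whole question is about; the pooling formula just makes explicit what "updating strongly" versus "barely at all" means numerically.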
I mostly select easier/relatively neglected domains to forecast on, at least with "ease" defined as "the market doesn't already look basically efficient."
E.g., I stay away from financial and election forecasts.
A lot of the time, when experts say something that I think is wildly wrong and I dig into it further, it turns out they said it Y days/weeks ago, and I've already heard contradictory evidence that updated my internal picture since (and presumably the experts as well).
A caveat to all this is that I'm probably not as good at deferring to the right experts as many EA Forum users are. Perhaps if I were better at it ("it" being identifying/deeply interpreting the right experts), I would feel differently.