I also largely agree with the direction and spirit of Greg’s main post.
Personally, I broadly agreed with the spirit of the post before 2020. I’m somewhat more hesitant now. But this is maybe a distraction, so let’s set it aside for now.
At a high level, I think I agree with the core of your argument. However, some of the subcomponents/implications seem “slippery” to me. In particular, I think readers (or at least one particularly thick-skulled reader whom I irrationally have undue concern about) may read into it connotations that are potentially quite confusing/misleading.
I’ll first try to restate the denotations of your post in my own words, so you can tell me where (if anywhere) I’ve already gone wrong.
1. Assume that someone espouses the tenets of philosophical Bayesianism.
2. Many other people in the world either do not espouse Bayesianism, or do not follow it deeply, or both.
3. A subset of the group above includes some (most) people whom we in 2021 secular Western culture consider to be domain experts.
4. With equal evidence, an epistemic process that does not follow ideal Bayesian reasoning will be less truth-tracking than ideal Bayesian reasoning (assuming, again, that philosophical Bayesianism is correct).
5. Epistemic processes that are not explicitly Bayesian are on average worse at approximating ideal Bayesian reasoning than epistemic processes that are explicitly Bayesian.
6. From 3, 4, and 5, we can gather that non-Bayesian domain experts will on average fall short of both the Bayesian ideal and Bayesian practice (all else being equal).
7. {Examples where, to a Bayesian reasoner, it appears that people with non-Bayesian methods fall short of Bayesian reasoning}
8. Thus, **Bayesian reasoners should not defer unconditionally to non-Bayesian experts** (assuming Bayesian reasoning is correct).
9. Non-Bayesian experts are systematically flawed in ways that are ex ante predictable. (In statistical jargon, this is bias, not just high variance/noise.)
10. One of the ways in which non-Bayesian experts are systematically biased is an over-reliance on naive scientism/myopic empiricism.
11. Thus, **Bayesian reasoners who adjust for this bias when deferring to non-Bayesian experts will come to more accurate conclusions than Bayesian reasoners who do not.**
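To make this concrete, here’s a minimal numerical sketch of claims 9-11 (every number here is a made-up assumption of mine, not something from your post): a Bayesian who knows an expert verdict is biased toward “no RCT, no endorsement” can fold that bias into the likelihoods instead of taking the verdict at face value.

```python
# Toy sketch, hypothetical numbers throughout: adjusting for a known bias
# in an expert verdict rather than taking the verdict at face value.
#
# Hypothesis H: "the intervention works". Suppose the panel is naively
# scientistic: it endorses only RCT-backed interventions, so endorsements
# have low sensitivity but high specificity.

def posterior_given_no_endorsement(prior, sensitivity, false_positive_rate):
    """P(H | panel does NOT endorse), via Bayes' rule."""
    p_neg_given_h = 1 - sensitivity             # panel often misses true effects
    p_neg_given_not_h = 1 - false_positive_rate
    joint_h = prior * p_neg_given_h
    joint_not_h = (1 - prior) * p_neg_given_not_h
    return joint_h / (joint_h + joint_not_h)

prior = 0.5            # reasoner's prior that the intervention works
sensitivity = 0.3      # P(endorse | works): low, since no RCT exists yet
false_positive = 0.05  # P(endorse | doesn't work)

p = posterior_given_no_endorsement(prior, sensitivity, false_positive)
print(f"bias-adjusted posterior after a non-endorsement: {p:.2f}")  # ~0.42
# Reading the non-endorsement at face value ("no RCT => doesn't work")
# would push the estimate toward 0, overweighting weak evidence.
```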
Assuming I read your post correctly, I agree with both of the above bolded claims (which I assume to be the central points of your article). I also think I agree with the (importantly non-trivial!) examples you give, barring caveats like AGB’s. However, I think many readers (or again, perhaps just one) may take away implied connotations that you did not intend, some of which are wrong:
- People who call themselves “Bayesians” are on average systematically better at reasoning than people who do not, all else being equal.
- People who call themselves “Bayesians” are on average systematically better at reasoning than people who do not.
  - (I think we already established upthread that you do not believe this.)
- Naive scientism is the most important bias facing non-Bayesian scientists today.
- Naive scientism is the most important bias to adjust for when interacting with or learning from non-Bayesian scientists.
- When trying to interpret claims from scientists, the primary thing that a self-professed Bayesian should watch out for is naive scientism.
- Scientists would come to much more accurate beliefs if they switched to Bayesian methods.
- Scientists would come to much more accurate beliefs if they switched to philosophical Bayesianism.
  - (Perhaps you meant this forum post as a conditional, so we can adjust the above points with the caveat “assuming philosophical Bayesianism is true/useful”.)
At the risk of fighting a strawman: when I think about the problems in reasoning/epistemics, either in general or during the pandemic, I do not think naive scientism is the most obvious or most egregious mistake.
The epistemic failure modes involved in deferring to the wrong experts are themselves not mistakes of naive scientism. Somebody who thinks “I should defer to the US CDC on masks (or Sweden on lockdowns, etc.) but not China, Japan, or South Korea” is not committing this error because they had good global priors on deference but chose to ignore those priors in favor of an RCT showing that US residents who deferred to the US government had systematically better outcomes than US residents who deferred to other health authorities. If anything, most likely this question did not come to mind at all.
I think most mistakes of deference are due to practical issues of the deferrer or situation rather than because the experts were wrong.
See my past comments here.
To the extent experts were wrong, I’m pretty suspicious of stories that this is primarily due to naive scientism (though I agree that it is a nontrivial and large bias):
- Many (most?) early-pandemic expert surveys underperformed both simple linear extrapolation and simple SEIR models for COVID-19 (see the SEIR sketch after this list).
- Expert Political Judgment also shows political science experts robustly underperforming simple algorithms, in an environment where more straightforward studies cannot be conducted.
- Clinicians often advocate for “real-world evidence” over RCTs, but I don’t think clinical intuitions robustly outperform RCT evidence in predicting the results of replications.
  - (Though it’s perhaps telling that I don’t have a citation on hand for this.)
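For reference, the kind of “simple SEIR model” I have in mind above is roughly the following (a minimal sketch; all parameters are illustrative assumptions, not fitted to any real data):

```python
# Minimal SEIR sketch (illustrative parameters only, not fitted to data):
# the sort of simple mechanistic baseline that the expert surveys above
# reportedly underperformed.

def seir_step(s, e, i, r, beta, sigma, gamma, dt=1.0):
    """One forward-Euler step of the standard SEIR equations."""
    n = s + e + i + r
    s_to_e = beta * s * i / n * dt  # new exposures
    e_to_i = sigma * e * dt         # newly infectious
    i_to_r = gamma * i * dt         # newly recovered
    return s - s_to_e, e + s_to_e - e_to_i, i + e_to_i - i_to_r, r + i_to_r

# Assumed parameters: R0 = beta/gamma = 3, ~5-day latent/infectious periods.
beta, sigma, gamma = 0.6, 0.2, 0.2
s, e, i, r = 999_990.0, 0.0, 10.0, 0.0  # 1M people, 10 initial cases

for day in range(121):
    if day % 30 == 0:
        print(f"day {day:3d}: infectious ≈ {i:,.0f}")
    s, e, i, r = seir_step(s, e, i, r, beta, sigma, gamma)
```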
To the extent scientific experts are wrong in ex ante predictable ways, here’s a short candidate list of ways that I think are potentially more important than naive scientism (note that, of course, a lot of this is field-, topic-, individual-, timing-, and otherwise context-dependent):
- motivated reasoning
- publication bias
- garden of forking paths (see the simulation sketch after this list)
  - Edit 2021/07/13: I misunderstood what “garden of forking paths” meant when I originally made this comment. Since then I’ve read the paper, and by a happy coincidence it turns out that the garden of forking paths is still a large enough problem to belong here. But it’s sort of a Gettier problem.
- actually being bad at statistics
- blatant lies
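On the garden of forking paths specifically, here’s a minimal simulation sketch (pure-noise data, hypothetical analyst choices) of why it belongs on this list: even with no true effect and no deliberate p-hacking, letting the data pick which of several individually defensible analyses gets reported inflates the false-positive rate well above the nominal 5%.

```python
# Garden-of-forking-paths sketch: both groups are pure noise, and each
# analysis variant below is individually defensible -- but reporting
# whichever looks best after seeing the data inflates false positives.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, trials, hits = 40, 2_000, 0

for _ in range(trials):
    a = rng.normal(size=n)  # "treatment" group, no true effect
    b = rng.normal(size=n)  # "control" group, no true effect
    forks = [
        (a, b),                                # full sample
        (a[np.abs(a) < 2], b[np.abs(b) < 2]),  # "outliers" excluded
        (a[: n // 2], b[: n // 2]),            # post-hoc "subgroup"
    ]
    best_p = min(stats.ttest_ind(x, y).pvalue for x, y in forks)
    hits += best_p < 0.05

print(f"nominal false-positive rate: 5%; realized: {hits / trials:.1%}")
# Even three mildly correlated forks push the realized rate well above 5%.
```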
Anyway, I realize this comment is way too long for something that’s effectively saying “80% in agreement.” I just wanted a place to write down my thoughts. Perhaps it can be helpful to at least one other person.