I realize that this is kind of a tangent to your tangent, but I don’t think the general conjunction of (Western) expert views in 2020 was particularly defensible. Roughly speaking, the views (which I still sometimes hear parroted by Twitter folks) were something like:
1. For most respiratory epidemics, (surgical) masks are effective at protecting wearers in medical settings.
2. They are also effective as a form of source control in medical settings.
3. They should be effective as a form of source control in community transmission.
4. However, there is insufficient evidence to determine whether they are useful for protecting wearers in community transmission.
I think each of those beliefs may [1] be reasonable on its own in the abstract, but their conjunction is extremely suspicious: if masks protect wearers in medical settings and work as source control in both medical and community settings, it would be strange for them to uniquely fail to protect wearers in the community. The policy prescriptions are likewise suspicious.
Thus, I think Halstead’s evidence in that section can be modified fairly trivially to still preserve the core of that argument.
[1] Personally, my view on this is that if masks were a newfangled technology, the empirical beliefs (though not necessarily the logic that led to holding them together) might be forgivable coming from our experts. But 109+ years is also a long time to get something this important wrong. FWIW, I didn’t have a strong opinion on masks for community transmission in 2019, so it’s not like I got this particularly early. But I like to imagine that if any of the commentators here were experts actively studying this, it would have taken most of them less than a century to figure it out.
I mostly agree with this. Of course, to notice that, you have to know (or it at least really helps to know) that (2) and (3) are part of the ‘expert belief set’, which you easily might not have if you relied on Twitter/Facebook/headlines for your sense of ‘expert views’.
And indeed, I had conversations where pointing those things out to people updated them a fair amount towards thinking that masks were worth wearing.
In other words, even if you go and read the expert view directly and decide it doesn’t make sense, I expect you to end up in a better epistemic position than you would otherwise be; it’s useful for both deference and anti-deference, and imo will strongly tend to push you in the ‘right’ direction for the matter at hand.
Edit: Somewhat independently, I’d generally like our standards to be higher than ‘this argument/evidence could be modified to preserve the conclusion’. I suspect you don’t disagree, but I’m stating it explicitly because leaning too hard on that move in a lot of different areas is one of the larger factors making me unhappy with the current state of EA discourse.
Upon reflection, I want to emphasize that I strongly agree with your general point that in the world we live in, on the margin people probably ought to listen directly to what experts say. Unfortunately, I think this falls into the general category of advice like “do the homework” (e.g., read original sources, don’t be sloppy with the statistics, read the original papers rather than just the abstract or press release, read the original 2-sentence quote before taking somebody else’s 1-sentence summary at face value, etc.), where time/attention/laziness constraints may make taking the advice to heart prohibitively costly (or make it seem that way).
I certainly think it’s unfortunate that the default information aggregation systems we have (headlines, social media, etc) are not quite up to the task of accurately representing experts. I think this is an important and (in the abstract) nontrivial point, and I’m a bit sad that our best solution here appears to be blaming user error.
Somewhat independently, I’d generally like our standards to be higher than ‘this argument/evidence could be modified to preserve the conclusion’
I strongly agree, though I usually feel much more strongly about this for evidence than for arguments! :P
I certainly think it’s unfortunate that the default information aggregation systems we have (headlines, social media, etc) are not quite up to the task of accurately representing experts. I think this is an important and (in the abstract) nontrivial point, and I’m a bit sad that our best solution here appears to be blaming user error.
Yeah, I think this seems true and important to me too.
There are three, somewhat overlapping solutions to small parts of this problem that I’m excited about: (1) “Research Distillation” to pay off “Research Debt”, (2) more summaries, and (3) more collections.

And I think we can also broaden the idea of “research distillation” to distilling bodies of knowledge other than just “research”, like sets of reasonable-seeming arguments and considerations various people have highlighted.
I think the new EA Forum wiki+tagging system is a nice example of these three types of solutions, which is part of why I’m spending some time helping with it lately.
And I think “argument mapping” type things might also be a valuable, somewhat similar solution to part of the problem. (E.g., Kialo, though I’ve never actually used that myself.)
There was also a relevant EAG panel discussion a few years ago: Aggregating knowledge | Panel | EA Global: San Francisco 2016.