I mostly agree with this. Of course, to notice that, you have to know that (2)/(3) are part of the ‘expert belief set’ (or at least it really helps), which you easily might not have known if you relied on Twitter/Facebook/headlines for your sense of ‘expert views’.
And indeed, I’ve had conversations where pointing those things out updated people a fair amount towards thinking that masks were worth wearing.
In other words, even if you go and read the expert view directly and decide it doesn’t make sense, I expect you to end up in a better epistemic position than you otherwise would; it’s useful for both deference and anti-deference, and IMO it will strongly tend to push you in the ‘right’ direction on the matter at hand.
Edit: Somewhat independently, I’d generally like our standards to be higher than ‘this argument/evidence could be modified to preserve the conclusion’. I suspect you don’t disagree, but I’m stating it explicitly because leaning too hard on that move in a lot of different areas is one of the larger factors making me unhappy with the current state of EA discourse.
Upon reflection, I want to emphasize that I strongly agree with your general point that, in the world we live in, on the margin people probably ought to listen directly to what experts say. Unfortunately, I think this falls into the same general category as other “do the homework” advice (e.g., read original sources, don’t be sloppy with the statistics, read the original papers rather than just the abstract or press release, read the original 2-sentence quote before taking somebody else’s 1-sentence summary at face value, etc.), and time/attention/laziness constraints may make taking this advice to heart prohibitively costly (or make it be perceived that way).
I certainly think it’s unfortunate that the default information aggregation systems we have (headlines, social media, etc) are not quite up to the task of accurately representing experts. I think this is an important and (in the abstract) nontrivial point, and I’m a bit sad that our best solution here appears to be blaming user error.
Somewhat independently, I’d generally like our standards to be higher than ‘this argument/evidence could be modified to preserve the conclusion’
I strongly agree, though I usually feel much more strongly about this for evidence than for arguments! :P
I certainly think it’s unfortunate that the default information aggregation systems we have (headlines, social media, etc) are not quite up to the task of accurately representing experts. I think this is an important and (in the abstract) nontrivial point, and I’m a bit sad that our best solution here appears to be blaming user error.
Yeah, I think this seems true and important to me too.
There are three, somewhat overlapping solutions to small parts of this problem that I’m excited about: (1) “Research Distillation” to pay off “Research Debt”, (2) more summaries, and (3) more collections.
And I think we can also broaden the idea of “research distillation” to distilling bodies of knowledge other than just “research”, like sets of reasonable-seeming arguments and considerations various people have highlighted.
I think the new EA Forum wiki+tagging system is a nice example of these three types of solutions, which is part of why I’m spending some time helping with it lately.
And I think “argument mapping” type things might also be a valuable, somewhat similar solution to part of the problem. (E.g., Kialo, though I’ve never actually used that myself.)
There was also a relevant EAG panel discussion a few years ago: Aggregating knowledge | Panel | EA Global: San Francisco 2016.