I mostly agree with this. Of course, to notice that, you have to know (or at least it really helps to know) that (2)/(3) are part of the "expert belief set", which you easily might not have done if you relied on Twitter/Facebook/headlines for your sense of "expert views".
And indeed, I had conversations where pointing those things out to people updated them a fair amount towards thinking that masks were worth wearing.
In other words, even if you go and read the expert view directly and decide it doesn't make sense, I expect you to end up in a better epistemic position than you would otherwise be; it's useful for both deference and anti-deference, and imo will strongly tend to push you in the "right" direction for the matter at hand.
Edit: Somewhat independently, I'd generally like our standards to be higher than "this argument/evidence could be modified to preserve the conclusion". I suspect you don't disagree, but I'm stating it explicitly because leaning too hard on that in a lot of different areas is one of the larger factors leading me to be unhappy with the current state of EA discourse.
Upon reflection, I want to emphasize that I strongly agree with your general point that in the world we live in, on the margin people probably ought to listen directly to what experts say. Unfortunately, I think this is in the general category of other advice like "do the homework" (e.g., read original sources, don't be sloppy with the statistics, read original papers, don't just read the abstract or press release, read the original two-sentence quote before taking somebody else's one-sentence summary at face value, etc.), and time/attention/laziness constraints may make taking this advice to heart prohibitively costly (or be perceived that way).
I certainly think it's unfortunate that the default information aggregation systems we have (headlines, social media, etc.) are not quite up to the task of accurately representing experts. I think this is an important and (in the abstract) nontrivial point, and I'm a bit sad that our best solution here appears to be blaming user error.
> Somewhat independently, I'd generally like our standards to be higher than "this argument/evidence could be modified to preserve the conclusion"

I strongly agree, though I usually feel much more strongly about this for evidence than for arguments! :P
> I certainly think it's unfortunate that the default information aggregation systems we have (headlines, social media, etc.) are not quite up to the task of accurately representing experts. I think this is an important and (in the abstract) nontrivial point, and I'm a bit sad that our best solution here appears to be blaming user error.

Yeah, I think this seems true and important to me too.
There are three, somewhat overlapping solutions to small parts of this problem that I'm excited about: (1) "Research Distillation" to pay off "Research Debt", (2) more summaries, and (3) more collections.
And I think we can also broaden the idea of "research distillation" to distilling bodies of knowledge other than just "research", like sets of reasonable-seeming arguments and considerations various people have highlighted.
I think the new EA Forum wiki+tagging system is a nice example of these three types of solutions, which is part of why I'm spending some time helping with it lately.
And I think "argument mapping" tools might also be a valuable, somewhat similar solution to part of the problem. (E.g., Kialo, though I've never actually used that myself.)
There was also a relevant EAG panel discussion a few years ago: Aggregating knowledge | Panel | EA Global: San Francisco 2016.