Upon reflection, I want to emphasize that I strongly agree with your general point that, in the world we live in, on the margin people probably ought to listen directly to what experts say. Unfortunately, I think this falls into the same general category as other advice like “do the homework” (e.g., read the original sources and papers, don’t be sloppy with the statistics, don’t just read the abstract or press release, read the original 2-sentence quote before taking somebody else’s 1-sentence summary at face value, etc.), and time/attention/laziness constraints may make taking this advice to heart prohibitively costly (or at least perceived that way).
Somewhat independently, I’d generally like our standards to be higher than ‘this argument/evidence could be modified to preserve the conclusion’.
I strongly agree, though I usually feel much more strongly about this for evidence than for arguments! :P
I certainly think it’s unfortunate that the default information aggregation systems we have (headlines, social media, etc.) are not quite up to the task of accurately representing experts. I think this is an important and (in the abstract) nontrivial point, and I’m a bit sad that our best solution here appears to be blaming user error.
Yeah, I think this seems true and important to me too.
There are three somewhat overlapping solutions to small parts of this problem that I’m excited about: (1) “Research Distillation” to pay off “Research Debt”, (2) more summaries, and (3) more collections.
And I think we can also broaden the idea of “research distillation” to distilling bodies of knowledge other than just “research”, like sets of reasonable-seeming arguments and considerations various people have highlighted.
I think the new EA Forum wiki+tagging system is a nice example of these three types of solutions, which is part of why I’m spending some time helping with it lately.
And I think “argument mapping” type things might also be a valuable, somewhat similar solution to part of the problem. (E.g., Kialo, though I’ve never actually used that myself.)
There was also a relevant EAG panel discussion a few years ago: Aggregating knowledge | Panel | EA Global: San Francisco 2016.