I certainly think it’s unfortunate that the default information aggregation systems we have (headlines, social media, etc) are not quite up to the task of accurately representing experts. I think this is an important and (in the abstract) nontrivial point, and I’m a bit sad that our best solution here appears to be blaming user error.
Yeah, I think this seems true and important to me too.
There are three somewhat overlapping solutions to small parts of this problem that I’m excited about: (1) “research distillation” to pay off “research debt”, (2) more summaries, and (3) more collections.
And I think we can also broaden the idea of “research distillation” to distilling bodies of knowledge other than just “research”, like sets of reasonable-seeming arguments and considerations various people have highlighted.
I think the new EA Forum wiki+tagging system is a nice example of these three types of solutions, which is part of why I’m spending some time helping with it lately.
And I think “argument mapping” type things might also be a valuable, somewhat similar solution to part of the problem. (E.g., Kialo, though I’ve never actually used that myself.)
There was also a relevant EAG panel discussion a few years ago: Aggregating knowledge | Panel | EA Global: San Francisco 2016.