Love the new voting axis.
Would it be possible to add a forum-wide search/sorting option for comments that score unusually high on the negative product of agreement and karma? It would help with finding comments that people really appreciate but still disagree with.
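To make the proposed sort concrete, here's a minimal sketch of the scoring rule, assuming each comment exposes net totals for both voting axes. The field and function names are hypothetical illustrations, not the Forum's actual schema or API:

```typescript
// Hypothetical shape of a comment's vote totals; field names are
// assumptions, not the Forum's real schema.
interface Comment {
  id: string;
  karma: number;     // net karma (overall-quality axis)
  agreement: number; // net agreement (negative = net disagreement)
}

// -(agreement * karma) is largest for comments with high karma and
// strongly negative agreement, i.e. appreciated-but-disputed comments.
const disagreementScore = (c: Comment): number => -(c.agreement * c.karma);

// Filtering to positive karma excludes the mirror case (negative-karma
// comments people agree with), which would otherwise also score high.
function appreciatedDisagreements(comments: Comment[]): Comment[] {
  return comments
    .filter((c) => c.karma > 0)
    .sort((a, b) => disagreementScore(b) - disagreementScore(a));
}
```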
Usually, karma is strongly correlated with agreement on some level, even in this system. So if a comment has high disagreement and high karma, the karma has been deconfounded from agreement: it seems much more likely that people have updated on the comment, or think its arguments have gone underappreciated. And if a high proportion of people updated on it, then it's more likely that I will too.
Finding comments like this is a great way for me to increase my exposure to good arguments I haven’t encountered before.[1] If this sorting option existed, it would be the primary benefit of the agreement axis for me.
In general, I think research communities should prioritise the flow of information that updates people’s models of things (i.e. gears-level/model-building evidence as opposed to testimonial evidence). This is a departure from academic “veritistic” social epistemology, where the explicit aim is usually to increase average epistemic accuracy by making people update on testimony correctly. But most research in EA, I think, isn’t bottlenecked by more accurate beliefs (selecting the best-fit beliefs out of prevailing options). Instead, I think EA is bottlenecked by new insights and models, and you increase the rate of those by having more people exposed to gears-level evidence.
Can you give an example of a comment you really disagreed with, yet made you change your beliefs?
There are many that I can’t recall, but these two comments made by Matthew Barnett and Paul Christiano are two examples. I mildly disagree with the former, and I strongly disagree with the latter, but still found both of them very helpful.