One thing I struggle with in discourse is expressing agreement. Agreeing seems less generative, since I often don’t have much more to say than “I agree with this and think you explain it well.” I strongly agree with this post and am very glad you made it. I have some questions and minor points of disagreement, but I want to focus on what I agree with before getting to them, since I overwhelmingly agree and don’t want to detract from your point.
The sentiment “we are smarter than everyone and therefore we distrust non-EA sources” seems pervasive in EA. I love a lot about EA, and I am a highly engaged member. But that sentiment is one of the worst parts of EA (if not the worst). I believe it is highly destructive to our ability to achieve our aim of doing good effectively.
Some sub-communities within EA seem better at avoiding this than others. That said, I think every part of EA engages in this kind of thinking to some extent; I don’t know if I’ve ever met an EA who didn’t hold it on some level. I definitely have a streak of it in myself.
But there is a much softer, more reasonable version of that sentiment, something like “EA has an edge in some domains, but other groups also have worthwhile contributions.” I’ve met plenty of EAs who operate along this more reasonable line rather than the excessively superior one described above. Still, it’s easy to slip into the excessively superior sentiment, and I think we should be vigilant against it.
------
On to my more critical questions/thoughts.
My epistemics used to center on “expert consensus.” The COVID-19 pandemic changed that: expert consensus seemed to be frequently wrong, and I ended up relying much more on individuals with a proven track record, like Zeynep Tufekci. I’m still not sure what my epistemics are, but I’ve moved toward a forecasting-based model, where I most trust people with a proven track record of getting things right, rather than experts. But people with such a track record are hard to find, so I still almost always default to trusting experts. I certainly don’t think forum/blog posts fit into this “proven track record” category, unless it’s the blog of someone with a proven track record. “Proven track record” is a very high standard, though; Zeynep is literally the only person I know who fits the bill, and it’s not as though I trust her on everything. My worry about people using a “forecaster > expert” model is that they won’t hold a high enough standard for what qualifies someone as a trustworthy forecaster. I’m wondering what your thoughts are on a forecaster model.
My other question concerns the slowness of peer review, which does strike me as a legitimate issue, though I’m not in the AI field at all and have very little knowledge of it. I would still like to see AI researchers make more of an effort to get their work peer reviewed, but I wonder whether there might be some dual system: less time-sensitive reports go through peer review and are treated with a high level of trust, while more time-sensitive reports skip that rigorous process but are still shared, albeit with a lower level of trust. I’m really not sure, but some sort of dual system seems necessary to me. It can’t be that we should totally disregard all non-peer-reviewed work, can it?