I agree with you that people should be much more willing to disagree, and that we need to foster a culture that encourages this. A lack of disagreement is a sign of insufficient debate, not of a well-mapped landscape. That said, I think EAs in general should think far less about who said what and focus much more on whether the arguments themselves hold water.
I find it striking that all the examples in the post are about some redacted entity, when every one of them could just as well have been rephrased to be about object-level reality itself. For example:
[redacted] is on the wrong side of their disagreement with [redacted] and often seems to have kind of sloppy thinking about things like this,
could, to me, be rephrased as
Why I believe <stance on topic> is incorrect.
To me it seems that just having the debate on <topic> is more interesting than the meta debate of <is org’s thinking on topic sloppy>. Thinking a lot about the views of specific persons or organizations has its time and place, but the right split of thinking about reality versus social reality is probably closer to 90/10 than 10/90.
I think it’s not primarily a question of how much to disagree – as I said, we see plenty of disagreement every day on the forum. The issue I’m trying to address is:

- with whom we disagree,
- how visible those disagreements are,
- and, in particular, that many internal disagreements will never be made public.

The main epistemic benefit of disagreement exists even in private, but there is a secondary benefit that requires the disagreement to be public, and that is the one I’m trying to address.
To me it seems that just having the debate on <topic> is more interesting than the meta debate of <is org’s thinking on topic sloppy>.
The necessity of thinking about the second question is clearest when deciding who to fund, who to work for, who to hire, etc.
I appreciate this point, but personally I am probably more like 70/30 for general thinking, with variance depending on the topic. So much of thinking about the world is trust-based. My views on historical explanations virtually never depend on my reading of primary documents—they depend on my assessment of what the proportional consensus of expert historians thinks. Same with economics, or physics, or lots of things.
When I’m dealing directly with an issue, like biosecurity, it makes sense to have a higher split (80/20 or 90/10), but it’s still much easier to navigate if you know the landscape of views. For something like AI, I just don’t trust my own take on many arguments; I really rely a lot on the different communities of AI experts (such as they are).
I think most people, most of the time, don’t know enough about an issue to justify a 90/10 split in issue vs. view thinking. However, I should note that all of this concerns the right split of personal attention; for public debate, I can understand wanting a greater focus on the object level (because the view level should hopefully be served well by good object-level work anyway).
The necessity of thinking about the second question is clearest when deciding who to fund, who to work for, who to hire, etc.
Makes sense; agree completely.