I think EA is correct about the importance of cause prioritization, cause neutrality, paying attention to outcomes, and the general virtues of explicit modelling and being strategic about how you try to improve the world.
Yes, and these things are explicitly under attack from political actors.
Funding bednets, policy reform, or AI risk research all depend on combining those core EA ideas we take for granted with a series of object-level, empirical beliefs, on almost none of which EAs are naturally “the experts”.
When EAs are not the experts, they pay attention to the relevant experts.
“Politicized” questions and values are no different, so we need to be open to feedback and input from external experts.
This is not about whether we should be “open to feedback and input”. This is about whether politicized stances are harmful or helpful. All the examples in the OP are cases where I am or was, in at least a minimal theoretical sense, “open to feedback and input”, but quickly realized that the other people were wrong and destructive. And other EAs have quickly come to the same realization.