I think this is only true with a very narrow conception of what the "EA things that we are doing" are. I think EA is correct about the importance of cause prioritization, cause neutrality, paying attention to outcomes, and the general virtues of explicit modelling and being strategic about how you try to improve the world.
That's all I believe constitutes "EA things" in your usage. Funding bednets, or policy reform, or AI risk research, are all contingent on a combination of those core EA ideas that we take for granted with a series of object-level, empirical beliefs, almost none of which EAs are naturally "the experts" on. If the global research community on poverty interventions came to the consensus "actually we think bednets are bad now" then EA orgs would need to listen to that and change course.
"Politicized" questions and values are no different, so we need to be open to feedback and input from external experts, whatever constitutes expertise in the field in question.
I think EA is correct about the importance of cause prioritization, cause neutrality, paying attention to outcomes, and the general virtues of explicit modelling and being strategic about how you try to improve the world
Yes, and these things are explicitly under attack from political actors.
Funding bednets, or policy reform, or AI risk research, are all contingent on a combination of those core EA ideas that we take for granted with a series of object-level, empirical beliefs, almost none of which EAs are naturally "the experts" on
When EAs are not the experts, EAs pay attention to the relevant experts.
"Politicized" questions and values are no different, so we need to be open to feedback and input from external experts
This is not about whether we should be "open to feedback and input". This is about whether politicized stances are harmful or helpful. All the examples in the OP are cases where I am or was, in at least a minimal theoretical sense, "open to feedback and input", but quickly realized that the other people involved were wrong and destructive. And other EAs have likewise quickly realized that those people were being wrong and destructive.