I just assume that EAs are correct about the EA things that we are doing. Of course that is a rational assumption to make. Otherwise you are just throwing yourself into a pit of endless self-doubt. It does not need to be argued that EAs know best about EA, just as it does not need to be argued that climatologists know best about the climate, psychologists know best about psychology and so on.
I think this is only true with a very narrow conception of what the “EA things that we are doing” are. I think EA is correct about the importance of cause prioritization, cause neutrality, paying attention to outcomes, and the general virtues of explicit modelling and being strategic about how you try to improve the world.
That’s all I believe constitutes “EA things” in your usage. Funding bednets, or policy reform, or AI risk research, all depend on combining those core EA ideas, which we take for granted, with a series of object-level, empirical beliefs, almost none of which EAs are naturally “the experts” on. If the global research community on poverty interventions came to the consensus “actually we think bednets are bad now”, then EA orgs would need to listen to that and change course.
“Politicized” questions and values are no different, so we need to be open to feedback and input from external experts, whatever constitutes expertise in the field in question.
I think EA is correct about the importance of cause prioritization, cause neutrality, paying attention to outcomes, and the general virtues of explicit modelling and being strategic about how you try to improve the world
Yes, and these things are explicitly under attack from political actors.
Funding bednets, or policy reform, or AI risk research, are all contingent on a combination of those core EA ideas that we take for granted with a series of object-level, empirical beliefs, almost none of which EAs are naturally “the experts” on
When EAs are not the experts, EAs pay attention to the relevant experts.
“Politicized” questions and values are no different, so we need to be open to feedback and input from external experts
This is not about whether we should be “open to feedback and input”. This is about whether politicized stances are harmful or helpful. All the examples in the OP are cases where I am or was, in at least a minimal theoretical sense, “open to feedback and input”, but quickly realized that the other people were wrong and destructive. And other EAs have likewise quickly realized that those people were being wrong and destructive.
This is a misunderstanding. Perhaps you might re-read the OP more carefully?
Feel free to add to the list.
I would take your response more seriously if you hadn’t told everyone who commented that they had misunderstood your post.
If everyone’s missing the point, presumably you should write the point more clearly?