I am not suggesting we avoid the word “should” generally, as I said in the post. I thought it was clear that what I am criticizing is a pattern I keep seeing: overly narrowing the ideal of what is and is not EA, and unreasonably restricting what is normatively acceptable within the movement. I think that pattern is harmful, and that it can be avoided without claiming that everything is EA, or refraining from making normative statements altogether.
Regarding criticising GiveWell’s reliance on RCTs, I think there is room for a diversity of opinion. It’s certainly reasonable to claim that, as a matter of decision analysis, non-RCT evidence should be considered, and that risk-neutrality and unbiased decision-making require treating less convincing evidence as valid, if weaker. (I’m certainly of that opinion.)
On the other hand, there is room for effective altruists who prefer to be somewhat risk-averse to view RCTs, correctly, as more certain evidence than most other forms, and to prefer interventions with clear evidence of that sort. So instead of saying that GiveWell should not rely as heavily on RCTs, or that EA organizations should do other things, I think we can, and should, make the case that there is an alternative approach that treats RCTs as only one type of evidence among several, and that the views of GiveWell and similar EA organizations are not the only valid way to approach effective giving. (And I think this view is at least understood, and partly shared, by many EA organizations and individuals, including many at GiveWell.)
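To make the underlying decision-analysis point concrete, here is a minimal sketch, with entirely invented numbers, of how a risk-neutral donor and a risk-averse donor can rationally rank the same two interventions differently (log utility is just one standard way to model risk aversion):

```python
import math

# Each hypothetical intervention is a list of (probability, value) outcomes,
# where "value" is good done per dollar. All numbers here are invented.
rct_backed = [(0.9, 1.0), (0.1, 0.8)]     # strong RCT evidence: narrow range
weak_evidence = [(0.5, 3.0), (0.5, 0.1)]  # weaker evidence: big upside, may fail

def expected_value(lottery):
    """Risk-neutral ranking criterion: plain expected value."""
    return sum(p * v for p, v in lottery)

def expected_utility(lottery, u=math.log):
    """Risk-averse ranking criterion: expectation of a concave utility."""
    return sum(p * u(v) for p, v in lottery)

# A risk-neutral donor prefers the weakly evidenced intervention...
print(expected_value(rct_backed), expected_value(weak_evidence))      # about 0.98 vs 1.55

# ...while a (log-utility) risk-averse donor prefers the RCT-backed one.
print(expected_utility(rct_backed), expected_utility(weak_evidence))  # about -0.02 vs -0.60
```

Neither donor is making an error here; they simply weight certainty differently, which is the sense in which both positions can be reasonable.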
Hi, thanks for the reply!
The argument now has a bit of a motte-and-bailey feel, in that case. In various places you make claims such as:
“The Folly of ‘EAs Should’”;
“One consequence of this is that if there are no normative claims, any supposition about what ought to happen based on EA ideas is invalid”;
“So I think we should discuss why Effective Altruism implying that there are specific and clear preferable options for Effective Altruists is often harmful”;
“Claiming something normative given moral uncertainty, i.e. that we may be incorrect, is hard to justify. There are approaches to moral uncertainty that allow a resolution, but if EAs should cooperate, I argue that it may be useful, regardless of normative goals, to avoid normative statements that exclude some viewpoints.”
“and conclusions based on the suppositions about key facts are usually unwarranted, at least without clear caveats about the positions needed to make the conclusions”.
These seem to be claims to the effect that (1) we should (almost) never make normative claims, and (2) we should be strongly sceptical that we can know one path is better from an EA point of view than another. But I don’t see a defence of either claim in the piece. For example, I don’t see a defence of the claim that it is mistaken to think/say/argue that focusing on US policy or on GiveWell charities is not the best way to do the most good.
If the claim is the weaker one, that EAs can sometimes be overconfident in their view of the best way forward, or can use language that is off-putting, then that may be right. But that seems different to the “never say that some choices EAs make are better than others” claim, which is suggested elsewhere in the piece.
I think I agree with you on the substantive points, and didn’t think that people would misread it as making the bolder claim if they read the post, given that I caveated most of the statements fairly explicitly. If this was misleading, I apologize.
I don’t think there’s any need to apologise! I was trying to make the case that I don’t think you showed how we could distinguish reasonable from unreasonable uses of normative claims.