Hi, thanks for the reply! The argument now has a bit of a motte and bailey feel, in that case. In various places you make claims such as:
“The Folly of ‘EAs Should’”;
“One consequence of this is that if there are no normative claims, any supposition about what ought to happen based on EA ideas is invalid”;
“So I think we should discuss why Effective Altruism implying that there are specific and clear preferable options for Effective Altruists is often harmful”;
“Claiming something normative given moral uncertainty, i.e. that we may be incorrect, is hard to justify. There are approaches to moral uncertainty that allow a resolution, but if EAs should cooperate, I argue that it may be useful, regardless of normative goals, to avoid normative statements that exclude some viewpoints.”
“and conclusions based on the suppositions about key facts are usually unwarranted, at least without clear caveats about the positions needed to make the conclusions”
These seem to be claims to the effect that (1) we should (almost) never make normative claims, and (2) that we should be strongly sceptical about whether one path is better, from an EA point of view, than another. But I don’t see a defence of either of these claims in the piece. For example, I don’t see a defence of the claim that it is mistaken to think/say/argue that focusing on US policy or on GiveWell charities is not the best way to do the most good.
If the claim is the weaker one that EAs can sometimes be overconfident in their view of the best way forward, or can use language that is off-putting, then that may be right. But that seems different to the “never say that some choices EAs make are better than others” claim, which is suggested elsewhere in the piece.
I think I agree with you on the substantive points, and didn’t think that people would misread it as making the bolder claim if they read the post, given that I caveated most of the statements fairly explicitly. If this was misleading, I apologize.
I don’t think there’s any need to apologise! I was trying to make the case that I don’t think you showed how we could distinguish reasonable from unreasonable uses of normative claims.