The challenge of the Scourge is that a common bioconservative belief (“The embryo has the same moral status as an adult human”) may entail another which seems facially highly implausible (“Therefore, spontaneous abortion is one of the most serious problems facing humanity, and we must do our utmost to investigate ways of preventing this death—even if this is to the detriment of other pressing issues”). Many (most?) find the latter bizarre, so if they believed it was entailed by the bioconservative claim, they would infer that this claim must be false.
I don’t really see how this helps, because something similar seems to apply to EAs, regardless of whether the issue is hypocrisy or a modus ponens / modus tollens. We use common moral beliefs (future people have value) to entail others which seem facially highly implausible (we should spend vast sums of money on strange projects, even if this is to the detriment of other pressing issues). Many (most?) find the latter bizarre, so if they believed it was entailed by the future-people-have-value claim, they would infer that this claim must be false. In both cases the argument uses common ‘near’ moral views to deduce sweeping global moral imperatives.
Sure—I’m not claiming “EA doctrine” has no putative counter-examples which should lead us to doubt it. But these counter-examples should rely on beliefs about propositions, not assessments of behaviour: if EA says “it is better to do X than Y”, yet this seems wrong, that is a reason to doubt EA, but whether anyone is actually doing X (or X instead of Y) is irrelevant. “EA doctrine” (like most other moral views) urges us to be much less selfish—that I am selfish anyway is not an argument against it.