I agree—it would be bizarre to selectively criticise EA on this basis when our entire healthcare system is predicated on ethical assumptions.
Similarly, we could ask “why satisfy my own preferences?”, but seeing as we just do, we have to take it as a given. I think that the argument outlined in this post takes a similar position: we just do value certain things, and EA is simply the logical extension of our valuing these things.
You don’t really have a choice but to satisfy your own preferences.
Suppose you decide to stop satisfying your preferences. Well, you’ve just satisfied your preference to stop satisfying your preferences.
So the answer to the question is that it’s logically impossible not to. Sometimes your preferences will include helping others, and sometimes they won’t. In either case, you’re satisfying your preference when you act on it.
“why satisfy my own preferences?”
That’s the linchpin. You don’t have to. You can be utterly incapable of following through on what you’ve deemed to be logical behaviour, yet still comment on what is objectively right or wrong. (This goes back to your original comment too.)
There are millions of obese people failing to immediately start and follow through on diets and exercise regimes today. This is failing to satisfy their preferences: they have an interest in not dying early, and obesity reliably correlates with dying early. Judging by their outward behaviour, it ostensibly looks as though they don’t value health and longevity. This doesn’t make the objectivity of health science any less real. If they do want to avoid premature death and do value bodily nourishment, then their approach is wrong. You can absolutely fail to satisfy your own preferences.
Asking the further questions of “why satisfy my own preferences?” or “why act in a logically consistent fashion?” just drifts us into the realm of radical scepticism. This is an utterly unhelpful position to hold: you can go nowhere from there. “Why trust that my sense data are sometimes veridical?” You don’t have to, but you’d be mad not to.