I agree with Squark—it’s only when we’ve already decided that, say, saving lives is important that we create health systems to do just that.
That said, I agree with the point that EA is not doing anything different to society as a whole—particularly healthcare—in terms of its philosophical assumptions. It would be fairly inconsistent to scrutinise the philosophical assumptions that underlie EA while ignoring those that underlie our healthcare systems.
More generally, I approach morality in a similar way: sentient beings aim to satisfy their own preferences. I can’t suddenly decide not to satisfy my own preferences, yet there’s no justification for putting my own preferences above those of others. It seems to me, then, that if I am satisfying my own preferences—which it is impossible not to do—I’m obligated to maximise the preference-satisfaction of others too.
We could ask “why act in a logically consistent fashion?” or “why act as logic tells you to act?”, but such questions presuppose the existence of logic, so I don’t think they’re valid questions to ask.
“it’s only when we’ve already decided that, say, saving lives is important that we create health systems to do just that.” But no one pays any credence to the few who argue that we shouldn’t value saving lives; we don’t even shrug and say ‘that’s their opinion, who am I to say that’s wrong?’ We just say that they are wrong. Why should ethics be any different?
I think that people often derive their morality through social proof – if other people like me do it or think it, then it’s probably right. Hence it is a good strategy to appeal to their need for consistency in that way – “If you think a healthcare system is a good thing, then don’t you think that this or that aspect of EA is just a natural extension of it, which you should endorse as well?”
I should try this line of argument around my parts, but last time I checked the premise was not universally endorsed in the US. If I remember the proportions correctly, there was a sizable minority that had an agent-relative moral system and made a clear distinction between their own preferences, which were relevant to them, and other people’s preferences, which were irrelevant to them so long as they didn’t actively violate the other person’s preferences (according to some fuzzy, intuitive definition of “active”). Hence the argument might not work for those people.
I agree—it would be bizarre to selectively criticise EA on this basis when our entire healthcare system is predicated on ethical assumptions.
Similarly, we could ask “why satisfy my own preferences?”, but seeing as we just do, we have to take it as a given. I think that the argument outlined in this post takes a similar position: we just do value certain things, and EA is simply the logical extension of our valuing these things.
You don’t really have a choice but to satisfy your own preferences.
Suppose you decide to stop satisfying your preferences. Well, you’ve just satisfied your preference to stop satisfying your preferences.
So the answer to the question is that it’s logically impossible not to. Sometimes your preferences will include helping others, and sometimes they won’t. In either case, you’re satisfying your preference when you act on it.
“why satisfy my own preferences?”
That’s the linchpin: you don’t have to. You can be utterly incapable of actually following through on what you’ve deemed to be logical behaviour, yet still comment on what is objectively right or wrong. (This goes back to your original comment too.)
There are millions of obese people failing to immediately start and follow through on diets and exercise regimes today. This is failing to satisfy their preferences—they have an interest in not dying early, which obesity reliably correlates with. On the basis of their outward behaviour, it looks as though they don’t value health and longevity. That doesn’t make the objectivity of health science any less real: if they do want to avoid premature death, and if they do value bodily nourishment, then their approach is wrong. You can absolutely fail to satisfy your own preferences.
Asking the further questions of “why satisfy my own preferences?” or “why act in a logically consistent fashion?” just drifts us into the realm of radical scepticism. This is an utterly unhelpful position to hold—you can go nowhere from there. “Why trust that my sense data are sometimes veridical?” You don’t have to, but you’d be mad not to.