I think that people often derive their morality through social proof – when other people like me do it or think it, then it’s probably right. Hence it is a good strategy to appeal to their need for consistency that way – “If you think a health care system is a good thing, then don’t you think that this and that aspect of EA is just a natural extension of that, which you should endorse as well?”
I should try this line of argument in my parts, but last time I checked the premise was not universally endorsed in the US. If I remember the proportions correctly, there was a sizable minority with an agent-relative moral system who drew a clear distinction between their own preferences, which were relevant to them, and other people’s preferences, which were irrelevant to them so long as they didn’t actively violate others’ preferences (according to some fuzzy, intuitive definition of “active”). Hence the argument might not work for those people.