One question is what we want “morality” to refer to under anti-realism. For me, what seems important and action-guiding is what I want to do in life, so personally I think of normative ethics as “What is my goal?”.
Under this interpretation, the difference between biting bullets and not biting them comes down to how much people care about their theories being elegant, simple, and parsimonious versus how much they care about tracking their intuitions as closely as possible. You mention two good reasons for favoring a more intuition-tracking approach.
Alternatively, why might some people still want to bite bullets? For one thing, no one wants to accept a view that seems unacceptable. Introspectively, biting a bullet can feel “right” if I am convinced that the alternatives feel worse and if I realize that the aversion-generating intuitions are not intuitions my rational self-image would endorse. For instance, I might feel quite uncomfortable with the thought of sending all my money to people far away while neglecting poor people in my community. I could accept this feeling as a sign that community matters intrinsically to me, i.e. that I care (somewhat) more strongly about the people close to me. Or I could bite the bullet and label “preference for in-group” a “moral bias” – biased in relation to what I want my life-goals to be about. Perhaps, upon reflection, I decide that some moral intuitions matter more fundamentally to me, for instance because I want to live for something that is “altruistic”/“universalizable” from a perspective like Harsanyi’s Veil of Ignorance. Given this fundamental assumption, I’ll be happy to ignore agent-relative moral intuitions. Of course, it isn’t wrong to end up with a mix of both ideas if the intuition “people in my community really matter more to me!” is just as strong as the intuition that I want my goal to work behind a veil of ignorance.
On LessWrong, people often point out that human values are complex and that those who bite too many bullets are making a mistake. I disagree. What is complex are human moral intuitions. Values, by which I mean “goals” or “terminal values”, are chosen, not discovered. (Consequentialist goals are new and weird and hard for humans to have, so why would they be discoverable in a straightforward manner from all the stuff we start out with?) And just because our intuitions are complex – and sometimes totally contradict each other – doesn’t mean that we’re forced to choose goals that look the same. Likewise, I think people who assume some form of utilitarianism must be the thing are making a mistake as well.
If values are chosen, not discovered, then how is the choice of values made?
Do you think the choice of values is made, even partially, even implicitly, in a way that involves something that fits the loose definition of a value – like “I want my values to be elegant when described in English” or “I want my values to match my pre-theoretic intuitions about the kinds of cases I am likely to encounter”? Or do you think that the choice of values is made in some other way?
I too think that values are chosen, but I think that the choice involves implicit appeal to “deeper” values. These deeper values are not themselves chosen, on pain of infinite regress. And I think the case can be made that these deeper values are complex, at least for most people.
Sorry for the late reply. Good question. I would be more inclined to call it a “mechanism” rather than a (meta-)value. You’re right, there has to be something that isn’t chosen. Introspectively, it feels to me as though I’m concerned about my self-image as a moral/altruistic person, and that this is what drove me to hold the values I have. This is highly speculative, but perhaps “having a self-image as x” is what’s responsible for how people pick consequentialist goals?