If values are chosen, not discovered, then how is the choice of values made?
Do you think the choice of values is made, even partially, even implicitly, in a way that involves something that fits the loose definition of a value—like "I want my values to be elegant when described in English" or "I want my values to match my pre-theoretic intuitions about the kinds of cases I am likely to encounter"? Or do you think the choice of values is made in some other way?
I too think that values are chosen, but I think that the choice involves implicit appeal to “deeper” values. These deeper values are not themselves chosen, on pain of infinite regress. And I think the case can be made that these deeper values are complex, at least for most people.
Sorry for the late reply. Good question. I would be more inclined to call it a "mechanism" than a (meta-)value. You're right that there has to be something that isn't chosen. Introspectively, it feels as though concern for my self-image as a moral/altruistic person is what drove me to hold the values I have. This is highly speculative, but perhaps "having a self-image as X" is what's responsible for how people pick consequentialist goals?