I mean, that might help with a few problems, but doesn’t help with a lot of the problems. Also, it just seems so crazy. Giving up axiology to hold on to a not even very widely shared intuition? Giving up the idea that the world would be better if it had lots of extra happy people and every existing person was a million times better?
I think we have very different intuitions.

I don’t think giving up axiology is much of a bullet to bite, if any, and I find the frameworks I linked:

1. better motivated than axiology, in particular by empathy, and better at respecting what individuals (would) actually care about,[1] which I take to be pretty fundamental and pretty much the point of “ethics”, and
2. a better fit with subjectivism/moral antirealism.[2]
The problems with axiology also seem worse to me, often because axiology fails, one way or another, to respect what individuals (would) actually care about and so fails at empathy, as I illustrate in my sequence.
Giving up axiology to hold on to a not even very widely shared intuition?
What do you mean to imply here? Why would I force myself to accept axiology, which I don’t find compelling, at the cost of giving up my own stronger intuitions?
And is axiology (or the disjunction of conjunctions of intuitions from which it would follow) much more popular than person-affecting intuitions like the Procreation Asymmetry?
Giving up the idea that the world would be better if it had lots of extra happy people and every existing person was a million times better?
I think whether a given person-affecting view has to give that up depends on the particular view and/or the details of the hypothetical.
[1] What they care about at a basic level, not necessarily what they care about by derivation from other things they care about, because they can be mistaken in their derivations.
[2] Moral realism, i.e. that there’s good or bad independently of individuals’ stances (or evaluative attitudes, as in my first post), seems to me to be a non-starter. I’ve never seen anything close to a good argument for moral realism, other than perhaps epistemic humility and wagers.
Want to come on the podcast and argue about the person-affecting view?
Probably our disagreements are too vast to settle much in a comment.