I’m continually unsure how best to label or characterize my beliefs. I recently switched from calling myself a moral realist (usually with some “but it’s complicated” pasted on) to an “axiological realist.”
I think some states of the world are objectively better than others, that pleasure is inherently good and suffering is inherently bad, and that we can say things like “objectively it would be better to promote happiness over suffering.”
But I’m not sure I see the basis for making some additional leap to genuine normativity; I don’t think things like objective ordering imply some additional property which is strongly associated with phrases like “one must” or “one should”.
Of course the label doesn’t matter a ton, but I’m curious both what people think is the appropriate label for such a set of beliefs and what they think of it on the merits.
(For those interested, I recorded a podcast on this with @sarahhw and @AbsurdlyMax a while back)
I’m not an axiological realist, but it seems really helpful to have a term for that position; upvoted.
Broadly, and off-topically, I’m confused why moral philosophers don’t always distinguish between axiology (valuations of states of the world) and morality (how one ought to behave). People seem to frequently talk past each other for lack of this distinction. For example, they object to valuing a really large number of moral patients (an axiological claim) on the grounds that doing so would be too demanding (a moral claim). I first learned these terms from https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/ which I recommend.
Alastair Norcross is a famous philosopher with similar views. Here’s the argument I once gave him that seemed to convert him (at least on that day) to realism about normative reasons:
First, we can ask whether you’d like to give up your Value Realism in favour of a relativistic view on which there’s “hedonistic value”, “desire-fulfilment value”, and “Nazi value”, all metaphysically on a par. If not—if there’s really just one correct view of value, regardless of what subjective standards anyone might arbitrarily endorse—then we can raise the question of why normative reasons don’t move in parallel. Surely an account of reasons for action that is grounded in facts about what’s genuinely valuable is superior to an alternative account that bears no connection to the true value facts?
This just seems to be question-begging: it seems to me you’re saying “axiological realism gives rise to normative realism because surely axiological realism gives rise to normative realism.”
This is basically my view, and I think ‘axiological realism’ is a great name for it.
Why do you think some states of the world are objectively better than others, or that pleasure is inherently good? I suppose I can go check out the podcast, but I’d be happy to have a discussion with you here.
Assuming we’re not radically mistaken about our own subjective experience, it really seems like pleasure is good for the being experiencing it (aside from any functional role or causal effects it may have).
In fact, pleasure without goodness in some sense seems like an incoherent concept. If a person were to insist that they felt pleasure but in no sense was this a good thing, I would say that they are mistaken about something, whether it be the nature of their own experience or the usual meaning of words.
Some people, I think, concede the above but want to object that lower-case goodness in the sense described is distinct from some capital-G objective Goodness out there in the world.
But sentient beings are perfectly valid elements of the world/universe, and so goodness for a given being simply implies goodness at large (all else equal, of course). There’s no spooky metaphysical sense in which it’s written into the stars; it is simply directly implied by the facts about what some things are like for some sentient beings.
I’d add that the above logic holds fine, and with even more rhetorical and ethical force, in the case of suffering.
Now if you accept the above, here’s a simple thought experiment: consider two states of the world, identical in every way except that in world A you’re experiencing a terrible stomach ache and in world B you’re not.
The previous argument implies that there is simply more badness in world A, full stop.
Much more to be said ofc but I’ll leave it there :)
When you say “Assuming we’re not radically mistaken”... you’re using the term “we” as though you’re assuming I and others agree with you. But I don’t know if I agree with you, and there’s a good chance I don’t. What do you mean when you say that pleasure is good for the being experiencing it? For that matter, what do you mean by “pleasure”? If “pleasure” refers to any experience that an agent prefers, and for something to be good for someone is for them to prefer it, then you’d be saying something I’d agree with: that any experiences an agent prefers are experiences that agent prefers. But if you’re not saying that, then I am not sure what you are saying.
I think there are facts about what is good according to different people’s stances. So my pleasure can be good according to my stance. But I do not think pleasure is stance-independently good.
In fact, pleasure without goodness in some sense seems like an incoherent concept.
What do you mean by “goodness”?
But sentient beings are perfectly valid elements of the world/universe, and so goodness for a given being simply implies goodness at large (all else equal, of course).
I’m perfectly fine with saying that there are facts about what individuals prefer and consider good, but the fact that something is good relative to someone’s preferences does not entail that it is good simpliciter, good relative to my preferences, intrinsically good, or anything like that. The fact that this person is a “valid element of the world/universe” doesn’t change that fact.
There’s no spooky metaphysical sense in which it’s written into the stars; it is simply directly implied by the facts about what some things are like for some sentient beings.
What you’re saying doesn’t strike me so much as metaphysically spooky but as conceptually underdeveloped. I don’t think it’s clear (at least, not to me) what you mean when you refer to goodness. For instance, I cannot tell if you are arguing for some kind of moral realism or normative realism.
Now if you accept the above, here’s a simple thought experiment: consider two states of the world, identical in every way except that in world A you’re experiencing a terrible stomach ache and in world B you’re not.
The previous argument implies that there is simply more badness in world A, full stop.
What would it mean for there to be “more badness” in world A? Again, it’s just not clear to me what you mean by the terms you are using.
I think I concede that ‘pleasure is good for the being experiencing it’. I don’t think this leads to where you take it, though. It is good for me to eat meat, but probably it isn’t good for the animal. And in the thought experiment you make, I prefer world A, where I’m eating bacon and the pig is dead, to world B, where the pig is feeling fine and I’m eating broccoli. You can’t jump from what’s good for one to what’s good for many. Besides, granting that something is good for the one who experiences it feels a bit broad: the good for him doesn’t turn into some law that must be obeyed, even for him/her. There are trade-offs with other desires, you might also want to consider (or not) long-term effects, etc. It also has no ontological status as ‘the good’, just as there is no Platonic form of ‘the good’ floating in Platonic heaven.
I think some states of the world are objectively better than others, that pleasure is inherently good and suffering is inherently bad, and that we can say things like “objectively it would be better to promote happiness over suffering.”
I know lots of people who think some amount of suffering is good (and not just instrumentally for having more pleasure later). Is your claim here just that you somehow know that pleasure is inherently good?
I think the belief you are describing is more accurately expressed as “I’m confident my subjective view won’t change,” or something like that.
I think objective ordering does imply “one should,” so I subscribe to moral realism. However, recently I’ve come to appreciate the importance of your insistence that the “should” part is kind of fake—i.e. it means something like “action X is objectively the best way to create the most value from the point of view of all moral patients,” but it doesn’t imply that an ASI that figures out what is morally valuable will be motivated to act on it.
(Naively, it seems like if morality is objective, there’s basically a physical law formulated as “you should do actions with characteristics X.” Then, it seems like a superintelligence that figures out all the physical laws internalizes “I should do X.” I think this is wrong mainly because in human brains that sentence deceptively seems to imply “I want to do X” (or perhaps “I want to want X”), whereas it actually means “Provided I want to create maximum value from an impartial perspective, I want to do X.” In my own case, the kind of argument for optimism about AI doom in the style that @Bentham’s Bulldog advocated in Doom Debates seemed a bit more attractive before I truly spelled this out in my head.)