So, regarding the moral motivation thing, moral realism and motivational internalism are distinct philosophical concepts, and one can be true without the other also being true. Like, there could be moral facts, but they might not matter to some people. Or, maybe people who believe things are moral are motivated to act on their theory of morality, but the theory isn't based on any moral facts; it just rests on deeply held beliefs.
The latter example could be true regardless of whether moral realism is true or not. For instance, the psychopath might -think- that egoism is the right thing to do because their folk morality is that everyone is in it for themselves and suckers deserve what they get. This isn't morality as we might understand it, but it would function psychologically as a justification for their actions (letting them sleep better at night and keep a more positive self-image) and would effectively be motivating in a sense.
Even -if- both moral realism and motivational internalism were true, that wouldn't mean people would automatically discover moral facts and act on them reliably. You would basically need perfect information and perfect rationality for that to happen, and no one has those traits in the real world (except maybe God, hypothetically).
Thanks for the comment!
Yep, in the philosophical literature they are distinct. I was merely making the point that I'm not sure one of these options (moral realism is true but not motivating) actually reflects what people mean to imply when they say moral realism is true. In what sense are we saying that there is objective morality if it relies on some sentiments? I guess one can claim that the rational thing to do, given some objective (i.e. morality), is to pursue that objective, but that doesn't seem very distinct from plain practical rationality. If it's just practical rationality, we should call it just that. Still, as stated in the post, I don't think we can make ought claims about practical rationality (though you can probably make conditional claims: given that you want x, and that you should do what you want, you should take action y). Similarly, if one took this definition of realism seriously, they'd say that moral realism is true in the same way that gastronomical realism is true (i.e. that there are true facts about which foods I should eat, because they follow from my preferences about food).
Also, I'm not sure I buy your last point. I think under the forms of realism that people typically want to talk about, there's a gradient: you become more moral as you become more rational (using your evidence well, acting in accordance with your goals, etc.). While you could just say that morality and motivation towards it only cash out at the highest level of rationality (i.e. god or whatever), this seems weird and much harder to justify.
You could argue that if moral realism is true, then even if our models of morality are probably wrong, we can become less wrong about them by acquiring knowledge about the world, which contains the relevant moral facts. We would never be certain they are correct, but we could become more confident in them, in the same way we can become confident that a mathematical theory is valid.
I guess I should explain what my version of moral realism would entail.
Morality, to my understanding, is, for lack of a better phrase, subjectively objective. Given a universe without any subjects making subjective value judgments, nothing would matter (it's just a bunch of space rocks colliding and stuff). However, as soon as you introduce subjects capable of experiencing the universe, having values, and making judgments about the value of different world states, we have the capacity to make "should" statements about the desirability of given possible world states. Some things are now "good" and some things are now "bad", at least to a given subject. From an objective, neutral, impartial point of view, all subjects and their value judgments are equally important (following the Principle of Indifference, aka the Principle of Maximum Entropy).
Thus, as long as anyone anywhere cares about something enough to value or disvalue it, it matters objectively. The statement "Alice cares about not feeling pain" (or its hedonic equivalent, "Alice experiences pain as bad") is an objective moral fact. Given that all subjects are equal (possibly in proportion to degree of sentience, I'm not sure about this), we can aggregate these values and select the world state that is most desirable overall (the greatest good for the greatest number).
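To make that aggregation step a bit more concrete, here is a rough sketch of the selection rule I have in mind (the notation is purely illustrative):

$$ w^{*} = \arg\max_{w \in W} \sum_{i=1}^{n} u_i(w) $$

where $W$ is the set of possible world states, $n$ is the number of sentient subjects, and $u_i(w)$ is how much subject $i$ values world state $w$, with every subject weighted equally (or, per the caveat above, perhaps weighted by degree of sentience).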
The rest of morality (things like universalizable rules that generally encourage the greatest good in the long run) is built on top of this foundation of treating the desires/concerns/interests/well-being/happiness/Eudaimonia of all sentient beings throughout spacetime equally and fairly. At least, that's my theory of morality.
I think I get the theory you’re positing, and I think you should look into Constructivism (particularly Humean Constructivism, as opposed to Kantian) and tethered values.
On this comment: once you get agents with preferences, I'm not sure you can make claims about oughts. Sure, they can have preferences and act on them (and maybe there is some definition of rationality you can construct such that it would be better to act in certain ways relative to achieving one's aims), but in what sense ought they to? In what sense is this objective?
I'm also not sure I understand what a neutral/impartial view means here, and I don't understand why someone might care about what it says at all (aside from their mere sentiments, which gets back to my last comment about motivation).
Also, I don't understand how this relates to the principle of indifference, which states that, given some partition of a sample space into possible hypotheses and no evidence in any direction, you should assign equal credence to each possibility so that the credences sum to one.
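In symbols, this is just the standard statement: for mutually exclusive and exhaustive hypotheses $H_1, \ldots, H_n$ with no evidence favoring any of them,

$$ P(H_i) = \frac{1}{n} \ \text{ for each } i, \qquad \sum_{i=1}^{n} P(H_i) = 1. $$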
I guess the main intuitive leap this formulation of morality takes is the idea that if you care about your own preferences, you should care about the preferences of others as well, because if your preferences matter objectively, theirs do too. And if your preferences don't matter objectively, why should you care about anything at all?
The principle of indifference, as applied here, is the idea that since we generally start with maximum uncertainty about the various sentients in the universe (no evidence in any direction about their worth or desert), we should assign equal value to each of them and their concerns. It is admittedly an unusual use of the principle.
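To spell out the analogy: where the standard principle assigns each of $n$ hypotheses the credence $P(H_i) = 1/n$, this moral version assigns each of the $n$ sentients an equal weight $w_i = 1/n$ in the aggregation (again, this is just illustrative notation, not a standard usage of the principle).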
I find the jump hard to understand. Your preferences matter to you, not 'objectively'; they just matter because you want x, y, z. It doesn't matter if your preferences don't matter objectively. You still care about them. You might have a preference for being nice to people, and that will still matter to you regardless of anything else (unless you change your preference, which I guess is possible but not easy; it depends on the preference). The principle of indifference… I really struggle to see how it could be meaningful, because one has an innate preference for oneself, so whatever uncertainty you have about other sentients, there's no reason at all to grant them and their concerns equal value to yours a priori.
I mean, that innate preference for oneself isn't objective in the sense of being a neutral, outsider view of things. If you don't see the point of taking an objective "point of view of the universe" on things, then sure, there's no reason to care about this version of morality. I'm not arguing that you need to care, only that it would be objective and possibly truth-tracking to do so, and that there exists a formulation of morality that can be objective in nature.
Thanks! I think I can see your point of view more clearly now. One thing that often leads me astray is how words seem to latch onto different meanings, and this makes discussion and clarification difficult (as with 'realism' and 'objective'). I think my crux, given what you say, is that I indeed don't see the point of having a neutral, outsider point of view of the universe in ethics. I'd need to think more about it. I think trying to be neutral or impartial makes sense in science, where the goal is understanding a mind-independent world. But in ethics, I don't see why that outsider view would have any special authority unless we choose to give it weight. Objectivity in the sense of 'from nowhere' isn't automatically normatively relevant, I feel. I can see why, for example, when pragmatically trying to satisfy your preferences as a human in contact with other humans who have their own preferences, it makes sense to include in the social contract some specialized and limited uses of objectivity: they're useful tools for coordination, debate, and decision-making, and it benefits the maximization of our personal preferences to have some figures of power (rulers, judges, etc.) who are constrained to follow them. But that wouldn't make them 'true' in any sense: they are just the result of agreements and negotiated duties for attaining certain agreed-upon ends.