I think I get the theory you’re positing, and I think you should look into Constructivism (particularly Humean Constructivism, as opposed to Kantian) and tethered values.
On this comment: Once you get agents with preferences, I’m not sure you can make claims about oughts. Sure, they can act on their preferences and want them satisfied (and maybe you can construct some definition of rationality on which it would be better to act in certain ways relative to achieving one’s aims), but in what sense ought they to? In what sense is this objective?
I’m also not sure I understand what a neutral/impartial view means here, and I don’t understand why someone would care about what it says at all (aside from their mere sentiments, which gets back to my last comment about motivation).
Also, I don’t understand how this relates to the principle of indifference, which states that given some partition over possible hypotheses in a sample space and no evidence in any direction, you should assign equal credence to all possibilities, so that the total sums to one.
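To make that concrete, here is the standard formal statement: for a partition of the sample space into mutually exclusive, jointly exhaustive hypotheses $H_1, \dots, H_n$, with no evidence favoring any one of them,
$$P(H_i) = \frac{1}{n} \quad \text{for each } i, \qquad \text{so that} \quad \sum_{i=1}^{n} P(H_i) = 1.$$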
I guess the main intuitive leap that this formulation of morality takes is the idea that if you care about your own preferences, you should care about the preferences of others as well, because if your preferences matter objectively, theirs do too. And if your preferences don’t matter objectively, why should you care about anything at all?
The principle of indifference as applied here is the idea that given that we generally start with maximum uncertainty about the various sentients in the universe (no evidence in any direction about their worth or desert), we should assign equal value to each of them and their concerns. It is admittedly an unusual use of the principle.
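Spelling the analogy out in symbols (my own notation, purely to illustrate the parallel): if there are $n$ sentients and no evidence bearing on their relative worth or desert, the move is to assign each an equal moral weight
$$w_i = \frac{1}{n}, \qquad \sum_{i=1}^{n} w_i = 1,$$
treating sentients under moral uncertainty the way the original principle treats hypotheses under evidential uncertainty.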
I find the jump hard to understand. Your preferences matter to you, not ‘objectively’; they just matter because you want x, y, and z. It doesn’t matter whether your preferences matter objectively: you still care about them. You might have a preference for being nice to people, and that will still matter to you regardless of anything else, unless you change your preference, which I guess is possible but not easy; it depends on the preference. As for the principle of indifference, I really struggle to see how it could be meaningful, because one has an innate preference for oneself, so whatever uncertainty you have about other sentients, there is no reason at all to grant them and their concerns equal value to yours a priori.
I mean, that innate preference for oneself isn’t objective in the sense of being a neutral outsider view of things. If you don’t see the point of having an objective ‘point of view of the universe’ on things, then sure, there’s no reason to care about this version of morality. I’m not arguing that you need to care, only that caring would be objective and possibly truth-tracking, i.e. that there exists a formulation of morality that is objective in nature.
Thanks! I think I can see your POV more clearly now. One thing that often leads me astray is how words latch onto different meanings, which makes discussion and clarification difficult (as with ‘realism’ and ‘objective’). I think my crux, given what you say, is that I indeed don’t see the point of having a neutral, outsider point of view of the universe in ethics. I’d need to think more about it.

I think trying to be neutral or impartial makes sense in science, where the goal is understanding a mind-independent world. But in ethics, I don’t see why that outsider view would have any special authority unless we choose to give it weight. Objectivity in the sense of a ‘view from nowhere’ isn’t automatically normatively relevant, I feel. I can see why, when pragmatically trying to satisfy your preferences as a human in contact with other humans who have their own preferences, it makes sense to include in the social contract some specialized and limited uses of objectivity: they’re useful tools for coordination, debate, and decision-making, and it benefits the maximization of our personal preferences to have some figures of power (rulers, judges, etc.) who are constrained to follow them. But that wouldn’t make them ‘true’ in any sense: they are just the result of agreements and negotiated duties for attaining certain agreed-upon ends.