What would this even mean? If I assert that X is wrong, and someone else asserts that it’s fine, how do we resolve this? We can appeal to common values from which to derive this conclusion, but that’s pretty arbitrary and largely just feels like my opinion. Claiming that morality is objective just feels groundless.

Are there non-moral disagreements which can be resolved without appeal to common assumptions?
I agree that you need ridiculously fundamental assumptions like “I am not a Boltzmann brain that ephemerally emerged from the aether and is about to vanish” and “we are not in a simulation”. But if you have that kind of thing, I think you can reasonably discuss objective reality.
I think if you grant something like “suffering is bad” you get (some form of) ethics, and this seems like a pretty minimal assumption. (Though I agree you can have an internally consistent view that suffering is good, just as you can have an internally consistent view that you are a Boltzmann brain.)
I’d argue that you also need some assumptions around is-ought, whether to be a consequentialist or not, what else (if anything) you value and how this trades off against suffering, etc. And you also need to decide on some boundaries for which entities are capable of suffering in a meaningful way, which there’s widespread disagreement on (in a way that imo goes beyond being empirical).
It’s enough to get you something like “if suffering can be averted costlessly then this is a good thing”, but that’s pretty rarely practically relevant. Everything has a cost.
Locally, I think that often there will be some cluster of less controversial common values like “caring about the flourishing of society” which can be used to derive something like locally-objective conclusions about moral questions (like whether X is wrong).
Globally, an operationalization of morality being objective might be something like “among civilizations of evolved beings in the multiverse, there’s a decently big attractor state of moral norms that a lot of the civilizations eventually converge on”.
“Less controversial” is a very long way from “objective”: why do you think that “caring about the flourishing of society” is objectively ethical?
Re the idea of an attractor, idk, history has sure had a lot of popular beliefs I find abhorrent. How do we know there even is convergence at all rather than cycles? And why does being convergent imply being objective? If you told me that the supermajority of civilizations concluded that torturing criminals was morally good, that would not make me think it was ethical.
My overall take is that “objective” is just an incredibly strong word for which you need incredibly strong justifications, and your justifications don’t seem close; they seem more about “this is a Schelling point” or “this is a reasonable default that we can build a coalition around”.

No, that wouldn’t prove moral realism at all. That would merely show that you and a bunch of aliens happen to have the same opinions.
See my response to Manuel—I don’t think this is “proving moral realism”, but I do think it would be pointing at something deeper and closer-to-objective than “happen to have the same opinions”.
I don’t think I have much to object to in that, but I do think it doesn’t look at all like ‘stance independent’, if we’re using that as the criterion for ethical realism. What you’re saying, if I understand it correctly, seems to boil down to: ‘given a bunch of intelligent creatures with some shared psychological perceptions of the world and some tendency towards collaboration, it is pretty likely they’ll end up arriving at a certain set of shared norms that optimize towards their well-being as a group (and, in most cases, as individuals)’. That makes the ‘state of moral norms that a lot of the civilizations eventually converge on’ something useful for ends x, y, z, but not ‘true’ and ‘independent of human or alien minds’.

I’m not sure what exactly “true” means here.
Here are some senses in which morality would feel “more objective” rather than “more subjective”:

- I can have the experience of having a view, and then hearing an argument, and updating. My stance towards my previous view then feels more like “oh, I was mistaken” (like if I’d made a mathematical error) rather than “oh, my view changed” (like getting myself to like the taste of avocado when I didn’t use to).
- There can exist “moral experts”, whom we would want to consult on matters of morality. Broadly, we should expect our future views to update towards those of smart, careful thinkers who’ve engaged with the questions a lot.
- It’s possible that the norms various civilizations converge on represent something like “the optimal(/efficient?/robust?) way for society to self-organize”.

I don’t think this is exactly “independent of human or alien minds”, but it also very much doesn’t feel “purely subjective”.
I don’t really believe there’s anything more deeply metaphysical than that going on with morality[1], but I do think that there’s a lot that’s important in the above bullets, and that moral realist positions often feel vibewise “more correct” than antirealist positions (in terms of what they imply for real-world actions), even though the antirealist positions feel technically “more correct”.
I guess: there’s also some possibility of getting more convergence for acausal reasons rather than just evolution towards efficiency. I do think this is real, but it mostly feels like a distraction here so I’ll ignore it.
Terminology can be a bugger in these discussions. I think we are accepting, as per BB’s own definition at the start of the thread, that Moral Realism would basically reduce to accepting a stance-independent view that moral truths exist. As for truth, I would mean it in the way it gets used when studying other, stance-independent objects, i.e., electrons exist and their existence is independent of human minds and/or of humans having ever existed, and saying ‘electrons exist’ is true because of their correspondence to objects of an external, human-independent reality.
What I take from your examples (correct me if I am wrong or if I misrepresent you) is that you feel that moral statements are not as evidently subjective as, say, ‘Vanilla ice-cream is the best flavor’, nor as objective as, say, ‘An electron has a negative charge’, but as living in some space of in-betweenness with respect to those two extremes. I’d still call this anti-realism, as you’re just switching from a maximally subjective stance (an individual’s particular culinary tastes) to a more general, but still stance-dependent one (what a group of experts and/or human and some alien minds might possibly agree upon). I’d say again, an electron doesn’t care for what a human or any other creature thinks about its electric charge.
As for each of the bullet points, what I’d say is:
- I can see why you’d feel the change from a previous view can be seen as a mistake rather than a preference change (when I first started thinking about morality I felt very strongly inclined to the strongest moral realism, and I now feel that pov was wrong), but this doesn’t imply moral realism so much as that it feels as if moral principles and beliefs have objective truth status, even if they were actually a reorganization of stance-dependent beliefs.
- I, on the contrary, don’t feel like there could be ‘moral experts’: at most, people who seem to live up to their moral beliefs, whatever the knowledge and reasons for having them. Most surveys I’ve seen (there’s a Rationally Speaking episode on this) show that philosophers, and moral philosophers specifically, don’t seem to behave more morally than their colleagues and similar social and intellectual peers.
- Convergence can be explained through evolutionary game theory, coordination pressures, and social learning, not objective moral truths. That many societies converge on certain norms just shows what tends to work given human psychology and conditions, not that these norms are true in any stance-independent sense. It’s functional success, not moral facthood.
is that you feel that moral statements are not as evidently subjective as, say, ‘Vanilla ice-cream is the best flavor’, nor as objective as, say, ‘An electron has a negative charge’, but as living in some space of in-betweenness with respect to those two extremes
I think that’s roughly right. I think that they are unlikely to be more objective than “blue is a more natural concept than grue”, but that there’s a good chance that they’re about the same as that (and my gut take is that that’s pretty far towards the electron end of the spectrum; but perhaps I’m confused).
I’d say again, an electron doesn’t care for what a human or any other creature thinks about its electric charge.
Yeah, but I think that e.g. facts about economics are in some sense contingent on the thinking of people, but are not contingent on what particular people think, and I think that something similar could be true of morality.
I, on the contrary, don’t feel like there could be ‘moral experts’
The cleanest example I might give is that if I had a message from my near-future self saying “hey I’ve thought really hard about this issue and I really think X is right, sorry I don’t have time to unpack all of that”, I’d be pretty inclined to defer. I wonder if you feel differently?
I don’t think that moral philosophers in our society are necessarily hitting the bar I would like for “moral expert”. I also don’t think that people who are genuinely experts in morality would necessarily act according to moral values. (I’m not sure that these points are very important.)