This essay comes across as confused about the is-ought problem. Science in the classical sense studies facts about physical reality, not moral qualities. Once you have already decided something is valuable, you can use science to maximize it (e.g. using medicine to maximize health). Similarly, if you have already decided that hedonistic utilitarianism is correct, you can use science to find the best strategy for maximizing hedonistic utility.
I am convinced that ethics is subjective, not in the sense that any claim about ethics is as good as any other claim, but in the sense that different people and different cultures can possess different ethics (although perhaps the differences are not very substantial) and there is no objective measure by which one is better than the other. In other words, I think there is an objective function that takes a particular intelligent agent and produces a system of ethics but it is not the constant function.
Assessing the quality of conscious experiences using neuroscience might be a good tool for informing moral judgement, but again it is only useful in light of assumptions about ethics that come from elsewhere. On the other hand, neuroscience might be useful for computing the “ethics function” above.
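To make the shape of that claim concrete, here is a rough sketch in code (hypothetical types, purely illustrative, assuming nothing beyond the commenter’s own metaphor): a mapping that is “objective” in the sense of being well-defined, yet not the constant function.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Agent:
    """Stand-in for 'a particular intelligent agent': here, just a bundle of basic preferences."""
    preferences: tuple


def ethics(agent: Agent) -> frozenset:
    """Deterministic for any given agent (same input, same output), but agent-dependent."""
    return frozenset(agent.preferences)


human_a = Agent(("avoid suffering", "avoid death"))
human_b = Agent(("avoid suffering", "avoid death", "respect other cultures"))

assert ethics(human_a) == ethics(human_a)  # "objective": the mapping is well-defined
assert ethics(human_a) != ethics(human_b)  # not constant: the output varies with the agent
```

Nothing hangs on the details; the point is only that “objective” (a fixed, well-defined mapping) and “universal” (the same output for every agent) can come apart.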
The step from ethical subjectivism to the claim it’s wrong to interfere with other cultures seems to me completely misguided, even backwards. If according to my ethics your culture is doing something bad then it is completely rational for me to stop your culture from doing it (at the same time it can be completely rational for you to resist). There is no universal value of “respecting other cultures” any more than any other value is universal. If my ethics happens to include the value of “respecting other cultures”, then I need to find the optimal trade-off between allowing the bad thing to continue and violating “respect”.
Thanks for your remarks.

The is-ought distinction wasn’t discussed explicitly, in order to include those unfamiliar with Hume. However, the opening section of the essay attempts to establish morality as just another domain of the physical world. There are no moral qualities over and above the ones we can measure, either a) in the consequences of an act, or b) in the behavioural profiles or personality traits in people that reliably lead to certain acts. Both these things are physical (or, at least, material in the latter case), and therefore measurable. Science studies physical reality, and the ambit of morality is a subset of physical reality. Therefore, science studies morality too.
The essay is silent on ‘hedonistic’ utilitarianism (we do not endorse it, either) because, again, a) we don’t think these are useful terms with which to structure a debate aimed at as wide an audience as possible, and b) they are concerns outside the present scope. This essay focuses on establishing the moral domain as just a subset of the physical, and therefore, that there will be moral facts to be obtained scientifically—even if we don’t know how to obtain them just yet. How to perfectly balance competing interests, for example, is for a later discussion. First, people need to be convinced that you actually can do that with any semblance of objectivity. The baby needs to walk before it can run.
We discuss cross-cultural claims in the section on everyday empiricism.
“There are no moral qualities over and above the ones we can measure, either a) in the consequences of an act, or b) in the behavioural profiles or personality traits in people that reliably lead to certain acts.”
This is the nub of the issue (and, in my view, the crucial flaw in Harris’ thesis). You are measuring various physical (or, more broadly, ‘natural’) properties, but you require an entirely separate philosophical (and largely non-empirical) argument to establish that those properties are moral properties. Whether or not that argument works will be a largely non-empirical question.
The argument you, in fact, give seems to rely on a thought experiment where people imagine a low well-being world and introspectively access their thoughts about it. That’s very much non-empirical, non-scientific and not uncontroversial.
Both these comments are zeroing in on the same issue which is at the core of the essay. The thesis above is deflationary about morality and ethics—the central point is that there is no separate realm of moral significance or quality, added on top of and divorced from material facts.
The chain is that 1) the only thing that possesses, in and of itself, a tint in value whilst still being an entirely material quantity is conscious experience. This move assumes materialism/physicalism, which is mostly uncontroversial now among scientists and philosophers alike.
2) We know the kinds of conscious experiences that are bad. Dying famished and hungry is not merely subjective. It is a subjective state, but one that is universally and always negative. This is not a moral assignment—it is an observable, material fact about the world and about psychological states.
3) The material conditions that lead to changes in conscious experiences are amenable to objective inquiry. The same external stimuli may move different people in different conscious directions, but we can study that relationship objectively. “Dying is bad” is not always a true claim in medical science—it depends on the material context. If you can’t save people from the WPW (the worst possible world), killing them could be a good thing. This is the principle that euthanasia leans on. Sometimes dying is better, in light of the facts about the further possibility for positive conscious experiences. That doesn’t make medical science subjective.
4) The only “non-empirical” assumption you have to make is that what we mean by bad or wrong is movement of consciousness towards, or setting up systems that reliably contain people within, a negative state-space of consciousness.
5) This is how all other physical sciences operate.
We don’t try to give additional argument to demonstrate that those properties are moral properties, we argue that moral properties are a subset of natural properties. In the same sense that ‘health’ is a subset of biological properties, or ‘good plumbing’ is a subset of various structural/engineering & hydrodynamic properties. Everything we value makes reference to material facts and their utility towards a goal set which must be assumed. But only in the case of morality does anyone ever demand a secondary and unreachable standard of objectivity.
Our thesis is therefore a realist, but deflationary (or ‘naturalised’) position on morality.
Hi Robert, I’m familiar with moral naturalism; it’s a well-known philosophical position.

But I still think you’ve simply not given the philosophical argument necessary to establish moral naturalism; you’ve merely asserted it.
“1) the only thing that possesses, in and of itself, a tint in value whilst still being an entirely material quantity is conscious experience.”
This assumption (the first line of your argument) contains the very conclusion you’re arguing for. What even is “value”? This is the question you need to answer, so you can’t just assume it at the start.
You say “This move assumes materialism/physicalism, which is mostly uncontroversial now among scientists and philosophers alike.” But physicalism isn’t the issue here: the issue is explicating what value is in naturalistic terms.

As an aside, there’s an important difference between naturalism/physicalism and reductive physicalism (http://plato.stanford.edu/entries/physicalism/#RedNonRedPhy). Naturalism is less controversial than reductive physicalism.
“2) We know the kinds of conscious experiences that are bad. Dying famished and hungry is not merely subjective. It is a subjective state, but one that is universally and always negative. This is not a moral assignment—it is an observable, material fact about the world and about psychological states.”
Again, this is a position statement, not an argument or a step in an argument. The core question here, as above, is: what does “bad” even mean? I’m not sure, but it reads like you are saying “bad” = (subjectively bad to a person; negatively valenced) in the first sentence, referring to a thing being objectively, universally and always morally “negative” in the second, and referring to some uncontroversial material property (I don’t know which one) in the third. The whole question that needs to be answered is what “bad”/negative value means. And an argument is needed to show what the connection is between subjectively bad experience, objective value and such and such material properties.
“4) The only ‘non-empirical’ assumption you have to make is that what we mean by bad or wrong is movement of consciousness towards, or setting up systems that reliably contain people within, a negative state-space of consciousness.”
This is the very thing you need an argument to establish! You say “the only… assumption you have to make” is such-and-such, and the such-and-such you then describe is exactly the conclusion you need to argue for. What’s to stop me assuming that “bad” refers to some totally different natural property?
“5) We don’t try to give additional argument to demonstrate that those properties are moral properties, we argue that moral properties are a subset of natural properties.”
I’m not sure how to interpret this sentence, so I’ll just address the latter claim.
You say “we argue that moral properties are a subset of natural properties”, but I don’t see an argument for this claim. Maybe you’re just starting from the presumption that naturalism is plausible, so moral properties must be a subset of natural properties? But it doesn’t follow that there are any moral properties, or that moral properties are this rather than that, or that they can be reductively defined.
“Everything we value makes reference to material facts and their utility towards a goal set which must be assumed. But only in the case of morality does anyone ever demand a secondary and unreachable standard of objectivity.”
Right, but, in fact, people seem to have a lot of different goals. It doesn’t follow that there is a single, over-arching “moral” goal-set, rather than just a plethora of unrelated goals. It doesn’t even follow that there is a single goal for any given domain. For example, many interests and considerations (differing from person to person) influence our plumbing preferences: it doesn’t follow there are plumbing properties in any fundamental sense.
This is potentially pretty damning to your thesis. People may simply have lots of different goals, rather than there being a single, universally accepted moral goal. If so there’ll simply be nothing specifically “moral” or there’ll be moral/value relativism.
Hi David, I really don’t think I can reply without rewriting the essay again. I feel like I’ve addressed those concerns already (or at least attempted to do so) in the body of the essay, and you’ve found them unsatisfactory, so we’d just be talking past each other.

Your replies are much appreciated though.
“materialism/physicalism [...] is mostly uncontroversial now among [...] philosophers”
That’s not really true. For example, in the PhilPapers survey, only 56.5% accepted physicalism in philosophy of mind (though 16.4% chose ‘Other’). There’s no knock-down argument for physicalism.
“the only thing that possesses, in and of itself, a tint in value whilst still being an entirely material quantity is conscious experience. This move assumes materialism/physicalism, which is mostly uncontroversial now among scientists and philosophers alike.”
What’re the arguments that scientists or philosophers use for it?
“There are no moral qualities over and above the ones we can measure, either a) in the consequences of an act, or b) in the behavioural profiles or personality traits in people that reliably lead to certain acts. Both these things are physical (or, at least, material in the latter case), and therefore measurable.”
The parameters you measure are physical properties to which you assign moral significance. The parameters themselves are science, the assignment of moral significance is “not science” in the sense that it depends on the entity doing the assignment.
The problem with your breatharianism example is that the claim “you can eat nothing and stay alive” is objectively wrong but the claim “dying is bad” is a moral judgement and therefore subjective. That is, the only sense in which “dying is bad” is a true claim is by interpreting it as “I prefer that people won’t die.”
“but the claim ‘dying is bad’ is a moral judgement and therefore subjective. That is, the only sense in which ‘dying is bad’ is a true claim is by interpreting it as ‘I prefer that people won’t die.’”
Then by extension you have to say that medical science has no normative force. If it’s just subjective, then when medicine says you ought not to smoke if you want to avoid lung cancer, it is completely unjustified in saying “ought not to”.
Yes, medical science has no normative force. The fact that smoking leads to cancer is a claim about a causal relationship between phenomena in the physical world. The fact that cancer causes suffering and death is also a claim about such a relationship. The idea that suffering and death are evil is already a subjective preference (subjective not in the sense that it is undefined, but in the sense that different people might have different preferences; almost all people prefer avoiding suffering and death, but other preferences might have more variance).
“The step from ethical subjectivism to the claim it’s wrong to interfere with other cultures seems to me completely misguided, even backwards.” I’m not entirely sure what you mean here. We don’t argue that it’s wrong to interfere with other cultures.
It is our view that there is a general attitude that each person can have whatever moral code they like and be justified in it, and we believe that attitude is wrong. If someone claims they kill other humans because it’s their moral code and it’s the most good thing to do, that doesn’t matter. We can rightfully say that they are wrong. So why should there be some cut-off point beyond which we suddenly can’t say someone else’s moral code is wrong? To quote Sam Harris, we shouldn’t be afraid to criticise bad ideas.
“I am convinced that ethics is subjective, not in the sense that any claim about ethics is as good as any other claim, but in the sense that different people and different cultures can possess different ethics (although perhaps the differences are not very substantial) and there is no objective measure by which one is better than the other. ” I agree with the first part in that different people and cultures can possess different ethics, but I reject your notion that there is no objective measure by which one is better than the other. If a culture’s ethical code was to brutally maim innocent humans, we don’t say ‘We disagree with that but it’s happening in another culture so it’s ok, who are we to say that our version of morality is better?’ We would just say that they are wrong.
“If according to my ethics your culture is doing something bad then it is completely rational for me to stop your culture from doing it” – when you say this, it sounds like you are trending towards our view anyway.
I’ve recently updated toward moral antirealism. You’re not using the terms moral antirealism, moral realism, or metaethics in the essay, so I’m not sure whether your argument is meant as one for moral realism or whether you just want to argue that not all ethical systems are equally powerful – some are more inconsistent, more complex, more unstable, etc. than others – and the latter view is one I share.
“I reject your notion that there is no objective measure by which one is better than the other. If a culture’s ethical code was to brutally maim innocent humans, we don’t say ‘We disagree with that but it’s happening in another culture so it’s ok, who are we to say that our version of morality is better?’ We would just say that they are wrong.”
I would say neither. My morality – probably one shared by many EAs and many, but fewer, people outside EA – is that I care enormously, incomparably more about the suffering of those innocent humans than I care about respecting cultures, traditions, ethical systems, etc., so – other options and opportunity costs aside – I would disagree and intervene decisively, but I wouldn’t feel any need or justification to claim that their morals are factually wrong. This is also my world to shape.

That’s not to say that I wouldn’t compromise for strategic purposes.
You’re banking on the general moral consensus just being one that favours you, or coincides with your subjectivist take on morality. There can be no moral ‘progress’ if that is the case. We could be completely wrong when we say taking slaves is a bad thing, if the world was under ISIS control and the consensus shared by most people is that morality comes from a holy book.
Having a wide consensus for one’s view is certainly an advantage, but I don’t see how the rest follows from that. The direction that we want to call progress would just depend on what each of us sees as progress.
To use Brian’s article as an example, this would, to me, include interventions among wild animals, for example with vaccines and birth control, but that’s probably antithetical to the idea of progress held by many environmentalists and even Gene Roddenberry.
What do you mean by being “wrong” about the badness of slavery? Maybe that it would be unwise to address the problem of slavery under an ISIS-like regime because it would have zero tractability and keep us from implementing more tractable improvements since we would be executed?
“I’m not entirely sure what you mean here. We don’t argue that it’s wrong to interfere with other cultures.”
I was refuting what appeared to me as a strawman of ethical subjectivism.
“If someone claims they kill other humans because it’s their moral code and it’s the most good thing to do, that doesn’t matter. We can rightfully say that they are wrong.”
What is “wrong”? The only meaningful thing we can say is “we prefer people not to die, therefore we will try to stop this person.” We can find other people who share this value and cooperate with them in stopping the murderer. But if the murderer honestly doesn’t mind killing people, nothing we say will convince them, even if they are completely rational.
By ‘wrong’ I don’t mean the opposite of morally just, I mean the opposite of correct. That is to say, we could rightfully say they are incorrect.
I fundamentally disagree with your final point. I used to be a meat-eater, and did not care one bit about the welfare of animals. To use your wording, I honestly didn’t mind killing animals. Through careful argument over a year from a friend of mine, I was finally convinced that was a morally incorrect point of view. To say that it would be impossible to convince a rational murderer who doesn’t mind killing people that murder is wrong is ludicrous.
I completely don’t understand what you mean by “killing people is incorrect.” I understand that “2+2=5” is “incorrect” in the sense that there is a formally verifiable proof of “not 2+2=5” from the axioms of Peano arithmetic. I understand that general relativity is “correct” in the sense that we can use it to predict results of experiments and verify our predictions (on a more fundamental level, it is “correct” in the sense that it is the simplest model that produces all previous observations; the distinction is not very important at the moment). I don’t see any verification procedure for the morality of killing people, except checking whether killing people matches the preferences of a particular person or the majority in a particular group of people.
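For what it’s worth, the first sense of “incorrect” really is mechanically checkable. A minimal illustration in Lean 4 (offered only to pin down what “formally verifiable” means here; any proof assistant would do):

```lean
-- Both statements are decidable facts about natural numbers,
-- so the proof checker can verify them mechanically.
example : 2 + 2 = 4 := rfl        -- "correct": holds by computation
example : 2 + 2 ≠ 5 := by decide  -- "incorrect": its negation is provable
```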
“I used to be a meat-eater, and did not care one bit about the welfare of animals… Through careful argument over a year from a friend of mine, I was finally convinced that was a morally incorrect point of view. To say that it would be impossible to convince a rational murderer who doesn’t mind killing people that murder is wrong is ludicrous.”
The fact that you found your friend’s arguments to be persuasive means there was already some foundation in your mind from which “eating meat is wrong” could be derived. The existence of such a foundation is not a logical or physical necessity. To give a radical example, imagine someone builds an artificial general intelligence programmed specifically to kill as many people as it can, unconditionally. Nothing you say to this AGI will convince it that what it’s doing is wrong. In the case of humans, there are many shared values because we all have very similar DNA and most of us are part of the same memetic ecosystem, but that doesn’t mean all of our values are precisely identical. It would probably be hard to find someone who has no objection to killing people deep down, although I wouldn’t be surprised if extreme psychopaths like that exist. However, other more nuanced values may vary more significantly.
As we discuss in our post, imagine the worst possible world (WPW). Most humans are comfortable in saying that this would be very bad, and that any steps towards it would be bad, and if you disagree and think that steps towards the WPW are good, then you’re wrong. In the same vein, if you hold a ‘version of ethics’ that claims that moving towards the WPW is good, you’re wrong.
To address your second point, humans are not AGIs; our values are fluid.
I completely fail to understand how your WPW example addresses my point. It is absolutely irrelevant what most humans are comfortable in saying. Truth is not a democracy, and in this case the claim is not even wrong (it is ill defined since there is no such thing as “bad” without specifying the agent from whose point of view it is bad). It is true that some preferences are nearly universal for humans but other preferences are less so.
How is the fluidity of human values a point in your favor? If anything it only makes them more subjective.
This is somewhat of an aside, but I know a person who can argue for veganism almost as well as any vegan, and knows it is wrong to be a carnist, yet chooses to eat meat. They are the first to admit that they are selfish and wrong, but they do so anyway.