“The step from ethical subjectivism to the claim it’s wrong to interfere with other cultures seems to me completely misguided, even backwards.” I’m not entirely sure what you mean here. We don’t argue that it’s wrong to interfere with other cultures.
Our view is that there is a general attitude that each person can have whatever moral code they like and be justified in it, and we believe that attitude is wrong. If someone claims they kill other humans because it’s their moral code and it’s the most good thing to do, that doesn’t matter. We can rightfully say that they are wrong. So why should there be some cut-off point beyond which we suddenly can’t say someone else’s moral code is wrong? To quote Sam Harris, we shouldn’t be afraid to criticise bad ideas.
“I am convinced that ethics is subjective, not in the sense that any claim about ethics is as good as any other claim, but in the sense that different people and different cultures can possess different ethics (although perhaps the differences are not very substantial) and there is no objective measure by which one is better than the other.” I agree with the first part, that different people and cultures can possess different ethics, but I reject your notion that there is no objective measure by which one is better than the other. If a culture’s ethical code were to brutally maim innocent humans, we don’t say ‘We disagree with that, but it’s happening in another culture so it’s OK; who are we to say that our version of morality is better?’ We would just say that they are wrong.
“If according to my ethics your culture is doing something bad then it is completely rational for me to stop your culture from doing it.” When you say this, it sounds like you are trending toward our view anyway.
I’ve recently updated toward moral antirealism. You’re not using the terms moral antirealism, moral realism, or metaethics in the essay, so I’m not sure whether your argument is meant as one for moral realism or whether you just want to argue that not all ethical systems are equally powerful – some are more inconsistent, more complex, more unstable, etc. than others – and the latter view is one I share.
“I reject your notion that there is no objective measure by which one is better than the other. If a culture’s ethical code were to brutally maim innocent humans, we don’t say ‘We disagree with that, but it’s happening in another culture so it’s OK; who are we to say that our version of morality is better?’ We would just say that they are wrong.”
I would say neither. My morality – probably one shared by many EAs and many, but fewer, people outside EA – is that I care enormously, incomparably more about the suffering of those innocent humans than I care about respecting cultures, traditions, ethical systems, etc. So, other options and opportunity costs aside, I would disagree and intervene decisively, but I wouldn’t feel any need or justification to claim that their morals are factually wrong. This is also my world to shape.
That’s not to say that I wouldn’t compromise for strategic purposes.
You’re banking on the general moral consensus just being one that favours you, or coincides with your subjectivist take on morality. There can be no moral ‘progress’ if that is the case. We could be completely wrong when we say taking slaves is a bad thing, if the world were under ISIS control and the consensus shared by most people were that morality comes from a holy book.
Having a wide consensus for one’s view is certainly an advantage, but I don’t see how the rest follows from that. The direction that we want to call progress would just depend on what each of us sees as progress.
To use Brian’s article as an example: to me, progress would include interventions among wild animals, such as vaccines and birth control, but that’s probably antithetical to the idea of progress held by many environmentalists and even Gene Roddenberry.
What do you mean by being “wrong” about the badness of slavery? Maybe that it would be unwise to address the problem of slavery under an ISIS-like regime because it would have zero tractability and keep us from implementing more tractable improvements since we would be executed?
“I’m not entirely sure what you mean here. We don’t argue that it’s wrong to interfere with other cultures.”
I was refuting what appeared to me to be a strawman of ethical subjectivism.
“If someone claims they kill other humans because it’s their moral code and it’s the most good thing to do, that doesn’t matter. We can rightfully say that they are wrong.”
What is “wrong”? The only meaningful thing we can say is “we prefer people not to die, therefore we will try to stop this person.” We can find other people who share this value and cooperate with them in stopping the murderer. But if the murderer honestly doesn’t mind killing people, nothing we say will convince them, even if they are completely rational.
By ‘wrong’ I don’t mean the opposite of morally just; I mean the opposite of correct. That is to say, we could rightfully say they are incorrect.
I fundamentally disagree with your final point. I used to be a meat-eater, and did not care one bit about the welfare of animals. To use your wording, I honestly didn’t mind killing animals. Through a year of careful argument from a friend of mine, I was finally convinced that was a morally incorrect point of view. To say that it would be impossible to convince a rational murderer who doesn’t mind killing people that murder is wrong is ludicrous.
I completely fail to understand what you mean by “killing people is incorrect.” I understand that “2+2=5” is “incorrect” in the sense that there is a formally verifiable proof of “not 2+2=5” from the axioms of Peano arithmetic. I understand that general relativity is “correct” in the sense that we can use it to predict results of experiments and verify our predictions (on a more fundamental level, it is “correct” in the sense that it is the simplest model that reproduces all previous observations; the distinction is not very important at the moment). I don’t see any verification procedure for the morality of killing people, except checking whether killing people matches the preferences of a particular person or the majority in a particular group of people.
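To make concrete what I mean by “formally verifiable” in the arithmetic case, here is a minimal sketch in Lean 4; the choice of proof assistant and tactic is my own illustration, not anything from the original discussion:

```lean
-- The falsity of "2 + 2 = 5" over the natural numbers is mechanically
-- checkable from the rules of arithmetic; no one's preferences enter into it.
example : 2 + 2 ≠ 5 := by decide
```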
“I used to be a meat-eater, and did not care one bit about the welfare of animals… Through a year of careful argument from a friend of mine, I was finally convinced that was a morally incorrect point of view. To say that it would be impossible to convince a rational murderer who doesn’t mind killing people that murder is wrong is ludicrous.”
The fact that you found your friend’s arguments to be persuasive means there was already some foundation in your mind from which “eating meat is wrong” could be derived. The existence of such a foundation is not a logical or physical necessity. To give a radical example, imagine someone builds an artificial general intelligence programmed specifically to kill as many people as it can, unconditionally. Nothing you say to this AGI will convince it that what it’s doing is wrong. In the case of humans, there are many shared values because we all have very similar DNA and most of us are part of the same memetic ecosystem, but that doesn’t mean all of our values are precisely identical. It would probably be hard to find someone who has no objection to killing people deep down, although I wouldn’t be surprised if extreme psychopaths like that exist. However, other, more nuanced values may vary more significantly.
As we discuss in our post, imagine the worst possible world (WPW). Most humans are comfortable saying that this would be very bad, that any steps towards it would be bad, and that if you disagree and think steps towards the WPW are good, then you’re wrong. In the same vein, if you hold a ‘version of ethics’ that claims moving towards the WPW is good, you’re wrong.
To address your second point: humans are not AGIs; our values are fluid.
I completely fail to understand how your WPW example addresses my point. It is absolutely irrelevant what most humans are comfortable saying. Truth is not a democracy, and in this case the claim is not even wrong (it is ill-defined, since there is no such thing as “bad” without specifying the agent from whose point of view it is bad). It is true that some preferences are nearly universal among humans, but other preferences are less so.
How is the fluidity of human values a point in your favor? If anything it only makes them more subjective.
This is somewhat of an aside, but I know a person who can argue for veganism almost as well as any vegan, and knows it is wrong to be a carnist, yet chooses to eat meat. They are the first to admit that they are selfish and wrong, but they do so anyway.