I completely don’t understand what you mean by “killing people is incorrect.” I understand that “2+2=5” is “incorrect” in the sense that there is a formally verifiable proof of “not 2+2=5” from the axioms of Peano arithmetic. I understand that general relativity is “correct” in the sense that we can use it to predict results of experiments and verify our predictions (on a more fundamental level, it is “correct” in the sense that it is the simplest model that produces all previous observations; the distinction is not very important at the moment). I don’t see any verification procedure for the morality of killing people, except checking whether killing people matches the preferences of a particular person or the majority in a particular group of people.
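To make “formally verifiable” concrete, here is a minimal sketch in Lean 4, using Lean’s built-in natural numbers as a stand-in for Peano arithmetic (the theorem name is my own illustration):

```lean
-- A machine-checked proof that 2 + 2 ≠ 5 over the natural numbers.
-- `decide` asks Lean's kernel to evaluate this decidable proposition
-- and confirm that it holds, so the claim is verified by the proof checker.
theorem two_plus_two_ne_five : 2 + 2 ≠ 5 := by decide
```

Nothing analogous is on offer for “killing people is incorrect”: there is no agreed axiom system from which to check the claim.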
“I used to be a meat-eater, and did not care one bit about the welfare of animals… Through careful argument over a year from a friend of mine, I was finally convinced that was a morally incorrect point of view. To say that it would be impossible to convince a rational murderer who doesn’t mind killing people that murder is wrong is ludicrous.”
The fact that you found your friend’s arguments persuasive means there was already some foundation in your mind from which “eating meat is wrong” could be derived. The existence of such a foundation is not a logical or physical necessity. To give a radical example, imagine someone builds an artificial general intelligence programmed specifically to kill as many people as it can, unconditionally. Nothing you say to this AGI will convince it that what it’s doing is wrong. In the case of humans, there are many shared values because we all have very similar DNA and most of us are part of the same memetic ecosystem, but that doesn’t mean all of our values are precisely identical. It would probably be hard to find someone who, deep down, has no objection to killing people, although I wouldn’t be surprised if extreme psychopaths like that exist. However, other, more nuanced values may vary more significantly.
As we discuss in our post, imagine the worst possible world (WPW). Most humans are comfortable saying that this world would be very bad, that any steps towards it would be bad, and that if you disagree and think that steps towards the WPW are good, you’re wrong. In the same vein, if you hold a ‘version of ethics’ that claims that moving towards the WPW is good, you’re wrong.
To address your second point: humans are not AGIs, and our values are fluid.
I completely fail to understand how your WPW example addresses my point. It is absolutely irrelevant what most humans are comfortable saying. Truth is not a democracy, and in this case the claim is not even wrong: it is ill-defined, since there is no such thing as “bad” without specifying the agent from whose point of view it is bad. It is true that some preferences are nearly universal among humans, but other preferences are less so.
How is the fluidity of human values a point in your favor? If anything, it only makes them more subjective.