However, the compulsion to bite bullets is at odds with moral anti-realism, another popular belief among EAs.
Well, questions of how one ought to act are about ethics, while questions about the nature of morality are about meta-ethics. Meta-ethical principles can inform ethics, but only indirectly. An anti-realist can still have reasons to affirm a consistent view of morality, and a realist can still refuse to accept demanding forms of morality.
So is morality closer to physics or biology? An empirical approach to morality would view it as stemming from evolutionary psychology, social structures, and historical serendipity. Psychology, sociology, and history are some of the only fields even less law-like than biology.
Empiricist approaches to meta-ethics treat morality as something to be learned from human experience. That is notably different from the scientific methodology applied to fields like psychology and biology, whether those fields are law-like or not. You generally can’t determine any facts about morality by studying its psychology, genealogy and history in society, as those refer to how people act and moral philosophy refers to how they ought to act. Some would argue that there are ways to derive normative conclusions from the social sciences, although I believe those ideas are generally limited and contentious. Nevertheless, the two fields’ scopes and methodologies are entirely different, so I don’t think you can draw meaningful parallels.
A popular bullet to bite is an argument of the form “X is theft/rape/murder”, where X is an act that is widely believed to be morally acceptable but that has superficial similarity to a serious crime.
I’m not aware of this being common. The LessWrong link doesn’t seem to be relevant to legitimate moral philosophy. Can you give some examples?
Typically we are dealing with issues where the conclusions of a moral principle are highly counterintuitive. This can take many forms.
Morality becomes even more complex when it involves competing values. There is no inconsistency in believing that airports should X-ray luggage to reduce security risks and simultaneously believing that widespread surveillance of citizens is unjustified. One can value both security and privacy and believe that in some cases one outweighs the other. This point is often lost in bullet-biting morality, which views “inconsistency” as a product of hypocrisy and cowardice.
This is basically true (except that inconsistency is viewed as irrational or wrong, not denigrated as hypocrisy and cowardice), although typical utilitarian approaches lead to similar conclusions about things with instrumental value such as privacy and security, and optimizing for multiple values can still lead to highly counterintuitive moral conclusions. Almost any aggregating ethic will have this feature. If we optimize for both autonomy and well-being, for instance, I may still find it morally obligatory to do overly demanding things to maximize those values, and I may find cases where causing serious harm to one is worth the benefits to the other.
You can add more and more values to patch the holes and build a really complicated multivariate utility function which might end up producing normal outputs, but at this point I would question why you’re optimizing at all, when it looks like what you really want to do is use an intuitionist approach.
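To make the aggregation point concrete, here is a minimal sketch in Python (my own illustration, not either commenter’s; the options, scores, and weights are all invented) of how optimizing a simple two-value objective can endorse an option most people would find troubling:

    # Toy sketch (mine, not the commenters'): a weighted sum over two values can
    # still endorse an intuitively troubling option once one value's score is
    # large enough. All options, numbers, and weights are invented.

    W_WELL_BEING, W_AUTONOMY = 1.0, 1.0  # equal weights on the two values

    options = {
        # option: (well-being score, autonomy score)
        "respect the patient's refusal": (2.0, 9.0),
        "treat the patient by force": (12.0, 1.0),
    }

    def aggregate(well_being, autonomy):
        return W_WELL_BEING * well_being + W_AUTONOMY * autonomy

    best = max(options, key=lambda name: aggregate(*options[name]))
    print(best)  # -> "treat the patient by force": the serious harm to autonomy
                 # is simply outweighed by the gain in well-being.

Adding further values or retuning the weights can push the optimizer back toward the intuitive verdict, which is just the “patching the holes” move questioned above.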
Similarly, optimizing for naive definitions of utility will lead to paperclipping. For example, if we believe in one definition of utility, we may end up with a universe tiled with thermostats.
Yes, although most people, moral realists included, would affirm a fundamental difference between phenomenal consciousness and movements of simple systems.
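As a toy illustration of the thermostat worry (again my own sketch, not the article’s model; the entities, costs, and “utility” scores are all made up), an optimizer handed a naive utility measure simply spends its budget on whatever satisfies that measure most cheaply:

    # Toy sketch (mine, not the article's): maximizing a naive utility measure
    # rewards the cheapest thing that satisfies it. All numbers are invented.

    entities = {
        # entity: (cost per unit, "utility" per unit under a naive definition
        #          that merely counts state-tracking feedback loops)
        "human": (1_000_000, 1),
        "mouse": (10_000, 1),
        "thermostat": (1, 1),
    }

    budget = 1_000_000

    def tile_the_budget(entities, budget):
        # Buy as many units as possible of the entity with the best naive
        # utility per unit cost -- a crude stand-in for an optimizer.
        name = max(entities, key=lambda e: entities[e][1] / entities[e][0])
        cost, utility = entities[name]
        count = budget // cost
        return name, count, count * utility

    print(tile_the_budget(entities, budget))
    # -> ('thermostat', 1000000, 1000000): the "universe tiled with thermostats".

The point is only that, given a definition this naive, the cheapest thing satisfying it wins; the trouble lies in the definition rather than in the optimization step.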
Moral anti-realism doesn’t like neat conclusions: though there’s no reason to favor biting bullets, there’s no reason to disfavor it either.
This is sort of true, but, again, that is because meta-ethics doesn’t have much to say about ethics in general. Moral realism also doesn’t particularly favor biting bullets or refusing to bite them: there are tons of moral realists who favor intuitive accounts of morality or other ‘softer’ approaches. Moral principles don’t have to be hard and inflexible; they could presumably be spongy, malleable, and fuzzy while still being true.
How the anti-realist decides to face counterintuitive moral cases will depend on their reasons for affirming morality in the first place. Those reasons may or may not be sufficient to convince them to bite bullets, just as is the case for the moral realist.
“You generally can’t determine any facts about morality by studying its psychology, genealogy and history in society, as those refer to how people act and moral philosophy refers to how they ought to act.”
Moral anti-realists think that questions about how people ought to act are fundamentally confused. For an anti-realist, the only legitimate questions about morality are empirical. What do societies believe about morality? Why do we believe these things (from a social and evolutionary perspective)? We can’t derive normative truth from these questions, but they can still be useful.
“An anti-realist can still have reasons to affirm a consistent view of morality”
Consistent is not the same as principled. Of course I believe in internal consistency. But principled morality is no more rational than unprincipled morality.
“I’m not aware of this being common. The LessWrong link doesn’t seem to be relevant to legitimate moral philosophy. Can you give some examples?”
Some EAs argue that killing animals for meat is the moral equivalent of murder. There are other examples outside EA: abortion is murder, taxation is theft. Ask tumblr what currently counts as rape… Just because some of these views aren’t taken seriously by moral philosophers doesn’t mean they aren’t influential and shouldn’t be engaged with.
“You can add more and more values to patch the holes and build a really complicated multivariate utility function which might end up producing normal outputs, but at this point I would question why you’re optimizing at all, when it looks like what you really want to do is use an intuitionist approach.”
Correct, I don’t think utility function approaches are any better than avoiding utility functions. However, people have many moral values, and under normal circumstances these may approximate utility functions.
“Yes, although most people, moral realists included, would affirm a fundamental difference between phenomenal consciousness and movements of simple systems.”
Consequentialism would require building a definition of consciousness into the utility function. Many definitions of consciousness, such as “complexity” or “integration”, would fall apart in extreme cases.
“Moral anti-realists think that questions about how people ought to act are fundamentally confused. For an anti-realist, the only legitimate questions about morality are empirical.”
Anti-realists deny that there are any true moral claims, but they don’t think morality is fundamentally confused. Many anti-realist philosophers have proposed some form of ethics: R.M. Hare, J.L. Mackie, the existentialists, etc.
“Consistent is not the same as principled. Of course I believe in internal consistency. But principled morality is no more rational than unprincipled morality.”
What exactly do you mean by “principled” in this case?
“Some EAs argue that killing animals for meat is the moral equivalent of murder. There are other examples outside EA: abortion is murder, taxation is theft.”
I think many, hopefully most, of the people who say that have actual moral reasons for saying it. There is no fallacy in claiming a moral equivalence if you base it on actual reasons to believe the acts are morally on a par: it may in fact be the case that there is no significant moral difference between killing animals and killing people. The same goes for those who claim that abortion is murder, taxation is theft, and so on. We should be challenged to think about whether, say, abortion is morally bad in the same way that murder is (and, if not, why not), both because people’s beliefs are sometimes inconsistent and because the claim may very well turn out to be true. Of course, these kinds of arguments should be developed properly rather than compressed into (fallacious) assertions. However, I don’t see this argument structure as central to the issue of counterintuitive moral conclusions.
“Consequentialism would require building a definition of consciousness into the utility function. Many definitions of consciousness, such as ‘complexity’ or ‘integration’, would fall apart in extreme cases.”
I don’t think those are nearly good enough definitions of consciousness either. The consequentialist is usually concerned with sentience—whether there is “something that it’s like to be” a particular entity. If we decide that there is something that it’s like to be a simple system, then we will value its experiences, though in that case the conclusion is no longer so counterintuitive, because we can imagine what it’s like to be a simple system and can empathize with it. While it’s difficult to find a formal definition of consciousness, and very difficult to determine what sorts of physical substances and structures are responsible for it, we do have a very clear idea in our heads of what it means to be conscious, and we can easily conceive of the difference between something that is conscious and something that is physically identical but not conscious (e.g. a p-zombie).
“Moral anti-realists think that questions about how people ought to act are fundamentally confused. For an anti-realist, the only legitimate questions about morality are empirical. What do societies believe about morality? Why do we believe these things (from a social and evolutionary perspective)? We can’t derive normative truth from these questions, but they can still be useful.”
That is not true in the slightest. If I deny that social action can be placed within a scheme of values with absolute standing, I am in no way inconsistent in engaging in non-absolutist forms of valuation. Thucydides, Vico, Machiavelli, Marx, Nietzsche, Williams, and Foucault were not moral realists, yet none of them refrained from evaluative judgement. But then, evaluative thought is an inescapable part of human life. How do you suppose one would fail to perform it?