Constructive Criticism of Moral Uncertainty (book)

We read the book “Moral Uncertainty” in an EA reading group. We think that the concept of moral uncertainty is very important and definitely deserves more attention. However, we had problems with parts of the book, which we want to highlight in the following. Especially since we found other books by the authors, such as “The Precipice” or “Doing Good Better”, excellent, we feel like this book hasn’t reached its full potential yet.

We write this criticism in case the authors plan to write a second edition of the book and because we are interested in whether other people share our criticism. If we misunderstood something or you find our assessment unjustified, we are happy to engage in a discussion.

We reached out to the authors who encouraged us to write a forum post.

High-level feedback

Firstly, we found that the book is slightly too vague on its goals and limitations. In the beginning, it is framed around simple everyday decisions and thus evokes the expectation of providing practical guidelines for decision-making under moral uncertainty. Later on, it feels more like the book primarily explains a theoretical framework without providing a guide for improved decision-making.

If, for example, an eager reader wanted to apply the theoretical insights of moral uncertainty to their everyday life, they would struggle in multiple ways. First of all, they would have a hard time determining their credences in different moral theories, e.g., to what extent they are utilitarian vs. committed to other moral frameworks. While this is not an easy question, we felt the book should either discuss heuristics for determining one’s credences or reference other work that explores this question in more detail. But even if we assumed that people knew their credences, we think they are left with an unsatisfying conclusion.

Chapter 8 first tells the reader what a naive application of moral uncertainty could look like, only to then tell them that the real world is actually more complicated and the naive version can lead to wrong conclusions. The reader is then left with neither a solution to the problem nor an acknowledgment of its complexity. We obviously don’t expect this book to solve all problems of moral uncertainty in one go, but we think it should either be much clearer about its limitations or provide partial answers where they exist.

Chapter-by-chapter feedback

Introduction (Chapter 0):
We have two pieces of criticism for the introduction. The first is about Table 0.1. This is arguably one of the most important pieces of content in the book, as it sets the scene and introduces many important concepts. However, it leaves multiple questions open. Why, for example, is the bottom left corner of the table not applicable? We don’t think this is obvious enough to go unmentioned. Why did the authors choose not to consider all entries with an X? Are they not relevant, too complicated, out of this project’s scope, or something else? A justification would have helped us.
Furthermore, while the rest of the book has a lot of clear and good examples, the text surrounding Table 0.1 has nearly none. We think it would have been helpful to give examples of the different categories of moral theories they investigate, for example, “Can have a pre-order, e.g. <insert moral theory>”.

The second piece of feedback concerns the distinction between first- and second-order uncertainty. Moral uncertainty is second-order uncertainty, i.e. uncertainty between different moral frameworks. First-order uncertainty is uncertainty within one framework, such as utilitarianism. The leading example of the book, i.e. whether Alice should donate her 20€ or buy an expensive dinner, involves both first- and second-order uncertainty. Even if you were convinced that utilitarianism is 100% true, you could still argue for buying the expensive dinner because, e.g., it makes you work harder on the world’s most pressing problem and is thus net positive. We would have appreciated a discussion of how the two orders of uncertainty interact or can be disentangled.

Fanaticism (Chapter 5):

When someone assigns an infinitely large value to one outcome, any kind of expected value calculation breaks, and with it moral uncertainty and maximizing expected choiceworthiness (MEC). The authors address this issue of ‘fanaticism’ with two arguments: a) a person could also assign an infinite value to the opposite outcome, thus breaking their own model, and b) this issue arises under empirical uncertainty as well, so people should be unable to make claims with infinitely large credences/​values anyway.
While we agree that these arguments are probably true, we feel they only make sense from a point of view that is already sympathetic to utilitarianism and moral uncertainty. If someone truly believed that infinite values should exist in moral theories, the authors’ answer is probably not convincing. It essentially says “you can have moral uncertainty in your moral theory, but you first need to change some fundamental things about it”.
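To make the problem concrete, here is a minimal sketch of how a single ‘fanatical’ theory can dominate an expected choiceworthiness calculation. The numbers are made up for illustration and are not taken from the book:

```python
def expected_choiceworthiness(credences, values):
    """Credence-weighted sum of an option's choiceworthiness across theories."""
    return sum(c * v for c, v in zip(credences, values))

# Two theories: a mainstream one (credence 0.999) and a fringe 'fanatical'
# one (credence 0.001) that assigns an astronomically large value to option B.
credences = [0.999, 0.001]

option_a = [10.0, 0.0]    # the mainstream theory strongly favours A
option_b = [-5.0, 1e12]   # the fanatical theory assigns a huge value to B

print(expected_choiceworthiness(credences, option_a))  # ~9.99
print(expected_choiceworthiness(credences, option_b))  # ~1e9: B dominates

# With float('inf') in place of 1e12, the expectation itself becomes
# infinite (or undefined if another theory assigns -inf), which is
# exactly how fanaticism breaks MEC.
```

The point of the sketch: no matter how small the credence, a large enough value wins, and an infinite value makes the calculation meaningless.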

Non-fully-comparable moral theories (Chapter 3, 4, 5, 6):

This criticism extends the previous one. It is a bit vague, and we don’t have a good solution either. Many of the ways to introduce moral uncertainty to moral theories that are not fully comparable, or that allow for infinite credences, amount to transforming and tweaking the theory until it has some concept of moral uncertainty. Variance voting, for example, is an interesting mathematical way to weight different kinds of theories and makes sense from a practitioner’s perspective, but we could very well imagine that genuine supporters of non-utilitarian moral frameworks would not agree that these mathematical operations should be allowed.
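For readers unfamiliar with variance voting, here is a simplified sketch of the kind of operation we mean (made-up numbers; this is our rough rendering of the chapter 4 normalisation, not the authors’ exact formulation): each theory’s choiceworthiness scores are rescaled to equal variance across the options before the credence-weighted expectation is taken.

```python
import statistics

def variance_normalise(scores):
    """Rescale one theory's scores over the options to mean 0, variance 1."""
    mean = statistics.mean(scores)
    sd = statistics.pstdev(scores)
    return [(s - mean) / sd for s in scores]

def expected_choiceworthiness(credences, theories):
    """Credence-weighted sum per option."""
    n = len(theories[0])
    return [sum(c * t[i] for c, t in zip(credences, theories)) for i in range(n)]

credences = [0.7, 0.3]
theory_1 = [2.0, 1.0, 0.0]      # modest scale, favours option 0
theory_2 = [0.0, 0.0, 1000.0]   # huge scale, favours option 2

raw = expected_choiceworthiness(credences, [theory_1, theory_2])
norm = expected_choiceworthiness(
    credences, [variance_normalise(t) for t in (theory_1, theory_2)])

print(max(range(3), key=raw.__getitem__))   # 2: the loud theory dominates
print(max(range(3), key=norm.__getitem__))  # 0: after normalisation it doesn't
```

It is exactly this kind of rescaling that a genuine supporter of theory 2 might reject: from their point of view, the factor-1000 spread was the theory’s actual verdict, not an artefact of scale.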

From this follow two possible conclusions:

a) Moral uncertainty can only cleanly be applied to fully comparable moral theories and its application to non-fully-comparable theories will always feel a bit ‘hacky’.

b) The attempt to apply moral uncertainty to non-fully-comparable moral theories just unveils deeper flaws and inconsistencies in these theories that should be addressed but are independent of moral uncertainty, e.g. infinite credences.

Is Chapter 7 necessary? (Chapter 7):

Some of us were not sure whether this chapter adds much value. In our opinion, it can be summarized as “non-cognitivist theories have a hard time including a notion of moral uncertainty”. While this thesis is elaborated in great detail, we are not sure how it relates to the rest of the book. Does it mean non-cognitivist theories are worse because they don’t have an account of moral uncertainty? Is it a limitation of moral uncertainty because it can’t be integrated into non-cognitivist theories? Is any conclusion of the chapter relevant to the goals of the book, i.e. laying out the theory and some practical applications of moral uncertainty? We were a bit confused.

Does MU matter in practice? (Chapter 8):

On a high level, the authors argue that moral uncertainty should influence moral debates, e.g. on veganism or abortion, and we agree. They then lay out a naive interpretation of moral uncertainty for multiple examples and conclude that these are too simplistic because of interaction effects and intertheoretic comparisons. So the conclusion of the chapter doesn’t support what they set out to show. If someone is skeptical that moral uncertainty should matter for practical ethics, they won’t be convinced after this chapter. If somebody was already convinced before the chapter, they haven’t learned how to apply it to their life. We are not sure how to solve this, but we found it unsatisfying.

Unit-comparability and fanaticism (Chapter 0+4+8):

Chapter 0 (page 8) says: “Because we don’t discuss conditions of level-comparability in this book when we refer to intertheoretic comparability we are referring in every instance to unit-comparability”.

In chapter 8 (page 182) it says: “In particular, we will assume that all theories in which the decision-maker has credence are complete, interval-scale measurable and intertheoretically comparable and that the decision-maker doesn’t have credences that are sufficiently small in theories that are sufficiently high stakes that ‘fanaticism’ becomes an issue”.

Our question is then whether the intertheoretic comparability of chapter 8 still refers to the unit-comparability of the introduction. If that is the case, how can we assume that ‘fanaticism’ doesn’t become an issue? Even with variance voting (chapter 4), as long as we assume unit-comparability, the ratios between moral theories don’t change and fanaticism remains a problem, right?
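Our worry can be made concrete with a small sketch (made-up numbers; `normalise` is our simplified stand-in for variance normalisation, not the book’s exact procedure): once a theory assigns an infinite value, its variance is itself undefined, so the rescaling breaks down rather than taming the fanatical theory.

```python
import math

def normalise(scores):
    """Mean-0, variance-1 rescaling using plain float arithmetic."""
    mean = sum(scores) / len(scores)
    var = sum((s - mean) ** 2 for s in scores) / len(scores)
    sd = math.sqrt(var)
    return [(s - mean) / sd for s in scores]

finite = normalise([0.0, 1.0, 2.0])
print(finite)  # well-defined scores around mean 0

fanatical = normalise([0.0, 1.0, float('inf')])
print(fanatical)  # [nan, nan, nan]: the normalisation breaks down
```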

We hope that our feedback is helpful, or that possible misunderstandings on our side can be clarified.

If there are resources on ways to determine your credences in moral theories, we would appreciate suggestions.

Looking forward to a constructive discussion.