A dilemma for Maximize Expected Choiceworthiness (MEC)

1. Post intro

This post summarizes a paper I’ve been working on that critiques Maximize Expected Choiceworthiness (MEC). I provide (i) the abstract of the paper, (ii) a bullet point summary of the paper, and (iii) some final EA-specific thoughts that aren’t in the paper. Those interested in greater detail are warmly invited to read the paper in full (comments welcome)!

2. Paper abstract

Maximize Expected Choiceworthiness (MEC) is a theory about how we ought to make decisions under ethical uncertainty.[1] It says that we ought to handle ethical uncertainty in the way that Expected Utility Theory (EUT) handles descriptive uncertainty. I argue that MEC faces a dilemma: it either issues zero practical guidance or else is wildly fanatical in the decision-theoretic sense. I then consider four possible responses to the dilemma—including attempts to import solutions to adjacent problems for EUT—and argue that, at minimum, they are less promising than they first appear.

Keywords: ethical (moral) uncertainty, Maximize Expected Choiceworthiness, Expected Utility Theory, fanaticism, infinite ethics, Pascal’s Wager

3. Paper summary

3.1 introduction to MEC

  • According to MEC, ethical theories represent options as having a certain choiceworthiness—the strength of reason favoring the option. When expected choiceworthiness can be calculated, we ought to take the option that maximizes expected choiceworthiness.

  • How does MEC solve the problem of intertheoretic value comparisons? I.e., how do we do stuff like add ‘strength of reason x to φ according to utilitarianism’ to ‘strength of reason y to φ according to deontology’?

    • MacAskill, Bykvist, and Ord (2020) propose what they call a universal scale account. The basic idea is that many ethical theories share a common, theory-neutral scale of choiceworthiness (2020, 133). More specifically, the view is that an option O instantiates different magnitudes of the property of choiceworthiness in possible worlds that differ with respect to which ethical theory is true in them (2020, 145).

    • I grant this account arguendo throughout the paper (though it’s worth noting that many philosophers are skeptical of the account and take the problem of intertheoretic value comparisons to be an outstanding problem for MEC).

3.2 the dilemma: silence or piousness

MEC has primarily been developed as a candidate response to moral uncertainty. However, by its creators’ own design, MEC is also supposed to apply in cases of prudential uncertainty: cases in which an agent is trying to promote her well-being, but uncertain which prudential theory is true. The dilemma I set out is for prudential choice, but I expect that an analogous dilemma will arise for moral choice.

Horn 1: silence

  • According to numerous soteriological hypotheses (i.e., religious doctrines of salvation), we stand to gain an infinite quantity of the summum bonum if we attain salvation, but also to suffer infinitely if we fail to do so.

    • I will refer to such soteriologies as supreme soteriologies to reflect their maximally high stakes.

    • I assume that we should have positive, non-infinitesimal credence in at least one supreme soteriology (and if we accept standard-issue Bayesianism, we’ll have such a credence in every supreme soteriology).

  • Consider an agent with credence in just one supreme soteriology.

    • As Alan Hájek (e.g. 2003) has observed, each of our available options—even ‘flagrantly disregard the religious life and do whatever you want’—has some positive subjective probability of resulting in salvation. By parallel reasoning, each option will also have some positive probability of resulting in damnation. Since ∞ - ∞ is undefined, the expected choiceworthiness of every option is undefined.

  • Consider an agent with credence in at least two supreme soteriologies.

    • Every option has some non-zero probability of resulting in salvation by one supreme soteriology and damnation by another (and vice versa). Again, the expected choiceworthiness of every option is undefined.

  • Either way, MEC is silent about what we ought to do (not even affording us an ordinal ranking of our options). Call this the silence objection.
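The undefinedness of ∞ − ∞ shows up directly in IEEE-754 arithmetic: any probability-weighted sum that mixes a shot at positive infinity with a shot at negative infinity evaluates to NaN (not a number). A minimal Python sketch of the point (the probabilities are my own illustrative numbers, not from the paper):

```python
import math

def expected_choiceworthiness(prospects):
    """Probability-weighted sum of choiceworthiness values.

    prospects: list of (probability, choiceworthiness) pairs.
    Returns nan whenever the sum involves inf - inf, mirroring the
    undefinedness that the silence objection points to.
    """
    return sum(p * v for p, v in prospects)

# A pious life under one supreme soteriology: near-certain salvation,
# but (per the Hajek-style reasoning above) a tiny chance of damnation.
pious = [(0.99, math.inf), (0.01, -math.inf)]

# Flagrant disregard: the probabilities flip, but the structure is the same.
impious = [(0.01, math.inf), (0.99, -math.inf)]

print(math.isnan(expected_choiceworthiness(pious)))    # True
print(math.isnan(expected_choiceworthiness(impious)))  # True
```

Since every option gets the same verdict (NaN), no ordinal ranking is recoverable, which is exactly the silence objection.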

Horn 2: piousness

  • Part of what generates the silence problem is that in the eyes of EUT—and hence MEC—any prospect of an infinite payoff is as good as any other such prospect, no matter the odds. But this is clearly misguided: a lottery that gives us a 99% chance at ∞ and a 1% chance at 0 is more choiceworthy than a lottery that gives us a 1% chance at ∞ and a 99% chance at 0. Generalizing, we can suggest the following conservative patch to EUT: when faced with two or more infinite prospects, take whichever option offers the highest probability of success.

  • This patch lands us on the second horn of the dilemma, which we will illustrate through the following case.

    • Consider an agent with the following credence distribution over prudential theories: cr(objective list) = 0.5, cr(mental state) = 0.45, cr(preference satisfactionism) = 0.0499, and cr(conservative Mormonism) = 0.0001.

    • The agent is currently leading a flourishing secular life, and we can imagine that she would incur significant losses from refashioning her life around conservative Mormonism. These losses would range from the relatively trivial, like giving up coffee; to the less trivial, such as abjuring premarital sexual activity (including masturbation); to the most wide-ranging and perhaps intolerable of all: reorienting her entire modus vivendi around an ideology of whose falsity she is all but certain, which will include (inter alia) the attempt to bring herself to believe its doctrines and the practical adoption of its values (such as opposition to certain cases of abortion, which are, by her own lights, permissible, and less progressive attitudes towards gender). In addition to the sheer drop in subjective well-being this would entail, one is also struck by the values of authenticity (see e.g. Williams 2002, 172-205) and non-alienation (see e.g. Baker and Maguire 2020) such a reorientation would destroy. And yet, according to MEC, insofar as the agent is trying to act in her own best interests, she ought to adopt the Mormon way of life, because doing so maximizes expected choiceworthiness.

  • I submit that this is false. We are not required to reorient our lives around fanciful dogmas simply because they claim that the stakes of life are very high. But according to MEC, many agents should. Call this the piousness objection.
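The arithmetic behind the piousness verdict can be made explicit. In this sketch the finite choiceworthiness scores are my own illustrative numbers (not from the paper), and for simplicity conversion is treated as guaranteeing salvation and the secular life as guaranteeing damnation according to conservative Mormonism:

```python
import math

# The agent's credence distribution over prudential theories (from the case).
credences = {
    "objective list": 0.5,
    "mental state": 0.45,
    "preference satisfactionism": 0.0499,
    "conservative Mormonism": 0.0001,
}

# Choiceworthiness of each option by each theory. The secular life scores
# well on the three secular theories; the Mormon life scores poorly on
# them but infinitely well on conservative Mormonism (salvation).
choiceworthiness = {
    "stay secular": {"objective list": 100, "mental state": 100,
                     "preference satisfactionism": 100,
                     "conservative Mormonism": -math.inf},  # damnation
    "convert":      {"objective list": 20, "mental state": 20,
                     "preference satisfactionism": 20,
                     "conservative Mormonism": math.inf},   # salvation
}

def ec(option):
    """Expected choiceworthiness of an option under the credence distribution."""
    return sum(credences[t] * choiceworthiness[option][t] for t in credences)

print(ec("stay secular"))  # -inf: the 0.0001 credence swamps everything finite
print(ec("convert"))       # inf
```

However small the credence in the supreme soteriology, the infinite stakes dominate the sum, so MEC tells the agent to convert.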

3.3 first response: partners in guilt (fanaticism)

  • MacAskill, Bykvist, and Ord write that fanaticism “is not a problem that is unique to moral uncertainty…whatever is the best solution to the fanaticism problem under empirical uncertainty is likely to be the best solution to the fanaticism problem under moral uncertainty” (2020, 153).

  • I argue that this response is too quick.

    • First, there is an important way in which fanaticism is worse for MEC than it is for EUT.

      • As Nick Bostrom notes, “If an individual is unwilling to gamble everything on a tiny (but finite) chance of getting to Heaven, then that means that she does not have an infinitely strong preference for Heaven. Decision theory [i.e., EUT] simply reflects this fact. Aggregative ethical theories, on the other hand, defy finitistic strictures because their core commitments imply that certain worlds and outcomes be assigned infinite value” (2011, 34).

      • Our point is made nicely if we swap ‘Supreme soteriologies’ for ‘Aggregative ethical theories’. Since many people would not be willing to “gamble everything”—much less to undergo, e.g., a lifetime of torture—in exchange for some tiny probability of admission to Heaven, their utility functions do not assign Heaven an infinite value. It is therefore not the case that all—or even most—EU maximizers are rationally required to wager for God. In contrast, expected choiceworthiness maximizers will be required to do so much more often, because in the context of MEC, it’s the prudential theories, rather than the decision-makers’ utility functions, that set the values of the possible outcomes.

  • Second, there is no a priori guarantee that adopting whatever is the best solution to fanaticism for EUT will deliver MEC from the piousness objection. There are proposals for EUT that won’t work for MEC (see the paper for examples).

  • Finally, ‘just import the best fanaticism patch from EUT’ is a promising tack only to the extent that a neat solution to fanaticism is in the offing. Recent work on the problem, however, gives us reason to be pessimistic: several philosophers have shown that in order to reject fanaticism, we must accept other implications that are also deeply counterintuitive (see e.g. Russell 2021). Accordingly, whatever is the “best” solution to fanaticism may be best only in the sense of being the least bad choice from a set of unattractive options. MEC will then inherit this unattractive decision-theoretic feature.

  • Upshot: fanaticism is a problem for MEC but not for its competitors (e.g. Moral Parliamentarianism). This gives us a strong (though not necessarily decisive) abductive reason to reject MEC.

3.4 second response: partners in guilt (infinities)

  • It is already well known that infinities cause problems in decision theory. The proponent of MEC might therefore suggest that MEC should avail itself of whatever is the best way for EUT to handle infinities (paralleling her response to the fanaticism charge).

  • Again, I argue that this response is too quick.

    • MEC can’t straightforwardly “import” every candidate solution to infinities that has been developed on behalf of EUT. (In the paper I give the example of Arntzenius’s (2014) expansionist proposal.)

    • MEC can import other solutions, but many of these won’t solve the piousness problem. (In the paper I give the example of Bartha’s relative utility theory (RUT); Chen and Rubio’s (2020) appeal to the surreal numbers is another example.)

  • Upshot: MEC needs to show us solutions to the infinity problems that it can import in the first place and that, when imported, allow it to avoid the silence/piousness dilemma, plus the other unacceptable infinity results, such as infinitarian paralysis for would-be do-gooders (see Bostrom 2011).

    • This strengthens the abductive case against MEC. The viability of MEC depends on a satisfactory solution to fanaticism and the various problems of infinity—two notoriously difficult sets of issues in decision theory that are even more thorny in the realm of ethical uncertainty. The viability of MEC’s competitors does not. This fact should lower our credence that MEC is the true account of ethical uncertainty.

3.5 third response: mastery of natural reality

  • Thought: maybe we can generate infinite value in a naturalist-friendly manner, once we know everything about natural science and engineering. So maybe MEC tells us to try to do that instead, rather than betting on our favorite soteriological hypothesis.

  • Three responses

    1. There is an asymmetry between the types of good on offer in natural reality and the good of salvation.

      • According to most (if not all) soteriological views, salvation is the summum bonum.

      • Naturalist views, in contrast, do not have this feature. (They do not make claims such as: ‘even if non-naturalist salvation is real—which we don’t believe—we still think that the type of well-being one can enjoy in material reality would be even better, or at any rate just as good.’)

      • Since MEC is stakes-sensitive, it must prefer salvation to naturalist goods (all else equal).

    2. Going in for a naturalist view fails to hedge against damnation (infinite suffering). Again, since MEC is stakes-sensitive, it won’t like that.

    3. Personal identity problems for naturalism.

      • If you go to Heaven, you clearly survive.

      • In contrast, it is much less clear that you would survive as an indefinite material continuant or as an artificial entity uploaded to the cloud. (How much psychological change can you undergo before personal identity no longer holds between you at time t and future person-stages that are causally related to you at t?) So even if we invent radical life extension technology, it is not clear that wagering for naturalism over a soteriological hypothesis would maximize your well-being. But prudential choice is about maximizing your well-being.

3.6 fourth response: normalization

  • MacAskill, Bykvist, and Ord (2020, chapter 4) suggest that MEC should sometimes normalize the choiceworthiness functions of competing ethical theories against each other.

    • The goal of the normalization method is to reflect the principle of equal say, which says that for a given quantity of credence, c, every ethical theory should exert the same degree of influence over the deontic verdict given by the theory of ethical uncertainty (MacAskill, Bykvist, and Ord 2020, 90-91; MacAskill, Cotton-Barratt, and Ord 2020, 72).

      • So, to illustrate with the simplest case, if we have equal credence in just two ethical theories, these theories should somehow have equal influence over what our theory of ethical uncertainty tells us to do.

  • Problem 1: we can’t faithfully renormalize a supreme soteriology’s choiceworthiness function, because if we represent a supreme soteriology’s choiceworthiness function as having a finite variance, we will be unable to represent the fact that according to the soteriology, some options are infinitely more choice-worthy than others.

  • Problem 2: normalization conflicts with the core insight of EUT, namely that in choosing amongst options, we need to take into account both the probabilities of the different states of nature and the quantities of value at stake in them.

    • In employing EUT to handle descriptive uncertainty, we would never alter the utilities in the decision matrix via statistical normalization. The utilities are already given by our utility function; all that is left for us to do is to maximize the expectation.

    • To illustrate, imagine an agent with equal credence in (i) an extremely demanding form of consequentialism and (ii) Dudeism, the self-styled “slowest-growing religion in the world” whose central teaching is that “Life is short and complicated and nobody knows what to do about it. So don’t do anything about it. Just take it easy, man.” Intuitively, this agent should allocate many more of her resources (time, effort, money, etc.) to impartial altruism, which is highly choice-worthy according to consequentialism, than to drinking white Russians and bowling, which are highly choice-worthy according to Dudeism. Yet if we normalize these theories against each other, Dudeism will enjoy equal input into what the agent ought to do. (Perhaps this would be better for our agent, but it does not appear to be the rational response to her moral uncertainty.)

  • Upshot

    • Oftentimes we will want our theory of ethical uncertainty to be stakes sensitive, as in the case of consequentialism vs. Dudeism.

    • But, to quote MacAskill, Cotton-Barratt, and Ord, who put the point perfectly, we also want it “to avoid ‘fanatical’ conclusions, where the expected choiceworthiness of [our] options is almost entirely determined by the choiceworthiness function of a theory in which one has vanishingly small credence but which claims that most decision-situations are enormously high stakes” (2020, 73-74).

    • It is difficult to see how MEC, with its commitment to the machinery of EUT, can pull this off. Perhaps a theory with an entirely different structure can do better.
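The equal-say worry behind Problem 2 can be sketched numerically. The scores below are my own illustrative numbers; variance normalization here means rescaling each theory’s choiceworthiness function to mean 0 and standard deviation 1, which is the kind of statistical alteration of the decision matrix the problem objects to:

```python
import statistics

# Three options, scored by each theory (illustrative numbers only).
options = ["devote life to altruism", "split the difference", "take it easy"]
consequentialism = [1000.0, 500.0, 0.0]   # very high stakes
dudeism          = [0.0, 0.5, 1.0]        # low stakes throughout

def variance_normalize(scores):
    """Rescale a choiceworthiness function to mean 0, standard deviation 1."""
    mu = statistics.mean(scores)
    sd = statistics.pstdev(scores)
    return [(s - mu) / sd for s in scores]

cons_n = variance_normalize(consequentialism)
dude_n = variance_normalize(dudeism)

# With equal credence (0.5 each), expected choiceworthiness after
# normalization gives both theories equal say:
ec = [0.5 * c + 0.5 * d for c, d in zip(cons_n, dude_n)]
print(ec)  # ~[0, 0, 0]: low-stakes Dudeism exactly cancels consequentialism

# Without normalization, the stakes are respected and altruism dominates:
raw = [0.5 * c + 0.5 * d for c, d in zip(consequentialism, dudeism)]
print(raw)
```

Normalization erases the thousandfold difference in stakes between the theories, producing a tie, which is just the stakes-insensitivity the Dudeism case complains about.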

EA-specific thoughts

My impression is that many EAs, following MacAskill, think that MEC “entails something similar to a value-pluralist consequentialism-plus-side-constraints view, with heavy emphasis on consequences that impact the long-run future of the human race” (MacAskill 2019, 244).

If my arguments above are sound, and if they transfer from the prudential to the moral context (which I expect them to, but haven’t thought super deeply about), this is not true. MEC actually entails that soteriological concerns are the predominant global priority (or entails nothing at all, if we go in for horn 1 of the dilemma). Therefore, research in comparative religion and mysticism should become a major EA focus area (if MEC is true)! (I really hope no one modus ponens’s this. This is meant as a reductio.)

Key upshot: It’s not the case that current EA activities/priorities are supported by the best-going theory of ethical uncertainty. Also, MEC is probably false (imo).

Final reflection: a philosophical worldview that combines (subjective) Bayesianism as the core of one’s epistemology and EUT-style reasoning as the core of one’s treatment of uncertainty has great prima facie appeal. I once went in for something like it myself, and it seems central to much of EA thinking. But ultima facie, I don’t think it works. Standard-issue Bayesianism doesn’t let you assign credence = 0 to anything aside from logical contradictions. And there are a bunch of wacky hypotheses out there which say that there are infinite quantities of value at stake. All this leads us to unacceptable practical verdicts (or none at all). So something from this common epistemic + decision theoretic picture has got to go, or else undergo major revision.

References

You can find all the refs in the paper, which is linked to at the beginning of the post :)

  1. ^

    Many readers will be more familiar with the label ‘moral uncertainty’ than ‘ethical uncertainty’. I think ‘moral uncertainty’ is somewhat misleading, but won’t get into that here (since it’s a bit of an inside baseball debate for analytic philosophers working in normative ethics).