A dilemma for Maximize Expected Choiceworthiness (MEC)
1. Post intro
This post summarizes a paper I’ve been working on that critiques Maximize Expected Choiceworthiness (MEC). I provide (i) the abstract of the paper, (ii) a bullet point summary of the paper, and (iii) some final EA-specific thoughts that aren’t in the paper. Those interested in greater detail are warmly invited to read the paper in full (comments welcome)!
2. Paper abstract
Maximize Expected Choiceworthiness (MEC) is a theory about how we ought to make decisions under ethical uncertainty.[1] It says that we ought to handle ethical uncertainty in the way that Expected Utility Theory (EUT) handles descriptive uncertainty. I argue that MEC faces a dilemma: it either issues zero practical guidance or else is wildly fanatical in the decision-theoretic sense. I then consider four possible responses to the dilemma—including attempts to import solutions to adjacent problems for EUT—and argue that, at minimum, they are less promising than they first appear.
Keywords: ethical (moral) uncertainty, Maximize Expected Choiceworthiness, Expected Utility Theory, fanaticism, infinite ethics, Pascal’s Wager
3. Paper summary
3.1 introduction to MEC
According to MEC, ethical theories represent options as having a certain choiceworthiness—the strength of reason favoring the option. When expected choiceworthiness can be calculated, we ought to take the option that maximizes expected choiceworthiness.
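In symbols (my notation, summarizing the standard formulation rather than quoting it): where an agent has credence cr(T_i) in each ethical theory T_i, and CW_i(A) is the choiceworthiness that T_i assigns to option A, the expected choiceworthiness of A is

$$\mathrm{EC}(A) = \sum_{i} cr(T_i) \cdot CW_i(A),$$

and MEC instructs the agent to choose an option that maximizes EC.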
How does MEC solve the problem of intertheoretic value comparisons? I.e., how do we do stuff like add ‘strength of reason x to φ according to utilitarianism’ to ‘strength of reason y to φ according to deontology’?
MacAskill, Bykvist, and Ord (2020) propose what they call a universal scale account. The basic idea is that many ethical theories share a common, theory-neutral scale of choiceworthiness (2020, 133). More specifically, the view is that an option O instantiates different magnitudes of the property of choiceworthiness in possible worlds that differ with respect to which ethical theory is true in them (2020, 145).
I grant this account arguendo throughout the paper (though it’s worth noting that many philosophers are skeptical of the account and take the problem of intertheoretic value comparisons to be an outstanding problem for MEC).
3.2 the dilemma: silence or piousness
MEC has primarily been developed as a candidate response to moral uncertainty. However, by its creators’ own design, MEC is also supposed to apply in cases of prudential uncertainty: cases in which an agent is trying to promote her well-being, but uncertain which prudential theory is true. The dilemma I set out is for prudential choice, but I expect that an analogous dilemma will arise for moral choice.
Horn 1: silence
According to numerous soteriological hypotheses (i.e., religious doctrines of salvation), we stand to gain an infinite quantity of the summum bonum if we attain salvation, but also to suffer infinitely if we fail to do so.
I will refer to such soteriologies as supreme soteriologies to reflect their maximally high stakes.
I assume that we should have positive, non-infinitesimal credence in at least one supreme soteriology (and if we accept standard-issue Bayesianism, we’ll have such a credence in every supreme soteriology).
Consider an agent with credence in just one supreme soteriology.
As Alan Hájek (e.g. 2003) has observed, each of our available options—even ‘flagrantly disregard the religious life and do whatever you want’—has some positive subjective probability of resulting in salvation. By parallel reasoning, each option will also have some positive probability of resulting in damnation. Since ∞ - ∞ is undefined, the expected choiceworthiness of every option is undefined.
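To see why, consider a schematic calculation (the probabilities p and q and the finite payoff f are placeholders, not values from the paper). If option O has probability p > 0 of salvation, probability q > 0 of damnation, and probability 1 − p − q of some finite outcome, then

$$\mathrm{EC}(O) = p \cdot (+\infty) + q \cdot (-\infty) + (1 - p - q) \cdot f,$$

which contains the undefined term ∞ − ∞. Since some such p and q attach to every option, no option has a defined expected choiceworthiness.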
Consider an agent with credence in at least two supreme soteriologies.
Every option has some non-zero probability of resulting in salvation by one supreme soteriology and damnation by another (and vice versa). Again, the expected choiceworthiness of every option is undefined.
Either way, MEC is silent about what we ought to do (not even affording us an ordinal ranking of our options). Call this the silence objection.
Horn 2: piousness
Part of what generates the silence problem is that in the eyes of EUT—and hence MEC—any prospect of an infinite payoff is as good as any other such prospect, no matter the odds. But this is clearly misguided: a lottery that gives us a 99% chance at ∞ and a 1% chance at 0 is more choiceworthy than a lottery that gives us a 1% chance at ∞ and a 99% chance at 0. Generalizing, we can suggest the following conservative patch to EUT: when faced with two or more infinite prospects, take whichever option offers the highest probability of success.
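Stated slightly more formally (this is my gloss on the patch, not a quotation): when several options each carry positive probability of an infinite payoff, choose

$$O^* \in \arg\max_{O} \; \Pr(\text{infinite payoff} \mid O),$$

i.e., compare infinite prospects by their probabilities of success rather than by their (uniformly infinite or undefined) expectations.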
This patch lands us on the second horn of the dilemma, which we will illustrate through the following case.
Consider an agent with the following credence distribution over prudential theories: cr(objective list) = 0.5, cr(mental state) = 0.45, cr(preference satisfactionism) = 0.0499, and cr(conservative Mormonism) = 0.0001.
The agent is currently leading a flourishing secular life, and we can imagine that she would incur significant losses from refashioning her life around conservative Mormonism. These losses would range from the relatively trivial, like giving up coffee; to the less trivial, such as abjuring premarital sexual activity (including masturbation); to the most wide-ranging and perhaps intolerable of all: reorienting her entire modus vivendi around an ideology of whose falsity she is all but certain, which will include (inter alia) the attempt to bring herself to believe its doctrines and the practical adoption of its values (such as opposition to certain cases of abortion, which are, by her own lights, permissible, and less progressive attitudes towards gender). In addition to the sheer drop in subjective well-being this would entail, one is also struck by the values of authenticity (see e.g. Williams 2002, 172-205) and non-alienation (see e.g. Baker and Maguire 2020) such a reorientation would destroy. And yet, according to MEC, insofar as the agent is trying to act in her own best interests, she ought to adopt the Mormon way of life, because doing so maximizes expected choiceworthiness.
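To make the verdict explicit, here is a stylized calculation (the conditional probabilities are invented for illustration). Suppose that, if conservative Mormonism is true, adopting the Mormon way of life yields salvation with probability 0.9, while remaining secular yields it with probability 0.01. Then, given the agent's credence of 0.0001 in conservative Mormonism,

$$\Pr(\text{salvation} \mid \text{adopt}) = 0.0001 \times 0.9 = 0.00009 \; > \; 0.000001 = 0.0001 \times 0.01 = \Pr(\text{salvation} \mid \text{secular}).$$

Under the patched MEC, the infinite stakes swamp every finite consideration, so the option with the higher probability of salvation wins, no matter how severe the finite losses to well-being, authenticity, and non-alienation.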
I submit that this is false. We are not required to reorient our lives around fanciful dogmas simply because they claim that the stakes of life are very high. But according to MEC, many agents are so required. Call this the piousness objection.
3.3 first response: partners in guilt (fanaticism)
MacAskill, Bykvist, and Ord write that fanaticism “is not a problem that is unique to moral uncertainty…whatever is the best solution to the fanaticism problem under empirical uncertainty is likely to be the best solution to the fanaticism problem under moral uncertainty” (2020, 153).
I argue that this response is too quick.
First, there is an important way in which fanaticism is worse for MEC than it is for EUT.
As Nick Bostrom notes, “If an individual is unwilling to gamble everything on a tiny (but finite) chance of getting to Heaven, then that means that she does not have an infinitely strong preference for Heaven. Decision theory [i.e., EUT] simply reflects this fact. Aggregative ethical theories, on the other hand, defy finitistic strictures because their core commitments imply that certain worlds and outcomes be assigned infinite value” (2011, 34).
The point is made nicely if we swap ‘supreme soteriologies’ for ‘aggregative ethical theories’. Since many people would not be willing to “gamble everything”—much less to undergo, e.g., a lifetime of torture—in exchange for some tiny probability of admission to Heaven, their utility functions do not assign Heaven an infinite value. It is therefore not the case that all—or even most—EU maximizers are rationally required to wager for God. In contrast, expected choiceworthiness maximizers will be required to do so much more often, because in the context of MEC, it’s the prudential theories, rather than the decision-makers’ utility functions, that set the values of the possible outcomes.
Second, there is no a priori guarantee that adopting whatever is the best solution to fanaticism for EUT will deliver MEC from the piousness objection. There are proposals for EUT that won’t work for MEC (see the paper for examples).
Finally, ‘just import the best fanaticism patch from EUT’ is a promising tack only to the extent that a neat solution to fanaticism is in the offing. Recent work on the problem, however, gives us reason to be pessimistic: several philosophers have shown that in order to reject fanaticism, we must accept other implications that are also deeply counterintuitive (see e.g. Russell 2021). Accordingly, whatever is the “best” solution to fanaticism may be best only in the sense of being the least bad choice from a set of unattractive options. MEC will then inherit this unattractive decision-theoretic feature.
Upshot: fanaticism is a problem for MEC but not for its competitors (e.g. Moral Parliamentarianism). This gives us a strong (though not necessarily decisive) abductive reason to reject MEC.
3.4 second response: partners in guilt (infinities)
It is already well known that infinities cause problems in decision theory. The proponent of MEC might therefore suggest that MEC should avail itself of whatever is the best way for EUT to handle infinities (paralleling her response to the fanaticism charge).
Again, I argue that this response is too quick.
MEC can’t straightforwardly “import” every candidate solution to infinities that has been developed on behalf of EUT. (In the paper I give the example of Arntzenius’s (2014) expansionist proposal.)
MEC can import other solutions, but many of these won’t solve the piousness problem. (In the paper I give the example of Bartha’s relative utility theory (RUT); Chen and Rubio’s (2020) appeal to the surreal numbers is another example.)
Upshot: the proponent of MEC needs to identify solutions to the infinity problems that MEC can import in the first place and that, when imported, allow it to avoid the silence/piousness dilemma, plus the other unacceptable infinity results, such as infinitarian paralysis for would-be do-gooders (see Bostrom 2011).
This strengthens the abductive case against MEC. The viability of MEC depends on a satisfactory solution to fanaticism and the various problems of infinity—two notoriously difficult sets of issues in decision theory that are even more thorny in the realm of ethical uncertainty. The viability of MEC’s competitors does not. This fact should lower our credence that MEC is the true account of ethical uncertainty.
3.5 third response: mastery of natural reality
Thought: maybe we can generate infinite value in a naturalist-friendly manner, once we know everything about natural science and engineering. So maybe MEC tells us to pursue that project rather than betting on our favorite soteriological hypothesis.
Three responses:
1. There is an asymmetry between the types of good on offer in natural reality and the good of salvation. According to most (if not all) soteriological views, salvation is the summum bonum. Naturalist views, in contrast, do not have this feature. (They do not make claims such as: ‘even if non-naturalist salvation is real—which we don’t believe—we still think that the type of well-being one can enjoy in material reality would be even better, or at any rate just as good.’) Since MEC is stakes-sensitive, it must prefer salvation to naturalist goods (all else equal).
2. Going in for a naturalist view fails to hedge against damnation (infinite suffering). Again, since MEC is stakes-sensitive, it won’t like that.
3. Personal identity problems arise for naturalism. If you go to Heaven, you clearly survive. In contrast, it is much less clear that you would survive as an indefinite material continuant or as an artificial entity uploaded to the cloud. (How much psychological change can you undergo before personal identity no longer holds between you at time t and future person-stages that are causally related to you at t?) So even if we invent radical life-extension technology, it is not clear that wagering for naturalism over a soteriological hypothesis would maximize your well-being. But prudential choice is about maximizing your well-being.
3.6 fourth response: normalization
MacAskill, Bykvist, and Ord (2020, chapter 4) suggest that MEC should sometimes normalize the choiceworthiness functions of competing ethical theories against each other.
The goal of the normalization method is to reflect the principle of equal say, which says that for a given quantity of credence, c, every ethical theory should exert the same degree of influence over the deontic verdict given by the theory of ethical uncertainty (MacAskill, Bykvist, and Ord 2020, 90-91; MacAskill, Cotton-Barratt, and Ord 2020, 72).
So, to illustrate with the simplest case, if we have equal credence in just two ethical theories, these theories should somehow have equal influence over what our theory of ethical uncertainty tells us to do.
Problem 1: we can’t faithfully renormalize a supreme soteriology’s choiceworthiness function, because if we represent it as having finite variance, we will be unable to represent the fact that, according to the soteriology, some options are infinitely more choiceworthy than others.
Problem 2: normalization conflicts with the core insight of EUT, namely that in choosing amongst options, we need to take into account both the probabilities of the different states of nature and the quantities of value at stake in them.
In employing EUT to handle descriptive uncertainty, we would never alter the utilities in the decision matrix via statistical normalization. The utilities are already given by our utility function; all that is left for us to do is to maximize the expectation.
To illustrate, imagine an agent with equal credence in (i) an extremely demanding form of consequentialism and (ii) Dudeism, the self-styled “slowest-growing religion in the world” whose central teaching is that “Life is short and complicated and nobody knows what to do about it. So don’t do anything about it. Just take it easy, man.” Intuitively, this agent should allocate many more of her resources (time, effort, money, etc.) to impartial altruism, which is highly choiceworthy according to consequentialism, than to drinking White Russians and bowling, which are highly choiceworthy according to Dudeism. Yet if we normalize these theories against each other, Dudeism will enjoy equal input into what the agent ought to do. (Perhaps this would be better for our agent, but it does not appear to be the rational response to her moral uncertainty.)
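To see the mechanics with toy numbers (mine, not the book’s): suppose the only options are Altruism and Take-it-easy, that consequentialism assigns them choiceworthiness 1000 and 0, and that Dudeism assigns them 0 and 1. Normalizing each theory to mean 0 and variance 1 (one standard way of implementing equal say) maps consequentialism’s values to (+1, −1) and Dudeism’s to (−1, +1). With credence 0.5 in each theory,

$$\mathrm{EC}(\text{Altruism}) = 0.5(+1) + 0.5(-1) = 0 = 0.5(-1) + 0.5(+1) = \mathrm{EC}(\text{Take-it-easy}),$$

so the agent ends up indifferent: the fact that consequentialism’s pre-normalization stakes were a thousand times larger has been erased.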
Upshot
Oftentimes we will want our theory of ethical uncertainty to be stakes-sensitive, as in the case of consequentialism vs. Dudeism.
But, to quote MacAskill, Cotton-Barratt, and Ord, who put the point perfectly, we also want it “to avoid ‘fanatical’ conclusions, where the expected choiceworthiness of [our] options is almost entirely determined by the choiceworthiness function of a theory in which one has vanishingly small credence but which claims that most decision-situations are enormously high stakes” (2020, 73-74).
It is difficult to see how MEC, with its commitment to the machinery of EUT, can pull this off. Perhaps a theory with an entirely different structure can do better.
4. EA-specific thoughts
My impression is that many EAs, following MacAskill, think that MEC “entails something similar to a value-pluralist consequentialism-plus-side-constraints view, with heavy emphasis on consequences that impact the long-run future of the human race” (MacAskill 2019, 244).
If my arguments above are sound, and if they transfer from the prudential to the moral context (which I expect them to, but haven’t thought super deeply about), this is not true. MEC actually entails that soteriological concerns are the predominant global priority (or entails nothing at all, if we go in for horn 1 of the dilemma). Therefore, research in comparative religion and mysticism should become a major EA focus area (if MEC is true)! (I really hope no one modus ponens’s this. This is meant as a reductio.)
Key upshot: It’s not the case that current EA activities/priorities are supported by the best-going theory of ethical uncertainty. Also, MEC is probably false (imo).
Final reflection: a philosophical worldview that combines (subjective) Bayesianism as the core of one’s epistemology and EUT-style reasoning as the core of one’s treatment of uncertainty has great prima facie appeal. I once went in for something like it myself, and it seems central to much of EA thinking. But ultima facie, I don’t think it works. Standard-issue Bayesianism doesn’t let you assign credence = 0 to anything aside from logical contradictions. And there are a bunch of wacky hypotheses out there which say that there are infinite quantities of value at stake. All this leads us to unacceptable practical verdicts (or none at all). So something in this common epistemic + decision-theoretic picture has got to go, or else undergo major revision.
References
You can find all the refs in the paper, which is linked to at the beginning of the post :)
[1] Many readers will be more familiar with the label ‘moral uncertainty’ than ‘ethical uncertainty’. I think ‘moral uncertainty’ is somewhat misleading, but won’t get into that here (since it’s a bit of an inside-baseball debate for analytic philosophers working in normative ethics).
Comments
Michael: There are other possible infinities that could dominate. See my point 5 here: https://forum.effectivealtruism.org/posts/qcqTJEfhsCDAxXzNf/what-reason-is-there-not-to-accept-pascal-s-wager?commentId=Ydbz56hhEwxg9aPh8
And this post and discussion: https://forum.effectivealtruism.org/posts/sEnkD8sHP6pZztFc2/fanatical-eas-should-support-very-weird-projects
Another possible response could be that MEC has limited applicability. Maybe you apply MEC separately to clusters of compatible views (and pairwise compatibility might not be enough; I think you’d want a single common scale across all of them), and then apply another approach to moral uncertainty to the results of each cluster. Of course, this leaves open the question of when views are compatible in this way.
Great post (and paper)! Thanks for sharing!
Have you looked into “amplifications” of theories? This is discussed a bit in the Moral Uncertainty book. You could imagine versions of standard classical utilitarianism where everything is lexically amplified relative to standard CU, and so could possibly compete with other views with infinities. Of course, those other views could be further amplified lexically, too, all ad infinitum.
I’ve been thinking about how MEC works with lexical threshold utilitarian views and leximin, including with lexical amplifications of standard non-lexical theories.
Author: Hi Michael, thanks for your comments! A few replies:
Re: amplification, I’m not sure about this proposal (I’m familiar with that section of the book). From the perspective of a supreme soteriology (e.g. (certain conceptions of) Christianity), attaining salvation is the best possible outcome, full stop. It is, to use MacAskill, Bykvist, and Ord’s terminology, maximally choiceworthy. It therefore seems to me wrong that ‘those other views could be further amplified lexically, too, all ad infinitum.’ To insist that we could lexically amplify a supreme soteriology would be to fail to take it seriously from its own internal perspective. But that is precisely what MacAskill, Bykvist, and Ord’s universal scale account requires us to do.
Of course, I agree that we can amplify other ethical theories that do not, in their standard forms, represent options or outcomes as maximally choiceworthy, such that the amplified theories do represent certain options/outcomes as maximally choiceworthy. But this is rather ad hoc.
Re: the ‘limited applicability’ suggestion, this strikes me as prima facie implausible on abductive grounds (principally parsimony and, to a lesser extent, elegance).
Re: the point that ‘there are other possible infinities that could dominate’: I’m not sure how the term ‘dominate’ is being used here. It’s not the case that other ethical theories which assign infinite choiceworthiness to certain options dominate supreme soteriologies in the game-theoretic usage of ‘dominate’ (on which option A dominates option B iff the outcome associated with A is at least as good as the corresponding outcome associated with B in every state of nature and strictly better in at least one).
But if the point is rather simply that MEC does not require all agents—regardless of their credence distribution over descriptive and ethical hypotheses—to become religionists, I agree. To take a simplistic but illustrative example, MEC will tell an agent who has credence = 1 that doing whatever they feel like will generate an infinite quantity of the summum bonum to go ahead and do whatever they feel like. My thought is just that MEC will deliver sufficiently implausible verdicts to sufficiently many agents to cast serious doubt on its truth qua theory of what we ought to do in response to ethical uncertainty. This is particularly pressing in the context of prudential choice, due to the three factors highlighted in subsection 3.5 above. The points you make in the linked response to the question ‘why not accept Pascal’s Wager?’ are solid, and lead me to think that the extension of my argument from prudence to morality might not be quite as quick as I suggest at the end of the post. But if we can show that MEC is in big trouble in the domain of prudence, that seems to me like evidence against its candidacy in the domain of morality. (I don’t agree with MacAskill, Bykvist, and Ord’s suggestion that, on priors, we should expect the correct way to handle descriptive uncertainty to be more-or-less the correct way to handle ethical uncertainty. The descriptive and the ethical are quite different! But it would be relatively more surprising to me if the correct way to handle prudential uncertainty were wildly different from the correct way to handle moral uncertainty.)
Michael: I agree with most of this.
With respect to domination, I just mean that MEC could still give more weight to other theories’ recommendations over those of supreme soteriology, because their infinities could compete with those of supreme soteriology (I don’t mean anything like stochastic dominance or Pareto improvement). I don’t think we’re required to take for granted that salvation is better than everything else across all theories under a universal scale account. Other theories will have other plausible candidates that should compete. Some may even directly refer to salvation and make claims that other things are better.
I agree that lexical amplifications of theories that don’t have infinities do seem ad hoc, but I don’t think we should assign them 0 probability. (Similarly, we shouldn’t assign 0 probability to other lexical views.) So, it’s not obvious that we should bet on supreme soteriology, until we also check the plausibility of and weigh other infinities. Of course, I still think this “solution” is unsatisfying and I think the principled objection of fanaticism still holds, even if it turns out not to hold in practice.
I would say I don’t know if MEC will deliver sufficiently implausible verdicts to sufficiently many agents without checking more closely given other possible infinities, but I think if it does give plausible verdicts most of the time (or even almost all of the time), this is mostly by luck and too contingent on our current circumstances and beliefs. Giving the right answers for the wrong reasons is still deeply unsatisfying.
Author: Really interesting! Do you have anything in mind for goods identified by competing ethical theories that you think would compete with, e.g., the beatific vision for the Christian or nirvana for the Buddhist? (A clear example here would be a valuable update for me.)
+1 on your comment that ‘Giving the right answers for the wrong reasons is still deeply unsatisfying.’ I think this is an underappreciated part of ethical theorizing, and I would even take a stronger methodological stance: getting the right explanatory answers (why we ought to do what we ought to do) is just as important as getting the right extensional answers (what we ought to do). If an ethical theory gives you the wrong explanation, it’s not the right ethical theory!
Michael: You could have infinitely many (and, in principle, even more than countably many) instances of finite goods in an infinite universe/multiverse, or lexically dominating pleasures (e.g. Mill’s higher pleasures), or just set a lexical threshold for positive goods or good lives. Any of the goods in objective list theories could be claimed to be infinitely valuable. Some people think life is infinitely valuable, although often also on religious grounds.
I’d interpret supreme soteriology as claiming finite amounts of Earthly (or non-Heavenly) goods have merely finite value while salvation has infinite value, but this doesn’t extend to infinite amounts of Earthly goods, and other theories can simply reject the claim that all individual instances of Earthly goods have merely finite value.
I don’t claim that these other possible infinities have much to defend them, but I think this applies to supreme soteriology, too. The history and number of people believing supreme soteriology only very slightly add to its plausibility, because we have good reasons to believe the believers are mistaken and that the reasons for their beliefs aren’t much supported by evidence. But anything that’s plausibly a good at all could be about as plausible a candidate for generating infinite good, and maybe even more plausible, depending on your views. There are many such candidates, so they could add up together to outweigh supreme soteriology if they correlate, or some of them could just be much easier to achieve.