## Success Maximization: An Alternative to Expected Utility Theory and a Generalization of Maxipok to Moral Uncertainty

Standard expected utility theory (EUT) assumes moral certainty, even as it embeds epistemic/ontological uncertainty about the states of the world that may result from our actions. Harsanyi expected utility theory (HEUT) allows us to assign probabilities to our potential moral viewpoints, and thus gives us a mechanism for handling moral uncertainty.

Unfortunately, there are several problems with EUT and HEUT. First, the St. Petersburg paradox shows that unbounded utility valuations can justify almost any action, even if the probability of a good outcome is almost zero. For example, a banker may face a probability of a bank run that is nearly one, yet because the potential returns from being overleveraged in the near-zero-probability world are so high, the banker may still foolishly choose to be overleveraged to maximize expected utility. Second, diminishing returns typically force us to produce or consume more in order to realize the same amount of utility; this is usually a recipe for consuming and producing in unsustainable ways. Third, as Herbert Simon noted, optimizing expected utility is often computationally intractable.
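The St. Petersburg structure behind this first objection can be made concrete with a short sketch: a gamble that pays 2^n with probability 2^-n contributes exactly 1 to the expected value at every term, so the expectation grows without bound even though any large payoff is vanishingly unlikely. (The function below is illustrative, not from the text.)

```python
# St. Petersburg-style gamble: with probability 2**-n the payoff is 2**n.
# Each term contributes (2**-n) * (2**n) = 1 to the expected value, so a
# sum truncated at N terms equals N, and the expectation diverges as N
# grows, despite near-zero probabilities for the large payoffs.
def truncated_expected_value(n_terms: int) -> float:
    return sum((2 ** -n) * (2 ** n) for n in range(1, n_terms + 1))

print(truncated_expected_value(10))   # 10.0
print(truncated_expected_value(100))  # 100.0
```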

A response of early effective altruism research to these problems was maxipok (i.e., maximizing the probability of an okay outcome). Under this construct, the constraints of an okay outcome are identified, each action is assigned a probability of satisfying those constraints, and the action that maximizes that probability is adopted.
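The maxipok procedure under a single, fixed definition of "okay" can be sketched in a few lines (action names and probabilities are illustrative, not from the text):

```python
# Maxipok under moral certainty: one fixed notion of an okay outcome.
# Each action is scored by its probability of satisfying the okay-outcome
# constraints, and the highest-probability action is chosen.
p_okay = {
    "action_a": 0.60,  # probability the action yields an okay outcome
    "action_b": 0.85,
    "action_c": 0.40,
}

best_action = max(p_okay, key=p_okay.get)
print(best_action)  # action_b
```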

The problem with maxipok is that it assumes moral certainty about the constraints of what constitutes an okay outcome. For example, if one believes a trolley problem is inevitable, one might infer that someone dying is an okay outcome, given its unavoidability. If, on the other hand, the trolley problem is avoidable, one may infer that someone dying is not okay. Thus, in this scenario, what constitutes an okay outcome is contingent on the probability we assign to the inevitability of the trolley problem.

Success maximization is a mechanism by which to generalize maxipok for moral uncertainty. Let a_{i} be an action i from the set of m actions A = {a_{1}, a_{2}, …, a_{m}}. Let s_{x} be a definition of moral success, namely x, from S = {s_{1}, s_{2}, …, s_{n}}. The probability π that action i satisfies the constraints of s_{x} is 0 ≤ π_{i}(s_{x}) ≤ 1. Let p(s_{x}) be the estimated probability that x is the correct definition of moral success, where p(s_{1}) + p(s_{2}) + … + p(s_{n}) = 1. Thus, the expected success of action i is 0 ≤ π_{i}(s_{1})p(s_{1}) + π_{i}(s_{2})p(s_{2}) + … + π_{i}(s_{n})p(s_{n}) ≤ 1. A success-maximizing agent will choose an action a_{j} ∈ A such that π_{j}(s_{1})p(s_{1}) + π_{j}(s_{2})p(s_{2}) + … + π_{j}(s_{n})p(s_{n}) ≥ π_{i}(s_{1})p(s_{1}) + π_{i}(s_{2})p(s_{2}) + … + π_{i}(s_{n})p(s_{n}) for all a_{i} ∈ A where i ≠ j.

Success maximization resolves many of the problems of von Neumann-Morgenstern and Harsanyi expected utility theories. First, because success valuations are bounded between 0 and 1, it is much less likely we will encounter St. Petersburg paradox situations where any action is justified by extremely high utility valuations despite near-zero probabilities of occurrence. Second, unsustainable behaviors produced by chasing diminishing returns are much less likely in the world of maximizing probabilities of constraint satisfaction than in the world of maximizing unbounded expected utilities. Third, because probabilities of success are bounded between zero and one, terms of the linear combination where p(s_{x}) is relatively low can often be dropped, making calculations more tractable.
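The decision rule above is a weighted sum over candidate definitions of success, and the tractability shortcut is a simple threshold on the credences p(s_{x}). A minimal sketch, with all credences and satisfaction probabilities illustrative rather than taken from the text:

```python
# Success maximization: expected success of action i is
#   sum over x of pi_i(s_x) * p(s_x),
# where p(s_x) is the credence that s_x is the correct definition of
# moral success and pi_i(s_x) is the probability that action i
# satisfies the constraints of s_x.

# Credences over three candidate definitions of moral success (sum to 1).
p = {"s1": 0.5, "s2": 0.45, "s3": 0.05}

# pi[action][s_x]: probability the action satisfies each definition.
pi = {
    "a1": {"s1": 0.9, "s2": 0.2, "s3": 0.8},
    "a2": {"s1": 0.5, "s2": 0.7, "s3": 0.1},
}

def expected_success(action: str, min_credence: float = 0.0) -> float:
    """Weighted sum over definitions, optionally dropping terms whose
    credence p(s_x) falls below min_credence (the tractability shortcut)."""
    return sum(pi[action][s] * p[s] for s in p if p[s] >= min_credence)

# A success-maximizing agent picks the action with the largest weighted sum.
best = max(pi, key=expected_success)
print(best, round(expected_success(best), 3))

# Pruned variant: ignore definitions with credence below 0.1.
print(round(expected_success(best, min_credence=0.1), 3))
```

Because every term is a product of two probabilities, each action's score stays in [0, 1], which is what blocks the St. Petersburg-style dominance of tiny-probability, huge-payoff terms.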