Great post, thanks for writing this!
I think the alternatives also have important problems that are worth pointing out.
Suppose instead we're maximizing expected utility for a utility function over states of the world.

If it's unbounded, then:

At least in principle (I'd guess not in practice), we also need to check cases and make careful commitments, or else we could violate the sure-thing principle or be vulnerable to Dutch books or money pumps. See here for an example. Some take unbounded utility functions to therefore be irrational.

It's fanatical, so you need to deal with Pascal's wager, Pascal's mugging and tiny probabilities of infinities.

On the other hand, if it's bounded, then:

It can't be stochastically separable, and what you should do could depend on things you can't predictably change (even acausally), like the welfare of ancient Egyptians or of those in causally separated parts of the universe (who make their decisions independently of yours), AND

There's a good chance it will be far too egoistic in practice*. The most natural forms** will tend to promote weighing your own interests more than anyone else's in practice, and possibly far more, because (i) you're more sure of your own existence than of others' due to the possibility of solipsism (that only you exist), (ii) differences in value within highly populated universes, where value approaches either bound, will tend to matter far less than differences within universes where only you exist, and (iii) it would be surprising for the value to be close to 0 in a highly populated universe. For further illustration and explanation, see:
This thread by Derek Shiller.
The average utilitarian's solipsism wager by Caspar Oesterheld
Average Utilitarianism Implies Solipsistic Egoism by Christian Tarsney (also covers rank-discounted utilitarianism and variable value theories, depending on the marginal returns to additional population).
* Or else it will need to be set based on your beliefs about how many moral patients there are, which seems like motivated reasoning, and if you come to believe sufficiently more exist, then you could be stuck with the egoistic conclusion again.

** E.g. a sigmoid function like arctan applied to the total utilitarian sum of welfares, average utilitarianism and other variable value theories, or other functions symmetric around the empty universe, 'convex' to the left and 'concave' to the right.
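To make point (ii) concrete with the arctan example from the footnote (a quick numerical sketch of my own; the welfare totals are arbitrary):

```python
import math

# Bounded utility as a sigmoid of the total utilitarian sum of welfares,
# per the footnote's arctan example (illustrative, not anyone's exact proposal).
def bounded_utility(total_welfare):
    return math.atan(total_welfare)

# The same +10 welfare change, in a near-empty universe vs. a highly
# populated one whose value already sits near the upper bound:
small_world_gain = bounded_utility(11) - bounded_utility(1)            # ~0.69
large_world_gain = bounded_utility(10**6 + 10) - bounded_utility(10**6)  # ~1e-11

# Near the bound, identical welfare differences contribute almost nothing,
# so scenarios where only you exist dominate the expected utility calculation.
print(small_world_gain > 10**6 * large_world_gain)
```

This is just the concavity of the sigmoid near its bound doing the work: whatever sigmoid you pick, outcomes in crowded universes get compressed and solipsistic scenarios get outsized weight.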
Stochastic dominance as a decision rule seems to fare better, although it may leave multiple options permissible, and the options we actually choose may suffer from the kinds of problems above anyway or otherwise violate some other requirement of rationality. Selecting uniformly at random among available permissible options (including policies over future actions) could at least reduce egoistic biases, but I wouldn't be surprised if it had other serious problems.
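For anyone unfamiliar with the rule, here's a minimal sketch of first-order stochastic dominance for finite lotteries (the representation and example lotteries are my own illustration):

```python
def cdf(lottery, t):
    """P(outcome <= t), where lottery maps outcome values to probabilities."""
    return sum(p for outcome, p in lottery.items() if outcome <= t)

def stochastically_dominates(a, b):
    """True if lottery a first-order stochastically dominates lottery b:
    a is never more likely than b to fall at or below any threshold,
    and is strictly less likely at some threshold."""
    thresholds = sorted(set(a) | set(b))
    weak = all(cdf(a, t) <= cdf(b, t) for t in thresholds)
    strict = any(cdf(a, t) < cdf(b, t) for t in thresholds)
    return weak and strict

# Under the rule, an option is ruled out only if some alternative dominates
# it; mutually non-dominating options all remain permissible, which is why
# the rule can leave multiple options open.
risky = {0: 0.5, 10: 0.5}
shifted_up = {1: 0.5, 11: 0.5}  # the same lottery with every outcome +1
print(stochastically_dominates(shifted_up, risky))  # True
print(stochastically_dominates(risky, shifted_up))  # False
```

Note this only defines the dominance relation itself; turning it into the full decision rule (and comparing options under background uncertainty, as in Tarsney's work) involves more than this sketch shows.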