Great post, thanks for writing this!
I think the alternatives also have important problems that are worth pointing out.
Suppose instead we’re maximizing expected utility for a utility function over states of the world.
If it’s unbounded, then
- At least in principle (I’d guess not in practice), we also need to check cases and make careful commitments, or else we could violate the sure-thing principle or be vulnerable to Dutch books or money pumps. See here for an example. Some therefore take unbounded utility functions to be irrational.
- It’s fanatical, and so you need to deal with Pascal’s wager, Pascal’s mugging, and tiny probabilities of infinities (see the sketch below).
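To make the fanaticism point concrete, here’s a minimal sketch of my own (the numbers are made up for illustration; this isn’t from any of the linked sources). With an unbounded utility function, a St. Petersburg-style gamble has a diverging expected utility, so it beats any sure payoff, and a tiny probability of an enormous utility swamps a modest sure thing:

```python
# Partial sums of expected utility for the St. Petersburg gamble: with
# probability 1/2**k you get utility 2**k, so each term contributes 1 and
# the expected utility diverges as more outcomes are included.
def st_petersburg_partial_eu(n_terms: int) -> float:
    return sum((1 / 2**k) * 2**k for k in range(1, n_terms + 1))

for n in (10, 100, 1000):
    print(n, st_petersburg_partial_eu(n))  # ~10.0, ~100.0, ~1000.0: no upper bound

# Pascal's-mugging flavour: a tiny probability of a huge payoff dominates
# a large sure payoff under unbounded expected utility maximization.
sure_payoff = 1_000.0
mugger_offer = 1e-12 * 1e18  # probability 1e-12 of utility 1e18
print(mugger_offer > sure_payoff)  # True: expected utility favors the mugger
```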
On the other hand, if it’s bounded, then
- It can’t be stochastically separable, so what you should do could depend on things you can’t predictably change (even acausally), like the welfare of ancient Egyptians or of those in causally separated parts of the universe (who make their decisions independently of your own), AND
- There’s a good chance it will be far too egoistic in practice*. The most natural forms** will tend to promote weighing your own interests more heavily than anyone else’s in practice, and possibly far more heavily, because (i) you’re more sure of your own existence than of others’ due to the possibility of solipsism (only you exist), (ii) differences between highly populated universes, whose value approaches either bound, will tend to matter far less than differences in worlds where only you exist, and (iii) it would be surprising for the value of a highly populated universe to be close to 0. For further illustration and explanation (including a numerical sketch after the footnotes below), see:
- This thread by Derek Shiller.
- The average utilitarian’s solipsism wager by Caspar Oesterheld.
- Average Utilitarianism Implies Solipsistic Egoism by Christian Tarsney (also covers rank-discounted utilitarianism and variable value theories, depending on the marginal returns to additional population).
* Or else the bound will need to be set based on your beliefs about how many moral patients there are, which seems like motivated reasoning; and if you come to believe sufficiently many more exist, you could be stuck with the egoistic conclusion again.
** E.g. a sigmoid function like arctan applied to the total utilitarian sum of welfares; average utilitarianism and other variable value theories; or other functions symmetric around the empty universe, “convex” to the left and “concave” to the right.
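To illustrate point (ii) and the footnotes numerically, here’s a hedged sketch of my own (the arctan bound and the specific numbers are just assumptions for illustration). Under U = arctan(total welfare), one unit of welfare changes utility far more in a world where only you exist than in a highly populated one near the bound, so even a small credence in solipsism can make your own welfare dominate:

```python
import math

def marginal_utility(total_welfare: float, gain: float = 1.0) -> float:
    """Utility difference from adding `gain` welfare to a world with
    `total_welfare`, under the bounded function U = arctan(total welfare)."""
    return math.atan(total_welfare + gain) - math.atan(total_welfare)

solipsist = marginal_utility(0.0)  # only you exist: ~0.785
populated = marginal_utility(1e6)  # a million units of welfare: ~1e-12
print(solipsist, populated, solipsist / populated)  # ratio ~1e12

# Helping yourself pays off on both hypotheses (you exist either way),
# while helping others pays off (negligibly) only if the populated
# hypothesis is true, so a small credence in solipsism dominates.
p_solipsism = 0.01
help_self = p_solipsism * marginal_utility(0.0) + (1 - p_solipsism) * marginal_utility(1e6)
help_other = (1 - p_solipsism) * marginal_utility(1e6)
print(help_self > help_other)  # True, by a huge margin
```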
Stochastic dominance as a decision rule seems to fare better, although it may leave multiple options permissible, and the options we actually choose may suffer from the kinds of problems above anyway or otherwise violate some other requirement of rationality. Selecting uniformly at random among available permissible options (including policies over future actions) could at least reduce egoistic biases, but I wouldn’t be surprised if it had other serious problems.
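Lastly, a minimal sketch of my own of what such a rule can look like over finite lotteries (the lotteries are hypothetical, and first-order dominance is just one version of the idea), including selecting uniformly at random among the permissible, i.e. non-dominated, options:

```python
import random

def dominates(a, b):
    """True if lottery `a` first-order stochastically dominates lottery `b`.
    A lottery is a dict mapping outcome value -> probability (summing to 1);
    a dominates b iff CDF_a(x) <= CDF_b(x) everywhere, strictly somewhere."""
    xs = sorted(set(a) | set(b))
    cdf_a = cdf_b = 0.0
    strict = False
    for x in xs:
        cdf_a += a.get(x, 0.0)
        cdf_b += b.get(x, 0.0)
        if cdf_a > cdf_b + 1e-12:  # a's CDF above b's somewhere: no dominance
            return False
        if cdf_a < cdf_b - 1e-12:
            strict = True
    return strict

lotteries = {
    "A": {0: 0.5, 10: 0.5},
    "B": {0: 0.6, 10: 0.4},  # dominated by A
    "C": {4: 1.0},           # incomparable with A: neither dominates
}

# Permissible = not dominated by any other available option.
permissible = [name for name, lot in lotteries.items()
               if not any(dominates(other, lot)
                          for m, other in lotteries.items() if m != name)]
print(permissible)                 # ['A', 'C']: the rule leaves both permissible
print(random.choice(permissible))  # choose uniformly at random among them
```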