Pursuing infinite positive utility at any cost

In this post I argue that we should make all our decisions on the basis of how likely a given set of actions is to elicit positive infinite utility. I then suggest some possible mechanisms and actions we might want to consider in light of this. Lastly, I offer some responses to anticipated objections.

Note that, in order to keep the discussion simple, I assume that we are certain that a moral realist form of hedonistic utilitarianism is true. I don’t think the method of analysis would need to change if we relaxed this assumption to a different form of consequentialism.

Argument

Many here will be sympathetic to the following claim:

(1) An agent ought to take the set of actions which maximise global expected utility

However, ignoring the possibility of infinite negative utilities (see objection (e) below for more on this), all possible actions seem to have infinite positive utility in expectation. This is because every action has a non-zero chance of resulting in infinite positive utility, and any non-zero probability multiplied by an infinite payoff gives an infinite expected value. For instance, it seems that for any action there’s a very small chance that I might end up with an infinite bliss pill as a result.

As such, classical expected utility theory won’t be action-guiding unless we add an additional decision rule: that we ought to pick the action which is most likely to bring about the infinite utility. This addition seems intuitive to me. Imagine two bets: one with a 0.99 chance of yielding infinite utility and one with a 0.01 chance. It seems irrational not to take the 0.99 bet, even though the two bets have the same (infinite) expected utility (a toy numerical illustration follows claim (2) below). Therefore it seems that we should really be sympathetic to:

(2) An agent ought to take the set of actions which make it most likely that infinite utility results
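To make the two-bet comparison concrete, here is a minimal sketch in Python, using IEEE floating-point infinity as a loose stand-in for infinite utility (the bets and payoffs are made up for illustration). It shows that expected value alone cannot separate the bets, while the probability of the infinite outcome, which rule (2) tracks, can.

```python
import math

INF = math.inf  # loose stand-in for infinite positive utility

def expected_utility(p_infinite: float, finite_payoff: float = 0.0) -> float:
    """Expected utility of a bet: probability p_infinite of an infinite payoff,
    otherwise a finite payoff."""
    return p_infinite * INF + (1.0 - p_infinite) * finite_payoff

bet_high, bet_low = 0.99, 0.01  # probabilities of the infinite outcome

# Classical expected utility cannot separate the bets: both come out infinite.
print(expected_utility(bet_high), expected_utility(bet_low))    # inf inf
print(expected_utility(bet_high) == expected_utility(bet_low))  # True

# Rule (2): compare the probabilities of the infinite outcome directly.
print(max(bet_high, bet_low))  # 0.99 -- take the bet more likely to pay off
```

This is purely illustrative: real infinite utilities are not floating-point numbers, but the arithmetic mirrors the point that 0.99 × ∞ and 0.01 × ∞ are both simply ∞.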

If you’re sympathetic to (2), the question we now need to consider is what things are most likely to elicit the infinite utility. One imperfect but useful taxonomy is that there are two types of relevant options: direct and indirect.

Direct options are the proximate causes which would elicit the infinite utility, e.g. the action of swallowing an infinite bliss pill. Indirect options are those that make it more likely we get to the stage of being able to take a direct option, e.g. reducing existential risk so that humanity has longer to search for the infinite bliss pill.

It’s quite possible that the indirect options dominate all the direct options we can currently think of. However, even if the indirect options are our better hope, there may also be some direct options that we could (nearly) costlessly add to our best indirect-option action set.

The rest of this post will focus on what the most plausible direct options for eliciting the infinite utility might be. (Recall the simplifying assumption from above: we are justified in having a credence of one that a moral realist form of hedonistic utilitarianism is the correct moral theory.)

Three potential mechanisms for bringing about infinite utility initially spring to mind that we should pay attention to: a vastly powerful superintelligence; the God of classical theism (or something similar); or some scientific quirk which results in infinite replication, e.g. creating a maximal multiverse. I don’t pretend that there couldn’t be other mechanisms which could create actual infinities, but these are the most plausible ones that spring to mind; feel free to suggest others. Note that currently unknown mechanisms are unlikely to be action-guiding for us when it comes to direct options in our current position.

Given these potential mechanisms, the relevant options that might be worth exploring are:

(i) Working towards creating a superintelligence with the best chance of having the computational power necessary to generate infinitely many positive-valence experience moments. (Perhaps this is sufficiently far off that it’s really an indirect option.)

Quick review: if we help engineer a superintelligence quickly and badly, then it might reduce rather than increase our chance of eliciting an infinite utility in the long run.

(ii) Taking the set of options that the ‘revealed’ religions suggest might elicit an infinite number of positive-valence experience moments (typically described as heaven). The commonly suggested routes to achieve this are conversion and good behaviour. We’d presumably want to make sure that a good number of people converted to each of the theisms in order to have the best chance of at least one of them eliciting the infinite utility.

Quick review: this action might already be covered: there are a lot of religious believers, and if those religions are correct then the infinite utility has probably already been secured. Still, it might be best to encourage as many people as possible to adopt some form of religious belief to maximise our chances (the toy calculation below sketches why spreading conversions across religions would help).
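As a purely illustrative sketch (the credences below are made-up placeholders, not estimates of anything), here is the diversification point in Python: if the ‘revealed’ religions are treated as mutually exclusive hypotheses, ensuring each has converts gives a higher chance that at least one correct religion’s conditions are met than concentrating everyone in the single most probable one.

```python
# Made-up credences that each revealed religion is the one that actually
# delivers infinite utility; treated here as mutually exclusive hypotheses.
credences = {"religion_A": 0.03, "religion_B": 0.02, "religion_C": 0.01}

# Strategy 1: everyone converts to the single most probable religion.
p_concentrated = max(credences.values())

# Strategy 2: ensure each religion has some converts, so that whichever
# religion turns out to be correct, someone has met its stated conditions.
p_diversified = sum(credences.values())

print(f"concentrated strategy: P(infinite utility) = {p_concentrated:.2f}")  # 0.03
print(f"diversified strategy:  P(infinite utility) = {p_diversified:.2f}")   # 0.06
```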

(iii) Presumably, doing more foundational scientific research to understand how we might unlock an infinite replication dynamic. However, this would probably fall into the indirect options category.

Quick review: this seems inoffensive and a good idea generally.

Responses to anticipated objections

Objection (a): There is good reason to believe that there is already infinite utility out there, so your argument would just mean that we don’t need to do anything else.

Response (a): Unless you have a credence of one that this is the case, your best option is still to do all you can to push the probability of an infinite utility as close to one as possible, so the ideas in this post are still going to be decision-relevant.


Objection (b): This is just a Pascal’s Mugging. I have reason to think that Pascal’s Mugging style arguments are always unsuccessful.

Response (b): If you do have some knock-down argument against Pascal’s Mugging then maybe you can safely dismiss this post too (and please do tell me!). However, if you’re at all uncertain about your argument against Pascal’s Mugging, then you might want to consider the post above as insurance in case you are wrong.


Objection (c): I have a zero credence that morality objectively requires us to do anything, as I’m a 100% convinced moral anti-realist. So your argument doesn’t bite with me.

Response (c): You can run a version of the above argument based not on an infinite utility being achieved by *someone* (the morality version) but on *you* achieving the infinite utility (the prudential version). If you think you have normative reason to maximise your own expected utility, then the above post will be relevant for you, though the suggestions will be different. Presumably direct options become much more attractive than indirect options, as you’ll need to secure the infinite utility for yourself before you cease to exist.


Objection (d): Your post doesn’t take into account different cardinalities of infinity. We shouldn’t be aiming for just any possible positive infinity but only the highest cardinality of infinity.

Response (d): This seems plausible to me, though I don’t understand the maths/philosophy of infinity well enough to have a strong view. If we should pursue higher cardinalities of infinity over lower ones, then I expect this means we should just focus on the God mechanisms, as Gods, if they exist, are presumably more likely than anything else to have access to higher cardinalities of infinity.


Objection (e): You conveniently ignore possibilities of infinite negative utilities. These wreck your analysis.

Response (e): I imagine they might. My understanding from Alan Hájek on this is that any action which has a non-zero chance of bringing about both a positive and a negative infinite utility would have an undefined expected utility. My view is that this will probably be true of all actions, and so all actions would have an undefined expected utility. However, if that’s the case, then this fact applies equally to every option and so won’t be decision-relevant. So perhaps it is rational to make decisions by bracketing the possibility of either positive or negative infinite utilities (but not both). I’d be very interested in people’s views here (a small numerical sketch of the undefined-expectation point follows).
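To illustrate the undefined-expectation point numerically, here is a minimal sketch, again using IEEE floating-point infinities as a rough stand-in and a made-up lottery: as soon as an action’s prospects mix positive and negative infinities with non-zero probability, the expectation evaluates to NaN (“not a number”).

```python
import math

POS_INF, NEG_INF = math.inf, -math.inf

# A made-up action: 10% chance of infinite bliss, 10% chance of infinite
# suffering, 80% chance of a finite outcome worth 5 utils.
expectation = 0.1 * POS_INF + 0.1 * NEG_INF + 0.8 * 5

print(expectation)              # nan
print(math.isnan(expectation))  # True -- the expected utility is undefined
```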