All options maximize expected utility (EU), since the expected utility will be undefined (or infinite) regardless. There’s always a nonzero chance you will end up choosing the right religion and be rewarded infinitely and a nonzero chance you will end up choosing badly and be punished infinitely, so the EU is +infinity + (-infinity) = undefined. (I got this from a paper, but I forget which one; maybe something by Alan Hájek.)
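The undefined sum can be illustrated with IEEE floating-point arithmetic, where opposite infinities likewise fail to cancel. The probabilities below are arbitrary illustrative numbers, not claims about any real credences:

```python
import math

# Expected utility with one infinitely good and one infinitely bad outcome.
# The specific probabilities are made up for illustration.
p_heaven, p_hell = 0.01, 0.02
eu = p_heaven * math.inf + p_hell * (-math.inf)

print(math.isnan(eu))  # True: +inf + (-inf) is undefined (NaN)
```

The same breakdown happens for any nonzero probabilities, which is the point: every option's EU comes out undefined.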
In response to 1, you might say that we should maximize the probability of +infinity and minimize the probability of -infinity before considering finite values. This could be justified by applying plausible rationality axioms directly, in particular the independence axiom, and it could reduce to EU maximization after some prior steps in which we cancel equiprobable parts of distributions with the same values. However, infinite and unbounded values violate the continuity axiom. Furthermore, if we’re allowing infinities, especially as limits of aggregates of finite values like an eternity of positive welfare, then it would be suspicious not to allow unbounded finite values, at least in principle. Unbounded finite values can lead to violations of the sure-thing principle, as well as vulnerability to Dutch books and money pumps (e.g. see here, here and my reply, and here). If allowing and treating infinities this way requires violating some plausible requirements of rationality, or requires ad hoc and suspicious beliefs about what kinds of values are possible (infinity is possible, but finite values must be bounded), then it’s at least not obvious that we’re normatively required to treat infinities this way. Some other decision theory might be preferable, or we can allow for decision-theoretic uncertainty.
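The lexicographic rule described here (first maximize the probability of +infinity, then minimize the probability of -infinity, then compare finite expected values) can be sketched as a sort key. The option structure and its field names are hypothetical, just for illustration:

```python
def lex_key(option):
    """Lexicographic comparison: higher P(+inf) first, then lower
    P(-inf), then higher expected utility over finite outcomes."""
    return (option["p_pos_inf"], -option["p_neg_inf"], option["finite_eu"])

# Two hypothetical options tied on P(+inf); the tie is broken by P(-inf),
# so B wins despite its worse finite expected utility.
options = [
    {"name": "A", "p_pos_inf": 0.10, "p_neg_inf": 0.05, "finite_eu": 3.0},
    {"name": "B", "p_pos_inf": 0.10, "p_neg_inf": 0.02, "finite_eu": 1.0},
]
best = max(options, key=lex_key)
print(best["name"])  # "B"
```

Note that this rule only ranks options; it doesn't resolve the continuity-axiom violation discussed above, since no finite improvement can ever outweigh any change in the infinite-outcome probabilities.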
There are plausible alternative decision theories that don’t require (but may permit) taking extremely low-probability bets on extremely high payoffs, like EU maximization with bounded utility functions and stochastic dominance. Under decision-theoretic uncertainty that assigns some credence to unbounded EU maximization with infinities, low probabilities of Heaven or Hell might still not dominate.
Under impartial views, conditional on a given god (or gods), you won’t change the aggregate: it’s already undefined, +infinity or -infinity, and adding one more person to Heaven or Hell won’t make a difference to that value. Some possible responses:
More complex approaches to aggregation (e.g. ignoring unaffected individuals, the Pareto principle), so that getting one more person into Heaven or keeping one more person out of Hell is still infinitely better.
Maybe you can decrease the probability that Heaven will be empty, or increase the probability that Hell will be empty.
There might be more promising infinities we can pursue in practice, potentially:
Creating or preventing infinite universes or infinitely many universes.
The universe is already infinite and has an infinite amount of value in expectation even on highly plausible physics, because it’s very plausibly infinite in spatial (or temporal) extent, and under evidential decision theory, we already acausally affect an infinite amount of value, because infinitely many agents make decisions correlated with our own.
Some others are discussed here.
[Sorry I’ve had to edit the wording of parts of this comment 2 years later because I just can’t have super cringey writing from 16yo me sitting on the most upvoted post on my permanent profile]
Wow this is exactly the reply I was looking for, and more. Thank you so much!
Since I’m pretty new to philosophy, I believe what you say even though I don’t understand it all yet. But you’ve given me a ton of invaluable starting points from which I can now begin learning how to answer these kinds of questions myself.
I imagine at some point in my life I’ll use these ideas to engage in major reflection on my life goals, since it sounds like utilitarianism in the form I’ve always followed is flawed and will need to be revised or even scrapped entirely.
Once again, thanks so much!