Besides the person-affecting views and disvalue of life covered here, if an individual has an Epicurean view of life and death (another kind of person-affecting view), i.e. that death is not bad for the person who dies, then improving wellbeing should probably take priority. And while Epicureanism assigns 0 disvalue to death (ignoring effects on others), one could instead assign disvalues arbitrarily close to 0.
There are also issues with dealing with infinities that make utilitarianism non-action-guiding (it doesn’t tell us what to do in most practical cases); you could probably throw these in with nihilism. E.g. if the universe is unbounded (“infinite”) in space or time, then the total sum of utility is generally not even well-defined (not even +infinity or -infinity) under the usual definitions of convergence in the real numbers, and when it is defined, our finite actions can’t change it. If you assign any nonzero probability to an infinite universe, the expected total inherits the same problem, and it’s actually pretty likely that the universe is spatially unbounded. There are several attempted solutions, but all of them have pretty major flaws, AFAIK.
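To make the convergence point concrete, here’s a toy example (my own, not taken from the sources linked below): compare a world whose value-bearing locations alternate between +1 and -1 utility with one where every location contributes +1.

\[ 1 - 1 + 1 - 1 + \cdots \quad \text{has partial sums } 1, 0, 1, 0, \ldots \text{, so no limit exists (not even } \pm\infty\text{),} \]
\[ 1 + 1 + 1 + \cdots = +\infty \quad \text{and} \quad (+\infty) + k = +\infty \ \text{ for any finite change } k. \]

In the first world the total is simply undefined; in the second it’s defined, but no action that affects only finitely many locations can change it.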
Some person-affecting views can help, e.g. using a Pareto principle, but then it’s not clear how to deal with individuals whose exact identities depend on your decisions (or maybe we just ignore them; many won’t like that solution), and there are still many cases that can’t be handled. There’s discussion in this podcast, with some links for further reading (ctrl-F “Pareto” after expanding the transcript): https://80000hours.org/podcast/episodes/amanda-askell-moral-empathy/
Rounding sufficiently small probabilities to 0 and considering only parts of the universe we’re extremely confident we can affect can help, too. This proposed solution and a few others are discussed here: https://nickbostrom.com/ethics/infinite.pdf
You could also use a bounded vNM (von Neumann–Morgenstern) utility function, but this means assigning decreasing marginal value to saving lives, and then how you divide up decisions/events matters: e.g. “saving 1 life and then saving 1 life” comes out better than “saving 2 lives and then saving 0 lives”, even though the outcomes are the same.
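As a toy calculation (my own numbers, purely illustrative): take the bounded, concave utility function u(n) = 1 - 2^{-n} for n lives saved in a given event, so u(0) = 0, and suppose each event is scored separately and the scores summed.

\[ u(1) + u(1) = \tfrac{1}{2} + \tfrac{1}{2} = 1 \;>\; u(2) + u(0) = \tfrac{3}{4} + 0 = \tfrac{3}{4}, \]

so splitting the same two saved lives across two events scores higher than saving both in one event, even though the outcomes are identical.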
For the unbounded time case (assuming we can handle or avoid the issues with unbounded space, though some might prefer not to treat time and space differently), see: https://forum.effectivealtruism.org/posts/9D6zKRPfaALiBhnnN/problems-and-solutions-in-infinite-ethics