It looks to me like it’s unsolvable without some nonzero exogenous extinction risk, because otherwise there will be multiple parameter choices that result in infinite utility, so you can’t say which one is best.
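To make the problem concrete, here’s a minimal sketch (my own toy numbers, not any particular published model): per-period value grows cubically as space gets settled, and each period carries some exogenous extinction risk. With zero risk the cumulative expected utility grows without bound as the horizon extends, so different parameter choices all come out “infinite”; with any nonzero risk it converges.

```python
# A minimal sketch (toy numbers, not any particular published model):
# per-period value grows cubically as space gets settled, and each period
# carries some exogenous extinction risk.

def cumulative_expected_utility(extinction_risk_per_period, periods):
    """Sum of per-period value weighted by the probability of surviving to that period."""
    total = 0.0
    survival = 1.0
    for t in range(1, periods + 1):
        total += survival * t ** 3            # value at time t, discounted by survival odds
        survival *= 1.0 - extinction_risk_per_period
    return total

for risk in [0.0, 1e-3, 1e-2]:
    for horizon in [10**3, 10**4, 10**5]:
        ev = cumulative_expected_utility(risk, horizon)
        print(f"risk={risk:g}, horizon={horizon}: {ev:.3e}")
# With risk = 0 the total keeps growing as the horizon extends (it diverges in
# the limit), so every parameter choice looks "infinitely good"; with risk > 0
# it levels off at a finite value.
```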
Christian Tarsney has done a sensitivity analysis of the parameters in such a model in The Epistemic Challenge to Longtermism, a working paper for the Global Priorities Institute (GPI).
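The flavor of that kind of analysis shows up even in a toy closed form (my own sketch, not Tarsney’s parameterization; the cubic value growth v(t) = t^3 and a constant rate r of exogenous nullifying events are assumptions):

```python
# Toy sensitivity check (a sketch, not Tarsney's actual model): if value grows
# cubically, v(t) = t**3, and exogenous nullifying events arrive at a constant
# rate r, then the expected value of avoiding extinction is
#   EV(r) = integral from 0 to infinity of t**3 * exp(-r*t) dt = 6 / r**4.

for r in [1e-2, 1e-3, 1e-4, 1e-5]:
    print(f"r = {r:g}: EV = {6 / r**4:.3e}")
# Each 10x reduction in the assumed rate r multiplies EV by 10,000, so the
# bottom line is extremely sensitive to this parameter.
```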
There’s also the possibility that the space we would otherwise occupy if we didn’t go extinct will become occupied by sentient individuals anyway, e.g. if life re-evolves or aliens settle it. These are examples of what Tarsney calls positive exogenous nullifying events, with extinction being a typical negative exogenous nullifying event.
There’s also the heat death of the universe, although it’s only a conjecture.
There are some approaches to infinite ethics that might allow you to rank some different infinite outcomes, although not necessarily all of them; see the overtaking criterion. These might make assumptions about the order of summation, though, which is perhaps undesirable for an impartial consequentialist. And without such assumptions, conditionally convergent series can be made to sum to anything, or to diverge, just by reordering their terms (the Riemann rearrangement theorem), which is not so nice.
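As a concrete illustration of that last point (a standalone numerical sketch, not tied to any particular infinite-ethics proposal): the alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + ... normally sums to ln 2 ≈ 0.693, but greedily reordering the same terms steers the partial sums toward any target you pick.

```python
import math

# Rearranging a conditionally convergent series to approach an arbitrary target
# (the Riemann rearrangement theorem), using the terms of the alternating
# harmonic series 1 - 1/2 + 1/3 - 1/4 + ...

def rearranged_partial_sum(target, num_terms=200_000):
    """Greedily interleave the positive terms 1/(2k-1) and the negative terms
    -1/(2k) so that the partial sums home in on `target`."""
    total = 0.0
    next_odd, next_even = 1, 2
    for _ in range(num_terms):
        if total <= target:
            total += 1.0 / next_odd    # take the next unused positive term
            next_odd += 2
        else:
            total -= 1.0 / next_even   # take the next unused negative term
            next_even += 2
    return total

print(f"usual order sums to ln 2 = {math.log(2):.6f}")
for target in [0.0, 1.0, 3.0]:
    print(f"rearranged to target {target}: {rearranged_partial_sum(target):.6f}")
# Same terms, different order, (almost) any sum you like.
```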