Do you have a sense of what the solution to this model looks like (for some reasonable parameter values)? I’ve tried to solve models like this one and haven’t succeeded, although I’m not good at differential equations.
It looks to me like it’s unsolvable without some nonzero exogenous extinction risk, because otherwise there will be multiple parameter choices that result in infinite utility, so you can’t say which one is best. But it’s not clear what rate of exogenous x-risk to use, and our distribution over possible values might still result in infinite utility in expectation.
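To spell out the infinity problem (a minimal sketch in my own notation, not anything from the post): with no discounting and no extinction risk, total utility is

$$U = \int_0^\infty u(c_t)\,dt,$$

so any two policies that keep $u(c_t)$ above some $\epsilon > 0$ forever both give $U = \infty$ and can’t be ranked. A constant exogenous hazard rate $\delta$ makes survival to time $t$ have probability $e^{-\delta t}$, giving

$$\mathbb{E}[U] = \int_0^\infty e^{-\delta t}\,u(c_t)\,dt,$$

which is finite whenever $u(c_t)$ grows more slowly than $e^{\delta t}$.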
Perhaps you could simplify the model by leaving out improving technology, and just say you can spend on safety, spend on consumption, or invest to grow your capital. That might make the model easier to solve, and I don’t think it loses much explanatory power. (It would still have the infinity problem.)
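For concreteness, here’s one way the simplified model could be written (my formulation; the symbols are made up, not from the post): capital $K$ earns return $r$ and is drawn down by consumption $c$ and safety spending $s$, and safety spending lowers the extinction hazard:

$$\max_{c_t,\, s_t}\; \int_0^\infty S_t\, u(c_t)\, dt \quad \text{s.t.} \quad \dot K_t = r K_t - c_t - s_t, \qquad \dot S_t = -\delta(s_t)\, S_t,$$

where $S_t$ is the probability of surviving to time $t$ and $\delta(\cdot)$ is a decreasing hazard function.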
“It looks to me like it’s unsolvable without some nonzero exogenous extinction risk”
There’s also the possibility that the space we would otherwise occupy if we didn’t go extinct becomes occupied by sentient individuals anyway, e.g. life re-evolves, or aliens arrive. These are examples of what Tarsney calls positive exogenous nullifying events, with extinction being a typical negative exogenous nullifying event.
There’s also the heat death of the universe, which would bound the total utility at stake, although it’s only a conjecture.
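A rough way to see how nullifying events matter, in the spirit of Tarsney’s model (my simplification, not his exact setup): if nullifying events of either kind arrive at a constant combined rate $r$, the probability that averting extinction still matters at time $t$ is $e^{-rt}$, so a benefit worth $v$ per unit time contributes

$$\int_0^\infty e^{-rt}\, v\, dt = \frac{v}{r},$$

which is finite for any $r > 0$, taming the infinity even without pure time preference.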
“because otherwise there will be multiple parameter choices that result in infinite utility, so you can’t say which one is best.”
Christian Tarsney has done a sensitivity analysis of the parameters in a model like this in The Epistemic Challenge to Longtermism, written for the Global Priorities Institute (GPI).
There are some approaches to infinite ethics that might allow you to rank some different infinite outcomes, although not necessarily all of them; see the overtaking criterion. These approaches may depend on assumptions about the order of summation, though, which is perhaps undesirable for an impartial consequentialist. And without such order assumptions, conditionally convergent series can be made to sum to anything, or to diverge, just by reordering their terms (Riemann’s rearrangement theorem), which is not so nice.
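To illustrate the reordering issue, here’s a small Python sketch (mine, purely illustrative): it greedily reorders the alternating harmonic series, whose natural-order sum is ln 2, so that its partial sums approach a different target instead.

```python
import math

def rearranged_partial_sum(target, n_terms=100_000):
    """Greedily reorder the terms of the alternating harmonic series
    1 - 1/2 + 1/3 - 1/4 + ... so the partial sums approach `target`:
    add unused positive terms while below target, unused negative
    terms while above it. Every term is used at most once."""
    pos, neg = 1, 2  # next unused odd/even denominator
    total = 0.0
    for _ in range(n_terms):
        if total <= target:
            total += 1.0 / pos
            pos += 2
        else:
            total -= 1.0 / neg
            neg += 2
    return total

# Natural order converges to ln(2) ≈ 0.693147...
natural = sum((-1) ** (k + 1) / k for k in range(1, 100_001))
print(f"natural order:     {natural:.6f}  (ln 2 = {math.log(2):.6f})")
# The same terms, reordered, approach 1.5 instead.
print(f"reordered to 1.5:  {rearranged_partial_sum(1.5):.6f}")
```

The greedy rule works because the positive and negative terms each diverge on their own while individual terms shrink to zero, which is exactly the situation Riemann’s theorem exploits.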
My model here is riffing on Jones (2016); you might look there for how to solve the model.
Re infinite utility, Jones does say (fn 6): “As usual, ρ must be sufficiently large given growth so that utility is finite.”
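To unpack that condition (a standard result, stated in my notation rather than Jones’s): with discount rate $\rho$, CRRA utility $u(c) = c^{1-\gamma}/(1-\gamma)$ with $\gamma < 1$, and consumption growing at rate $g$, discounted utility is

$$\int_0^\infty e^{-\rho t}\, \frac{(c_0 e^{g t})^{1-\gamma}}{1-\gamma}\, dt = \frac{c_0^{1-\gamma}}{1-\gamma} \int_0^\infty e^{((1-\gamma) g - \rho) t}\, dt,$$

which converges exactly when $\rho > (1-\gamma) g$: the discount rate must outpace the growth rate of utility.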