If you had to guess (and think it's appropriate to do so), what would you say the default assumptions behind current xRisk mitigation efforts in EA are?
Nice comments!

My guess would be Time of Perils, but with a risk decaying exponentially to 0 after it (instead of a low constant risk).
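Concretely, one way to write that guess down (notation mine, nothing from the post) is a hazard profile along the lines of

$$
r(t) =
\begin{cases}
r_h, & t \le T \quad \text{(Time of Perils)} \\
r_h \, e^{-\lambda (t - T)}, & t > T,
\end{cases}
$$

with decay rate $\lambda > 0$, as opposed to the constant post-peril alternative $r(t) = r_\ell$ for $t > T$.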
Your finding on convergence is, I think, very important, not least because it undercuts one of the most common criticisms of xRisk/longtermist work ("this assigns infinite value to future people, which justifies arbitrary moral harm to current people"), which just turns out not to hold under your models here.
Something similar to that critique (replacing "infinite" with "astronomically large", and "arbitrary" with "significant") could still hold if the risk decays to 0.
It's true there are other scenarios that would recover infinite value. And the proof fails, as mentioned in the convergence section, with changes like $r_\ell = 0$, or when the logistic cap $c \to \infty$ and we end up in the exponential case.
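To spell out why those are exactly the failure points, here is a sketch under my own simplifying assumptions (per-period value $v(t)$ capped at $c$ by the logistic model, risk never below a constant post-peril level $r_\ell$):

$$
\mathbb{E}[V] \;=\; \sum_{t=0}^{\infty} v(t) \prod_{s=0}^{t} \bigl(1 - r(s)\bigr)
\;\le\; \sum_{t=0}^{\infty} c \,(1 - r_\ell)^{t}
\;=\; \frac{c}{r_\ell}.
$$

The bound is finite only when $r_\ell > 0$ and $c < \infty$: with $r_\ell = 0$ the right-hand side becomes a divergent sum of $c$'s, and with $c \to \infty$ an exponentially growing $v(t)$ can outrun the geometric survival factor.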
All that said, it is plausible that the universe has a finite lifetime after all, which would provide that finite upper bound. Heat death, proton decay, or even just the amount of accessible matter could provide physical limits. It'd be great to see more discussion of this informed by up-to-date astrophysical theories.
Thanks for following up!

Personally, I do not think allowing the risk to decay to 0 is problematic. Over a sufficiently long timeframe, there will be evidential symmetry between the risk profiles of any two actions (e.g. maybe everything that is bound together will dissolve), so the expected value of mitigation will eventually reach 0. As a result, the expected cumulative value of mitigation always converges.
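As a toy illustration of that last step (entirely made-up numbers, and one caveat: the per-period benefit has to decay fast enough to be summable, not merely tend to 0; a $1/t$ tail would still diverge):

```python
import math

# Stylized parameters (hypothetical, purely for illustration).
c = 1.0        # cap on per-period value (the logistic cap)
delta0 = 0.1   # initial survival-probability gain from mitigation
tau = 1_000.0  # timescale on which evidential symmetry sets in

def benefit(t: int) -> float:
    """Expected per-period benefit of mitigation at time t.

    Assumed form: the survival-probability gain washes out
    exponentially as the risk profiles of the two actions converge.
    """
    return c * delta0 * math.exp(-t / tau)

running = 0.0
for t in range(100_000):  # 100 * tau, far past the decay timescale
    running += benefit(t)

# Geometric series: sum_t delta0 * exp(-t/tau) = delta0 / (1 - exp(-1/tau)).
analytic = c * delta0 / (1 - math.exp(-1.0 / tau))
print(f"cumulative expected value of mitigation: {running:.2f}")
print(f"analytic limit:                          {analytic:.2f}")
```

The running sum plateaus around $c \, \delta_0 \, \tau \approx 100$ rather than growing with the horizon, which is the convergence claim in miniature.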