One key issue with this model is that, from my perspective, the majority of x-risk doesn’t correspond to extinction; instead, it corresponds to some undesirable group ending up with control over the long-run future (either AIs seizing control (AI takeover) or undesirable human groups).
So, I would reject:
> We can model extinction here by n(t) going to zero.
You might be able to recover the model by instead supposing that an x-risk event multiplies n(t) by some constant factor (rather than sending it to zero); a rough sketch of this follows.
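To make that concrete, here is a minimal sketch in Python. All parameter values and the flat-population trajectory are my own illustrative assumptions, not part of the original model; the point is just that extinction is the special case where the multiplier is 0, while an undesirable group capturing most (but not all) of the future corresponds to a small nonzero multiplier.

```python
# Minimal sketch: x-risk as a constant multiplier on n(t), with
# extinction (multiplier = 0) as a special case. Illustrative only.
import numpy as np

def discounted_value(n, delta, dt=0.1, t_max=100.0,
                     risk_time=None, capture_factor=1.0):
    """Discounted total value of a population trajectory n(t).

    If an x-risk event occurs at `risk_time`, all later n(t) is multiplied
    by `capture_factor`: 0.0 recovers extinction, while e.g. 0.01 models an
    undesirable group capturing most of the future's value.
    """
    ts = np.arange(0.0, t_max, dt)
    values = n(ts) * np.exp(-delta * ts)   # pure temporal discounting at rate delta
    if risk_time is not None:
        values[ts >= risk_time] *= capture_factor
    return values.sum() * dt

n = lambda t: 1e10 * np.ones_like(t)  # flat population trajectory, illustrative

print(discounted_value(n, delta=0.01))                                      # no x-risk
print(discounted_value(n, delta=0.01, risk_time=30, capture_factor=0.0))    # extinction
print(discounted_value(n, delta=0.01, risk_time=30, capture_factor=0.01))   # takeover
```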
(Further, even if AI takeover does result in extinction, there will probably still be some value due to acausal trade, and potentially some value due to the AI’s preferences.)
(Regardless, I expect that if you think the singularity is plausible, the effects of discounting are more complex, because we could very plausibly have >10^20 experience-years per year within 5 years of the singularity due to, e.g., building a Dyson sphere around the sun. If we just look at AI takeover, ignore (acausal) trade, and assume for simplicity that AI preferences have no value, then it is likely that the vast, vast majority of value is contingent on retaining human control. If we allow for acausal trade, then the discount rates of the AI will also matter for determining how much trade should happen.)
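As a rough back-of-the-envelope on that first point, using the ~10^20 experience-years/year figure above and otherwise made-up numbers (the current-population baseline, the horizon, and the continuous discount rates are all my own assumptions), even steep pure temporal discounting leaves the post-singularity term dominant:

```python
# Back-of-the-envelope: share of discounted value that comes after a
# singularity at t = 5 years, under various continuous discount rates.
import numpy as np

PRE, POST, T = 7e9, 1e20, 5.0  # experience-years/year before/after; singularity at t = T

def value(delta, horizon=1000.0):
    """Closed-form discounted experience-years before and after the singularity."""
    if delta == 0:
        return PRE * T, POST * (horizon - T)
    pre = PRE * (1 - np.exp(-delta * T)) / delta
    post = POST * (np.exp(-delta * T) - np.exp(-delta * horizon)) / delta
    return pre, post

for delta in (0.0, 0.05, 1.0, 5.0):
    pre, post = value(delta)
    print(f"delta={delta:>4}: post-singularity share = {post / (pre + post):.6f}")
```

On these assumptions, the pre-singularity era only starts to dominate once the continuous discount rate approaches roughly 5/year (well over 99% annual discounting), which is why nearly all value rides on what happens after the singularity.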
(Separately, pure temporal discounting seems pretty insane and incoherent with my view of how the universe works.)