You could generalize a bit further by looking at the behavior of:
The integral of the ratio (or difference) of the value of the world under the two interventions, $\int_0^\infty \frac{u(t)}{v(t)}\,dt$ or $\int_0^\infty \big(u(t)-v(t)\big)\,dt$. This integral could have a value even if the integral of each intervention is indefinite.
The limit of the ratio (or difference) of the integrals under the two interventions, $\lim_{t\to\infty}\frac{\int_0^t u(s)\,ds}{\int_0^t v(s)\,ds}$ or $\lim_{t\to\infty}\left(\int_0^t u(s)\,ds-\int_0^t v(s)\,ds\right)$. This could likewise have a value even if $\int_0^\infty u(t)\,dt$ isn’t defined.
I agree!
I didn’t want to get too distracted with these complications in the piece, but I’m sympathetic to these and other approaches to avoiding the technical issue of divergent integrals of value when studying longterm effects.
In the case in question (where $u(t)$ always equals $k\,v(t)$) we get an even stronger constraint: the ratio of progressively longer integrals doesn’t just limit to a constant, but equals a constant.
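Spelled out (using the same $u$, $v$, and $k$ as above): if $u(t) = k\,v(t)$ for all $t$, then for every finite horizon $T$,

$$\frac{\int_0^T u(t)\,dt}{\int_0^T v(t)\,dt} \;=\; \frac{k\int_0^T v(t)\,dt}{\int_0^T v(t)\,dt} \;=\; k,$$

so the ratio is exactly $k$ at every horizon, even if both integrals grow without bound as $T \to \infty$.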
There are some issues that come up with these approaches though. One is that they all tacitly assume that comparing the outcomes at each point in time is the right comparison. But suppose (contra my assumptions in the post) that the population was always half as high in one outcome as in the other. Then that outcome may be doing worse at every time, yet still have all the same people eventually come into existence and be equally good for all of them. Issues like this, where the ratio depends on which variable is being integrated over, don’t come up in the convergent-integral cases.
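As a toy numerical illustration of how the comparison can flip depending on whether you index by time or by person (all the specific numbers here are made up for illustration):

```python
import numpy as np

# Toy illustration: outcome B has half the birth rate of outcome A at every
# time, but the very same people eventually exist and are equally well off
# in both outcomes.

periods = 1000
births_A = np.full(periods, 10.0)   # people born per period in outcome A
births_B = np.full(periods, 5.0)    # people born per period in outcome B
welfare = 1.0                       # welfare per person, identical in A and B

# Time-indexed comparison: at every finite time horizon, B has accumulated
# half as much value as A.
value_A = np.cumsum(births_A) * welfare
value_B = np.cumsum(births_B) * welfare
print(value_B[-1] / value_A[-1])    # -> 0.5

# Person-indexed comparison: the first N people are equally well off in both
# outcomes (B just takes twice as long to produce them), so the ratio is 1.
N = 4000
print((N * welfare) / (N * welfare))  # -> 1.0
```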
All that said, integrating to infinity in economic modelling is presumably not to be taken literally, and for any finite time horizon, no matter how mind-bendingly large, my result that the discounting function doesn’t matter still holds (even if the infinite integral were to diverge).
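One way to write that finite-horizon claim in the same notation (my paraphrase, assuming as above that $u(t) = k\,v(t)$): for any discounting function $d(t) \ge 0$ and any finite horizon $T$,

$$\frac{\int_0^T d(t)\,u(t)\,dt}{\int_0^T d(t)\,v(t)\,dt} \;=\; \frac{k\int_0^T d(t)\,v(t)\,dt}{\int_0^T d(t)\,v(t)\,dt} \;=\; k,$$

so the comparison between the two interventions comes out the same whatever discounting function is used.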
One key issue with this model is that I expect that the majority of x-risk, from my perspective, doesn’t correspond to extinction and instead corresponds to some undesirable group ending up with control over the long-run future (either AIs seizing control (AI takeover) or undesirable human groups).
So, I would reject:
You might be able to recover things by supposing that n(t) gets multiplied by some constant in the x-risk case, maybe?
(Further, even if AI takeover does result in extinction, there will probably still be some value due to acausal trade, and potentially some value due to the AI’s preferences.)
(Regardless, I expect that if you think the singularity is plausible, the effects of discounting are more complex because we could very plausibly have >10^20 experience years per year within 5 years of the singularity due to e.g. building a Dyson sphere around the sun. If we just look at AI takeover, ignore (acausal) trade, and assume for simplicity that AI preferences have no value, then it is likely that the vast, vast majority of value is contingent on retaining human control. If we allow for acausal trade, then the discount rates of the AI will also be important to determine how much trade should happen.)
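A rough back-of-envelope sketch of where a figure like 10^20 experience-years per year could come from (the per-mind power budget is an assumed illustrative number, not something from the comment; only the Sun’s luminosity is a standard figure):

```python
# Rough sanity check of the ">10^20 experience-years per year" figure for a
# Dyson-sphere scenario. The per-mind power budget is an assumed, illustrative
# number; only the solar luminosity is a standard physical constant.

solar_luminosity_w = 3.8e26     # total power output of the Sun, in watts
power_per_mind_w = 1e4          # assumed watts per human-scale digital mind,
                                # including hardware overhead (the biological
                                # brain itself runs on roughly 20 W)

concurrent_minds = solar_luminosity_w / power_per_mind_w

# Assuming each mind runs at roughly real-time speed, this is also the number
# of subjective experience-years produced per year.
print(f"{concurrent_minds:.1e} experience-years per year")   # ~3.8e22 > 1e20
```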
(Separately, pure temporal discounting seems pretty insane and incoherent with my view of how the universe works.)