> the lower the level of x-risk, the more valuable it is to reduce x-risk by a fixed proportion
Do you mean "the lower the level of x-risk per century, the more valuable it is to reduce x-risk in a particular century by a fixed proportion"? And this is in a model where the level of existential risk per century is the same across all centuries, right? Given that interpretation and that model, I see how your claim is true.
But the lower the total level of x-risk (across all time) is, the less valuable it is to reduce it by a fixed proportion, I think. E.g., if the total risk is 10%, that probably reduces the expected value of the long-term future by something like 10%. (Though it also matters what portion of the possible good stuff might happen already before a catastrophe happens, and I haven't really thought about this carefully.) If we reduce the risk to 5%, we boost the EV of the long-term future by something like 5%. If the total risk had been 1%, and we reduced it to 0.5%, we'd have boosted the EV of the future by only something like 0.5%. Would you agree with that?
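To make that arithmetic explicit, here's a minimal sketch of the toy model I have in mind (my own assumption, not something from your post): a catastrophe forfeits essentially all future value, so a total risk of p cuts the future's EV by a factor of (1 − p), setting aside the caveat above about value accrued before a catastrophe.

```python
# Toy model: total existential risk p forfeits all future value,
# so EV of the future = V * (1 - p), where V is the future's value
# if no catastrophe ever occurs.

def ev_gain_from_halving(p_total, V=1.0):
    # Halving total risk raises EV from V*(1-p) to V*(1-p/2),
    # a gain of V * p/2.
    return V * (1 - p_total / 2) - V * (1 - p_total)

print(ev_gain_from_halving(0.10))  # 10% -> 5%:  gain ~ 0.05 * V
print(ev_gain_from_halving(0.01))  # 1% -> 0.5%: gain ~ 0.005 * V
```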
Also, one could contest the idea that we should assume the existential risk level per century "starts out" the same in each century (before we intervene). I think people like Ord typically believe that:
1. existential risk is high over the next century/few centuries due to particular developments that may occur (e.g., transition to AGI)
2. there's no particular reason to assume this risk level means there'll be a similar risk level in later centuries
3. at some point, we'll likely reach technological maturity
4. if we've gotten to that point without a catastrophe, existential risk from then on is probably very low[1], and very hard to reduce
Given beliefs 1 and 2, if we learn the next few centuries are less risky than we thought, that doesn't necessarily affect our beliefs about how risky later centuries will be. Thus, it doesn't necessarily increase how long we expect civilisation to last (without catastrophe) conditional on surviving these centuries, or how valuable reducing the x-risk over these next few centuries is. Right?
And given beliefs 3 and 4, we have the idea that reducing existential risk is much more tractable now than it will be in the far future.
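Here's a minimal sketch of how I'm picturing beliefs 1-4 (my own toy numbers and parameter names, nothing from your post): per-century risk is r_near for the first K "perilous" centuries and a much lower r_late thereafter. The conditional value of the future, given that we survive the perils, is set by r_late alone; learning that r_near is lower changes how much survival probability is on the table, but not that conditional value.

```python
# Toy model of beliefs 1-4: risk r_near per century for K perilous
# centuries, then r_late per century forever after; each century
# survived is worth v = 1.

def ev_given_surviving_perils(r_late, K=3, v=1.0):
    # Conditional on surviving the K perilous centuries, we bank K
    # centuries of value plus a geometric tail whose expected length
    # (~1/r_late centuries) depends only on r_late; r_near never enters.
    return K * v + v * (1 - r_late) / r_late

def value_of_removing_perilous_risk(r_near, r_late, K=3):
    # Eliminating perilous-period risk raises P(reach the safe regime)
    # from (1 - r_near)**K to 1; the gain is that probability increase
    # times the conditional value of the future.
    return (1 - (1 - r_near) ** K) * ev_given_surviving_perils(r_late, K)

# Learning the perilous centuries are safer (r_near 0.2 -> 0.1) shrinks
# the survival probability at stake, but leaves the conditional value
# of the future (~1002 here) untouched:
print(value_of_removing_perilous_risk(0.2, 0.001))  # ~489
print(value_of_removing_perilous_risk(0.1, 0.001))  # ~272
```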
(Thanks for the post, I found it interesting.)