Thanks for writing this, I like that it’s short and has a section on subjective probability estimates.
1. What would you class as long-term x-risk (reduction) vs. near-term? Is it entirely about the timescale rather than the approach? E.g., hypothetically, very fast institutional reform could be near-term, and AI safety field-building research in academia could be long-term if you thought it would pay off very late. Or do you think the long-term stuff necessarily has to be investment or institutional reform?
2. Is the main crux for ‘Long-term x-risk matters more than short-term risk’ around how transformative the next two centuries will be? If we start approaching technological maturity, x-risk might decrease significantly. Or do you think we might reach technological maturity, and x-risk will be low, but we should still work on reducing it?
3. What do you think about the assumption that ‘efforts can reduce x-risk by an amount proportional to the current risk’? That might be appropriate for medium levels of risk, e.g. 1-10%, but if risk is small, e.g. 0.01-1%, it could get very difficult to halve the risk.
Thanks for the questions!

I don’t have strong beliefs about what could reduce long-term x-risk. Longtermist institutional reform just seemed like the best idea I could think of.
As I said in the essay, the lower the level of x-risk, the more valuable it is to reduce x-risk by a fixed proportion. The only way you can claim that reducing short-term x-risk matters more is by saying that it will become too intractable to reduce x-risk below a certain level, and that we will reach that level at some point in the future (if we survive long enough). I think this claim is plausible. But simply claiming that x-risk is currently high is not sufficient to prioritize reducing current x-risk over long-term x-risk, and in fact argues in the opposite direction.
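For concreteness, here is a minimal sketch of how I read that claim, under the simple model (discussed further down-thread) where the per-century risk is constant; the function names and numbers are just my own illustration, not from the essay:

```python
# Toy model, not from the essay: assume a constant probability r of existential
# catastrophe each century, and one unit of value per century survived.
# Expected future value is then the geometric sum (1 - r) / r.

def expected_future_value(r, value_per_century=1.0):
    """Expected value of the future under a constant per-century risk r."""
    return value_per_century * (1 - r) / r

def gain_from_halving(r):
    """Extra expected value from permanently cutting the per-century risk in half."""
    return expected_future_value(r / 2) - expected_future_value(r)

print(gain_from_halving(0.10))  # ~10 centuries of value gained
print(gain_from_halving(0.01))  # ~100 centuries of value gained
# The same proportional reduction buys more expected value when the baseline risk is lower.
```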
I mentioned this in my answer to #2: I think it’s more likely that reducing x-risk by a fixed proportion becomes more difficult as x-risk gets lower. But others (e.g., Yew-Kwang Ng and Tom Sittler) have used the assumption that reducing x-risk by a fixed proportion has constant difficulty.
(Thanks for the post, I found it interesting.)

> the lower the level of x-risk, the more valuable it is to reduce x-risk by a fixed proportion
Do you mean “the lower the level of x-risk per century, the more valuable it is to reduce x-risk in a particular century by a fixed proportion”? And this is in a model where the level of existential risk per century is the same across all centuries, right? Given that interpretation and that model, I see how your claim is true.
But the lower the total level of x-risk (across all time) is, the less valuable it is to reduce it by a fixed proportion, I think. E.g., if the total risk is 10%, that probably reduces the expected value of the long-term future by something like 10%. (Though it also matters what portion of the possible good stuff might happen before a catastrophe, and I haven’t really thought about this carefully.) If we reduce the risk to 5%, we boost the EV of the long-term future by something like 5%. If the total risk had been 1%, and we reduced the risk to 0.5%, we’d have boosted the EV of the future by less. Would you agree with that?
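A small arithmetic sketch of that point, under the simplifying assumption that the future’s expected value scales linearly with the probability that no existential catastrophe ever occurs (numbers are purely illustrative):

```python
# Toy arithmetic only: assume the future's expected value scales linearly with the
# probability of never suffering an existential catastrophe.

def ev_boost(total_risk_before, total_risk_after, full_future_value=1.0):
    """Gain in expected value from cutting the total (all-time) x-risk."""
    return (total_risk_before - total_risk_after) * full_future_value

print(ev_boost(0.10, 0.05))   # ~0.05  -> halving a 10% total risk buys ~5% of the future's value
print(ev_boost(0.01, 0.005))  # ~0.005 -> halving a 1% total risk buys only ~0.5%
```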
Also, one could contest the idea that we should assume the existential risk level per century “starts out” the same in each century (before we intervene). I think people like Ord typically believe that:
1. existential risk is high over the next century/few centuries due to particular developments that may occur (e.g., transition to AGI)
2. there’s no particular reason to assume this risk level means there’ll be a similar risk level in later centuries
3. at some point, we’ll likely reach technological maturity
4. if we’ve gotten to that point without a catastrophe, existential risk from then on is probably very low[1], and very hard to reduce
Given beliefs 1 and 2, if we learn the next few centuries are less risky than we thought, that doesn’t necessarily affect our beliefs about how risky later centuries will be. Thus, it doesn’t necessarily increase how long we expect civilisation to last (without catastrophe) conditional on surviving these centuries, or how valuable reducing the x-risk over these next few centuries is. Right?
And given beliefs 3 and 4, we have the idea that reducing existential risk is much more tractable now than it will be in the far future.
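To illustrate how beliefs 1 and 2 change the picture, here is a toy two-phase model (my own made-up numbers, just a sketch): a risky transition period with total risk p_near, followed by technological maturity with a much lower, independent per-century risk r_late.

```python
# Toy two-phase model (illustrative numbers, not from the comment above):
# survive a risky transition period with probability (1 - p_near), then face a
# constant, much lower per-century risk r_late at technological maturity.

def ev_future(p_near, r_late, value_per_century=1.0):
    """Expected value: survive the transition, then the usual geometric sum."""
    return (1 - p_near) * value_per_century * (1 - r_late) / r_late

def value_of_halving_near_term_risk(p_near, r_late):
    return ev_future(p_near / 2, r_late) - ev_future(p_near, r_late)

# The value of the future *conditional on surviving the transition* depends only on
# r_late, so revising p_near downward doesn't change it:
print(ev_future(0.0, 1e-4))  # ~9999 centuries, whatever p_near turns out to be

# And the value of halving the near-term risk scales with p_near, so in this model
# (unlike the constant-risk model) lower near-term risk makes the reduction *less* valuable:
print(value_of_halving_near_term_risk(0.20, 1e-4))  # ~1000 centuries of value
print(value_of_halving_near_term_risk(0.02, 1e-4))  # ~100 centuries of value
```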