This seems false, since the construction of AGI is probably an event we can influence. Having aligned AGI should reduce other x-risks permanently.
It could reduce other x-risks, but the hypothesis that it would lower all x-risks to almost zero for the rest of time seems like wishful thinking.
One of the interesting calculations from the paper: if the value of 1 century is v, and the current risk of extinction every century is 20%, and you invent an AGI that permanently halves this to 10% for the rest of time… you would only increase the expected value of the world from 4*v to 9*v. Definitely a good result, but pretty far from the astronomical result you might expect.
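The arithmetic behind those numbers is just a geometric series; here's a minimal sketch of it (my own reconstruction, not code from the paper):

```python
# With a constant per-century extinction risk p, the probability of
# surviving through century n is (1 - p)^n, so the expected value of
# the future, in units of v (the value of one century), is
#   EV = sum over n >= 1 of (1 - p)^n = (1 - p) / p   (geometric series)

def expected_centuries(p):
    """Expected value of the future in units of v, given per-century risk p."""
    return (1 - p) / p

# 20% risk per century gives 0.8/0.2 = 4v; 10% gives 0.9/0.1 = 9v.
print(expected_centuries(0.2), expected_centuries(0.1))
```

Halving the risk roughly doubles the expected value rather than multiplying it by orders of magnitude, which is the point of the example.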
What is a plausible source of x-risk that is 10% per century for the rest of time? It seems pretty likely to me that not long after reaching technological maturity, future civilization would reduce x-risk per century to a much lower level, because you could build a surveillance/defense system against all known x-risks, and not have to worry about new technology coming along and surprising you.
It seems that to get a constant 10% per century risk, you’d need some kind of existential threat for which there is no defense (maybe vacuum collapse), or for which the defense is so costly that the public goods problem prevents it from being built (e.g., no single star system can afford it on their own). But the likelihood of such a threat existing in our universe doesn’t seem that high to me (maybe 20%?), which I think upper bounds the long-term x-risk.
Curious how your model differs from this.
What does technological maturity mean?
“the attainment of capabilities affording a level of economic productivity and control over nature close to the maximum that could feasibly be achieved.” (Nick Bostrom (2013) ‘Existential risk prevention as global priority’, Global Policy, vol. 4, no. 1, p. 19.)