This is very fair criticism and I agree.
For some reason, when writing "order of magnitude," I was thinking of existential risks with a 0.1% or 1% chance of happening being multiplied into the 1-10% range (e.g., nuclear war). However, I wasn't considering many of the existential risks I was actually discussing (biosafety, AI safety, etc.) - it would be ridiculous for AI safety risk to be multiplied from 10% to 100%.
I think your estimate - that a great power war increases total existential risk by 10% - is much fairer than mine. In response to your feedback, I've modified my EA Forum post to state that a 10% increase in total existential risk is a fair estimate under expected climate politics scenarios, citing Toby Ord's estimates of the increase in existential risk under great power conflict.
Thanks a ton for the thoughtful feedback - it's greatly appreciated!