Hi,
I have noticed the CCM and this post of the CURVE sequence mention existential risk in some places, and extinction risk in others (emphasis mine):
In the CCM:
“Assuming that the intervention does succeed, we expect that it reduces absolute existential risk by a factor between 0.001 and 0.05 before it is rendered obsolete”.
“In this model, civilization will experience a series of “risk eras”. We face some fixed annual probability of extinction that lasts for the duration of a risk era”.
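(As an aside, here is a minimal sketch of how I am reading the “risk era” structure described in that quote. The era lengths and annual extinction probabilities below are made up for illustration, not the CCM’s actual defaults.)

```python
# Minimal sketch of the "risk era" structure as I understand it: each era has
# a length in years and a fixed annual probability of extinction; cumulative
# survival is the product of per-year survival probabilities across eras.
# Era lengths and annual risks here are hypothetical, not the CCM's defaults.
risk_eras = [
    (10, 0.01),     # (years in era, annual extinction probability)
    (90, 0.001),
    (900, 0.0001),
]

survival = 1.0
for years, annual_risk in risk_eras:
    survival *= (1 - annual_risk) ** years

print(f"Probability of surviving all eras: {survival:.3f}")
print(f"Cumulative extinction probability: {1 - survival:.3f}")
```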
In How bad would human extinction be?:
Did you intend to use the 2 terms interchangeably? I think that can potentially lead to confusion and inconsistencies:
Human extinction does not necessarily imply an existential catastrophe. It would imply one in 2023, but not so far from now (e.g. in 1000 years, i.e. at the default end of the CCM’s 3rd risk era) it may even be for the better. It may be caused by a benevolent AI (or an alien civilisation) which correctly realises humans are not the best way to increase welfare.
An existential catastrophe does not necessarily imply human extinction. For example, we could have long-term totalitarianism enabled by powerful AI.
Did you want “extinction” to be interpreted as something like “extinction of the best steerers of the future” (arguably, humans now, benevolent AI in the future), instead of “extinction of humans”? I guess so, but then:
The title of the post I mentioned is not ideal.
There is a risk people are interpreting “extinction” as “human extinction”, even if humans are no longer the best steerers of the future (e.g. in the 3rd and 4th default risk eras).
Defining “best steerers of the future” seems pretty crucial.
I would still not use “existential” and “extinction” interchangeably. I would go for “extinction”. “I think the concept of existential risk is sufficiently vague for it to be better to mostly focus on clearer metrics (e.g. a suffering-free collapse of all value would be maximally good for negative utilitarians, but would be an existential risk for most people). For example, extinction risk, probability of a given drop in global population / GDP / democracy index, or probability of global population / GDP / democracy index remaining smaller than the previous maximum for a certain time”.