Hi,
I have noticed that the CCM and this post of the CURVE sequence mention existential risk in some places, and extinction risk in others (emphasis mine):
In the CCM:
“Assuming that the intervention does succeed, we expect that it reduces absolute existential risk by a factor between 0.001 and 0.05 before it is rendered obsolete”.
“In this model, civilization will experience a series of “risk eras”. We face some fixed annual probability of extinction that lasts for the duration of a risk era”.
In How bad would human extinction be?:
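For concreteness, here is a minimal sketch of how I read the risk-era structure described in the second CCM quote above. This is my own illustration, not the CCM’s actual code, and the era durations and annual probabilities below are placeholders rather than the CCM defaults:

```python
# Illustrative sketch of a "risk era" structure: each era has a fixed annual
# extinction probability that lasts for the era's duration.
# The numbers below are placeholders, not the CCM's default parameters.
risk_eras = [
    (0.002, 50),    # (annual extinction probability, duration in years)
    (0.001, 450),
    (0.0005, 500),
]

survival_probability = 1.0
for annual_probability, duration_years in risk_eras:
    survival_probability *= (1 - annual_probability) ** duration_years

print(f"Extinction probability over all eras: {1 - survival_probability:.3f}")
```

Whether that fixed annual probability is meant to refer to human extinction specifically, or to existential risk more broadly, is exactly the ambiguity I am pointing at below.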
Did you intend to use the two terms interchangeably? I think that could lead to confusion and inconsistencies:
Human extinction does not necessarily imply an existential catastrophe. It would in 2023, but it might be for the better not so far from now (e.g. in 1000 years, i.e. at the default end of the CCM’s 3rd risk era). For example, it could be caused by a benevolent AI (or an alien civilisation) which correctly realises humans are not the best way to increase welfare.
An existential catastrophe does not necessarily imply human extinction. For example, we could have long-term totalitarianism enabled by powerful AI.
Did you want “extinction” to be interpreted as something like “extinction of the best steerers of the future” (arguably humans now, and benevolent AI in the future), instead of “extinction of humans”? I guess so, but then:
The title of the post I mentioned is not ideal.
There is a risk that people interpret “extinction” as “human extinction”, even if humans are no longer the best steerers of the future (e.g. in the default 3rd and 4th risk eras).
Defining “best steerers of the future” seems pretty crucial.
I would still not use “existential” and “extinction” interchangeably; I would go with “extinction”: “I think the concept of existential risk is sufficiently vague for it to be better to mostly focus on clearer metrics (e.g. a suffering-free collapse of all value would be maximally good for negative utilitarians, but would be an existential risk for most people). For example, extinction risk, probability of a given drop in global population / GDP / democracy index, or probability of global population / GDP / democracy index remaining smaller than the previous maximum for a certain time”.