Estimation of existential risk refers to the methods used to assess the probability of human extinction and other existential catastrophes.
Further reading
Beard, Simon, Thomas Rowe & James Fox (2020) An analysis and evaluation of methods currently used to quantify the likelihood of existential hazards, Futures, vol. 115, pp. 1–14.
Besiroglu, Tamay (2019) Ragnarök Series — results so far, Metaculus, October 15.
Bostrom, Nick (2002) Existential risks: Analyzing human extinction scenarios and related hazards, Journal of Evolution and Technology, vol. 9.
See especially the section “Assessing the Probability of Existential Risks”, which discusses methods of existential risk estimation.
Goth, Aidan, Stephen Clare & Christian Ruhl (2022) Professor Philip Tetlock’s research on improving judgments of existential risk, Founders Pledge, March 8.
Karger, Ezra, Pavel D. Atanasov & Philip Tetlock (2022) Improving judgments of existential risk: Better forecasts, questions, explanations, policies, SSRN Electronic Journal.
Muehlhauser, Luke (2019) How feasible is long-range forecasting?, Open Philanthropy, October 10.
Sandberg, Anders & Nick Bostrom (2008) Global catastrophic risks survey, Technical Report #2008-1, Future of Humanity Institute, University of Oxford.
Tonn, Bruce & Dorian Stiefel (2013) Evaluating methods for estimating existential risks, Risk Analysis, vol. 33, pp. 1772–1787.
Related entries
AI forecasting | anthropic shadow | existential risk | forecasting | long-range forecasting
I’ve removed the following from the human extinction entry:
I think this could be incorporated here, though it’s a bit outdated and superseded by better estimates.
It’s possible that this entry is redundant since we already have entries on Existential risk and on Forecasting, so e.g. someone could just filter for both of those tags at once and get something similar to filtering for this tag.
But:
People might not think to filter for two tags at once
People might also use a single tag/entry as a collection of posts on a topic, e.g. for sending to interesting people, and a combo of two tags doesn’t seem to work properly for that purpose
That’s all just about the tagging functionality, not the wiki functionality. As for the latter, this seems to me like an important and large enough topic to warrant its own entry.
The fact we have a specific entry for “AI forecasting” rather than just relying on the intersection of “AI alignment” (or whatever) and “Forecasting” seems in line with having a specific entry for this topic as well.
Some alternative name options:
Existential risk estimates
Estimation of existential risks
(Various permutations of these sorts of phrases)
I would prefer ‘existential risk estimates’ over ‘estimating existential risks’.
EDIT: I realize I also prefer ‘estimation of existential risks’ over the two above.
Intuitively, it seems Wikipedia and other reference works tend to prefer nominalized verbs over gerundive nominalizations (see here for discussion and examples of this distinction). So I would be inclined to adopt this as our general policy, though this is based more on my subjective sense of how reference works name articles than on any explicit statement, which I wasn’t able to find after a few minutes of research (if anyone would like to look into this further, I’d be happy to defer to their findings).
Ok, I have no strong view, so I’ll change it to “estimation of existential risks”.
I think that, compared to “existential risk estimates”, the new name captures somewhat less intuitively those posts that simply give some estimates rather than discussing the process, pros, cons, etc. of existential risk estimation. But I think “existential risk estimates” would have the opposite problem. There’s probably no perfectly ideal name if we want the tag to capture both types of posts (which I currently do), but all of these names are probably “good enough” anyway.
Ironically, I think the one option we can now rule out as dominated by the others is my original choice of “Estimating existential risks”.
I don’t have time to write the text for this entry at the moment. Maybe I could in a few weeks, but I’m not sure, and other editors should definitely feel free to go on without me!
But I think the text could draw on some of the tagged posts and the items under Further reading. In particular, if I were writing this, I’d probably:
Draw heavily on the transcript of my lightning talk on this
Draw on other elements of my database post
Draw on Beard et al., as well as Baum’s reply
Draw on Muehlhauser’s post on long-range forecasting
I’d also make sure to explicitly note that this is not necessarily just about extinction, and conversely that many of the tagged posts will also/only discuss estimates of outcomes potentially less extreme than existential catastrophes (e.g. global catastrophic risks).