Great database! Your estimates are presented as numerical values similar to probabilities. Are these actually probabilities, and if so, are they frequentist or Bayesian probabilities? And more generally: how can we define the “probability of the end of the world”?
I believe that all the numbers I’ve shown were probabilities. I’m pretty sure they were always presented in the original source as percentages, decimals, or “1 in [number]”.
What was the alternative you had in mind? I’ve seen some related estimates presented like “This can be expected to happen every x years”, or “There’s an x year return rate”. Is that what you were thinking of? Several such estimates are given in Beard et al.’s appendix. But I don’t think any are in the database. That wasn’t primarily because they’re not quite probabilities (or not quite the right type), but rather because they were typically estimates of things like the chance of an asteroid impact of a certain size, rather than direct estimates of the chance of an existential catastrophe. (It’s possible the asteroid impact wouldn’t cause such a catastrophe.)
As for whether the probabilities are frequentist or Bayesian, I think many sources weren’t explicit about that. But I generally assumed they were meant as Bayesian, though they might have been based on frequentist probabilities. E.g., Toby Ord’s estimates of natural risks seem to be based on the frequency of those events in the past, but they’re explicitly about what’ll happen in the next 100 years, and they’re adjusted based on whether our other knowledge suggests this period is more or less likely than average to see, e.g., an asteroid impact. But to be certain how a given number was meant to be interpreted, one might have to check the original source (which I provide links or references to in the database).
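To make that concrete, here’s a rough sketch of how a historical frequency could be turned into a probability for the next 100 years. This assumes a Poisson process with a constant underlying rate, and the numbers are purely illustrative; it’s not necessarily the calculation Ord or the other sources actually did.

```python
import math

def prob_in_horizon(return_period_years: float, horizon_years: float = 100.0) -> float:
    """Convert an average return period ('one event every X years, historically')
    into the probability of at least one event over a given horizon,
    assuming events follow a Poisson process with a constant rate."""
    expected_events = horizon_years / return_period_years  # expected number of events in the horizon
    return 1.0 - math.exp(-expected_events)                # P(at least one event)

# Purely illustrative: an event with a ~100-million-year return period
# has roughly a 1-in-a-million chance of occurring in the next century.
print(prob_in_horizon(100_000_000))  # ~1e-06
```

A Bayesian estimate would then adjust that base rate up or down in light of other evidence (e.g., asteroid surveys suggesting no large impactor is currently on course), which is roughly the kind of modification I had in mind above.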
Yes, the probabilities of natural catastrophes could be presented as frequentist probabilities, but some estimates are based on logical uncertainty about claims like “AGI is possible”.
Also, are these probabilities conditioned on “all possible prevention measures are taken”? If so, they are final probabilities which can’t be made any lower.
In the main sheet, the estimates are all unconditional (unless I made mistakes). They’re just people’s estimates of the probabilities that things will actually occur. There’s a separate sheet for conditional estimates.
So presumably people’s estimates of the chances these catastrophes occur would be lower conditional on people putting in unexpectedly much effort to solve the problems.
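To illustrate the relationship (with made-up numbers, not anything from the database): an unconditional estimate can be seen as an average of the conditional estimates, weighted by how likely each scenario is to actually occur.

```python
# Hypothetical numbers, purely for illustration:
p_cat_given_extra_effort = 0.02       # risk if unexpectedly much effort is put in
p_cat_given_business_as_usual = 0.10  # risk if effort stays roughly where it is now
p_extra_effort = 0.30                 # estimated chance that the extra effort happens

# Law of total probability: the unconditional estimate averages over the scenarios.
p_cat = (p_cat_given_extra_effort * p_extra_effort
         + p_cat_given_business_as_usual * (1 - p_extra_effort))
print(p_cat)  # 0.076 -- between the two conditional estimates
```

So the unconditional number sits between the “extra effort” and “business as usual” conditional numbers, which is why conditioning on unexpectedly much effort would push it lower.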
Also, here’s a relevant quote from The Precipice, which helps contextualise Ord’s estimates. He writes that his estimates already:
incorporate the possibility that we get our act together and start taking these risks very seriously. Future risks are often estimated with an assumption of ‘business as usual’: that our levels of concern and resources devoted to addressing the risks stay where they are today. If I had assumed business as usual, my risk estimates would have been substantially higher. But I think they would have been misleading, overstating the chance that we actually suffer an existential catastrophe. So instead, I’ve made allowances for the fact that we will likely respond to the escalating risks, with substantial efforts to reduce them.
The numbers therefore represent my actual best guesses of the chance the threats materialise, taking our responses into account. If we outperform my expectations, we could bring the remaining risk down below these estimates. Perhaps one could say that we were heading towards Russian roulette with two bullets in the gun, but that I think we will remove one of these before it’s time to pull the trigger. And there might just be time to remove the last one too, if we really try.