Definitely quite a difference (just to check, are the Metaculus numbers the likelihood of that risk being picked as the most likely one, rather than their likelihood ratings?).
I was struck, though not surprised, by the very strong political differences for the risks. It suggests to me that some people might be signalling 'what we should be most worried about right now', or perhaps even picking what a 'good person' on their side is supposed to pick, rather than carefully thinking through which risk is actually most likely to cause extinction. That is more or less the opposite of how I imagine a forecaster would approach such a question.
Just to check, are the Metaculus numbers the likelihood of that risk being picked as the most likely one, rather than their likelihood ratings?
No, this is not apples to apples. Metaculus is predicting the probability of the actual risk, whereas the "public (RP)" figures are the percentage of respondents who think that risk is the most likely one, regardless of the probability they would give it.
While there are some Metaculus questions that ask for predictions of the actual risk, the ones I selected are all conditional, of the form "If a global catastrophe occurs, will it be due to X?". So they should be more comparable to the RP question "Which of the following do you think is most likely to cause human extinction?"
Given the differences in the questions, it doesn't seem correct to compare the raw probabilities across them; our question was also specifically about extinction rather than just a catastrophe. That said, there may be some truth to this implying a difference between the public estimates and the Metaculus estimates if we rank them: AI risk comes out top in the Metaculus ratings and bottom in the public responses, and climate change also shows a sizable rank difference.
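As a rough illustration of what such a rank comparison could look like, here is a minimal Python sketch; the probabilities and shares are made-up placeholders, not the actual Metaculus predictions or RP survey figures.

```python
# A rough sketch of the rank comparison described above. All numbers are
# made-up placeholders for illustration only, NOT the actual Metaculus
# community predictions or the RP survey shares.
from scipy.stats import spearmanr

risks = ["AI", "nuclear war", "engineered pandemic", "climate change", "asteroid"]

# Hypothetical Metaculus-style conditional probabilities, P(cause = X | global catastrophe).
metaculus_p = [0.30, 0.25, 0.20, 0.05, 0.02]

# Hypothetical public shares: fraction of respondents picking X as most likely to cause extinction.
public_share = [0.05, 0.25, 0.20, 0.35, 0.15]

# Spearman's rho compares the two orderings while ignoring the incomparable raw scales.
rho, p_value = spearmanr(metaculus_p, public_share)
print(f"Spearman rank correlation between the two orderings: {rho:.2f}")
```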
One wrinkle in taking the rankings like this is that people were only allowed to pick one item in our questions, so the rankings could look different if people had actually rated each risk and we had ranked those ratings. This would be the case if, for example, every other risk is more likely than AI to be someone's single top pick, but many people have AI as their second pick: AI would then have a very high ordinal ranking overall that we can't see from the distribution of top picks alone (illustrated in the sketch below).
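Here is a minimal sketch of that point, with entirely hypothetical respondent rankings (the risks and profiles are illustrative, not the survey data): a risk that is nobody's single top pick can still come out best once full rankings are averaged.

```python
from collections import Counter

risks = ["AI", "nuclear war", "climate change", "pandemic"]

# Hypothetical respondents: each list is one person's full ranking, most likely cause first.
respondents = [
    ["nuclear war", "AI", "climate change", "pandemic"],
    ["climate change", "AI", "nuclear war", "pandemic"],
    ["pandemic", "AI", "climate change", "nuclear war"],
    ["nuclear war", "AI", "pandemic", "climate change"],
]

# What a "pick the single most likely" question sees: the distribution of top picks.
top_picks = Counter(ranking[0] for ranking in respondents)
top_share = {risk: top_picks.get(risk, 0) / len(respondents) for risk in risks}
print("share of top picks:", top_share)    # AI gets 0% of top picks

# What full ratings would reveal: average ordinal position (1 = most likely).
mean_rank = {risk: sum(ranking.index(risk) + 1 for ranking in respondents) / len(respondents)
             for risk in risks}
print("mean ordinal rank:", mean_rank)     # AI has the best (lowest) mean rank, 2.0
```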