Let me restate the “5% means smaller” case, because I don’t think you are responding to the strongest version of the argument here.
The concern is that these are cases of anchoring bias, and that the bias is inherent in the methodology because you are asking in terms of percentages. The vast majority of percentages we encounter fall in the 1–99% range, and I’m guessing that in the actual questionnaire, respondents had been answering other percentage questions in that same range. Answering with something like 0.0001%, for a question where they are just guessing and have not done any precise calculation, does not come naturally to people.
So when someone has the viewpoint that AI x-risk is “extremely unlikely but not impossible”, and they are asked to express that in percentage terms, the answer is anchored to the 1–99% range, so they give something that merely seems “extremely low” when you are thinking in percentage terms.
But as the other paper showed, when you switch to talking about 1 in n odds, suddenly people are no longer anchored to 1–99%. When placed next to “1 in 300 thousand odds of asteroid strike”, “1 in 20 odds” sounds incredibly high, not extremely low. This explains why people dropped their estimates by six orders of magnitude in this framing compared to the percentage one. In an odds framework, “1 in a million” feels more like “extremely unlikely but not impossible”.
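To make the equivalence concrete: “1 in 20” and “5%” are the same probability, just as “1 in a million” and “0.0001%” are; only the framing differs. A minimal sketch of the conversion (function names are my own, purely for illustration):

```python
def pct_to_one_in_n(pct):
    """Convert a percentage (e.g. 5 for 5%) to the n in equivalent '1 in n' odds."""
    return 100 / pct

def one_in_n_to_pct(n):
    """Convert '1 in n' odds to the equivalent percentage."""
    return 100 / n

print(pct_to_one_in_n(5))          # -> 20.0    (5% is 1 in 20)
print(one_in_n_to_pct(1_000_000))  # -> 0.0001  (1 in a million is 0.0001%)
```

The numbers are identical either way; the anchoring argument is about which of the two surface forms people reach for when they are only guessing.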
I’m a little concerned that you dismissed this as a fluke when it seems like it has a completely normal explanation.
I think these people’s actual opinion is that AI doom is “extremely unlikely but not impossible”. The numbers they give are ill-thought-out quantifications by people who are not used to quantifying that kind of thing. Worse, people who have given their ill-thought-out quantifications in percentage form are now anchored to them, and will have difficulty changing their minds later on.