Oh, your survey also frames the questions very differently, in a way that seems important to me. You give multiple-choice questions like:
Which of these is closest to your estimate of the probability that there will be an existential catastrophe due to AI (at any point in time)?
0.0001%
0.001%
0.01%
0.1%
0.5%
1%
2%
3%
4%
5%
6%
7%
8%
9%
10%
15%
20%
25%
30%
35%
40%
45%
50%
55%
60%
65%
70%
75%
80%
85%
90%
95%
100%
… whereas I just asked for a probability.
Overall, you give fourteen options for probabilities below 10%, and two options above 90%. (One of which is the dreaded-by-rationalists “100%”.)
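(For concreteness, here's a throwaway sketch tallying the options as quoted above; the 10% and 90% cutoffs are just the ones I used in the previous sentence, not anything from the survey itself.)

```python
# Options copied from the quoted survey question, as percentages.
options = [0.0001, 0.001, 0.01, 0.1, 0.5, 1, 2, 3, 4, 5, 6, 7,
           8, 9, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65,
           70, 75, 80, 85, 90, 95, 100]

# Count how many options fall below 10% and how many fall above 90%.
below_10 = sum(p < 10 for p in options)
above_90 = sum(p > 90 for p in options)
print(below_10, above_90)  # -> 14 2
```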
By giving many fine gradations of ‘AI x-risk is low probability’ without giving as many gradations of ‘AI x-risk is high probability’, you’re communicating that low-probability answers are more normal/natural/expected.
The low probabilities are also listed first, which is a natural choice but could still have a priming effect. (Anchoring to 0.0001% and adjusting from that point, versus anchoring to 95%.) On my screen’s resolution, you have to scroll down three pages to even see numbers as high as 65% or 80%. I lean toward thinking ‘low probabilities listed first’ wasn’t a big factor, though.