My impression is that people at MIRI would probably have a mean x-risk from AI estimate of ~50%, while people at the other places you mentioned would have a mean estimate of ~10% and a median of 8%.
With (even) less confidence, I’d say people at MIRI would give a mean of 40% to question 1, and people elsewhere would give a mean of 7% and a median of 5%.
I’m guessing MIRI people will be something like a quarter of your respondents.
Thanks for registering your predictions, Michael!
Results:
Mean answer for Q1 was ~30.1%, median answer 20%.
Looking only at people who declared their affiliation: MIRI people’s mean probability for Q2 (an existential catastrophe from “AI systems not doing/optimizing what the people deploying them wanted/intended”) was 80%, with median 70% (though I’m not sure this is what you mean by “x-risk from AI” here).
People who declared a non-MIRI affiliation had a mean Q2 probability of 27.8%, median 26%.
For Q1, MIRI-identified people gave mean 70% (and median 80%). Non-MIRI-identified people gave mean ~18.7%, median 10%.
5/27 of respondents who specified an affiliation said they work at MIRI (~19%). (By comparison, 17/~117 ≈ 15% of recipients work at MIRI.)
Interesting, thanks!
(I’ve added some ruminations on my failings and confusions in a comment on your results post.)