(I’m an FRI employee, but responding here in my personal capacity.)
Yeah, in general we thought about various types of framing effects a lot in designing the tournament, but this was one we hadn’t devoted much time to. I think we were all pretty surprised by the magnitude of the effect in the public survey.
Personally, I think this likely affected our normal tournament participants less than it did members of the public. Our “expert” sample mostly came in with considered, pre-existing views on the topics, so there was less room for the way their probabilities were elicited to affect things. And superforecasters should be more fluent in probabilistic reasoning than educated members of the public, so they should be less caught out by the probability vs. odds framing.
In any case, forecasting low probabilities is very little studied, and an FRI project to remedy that is currently underway.
I agree, in that I predict that the effect would be lessened for experts and lessened still more for superforecasters.
However, that doesn’t tell us how much less. A six-order-of-magnitude discrepancy leaves a lot of room! If switching to odds only dropped superforecasters’ estimates by three orders of magnitude and experts’ by four, everything you said above would be true, but it would still make a massive difference to risk estimates. People in EA may already have a P(doom) before going in, but everyone else won’t. Being an AI expert does not make one immune to anchoring bias.
I think it’s very important to follow up on this for domain experts. I often see “the median AI expert thinks there is a 2% chance of AI x-risk” used as evidence to take AI risk seriously, but is there an alternate universe where the factoid is “the median AI expert thinks there are 1 in 50,000 odds of AI x-risk”? We really need to find out.
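For a sense of scale, here’s a quick sketch (my own illustration, not anything from the tournament or FRI’s data) of how far apart those two hypothetical framings sit once you convert between probabilities and odds:

```python
# Comparing the two hypothetical factoids above (illustrative numbers only):
# a "2% chance" expressed as odds, and "1 in 50,000 odds" expressed as a probability.

def prob_to_odds_against(p: float) -> float:
    """Probability p -> n, where the odds against are 1 : n (so p = 1 / (n + 1))."""
    return (1 - p) / p

def odds_against_to_prob(n: float) -> float:
    """Odds of 1 : n against -> probability."""
    return 1 / (n + 1)

print(prob_to_odds_against(0.02))    # ~49   -> a 2% chance is roughly "1 in 50" odds
print(odds_against_to_prob(50_000))  # ~2e-5 -> "1 in 50,000" odds is roughly a 0.002% chance
# The two framings differ by about three orders of magnitude.
```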