The smart people were selected for having a good predictive track record on geopolitical questions with resolution times measured in months, a track record equaled or bettered by several* members of the concerned group. I think this is much weaker evidence of forecasting ability on the kinds of questions discussed here than you do.
*For what it’s worth, I’d expect the skeptical group to do slightly better overall on e.g. non-AI GJP questions over the next 2 years; they do have better forecasting track records as a group on this kind of question, it’s just not a stark difference.
I agree this is quite different from the standard GJ forecasting problem. And that GJ forecasters* are primarily selected for and experienced with forecasting quite different sorts of questions.
But my claim is not “trust them, they are well-calibrated on this”. It’s more “if your reason for thinking X will happen is a complex multi-stage argument, and a bunch of smart people with no particular reason to be biased, who are also selected for being careful and rational on at least some complicated emotive stuff, spend hours and hours on your argument and come away with a very different opinion on its strength, you probably shouldn’t trust the argument much (though this is less clear if the argument depends on technical scientific or mathematical knowledge they lack**)”. That is, I am not saying “supers are well-calibrated, so the risk probably is about 1 in 1000”. I agree the case for that is not all that strong. I am saying “if the concerned group’s credences are based on a multi-step, non-formal argument whose persuasiveness the supers feel very differently about, that is a bad sign for how well-justified those credences are.”
Actually, in some ways, it might look better for AI X-risk work being a good use of money if the supers were obviously well-calibrated on this. A 1 in 1000 chance of an outcome as bad as extinction is likely worth spending some small portion of world GDP on preventing. And AI safety spending so far is a drop in the bucket compared to world GDP. (Yeah, I know technically the D stands for domestic so “world GDP” can’t be quite the right term, but I forget the right one!) Indeed, “AI risk is at least 1 in 1000” is how Greaves and MacAskill justify the claim that “we can make a big difference to the long-term future in expectation” in ‘The Case for Strong Longtermism’. (If a 1 in 1000 estimate is relatively robust, I think it is a big mistake to call this “Pascal’s Mugging”.)
*(of whom I’m one, as it happens, though I didn’t work on this; I did work on the original X-risk forecasting tournament.)
**I am open to argument that this actually is the case here.
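To make the expected-value arithmetic above concrete, here is a minimal back-of-the-envelope sketch. Every number in it (the rough size of gross world product, the value placed on avoiding extinction, the hypothetical spend and the share of risk it removes) is an assumption chosen purely for illustration, not a figure from the tournament or from Greaves and MacAskill.

```python
# Back-of-the-envelope expected-value sketch. Every number here is an
# illustrative assumption, not a figure from the tournament or the comment above.

GROSS_WORLD_PRODUCT = 100e12          # roughly $100 trillion per year (assumed)
EXTINCTION_RISK = 1 / 1000            # the ~0.1% skeptic-end estimate discussed above

# Deliberately conservative assumption: value avoiding extinction at only
# ten years of world output (most people would put it far higher).
VALUE_OF_AVOIDING_EXTINCTION = 10 * GROSS_WORLD_PRODUCT

# Purely hypothetical intervention: spend 0.01% of world output and assume
# it removes a tenth of the 1-in-1000 risk.
spending = 0.0001 * GROSS_WORLD_PRODUCT
risk_reduction = 0.1 * EXTINCTION_RISK

expected_benefit = risk_reduction * VALUE_OF_AVOIDING_EXTINCTION

print(f"Spending:         ${spending:,.0f}")          # ~$10 billion
print(f"Expected benefit: ${expected_benefit:,.0f}")  # ~$100 billion
# Under these assumed numbers the expected benefit exceeds the cost by about
# 10x, and nothing relies on tiny probabilities of astronomical payoffs,
# which is why a robust 1-in-1000 estimate is not a Pascal's Mugging.
```

The point of the sketch is only that the conclusion scales linearly with each of these guesses, so it does not hinge on any one of them being exactly right.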
Why do you think superforecasters who were selected specifically for assigning a low probability to AI x-risk are well described as “a bunch of smart people with no particular reason to be biased”?
For the avoidance of doubt, I’m not upset that the supers were selected in this way: it’s the whole point of the study, made very clear in the write-up, and was clear to me as a participant. It’s just that “your arguments failed to convince randomly selected superforecasters” and “your arguments failed to convince a group of superforecasters who were specifically selected for confidently disagreeing with you” are very different pieces of evidence.
One small clarification: the skeptical group was not all superforecasters. There were two domain experts as well. I was one of them.
I’m sympathetic to David’s point here. Even though the skeptic camp was selected for their skepticism, I think we still get some information from the fact that many hours of research and debate didn’t move their opinions. I think there are plausible alternative worlds where the skeptics come in with low probabilities (by construction), but update upward by a few points after deeper engagement reveals holes in their early thinking.
Ok, I slightly overstated the point. This time, the supers selected were not a (mostly) random draw from the set of supers. But they were in the original X-risk tournament, and in that case too, they were not persuaded to change their credences via further interaction with the concerned (that is, the X-risk experts). Then, when we took the more skeptical of them and gave them yet more exposure to AI safety arguments, that still failed to move the skeptics. I think that, taken together, these two results show that AI safety arguments are not all that persuasive to the average super. (More precisely, that no amount of exposure to them will persuade supers as a group to the point where they get a median significantly above 0.75% in X-risk by the century’s end.)
TL;DR Lots of things are believed by some smart, informed, mostly well-calibrated people. It’s when your arguments are persuasive to (roughly) randomly selected smart, informed, well-calibrated people that we should start being really confident in them. (As a rough heuristic, not an exceptionless rule.)
They weren’t randomly selected, they were selected specifically for scepticism!
Ok yes, in this case they were.
But this is a follow-up to the original X-risk tournament, where the selection really was fairly random (obviously not perfectly so, but it’s not clear in what direction selection effects in which supers participated biased things). And in the original tournament, the supers were also fairly unpersuaded (mostly) by the case for AI X-risk. Or rather, to avoid putting it in too binary a way, they did not move their credences further on hearing more argument after the initial round of forecasting. (I do think the supers’ level of concern was enough to motivate worrying about AI given how bad extinction is, so “unpersuaded” is a little misleading.) At that point, people said ‘they didn’t spend enough time on it, and they didn’t get the right experts’. Now, we have tried again with different experts, more time and effort, lots of back and forth, etc., and those who participated in the second round are still not moved.

It is possible that the only reason the participants were not moved the second time round was that they were more skeptical than some other supers the first time round. (Though the difference between medians of 0.1% and 0.3% in X-risk by 2100 is not that great.) But I think if you get ‘in imperfect conditions, a random smart crowd were not moved at all; then we tried the more skeptical ones in much better conditions and they still weren’t moved at all’, the most likely conclusion is that even people from the less skeptical half of the distribution from the first go-round would not have moved their credences either, had they participated in the second round. Of course, the evidence would be even stronger if the people had been randomly selected the first time as well as the second.