But this is a follow-up to the original X-risk tournament, where the selection really was fairly random (obviously not perfectly so, but it's not clear in what direction selection effects in which supers participated biased things). And in the original tournament, the supers were also mostly unpersuaded by the case for AI X-risk. Or rather, to avoid putting it in too binary a way, they didn't move their credences further on hearing more argument after the initial round of forecasting. (I do think the supers' level of concern was enough to motivate worrying about AI given how bad extinction is, so "unpersuaded" is a little misleading.) At that point, people said "they didn't spend enough time on it, and they didn't get the right experts".

Now, we have tried again with different experts, more time and effort, lots of back and forth, etc., and those who participated in the second round are still not moved. It is possible that the only reason the participants were not moved the second time round is that they were more skeptical than other supers the first time round. (Though the difference between medians of 0.1% and 0.3% in X-risk by 2100 is not that great.) But if what you get is "in imperfect conditions, a random smart crowd were not moved at all; then we tried the more skeptical ones in much better conditions and they still weren't moved at all", the most likely conclusion is that even people from the less skeptical half of the distribution in the first round would not have moved their credences either had they participated in the second round. Of course, the evidence would be even stronger if the people had been randomly selected the first time as well as the second.
Ok yes, in this case they were.