Here’s a hypothesis:
The base case / historical precedent for existential AI risk is:
- AGI has never been developed
- ASI has never been developed
- Existentially deadly technology has never been developed (I don’t count nuclear war or engineered pandemics, as they’ll likely leave survivors)
- Highly deadly technology (>1M deaths) has never been cheap and easily copied
- We’ve never had supply chains so fully automated end-to-end that they could become self-sufficient with enough intelligence
- We’ve never had technology so networked that it could all be taken over by a strong enough hacker
Therefore, if you’re in the skeptic camp, you don’t have to make as much of an argument about specific scenarios where many things happen. You can just wave your arms and say it’s never happened before because it’s really hard and rare, as supported by the historical record.
In contrast, if you’re in the concerned camp, you’re making more of a positive claim about an imminent departure from historical precedent, so the burden of proof is on you. You have to present some compelling model or principles for explaining why the future is going to be different from the past.
Therefore, I think the concerned camp relying on theoretical arguments with multiple steps of logic might be a structural side effect of them having to argue against the historical precedent, rather than any innate preference for that type of argument.
I think that is probably the explanation, yes. But I don’t think it gets rid of the problem for the concerned camp that, usually, long complex arguments about how the future will go are wrong. This is not a sporting contest, where the concerned camp are doing well if they take a position that’s harder to argue for and make a good go of it. It’s closer to the mark to say that if you want to track truth you should (usually, mostly) avoid positions that are hard to argue for.
I’m not saying no one should ever be moved by a big long complicated argument*. But I think that if your argument fails to move a bunch of smart people, selected for a good predictive track record, to anything like your view of the matter, that is an extremely strong signal that your complicated argument is nowhere near good enough to escape the generally sensible prior that long complicated arguments about how the future will go are wrong. This is particularly the case when your assessment of the argument might be biased, which I think is true for AI safety people: if they are right, then they are some of the most important people, maybe even THE most important people, in history, not to mention the quasi-religious sense of meaning people always draw from apocalyptic salvation-versus-damnation stories. Meanwhile, the GJ superforecasters don’t really have much to lose if they decide “oh, I am wrong; looking at the arguments, the risk is more like 2-3% than 1 in 1000”. (I am not claiming that there is zero reason for the supers to be biased against the hypothesis, just that the situation is not very symmetric.) I think I would feel quite differently about what this exercise (probably) shows if the supers had all gone up to 1-2%, even though that is a lot lower than the concerned group.
I do wonder (though I think other factors are more important in explaining the opinions of the concerned group) whether familiarity with academic philosophy helps people be less persuaded by long complicated arguments. Philosophy is absolutely full of arguments that have plausible premises and are very convincing to their proponents, but which nonetheless fail to produce convergence amongst the community. After seeing a lot of that, I got used to not putting that much faith in argument. (Though plenty of philosophers remain dogmatic, and there are controversial philosophical views I hold with a reasonable amount of confidence.) I wonder if LessWrong functions a bit like a version of academic philosophy where there is, as in philosophy, a strong culture of taking arguments seriously and trying to have them shape your views, but where consensus actually is reached on some big-picture stuff. That might make people who were shaped intellectually by LW rather more optimistic about the power of argument (even as many of them would insist LW is not “philosophy”). But it could just be an effect of homogeneity of personalities among LW users, rather than a sign that LW was converging on truth.
*(Although personally, I am much more moved by “hmmm, creating a new class of agents more powerful than us could end with them on top; probably very bad from our perspective” than I am by anything more complicated. This is, I think, a kind of base-rate argument, based on things like the history of colonialism and empire; but of course the analogy is quite weak, given that we get to create the new agents ourselves.)
The smart people were selected for having a good predictive track record on geopolitical questions with resolution times measured in months, a track record equaled or bettered by several* members of the concerned group. I think this is much weaker evidence of forecasting ability on the kinds of questions discussed here than you do.
*For what it’s worth, I’d expect the skeptical group to do slightly better overall on, e.g., non-AI GJP questions over the next 2 years; they do have better forecasting track records as a group on this kind of question. It’s just not a stark difference.
I agree this is quite different from the standard GJ forecasting problem. And that GJ forecasters* are primarily selected for and experienced with forecasting quite different sorts of questions.
But my claim is not “trust them, they are well-calibrated on this”. It’s more “if your reason for thinking X will happen is a complex multi-stage argument, and a bunch of smart people with no particular reason to be biased, who are also selected for being careful and rational on at least some complicated, emotive stuff, spend hours and hours on your argument and come away with a very different opinion on its strength, you probably shouldn’t trust the argument much (though this is less clear if the argument depends on technical scientific or mathematical knowledge they lack**)”. That is, I am not saying “supers are well-calibrated, so the risk probably is about 1 in 1000”. I agree the case for that is not all that strong. I am saying “if the concerned group’s credences are based on a multi-step, non-formal argument whose persuasiveness the supers feel very differently about, that is a bad sign for how well-justified those credences are.”
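(As a purely illustrative aside, not something anyone in this exchange is committed to: one way to make the fragility of multi-stage arguments concrete is the simple arithmetic of conjunctions. The sketch below assumes the steps are roughly independent and that the conclusion needs all of them to hold; the step plausibilities are made-up placeholders, not estimates from this thread.)

```python
# Toy sketch: how the credibility of a multi-stage argument decays with length.
# Assumes roughly independent steps that all need to hold; the 0.8s are
# placeholder plausibilities, not anyone's actual estimates.
from math import prod

def chain_credence(step_probs):
    """Probability that every step of the argument holds."""
    return prod(step_probs)

print(chain_credence([0.8] * 3))  # ~0.51: three plausible steps, roughly a coin flip
print(chain_credence([0.8] * 6))  # ~0.26: six plausible steps, more likely wrong than right
```

Real arguments are not clean conjunctions of independent steps, so this is only a gesture at the prior, not a model of it.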
Actually, in some ways, it might look better for AI X-risk work being a good use of money if the supers were obviously well-calibrated on this. A 1 in 1000 chance of an outcome as bad as extinction is likely worth spending some small portion of world GDP on preventing. And AI safety spending so far is a drop in the bucket compared to world GDP. (Yeah, I know technically the D stands for “domestic”, so “world GDP” can’t be quite the right term, but I forget the right one!) Indeed, “AI risk is at least 1 in 1000” is how Greaves and MacAskill justify the claim that “we can make a big difference to the long-term future in expectation” in ‘The Case for Strong Longtermism’. (If a 1 in 1000 estimate is relatively robust, I think it is a big mistake to call this “Pascal’s Mugging”.)
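(To make the expected-value arithmetic behind that concrete, here is a minimal back-of-the-envelope sketch. Every number in it (the world-output figure, the time horizon, the assumed risk reduction) is an illustrative assumption, not a figure from this thread or from Greaves and MacAskill.)

```python
# Back-of-the-envelope sketch of why a 1-in-1000 risk can justify large spending.
# All numbers are illustrative assumptions: a rough ~$100T annual world output,
# a 1-in-1000 risk, and a guess at how much of the risk a given spend removes.

P_RISK = 1 / 1000        # assumed probability of the catastrophic outcome
WORLD_OUTPUT = 100e12    # assumed annual world output in dollars (rough)
YEARS_AT_STAKE = 50      # assumed years of output lost if the outcome occurs
RISK_REDUCTION = 0.10    # assumed fraction of the risk the spending removes

expected_loss = P_RISK * WORLD_OUTPUT * YEARS_AT_STAKE   # ~$5 trillion in expectation
break_even_spend = RISK_REDUCTION * expected_loss        # ~$500 billion

print(f"Expected loss:    ${expected_loss / 1e9:,.0f}B")
print(f"Break-even spend: ${break_even_spend / 1e9:,.0f}B")
```

This deliberately counts only a few decades of forgone output rather than the full badness of extinction, and even so it lands far above a “drop in the bucket”, which is consistent with the point above.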
*(Of whom I’m one, as it happens, though I didn’t work on this one; I did work on the original X-risk forecasting tournament.)
**I am open to argument that this actually is the case here.
Why do you think superforecasters who were selected specifically for assigning a low probability to AI x-risk are well described as “a bunch of smart people with no particular reason to be biased”?
For the avoidance of doubt, I’m not upset that the supers were selected in this way; it’s the whole point of the study, was made very clear in the write-up, and was clear to me as a participant. It’s just that “your arguments failed to convince randomly selected superforecasters” and “your arguments failed to convince a group of superforecasters who were specifically selected for confidently disagreeing with you” are very different pieces of evidence.
One small clarification: the skeptical group was not all superforecasters. There were two domain experts as well. I was one of them.
I’m sympathetic to David’s point here. Even though the skeptic camp was selected for their skepticism, I think we still get some information from the fact that many hours of research and debate didn’t move their opinions. I think there are plausible alternative worlds where the skeptics come in with low probabilities (by construction), but update upward by a few points after deeper engagement reveals holes in their early thinking.
Ok, I slightly overstated the point. This time, the supers selected were not a (mostly) random draw from the set of supers. But they were in the original X-risk tournament, and in that case, too, they were not persuaded to change their credences via further interaction with the concerned (that is, the X-risk experts). Then, when we took the more skeptical of them and gave them yet more exposure to AI safety arguments, that still failed to move the skeptics. I think that, taken together, these two results show that AI safety arguments are not all that persuasive to the average super. (More precisely, that no amount of exposure to them will persuade supers as a group to the point where they get a median significantly above 0.75% in X-risk by the century’s end.)
TL;DR: Lots of things are believed by some smart, informed, mostly well-calibrated people. It’s when your arguments are persuasive to (roughly) randomly selected smart, informed, well-calibrated people that we should start being really confident in them. (As a rough heuristic, not an exceptionless rule.)
They weren’t randomly selected, they were selected specifically for scepticism!
Ok yes, in this case they were.
But this is a follow-up to the original X-risk tournament, where the selection really was fairly random (obviously not perfectly so, but it’s not clear in what direction selection effects in which supers participated biased things). And in the original tournament, the supers were also fairly unpersuaded (mostly) by the case for AI X-risk. Or rather, to avoid putting it in too binary a way, they did not move their credences further on hearing more argument after the initial round of forecasting. (I do think the supers’ level of concern was enough to motivate worrying about AI given how bad extinction is, so “unpersuaded” is a little misleading.) At that point, people said ‘they didn’t spend enough time on it, and they didn’t get the right experts’. Now, we have tried further, with different experts, more time and effort, lots of back and forth, etc., and those who participated in the second round are still not moved. It is possible that the only reason the participants were not moved the second time round was that they were more skeptical than some other supers the first time round. (Though the difference between medians of 0.1% and 0.3% for X-risk by 2100 is not that great.) But I think if you get ‘in imperfect conditions, a random smart crowd were not moved at all; then we tried the more skeptical ones in much better conditions and they still weren’t moved at all’, the most likely conclusion is that even people from the less skeptical half of the distribution from the first go-round would not have moved their credences either, had they participated in the second round. Of course, the evidence would be even stronger if the people had been randomly selected the first time as well as the second.