In light of recent events, I came back to take another look at this paper. It’s a shame that so much of the discussion ended up focusing on the community’s reaction rather than the content itself. I think the paranoid response you describe in the post was both unjust and an overreaction. None of the paper’s conclusions seems hugely damaging or unfair to me.
That said, like other commenters, I’m not wholly convinced by your arguments. You’ve asked people to be more specific about this, and I can give two specific examples.
On technological determinism
You write that people in the EA community “disregard controlling technology on the grounds of a perceived lack of tractability” (p. 17). But you think this is probably the wrong approach, since technological determinism is “derided and dismissed by scholars of science and technology studies” and “unduly curtails the available mitigation options” (p. 18).
I’m a bit skeptical of this because I know many AI safety and biosecurity workers who would be stoked to learn that it’s possible to stop the development of powerful technologies. You write that “we have historical evidence for collective action and coordination on technological progress and regress” (p. 18). In support you give the example of weather control technology. However, I don’t think the paper you cite in this section provides very strong support for the claim. You write that weather control technology hasn’t been developed because the ENMOD Convention “seems to have successfully curtailed further research”. But the cited paper also documents significant technological challenges to progress in this area. The point of the paper is to document past cycles in which “the promise of weather control soon gave way to excessive hype and pathology” (p. 24). It’s far from clear that regulation, rather than technical challenges and a lack of investment incentives, played a key role in stymying progress.
In any case, I don’t think one historical case study necessarily provides much insight into modern debates over progress in AI and biotechnology. There seem to be far stronger economic incentives for investing in AI and biotechnology, which makes it much harder to reach agreements to slow or halt work in these areas. Again, if that’s mistaken then I know a lot of very worried people who would be happy to hear it. In fact, I’m not even sure it’s true that EAs aren’t interested in this – many people in the community have been leading calls to ban gain-of-function research, for example!
On democracy and war
Second, I’m skeptical that democratic processes have as many benefits, and as few trade-offs, as you claim. Again, you cover a lot of ground, but on the ground that I’m most familiar with I think your assertions are too strong. You write, for example, that “[democracies] make war — a significant driver of GCRs — less likely” (p. 27). Here you cite Dafoe, Oneal and Russett on democratic peace theory. But democratic peace theory only holds that wars between democracies are, on average, less likely. Whether democracies make war in general less likely is much more debatable. See pp. 173-8 of Greg Cashman’s What Causes War? for a review of the empirical work on this question. The literature is mixed at best, with results sensitive to how one codes democracy, the time period covered, and what kind of violence (e.g. wars vs. militarized disputes) one treats as the dependent variable. But most studies find that the link between democracy and war-proneness is either non-existent or “exceedingly weak” (p. 176). Cashman writes (p. 173):
Are democracies more peaceful than autocracies? If ever there was a good theory mugged by a gang of facts, I suppose this is the one. However plausible it may seem, and however much our democratic values predispose us to root for this theory, there seems to be little evidence to support it.
So I think your statement about the relationship between democracies and war is, at best, far more contentious than it appears. And frankly this makes me more skeptical of the other confidently-phrased statements in the paper, particularly in the democracy section, that I’m less well-placed to evaluate for myself.
On existential vs. extinction risks
On the other hand, I do also want to say that I think the section of the paper that critiques existential risk typologies is good. I’m persuaded that it’s worth distinguishing extinction risks from non-extinction existential risks, given that one has to make value judgements to identify the latter. Extinction is an objective state, but people can have different views on what it means for humanity to “fail to achieve its potential”. I think it would be helpful for people to be more careful to separate extinction ethics, existential ethics, existential risks, catastrophic risks, and extinction risks, as you recommend.
See Arepo for further discussion and ideas for terminology.
Thanks, I hadn’t seen those!