I’m mostly going to restrict my comments to your section on biosecurity, since (a) I have a better background in that area and (b) I think it’s stronger than the AI section. I haven’t read all of The Precipice yet, so I’m responding to your arguments in general, rather than specifically defending Ord’s phrasing.
One general comment: this post is long enough that I think it would benefit from a short bullet-point summary of the main claims (the current intro doesn’t say much beyond the fact that you disagree with Ord’s risk assessments).
Anyway, biosecurity. There’s a general problem with info-hazards/tongue-biting in this domain, which can make it very difficult to have a full and frank exchange of views, or even to tell exactly where and why someone disagrees with you. I, and ~everyone else, find this very frustrating, but it is the way of things. So you might well encounter people disagreeing with claims you make without telling you why, or even silently disagreeing without saying so.
That said, it’s my impression that most people I’ve spoken to (who are willing to issue an opinion on the subject!) think that, currently, directly causing human extinction via a pandemic would be extremely hard (there are lots of GCBR-focused biosecurity papers that say this). Your claim that such a pathogen is very likely impossible seems oddly strong to me, given that evolutionary constraints are not the same thing as physical constraints. But even if possible, such agents are certainly very hard to find in pathogen-space. I expect we’ll get drastically better at searching that space over the next 100 years, though.
I disagree with parts of all of your points here, but I think the weakest is the section arguing that no-one would want to create such dangerous biological weapons (which, to be fair, you also place the least weight on):
This appears to be reflected in the fact that as far as is publicly known, very few attempts have even been made to deploy such weapons in modern times. I thus believe that we have good reason to think that the number of people and amount of effort devoted to developing such dangerous bioweapons is likely to be low, especially for non-state actors.
We know that state actors (most notably, but not only, the Soviet Union) have put enormous effort and funding into creating some very nasty biological weapons over the past 100 years, including many “strategic” weapons that were intended to spread widely and create mass civilian casualties if released. Whether or not doing so was strategically rational or consistent with their stated goals or systems of ethics, there have in fact been biological weapons programs, which did in fact create “deadly, indiscriminate pathogen[s].”
A rogue state such as North Korea might be able to circumvent this particular problem, however that raises a range of new difficulties, such as why it would ever be in the interest of a state actor (as opposed to a death cult terrorist group) to develop such a deadly, indiscriminate pathogen.
Eppur si muove. Any attempt to tackle the question of how likely it is that someone would seek to develop catastrophic biological weapons must reckon with the fact that such weapons have, in fact, been sought.
I’m glad you mentioned information hazards in this context. Personally, I felt a bit uncomfortable reading the engineered pandemics section, which lists an array of obstacles that would need to be surmounted to cause extinction, and ways they might be surmounted.
I agree that it’s quite unfortunate that concerns about information hazards make it harder to openly debate levels of risk from various sources and related topics (at least within the biorisk space). I’m also generally quite in favour of people being able to poke and prod at prominent or common views; I think this post does a good job of that in certain parts (although I disagree with quite a few specific points made), and I would feel uncomfortable if people felt unable to write anything like this for information-hazard reasons.
But I’d personally really hope that, before publishing this, the author had at least run the engineered pandemics section by one person who is fairly familiar with the biorisk or x-risk space, explicitly asking them for their views on how wise it would be to publish it in its current form. Such a person might be able to indicate where the contents of that to-do list of doom fall on a spectrum from:
already very widely known (such that publication may not do that much harm)
to: surprisingly novel, or currently receiving little attention from the most concerning actors (who may not have especially high creativity or expertise)
(There’s more discussion of the fraught topic of info hazards in these sources.)
Kevin Esvelt’s recent EAGx talk provides a lot of interesting thoughts on information hazards in the bio space. It seems that Esvelt would likewise hope the engineered pandemics section had at least been run by a knowledgeable and trustworthy person first, and that he might in fact express stronger concerns than I have.
For people low on time, the last bit, from 40:30 onwards, is perhaps especially relevant.
Your claim that such a pathogen is very likely impossible seems oddly strong to me, given that evolutionary constraints are not the same thing as physical constraints.
I think this is an important point (as are the rest of your points), and something similar came to my mind too. I think we may be able to put it more strongly. Your phrasing makes me think of evolution “trying” to create the sort of pathogen that could lead to human extinction, but there being constraints on its ability to do so, which, given that they aren’t physical constraints, could perhaps be overcome through active technological effort. It seems to me that evolution isn’t even “trying” to create that sort of pathogen in the first place.
In fact, I’ve seen it argued that natural selection actively pushes against extreme virulence. From the Wikipedia article on optimal virulence:
A pathogen that is too restrained will lose out in competition to a more aggressive strain that diverts more host resources to its own reproduction. However, the host, being the parasite’s resource and habitat in a way, suffers from this higher virulence. This might induce faster host death, and act against the parasite’s fitness by reducing probability to encounter another host (killing the host too fast to allow for transmission). Thus, there is a natural force providing pressure on the parasite to “self-limit” virulence. The idea is, then, that there exists an equilibrium point of virulence, where parasite’s fitness is highest. Any movement on the virulence axis, towards higher or lower virulence, will result in lower fitness for the parasite, and thus will be selected against.
I don’t have any background in this area, so I’m not sure how well that Wikipedia article represents expert consensus, what implications to draw from that idea, and whether that’s exactly what you were already saying. But it seems to me that this presents additional reason to doubt how much we can extrapolate from what pathogens naturally arise to what pathogens are physically possible.
(Though I imagine that what pathogens are physically possible still provides some evidence, and that it’s reasonable to tentatively raise it in discussions of risks from engineered pandemics.)
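To make the quoted “equilibrium point” idea a bit more concrete, here is a minimal numerical sketch of the standard transmission-virulence trade-off model from evolutionary epidemiology. To be clear, the functional forms and parameter values below are purely illustrative assumptions on my part (they aren’t taken from the Wikipedia article or from the post), so treat this as a toy picture of the selection pressure rather than a claim about any real pathogen.

```python
import numpy as np

# Illustrative background rates (assumed values, not empirical).
mu = 0.01     # host death rate from causes unrelated to the pathogen
gamma = 0.1   # recovery rate

def transmission(v, beta_max=1.0, half_sat=0.2):
    """Assumed trade-off: transmission rises with virulence v but saturates,
    so extra virulence buys progressively less extra transmission."""
    return beta_max * v / (v + half_sat)

def fitness(v):
    """Parasite fitness proxy R0(v) = beta(v) / (mu + gamma + v).
    Higher virulence shortens the infectious period (larger denominator),
    which is the 'self-limiting' force described in the quoted passage."""
    return transmission(v) / (mu + gamma + v)

virulence = np.linspace(0.001, 2.0, 2000)
r0 = fitness(virulence)
v_star = virulence[np.argmax(r0)]

# The maximum sits at an intermediate virulence: under this toy model,
# selection favours neither maximal lethality nor harmlessness.
print(f"Fitness-maximising virulence (toy model): v* ~ {v_star:.2f}")
print(f"R0 at that virulence: {r0.max():.2f}")
```

Of course, this is just the textbook picture; the point for the present discussion is only that the peak reflects natural selection’s incentives, not physical limits on what an engineered pathogen could do.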