Participants in the 2008 FHI Global Catastrophic Risk conference estimated the probability of extinction from nanotechnology at 5.5% (weapons + accident) and from non-nuclear wars at 3% (all wars minus nuclear wars) (the values are on the Global Catastrophic Risk Wikipedia page). In The Precipice, Ord estimated the existential risk from "other anthropogenic risks" (noted in the text as including but not limited to nanotechnology, and I interpret this as including non-nuclear wars) as 2% (1 in 50). (Note that by definition, extinction risk is a subset of existential risk.)
Since starting to engage with EA in 2018, I have seen very little discussion of nanotechnology or non-nuclear warfare as existential risks, yet it seems that in 2008 these were considered risks on par with top longtermist cause areas today (nanotechnology weapons and AGI extinction risks were both estimated at 5%). I realize that Ord's risk estimates are his own while the 2008 data is from a survey, but I assume that his views broadly represent those of his colleagues at FHI and others in the GCR community.
My open question is: what new information or discussion over the last decade led the GCR community to reduce its estimate of the risks posed by (primarily) nanotechnology and also conventional warfare?
I too find this an interesting topic. More specifically, I wonder why I've seen so little discussion of nanotech published in the last few years (as opposed to discussion from more than 10 years ago). I also wonder about the limited discussion of things like very long-lasting totalitarianism, though there I don't have reason to believe people recently had reasonably high x-risk estimates; I just feel like I haven't yet seen a good reason to deprioritise investigating that possible risk. (I'm not saying that there should be more discussion of these topics and that there are no good reasons for the lack of it, just that I wonder about it.)
I realize that Ord’s risk estimates are his own while the 2008 data is from a survey, but I assume that his views broadly represent those of his colleagues at FHI and others in the GCR community.
I’m not sure that’s a safe assumption. The 2008 survey you’re discussing seems to have itself involved widely differing views (see the graphs on the last pages). And more generally, the existential risk and GCR research community seems to have widely differing views on risk estimates (see a collection of side-by-side estimates here).
I would also guess that each individual’s estimates might themselves be relatively unstable from one time you ask them to another, or one particular phrasing of the question to another.
Relatedly, I’m not sure how decision-relevant differences of less than an order of magnitude between different estimates are. (Though such differences could sometimes be decision-relevant, and larger differences more easily could be.)
In case you hadn’t seen it: 80,000 Hours recently released a post with a brief discussion of the problem area of atomically precise manufacturing. That also has links to a few relevant sources.
Thanks Michael, I had seen that but hadn’t looked at the links. Some comments:
The cause report from OPP makes a distinction between molecular nanotechnology and atomically precise manufacturing. The 2008 survey seemed to be explicitly considering weaponised molecular nanotechnology as an extinction risk (I assume the nanotechnology accident estimate referred to molecular nanotechnology as well). While there seems to be agreement that molecular nanotechnology could be a direct path to GCR/extinction, OPP presents atomically precise manufacturing as more of an indirect risk, such as through facilitating weapons proliferation. The Grey goo section of the report does resolve my question about why the community isn’t talking about (molecular) nanotechnology as an existential risk as much now (the footnotes are worth reading for more details):
‘Grey goo’ is a proposed scenario in which tiny self-replicating machines outcompete organic life and rapidly consume the earth’s resources in order to make more copies of themselves.40 According to Dr. Drexler, a grey goo scenario could not happen by accident; it would require deliberate design.41 Both Drexler and Phoenix have argued that such runaway replicators are, in principle, a physical possibility, and Phoenix has even argued that it’s likely that someone will eventually try to make grey goo. However, they believe that other risks from APM are (i) more likely, and (ii) very likely to be relevant before risks from grey goo, and are therefore more worthy of attention.42 Similarly, Prof. Jones and Dr. Marblestone have argued that a ‘grey goo’ catastrophe is a distant, and perhaps unlikely, possibility.43
OPP’s discussion on why molecular nanotechnology (and cryonics) failed to develop as scientific fields is also interesting:
First, early advocates of cryonics and MNT focused on writings and media aimed at a broad popular audience, before they did much technical, scientific work …
Second, early advocates of cryonics and MNT spoke and wrote in a way that was critical and dismissive toward the most relevant mainstream scientific fields …
Third, and perhaps largely as a result of these first two issues, these “neighboring” established scientific communities (of cryobiologists and chemists) engaged in substantial “boundary work” to keep advocates of cryonics and MNT excluded …
At least in the case of molecular nanotechnology, the simple failure of the field to develop may have been lucky (at least from a GCR reduction perspective), as it seems that the research that was (at the time) most likely to lead to the risky outcomes was simply never pursued.
Update: Probably influenced a bit by this discussion, I’ve now made a tag for posts about Atomically Precise Manufacturing, as well as a link post (with commentary) for that Open Phil report.