My thoughts on nanotechnology strategy research as an EA cause area
Two-sentence summary: Advanced nanotechnology might arrive in the next couple of decades (my wild guess: there’s a 1-2% chance in the absence of transformative AI) and could have very positive or very negative implications for existential risk. There has been relatively little high-quality thinking on how to make the arrival of advanced nanotechnology go well, and I think there should be more work in this area (very tentatively, I suggest we want 2-3 people spending at least 50% of their time on this by 3 years from now).
Context: This post reflects my current views as someone with a relevant PhD who has thought about this topic on and off for roughly the past 20 months (something like 9 months FTE). Note that some of the framings and definitions provided in this post are quite tentative, in the sense that I’m not at all sure that they will continue to seem like the most useful framings and definitions in the future. Some other parts of this post are also very tentative, and are hopefully appropriately flagged as such.
Key points
I define advanced nanotechnology as any highly advanced future technology, including atomically precise manufacturing (APM), that uses nanoscale machinery to finely image and control processes at the nanoscale, and is capable of mechanically assembling small molecules into a wide range of cheap, high-performance products at a very high rate (note that my definition of advanced nanotechnology is only loosely related to what people tend to mean by the term “nanotechnology”). (more)
If developed, advanced nanotechnology could increase existential risk, for example by making destructive capabilities widely accessible, by allowing the development of weapons that pose a higher existential risk, or by accelerating AI development; or it could decrease existential risk, for example by causing the world’s most destructive weapons to be replaced by weapons that pose a lower existential risk. (more)
Timelines for advanced nanotechnology are extremely uncertain and poorly characterised, but the chance it arrives by 2040 seems non-negligible (I’d guess 1-2%), even in the absence of transformative AI. (more)
It seems likely that there’d be a long period of development with clear warning signs before advanced nanotechnology is developed, pushing against prioritising work in this area and pushing towards a focus on monitoring and foundational work. (more)
There has been relatively little high-quality nanotechnology strategy work, and by default this seems unlikely to change much in the near future. (more)
It seems possible to make progress in this area, for example by clarifying timelines, tracking potential warning signs of accelerating progress, and doing strategic planning. (more)
Overall, I think that nanotechnology strategy research could be very valuable from a longtermist EA perspective. Currently, my extremely rough, unstable guess is that we should have 2-3 people spending at least 50% of their time on this by 3 years from now (against a background of perhaps 0-0.5 FTE over the past 5 years or so). (more)
Note that it seems that we don’t want to accelerate progress towards advanced nanotechnology because of (i) the dramatic but highly uncertain net effects of the technology, including the possibility of very bad outcomes, (ii) the plausible difficulty of reversing an increase in the rate of progress, and (iii) the option of waiting to gain more information. (Though note that I still feel a bit confused about how harmful various forms of accelerating progress might be, and I’d like to think more carefully about this topic.) (more)
Introduction
This post has two main goals:
To provide a resource that EA community members can use to improve their understanding of advanced nanotechnology and nanotechnology strategy.
To make the case that nanotechnology strategy research is valuable from a longtermist EA perspective, in order to encourage more people to consider working on it.
If you’re mostly interested in the second point, feel free to quickly skim through the first parts of the post, or maybe to skip directly to How to prioritise nanotechnology strategy research.
Definitions and intuitions
In this section, I introduce some concepts that seem useful for thinking about nanotechnology strategy. I’ll refer to these in various places throughout the rest of the post.
Note that, although the focus of nanotechnology strategy research is ultimately advanced nanotechnology, I start by describing atomically precise manufacturing (APM), which I consider to be a particular version of advanced nanotechnology. I do this because APM is a far more concrete and well-explored technology than advanced nanotechnology, and because I like to refer to APM to help describe what I mean by advanced nanotechnology.
Defining atomically precise manufacturing (APM)
I think the term APM is often used in a fairly vague way, and doesn’t have a widely accepted precise definition, so for the purposes of this post I’ll introduce some more precisely defined concepts that pick out important aspects of what people refer to as APM.
These are:
Core APM: A system of nanoscale atomically precise machinery that can mechanically assemble small molecules into useful atomically precise products.
Complex APM: Core APM that is highly complex, made up of very stiff components, operates in a vacuum, and performs many operations per second; that is joined to progressively larger high-performance assembly systems in order to allow a range of product sizes; where the assembly machines and products are perhaps only mostly atomically precise; and where the assembly method is perhaps only mostly mechanical assembly.
Consequential APM: Complex APM that can create a wide range of complex, very high performance products at very low cost ($1000/kg or less) and with very high throughput (1kg of product per 1kg of assembly machinery in 3 days or less).
For more detailed definitions of these terms, see the Appendix section APM definitions in more detail.
The APM concept originated with Eric Drexler.[1] Complex APM roughly corresponds to the technical side of Drexler’s vision for APM, while consequential APM describes a watered-down version of the capabilities Drexler describes for APM.[2],[3] In what follows, I’ll use these terms when I want to be specific, and I’ll use the term “APM” when I want to point to the wider concept.
Intuitions for why APM might be possible and impactful
This section provides some quick intuitions for why you might think that consequential APM is possible and for why atomic precision might be desirable when building very high performance assembly machines and products.
We know that core APM is feasible, i.e. that atomically precise nanoscale machines can be used to do mechanical assembly of small molecules into useful atomically precise products, because examples of core APM exist in nature. For example, ribosomes are atomically precise nanoscale machines[4] that join amino acids to create proteins, which are themselves atomically precise.[5] (Note that we don’t have similar “existence proofs” for complex or consequential APM.)
Atomic precision might be desirable for nanoscale assembly systems and products because, on the nanoscale, the atomic building blocks might be 1/100th or 1/1000th the width of the structure, so that you need atomic precision unless you design for very wide tolerances in structural parts. Atomic precision also gives you perfectly faithful realisations of your design (provided there are no manufacturing errors).
A manufacturing capability that can exactly reproduce a target structure down to the last atom might intuitively seem able to produce products with performance dramatically exceeding current capabilities. Naturally evolved systems often outperform present-day artificial ones on important dimensions,[6] showing that we have some way to go before we reach performance ceilings. In addition, these evolved systems are composed of nanoscale machines and structures (albeit not always atomically precise ones), which are themselves a product of nanoscale manufacturing, suggesting that this is a powerful scheme for producing high performance products.
Nanoscale machines might be able to achieve high throughput because stiff nanoscale machines moving small distances can operate with very high frequency, and because 1cm³ of nanoscale assembly systems can together perform vastly more operations per second than a single machine of size 1cm³.[7],[8]
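The scaling intuition here can be made concrete with a rough back-of-the-envelope calculation (the specific numbers below are my own illustrative choices, not claims from the footnoted sources). It assumes a machine's operating frequency scales inversely with its size (constant actuator speed over proportionally shorter strokes) and that the number of machines fitting in a fixed volume scales with the inverse cube of their size:

```python
# Illustrative scaling sketch: compare one 1cm machine against a 1cm^3
# volume packed with 100nm machines. Assumptions (stated above): frequency
# scales as 1/size; machine count in fixed volume scales as 1/size^3.

large = 1e-2   # one macro-scale machine, 1 cm across (metres)
small = 1e-7   # one nanoscale machine, 100 nm across (metres)

scale = large / small                # ~1e5: linear downscaling factor
freq_gain = scale                    # each small machine cycles ~1e5x faster
count = scale ** 3                   # ~1e15 small machines fit in 1 cm^3
total_ops_gain = freq_gain * count   # ~1e20x more operations per second overall

print(f"total throughput gain: ~{total_ops_gain:.0e}x")
```

Under these (crude) assumptions, the packed nanoscale machinery performs on the order of 10^20 times more operations per second than the single macro-scale machine, which is the shape of the argument for very high throughput.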
This high throughput in turn suggests cheap products, for the following reasons:
A complex APM system might be able to process input materials into whatever high-quality building blocks are required just as efficiently as it assembles those building blocks, and so be able to accept cheap and abundant input materials (requiring only the presence of the necessary chemical elements).
Similarly, we might expect efficient processing to allow for harmless, easily manageable waste.
A complex APM system might itself be cheap if it can manufacture copies of itself.
We can also appeal to naturally evolved systems, which often seem to be significantly cheaper to manufacture (at least based on a rough comparison of energy cost) than artificial ones, suggesting that nanoscale manufacturing systems can manufacture cheaply.[9]
For a more detailed discussion, see Appendix section More intuitions for why APM might be possible and impactful.
Broadening the scope from APM to advanced nanotechnology
As far as I know, EA efforts in this area have been focused on APM. I consider APM to be a remarkably concrete vision for future nanotechnology, and an extremely useful thing to analyse, but I tentatively propose that we slightly broaden the scope of work in this area to consider what I call advanced nanotechnology.
I define advanced nanotechnology as any highly advanced technology involving nanoscale machinery that allows us to finely image and control processes at the nanoscale, with manufacturing capabilities roughly matching, or exceeding, those of consequential APM.
Advanced nanotechnology covers a wider range of possible future technologies than APM. For example, a future nanotechnology might rely less on mechanical positioning or very stiff machines than APM does.[10],[11] But these technologies would look similar to APM in many ways, and might have the same strategic implications. My current feeling is that it’s better for strategy work to cover this wider area than to focus purely on APM.[12]
Advanced nanotechnology vs nanotechnology today
Note that advanced nanotechnology is fairly loosely connected to the concept that’s usually pointed to by the term “nanotechnology”.
The term “nanotechnology field” seems to commonly refer to a loose collection of research areas united by the fact that they concern physical systems on a length scale of roughly 1-100nm (although many fields that aren’t usually considered nanotechnology also concern physical systems on that length scale). The term “nanotechnology” is sometimes used to refer to useful (potential) products or technologies that come from this field.
See the later section called Current R&D landscape for my take on the present-day fields of research that are most relevant for advanced nanotechnology.
Defining transformative AI (TAI)
At a few points in this post I’ll refer to “transformative AI”, usually through the abbreviation “TAI”. By this I mean “AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution”, which is the definition for TAI sometimes used by Open Philanthropy (for more detail on this definition, see What Open Philanthropy means by “transformative AI”).
Potential effects of advanced nanotechnology
Advanced nanotechnology’s extremely powerful capabilities suggest that its development could have very dramatic effects.
The potential effects listed below seem among the most important from a longtermist EA perspective. I’d guess that if advanced nanotechnology is developed within the next 20 years and we haven’t (yet) developed TAI, there’d be something like a 70% chance that it would have very dramatic effects, i.e. effects of at least similar importance to the ones described below.[13] Note that the first two bullet points use the typology from Nick Bostrom’s paper The Vulnerable World Hypothesis.
Vulnerable world type-1 vulnerabilities. Advanced nanotechnology might lead to widespread access to manufacturing devices able to make things like nuclear weapons, dangerous pathogens, or worse. This seems like a plausible outcome to me, assuming society were to fail to properly regulate and control the use of the technology.
Vulnerable world type-2a vulnerabilities. Even if manufacture of highly destructive weapons doesn’t become widely available to ordinary citizens, advanced nanotechnology might allow states to manufacture weapons that pose an (even) greater probability of existential catastrophe than the highly risky weapons that are currently accessible (such as currently accessible nuclear and biological weapons). I’d place a “grey goo” scenario, where self-replicating nanoscale machines turn the entire world into copies of themselves, into this category. It seems very plausible to me that advanced nanotechnology would enable the development of weapons that pose a significantly larger existential risk than do our most dangerous present-day weapons. And while weapons that have a moderate-to-high chance of causing existential catastrophe might not seem very appealing on the face of it, it seems very plausible to me that at least some states would perceive a sufficiently high strategic benefit from developing them.[14]
Reduced existential risk from state weapons. On the other hand, in a reversal of the above scenario, advanced nanotechnology might lower existential risk from nuclear and biological weapons (or from other future weapons that pose an existential risk) by allowing states to develop weapons that are more strategically useful and pose a lower existential risk. This also seems very plausible to me.
More powerful computers, leading to earlier TAI. Advanced nanotechnology could allow us to build more powerful computers more cheaply. If advanced nanotechnology is developed before TAI, having cheaper and better computers could lead to earlier TAI.[15] This seems very plausible to me, and I’d also (very tentatively) guess that this would come earlier than the effects mentioned in the previous three bullet points.
Each of the above effects could constitute an existential risk or existential risk factor, or could correspond to a reduction in existential risk. Overall, I’m currently very unsure about whether advanced nanotechnology would be good or bad for the world on net, and very unsure whether I’d rather learn that advanced nanotechnology was coming sooner or later than expected (but I still currently think that acting to speed up the arrival of advanced nanotechnology is probably bad; see the later section Potential harms from nanotechnology strategy research).
Note that the potential effects described above all follow from having an extremely powerful and flexible manufacturing capability, and novel nanoscale devices as products are necessary only in the grey goo scenario.[16]
For other potential effects of advanced nanotechnology, see the Appendix section Other potential effects of advanced nanotechnology.
When and how advanced nanotechnology might be developed
This section is quite long and complex, so here is a quick summary:
My rough guess is that there’s something like a 20-70% chance that advanced nanotechnology is feasible (where by “feasible” I mean: it’s possible in principle for a sufficiently advanced civilisation to build the technology, ignoring issues of economic feasibility). (more)
I’d guess there’s currently something like $10-100 million per year of funding that is fairly well directed towards developing advanced nanotechnology. These efforts are mostly focused on using scanning probe microscopy, which doesn’t seem like the most effective approach for developing advanced nanotechnology. (more)
My current very rough, unstable guesses regarding advanced nanotechnology timelines are:
Assuming feasibility, a median estimate of 2110 for the arrival of advanced nanotechnology. (more)
Not assuming feasibility, a 4-5% probability that advanced nanotechnology arrives by 2040. (more)
Not assuming feasibility, and assuming that advanced nanotechnology comes before TAI, a 1-2% probability that advanced nanotechnology arrives by 2040. (more)
It seems likely to me that there’d be a long period of development with clear warning signs before the arrival of advanced nanotechnology. (more)
In-principle feasibility
In this section, I briefly give my views on whether APM technologies, and advanced nanotechnology more broadly, are feasible in principle. By “feasible in principle”, I mean that it’s possible in principle for a sufficiently advanced civilisation to build the technology, ignoring issues of economic viability.
The probabilities given in this section are based on only relatively shallow thinking and are very unstable, and shouldn’t be taken as more than a rough indication of my personal guesses at the time I wrote this.[17]
As noted in the earlier section Intuitions for why APM might be possible and impactful, we know that core APM is feasible because examples of it exist in nature.
Maybe I’d give a 50% probability that complex APM is feasible. While I think there’s a significant chance that complex APM is feasible, and I’m not aware of convincing arguments that it couldn’t possibly be built, I wouldn’t find it that surprising if it turns out to be impossible. For example, it might turn out that it’s not possible to find suitable arrangements of atomic building blocks, given the finite selection of atoms and atomic sizes.
Assuming that complex APM is feasible, it seems pretty plausible to me that consequential APM is also feasible. Maybe I’d give this a 10-50% probability (implying that I guess a 5-25% unconditional probability that consequential APM is feasible). The earlier section Intuitions for why APM might be possible and impactful and the Appendix section More intuitions for why APM might be possible and impactful cover some reasons for thinking that complex APM might be feasible. In addition to those points, I’d note that Drexler has published technical arguments for the in-principle feasibility of APM (including a book, Nanosystems, which explores designs and capabilities), and in my view no-one has shown that the technology is not feasible despite some attention on the question.[18]
I don’t assign a higher probability to the feasibility of consequential APM because I feel like there could be practical barriers that block the development of consequential APM even if complex APM is feasible. For example, maybe the system can’t be made reliable enough or is very expensive to maintain (so that the “very cheap products” condition can’t be met), or maybe there just aren’t any designs that allow you to take advantage of the favourable theoretical properties of nanoscale assembly systems.
Advanced nanotechnology is, by definition, at least as likely to be feasible as consequential APM, and we might consider it to be substantially more likely to be feasible because it covers a much wider array of potential technologies. Overall, I’d guess something like a 20-70% chance that advanced nanotechnology is feasible. This is around 3 times as large as my guess for the probability that consequential APM is feasible, which feels reasonable to me.
Current R&D landscape
Anecdotally, my impression is that the majority of researchers in fields relevant for advanced nanotechnology haven’t heard of APM, and don’t think about visions for transformative nanotechnology more generally.[19]
Correspondingly, there is not much R&D explicitly targeting APM or broader advanced nanotechnology, as far as I’m aware.
I’d guess there’s something like $10-100 million per year of funding that is fairly well directed towards developing advanced nanotechnology. This is mostly being spent (according to my guess) at Zyvex and on a project at Canadian Bank Note. These projects use an approach involving scanning probe microscopy[20] to make progress towards APM (sometimes called the “hard path” to APM), which doesn’t seem like the most promising approach,[21] although of course surprises are always possible.
Particular examples of less well-targeted, but still relevant, work include:
Impressive work on protein engineering from David Baker’s lab.
Work on spiroligomers from Christian Schafmeister’s group.
Some DNA nanotechnology projects from the Shih group and the Turberfield group.
More broadly, my impression is that the most relevant work for progress towards advanced nanotechnology is:
Work in the protein engineering and DNA nanotechnology fields.
Work on spiroligomers and foldamers, as well as supporting technologies such as computational approaches (AlphaFold is a notable example relevant for protein engineering) and nanoscale imaging.
Work on advancing scanning probe microscopy.
Perhaps to a lesser extent, a broad class of work on non-biological dynamic nanoscale systems.[22]
Timelines
Considerations relevant for thinking about APM timelines
In this section, I’ll discuss considerations relevant for thinking about when APM might be developed. I focus on APM here rather than broader advanced nanotechnology because I find it easiest to first think about APM and then consider how much earlier things might happen if we consider timelines to advanced nanotechnology of any form, rather than just timelines to APM.
The most obvious consideration pushing against short timelines for APM is the very slow rate of progress towards APM in the last 35 years. For example, atomic manipulation with scanning probe microscopes doesn’t seem to have improved very much since the original demonstration of the technique by IBM in 1989. In addition, despite significant progress in protein engineering, positional assembly with protein suprastructures (which is perhaps the first step on a hypothesised pathway to APM called the “soft path”, as discussed in footnote 5) has still not been demonstrated.[23] Overall, my very rough (and unstable) guess would be that we’ve come perhaps 10% of the way to APM in the past 35 years.
Aside from the empirical observation of slow progress, my inside-view impression is that it would be an enormous engineering challenge to make progress along the technical pathways sketched for APM. Further down the line, it also seems hugely challenging to engineer complex APM systems; although perhaps this is mitigated by the consideration that very stiff, later-stage systems might be much more predictable (and so easier to model and engineer) than early-stage ones.
An additional consideration pushing against short timelines is that it doesn’t seem like there would be many commercial applications from making the first few steps towards APM (even though later stages might have lots of commercial applications). This reduces the incentive for private research efforts and reduces researcher interest within academia.
Some considerations push in favour of shorter timelines for APM, however.
Firstly, in outlining the “soft path” to APM (see footnote 5), Drexler has sketched a technical pathway to APM that I find plausible.
In addition, it seems plausible that an APM field could emerge over the next decade, or perhaps that some highly targeted and well-resourced private effort will emerge. Because there has been relatively little highly targeted effort towards the development of APM so far, we have relatively little empirical evidence regarding what rate of progress would be possible under such an effort; perhaps progress would be much faster than expected.[24]
Further, advances in AI are leading to advances in our ability to model molecular systems, most notably in the field of protein folding, which is particularly relevant for near-term progress along the soft path to APM. It seems plausible to me that increasingly powerful AI, even if it falls short of TAI, will lead to surprisingly fast progress towards APM.
Finally, it seems very possible that the arrival of TAI would lead to the rapid development of APM. This might significantly shorten your timelines for APM, depending on your timelines for TAI.
Median timelines
As mentioned above, my guess for the probability that advanced nanotechnology is feasible in principle fluctuates between roughly 20% and 70%. If my probability for feasibility is less than 50%, it doesn’t really make sense to talk about a median timeline: if I think the chance that it’s possible to develop advanced nanotechnology is less than 50%, it wouldn’t make sense to think that at some point in the future there’s a 50% chance that we’ve developed advanced nanotechnology.
Assuming advanced nanotechnology is feasible in principle, my median timeline is perhaps something like 2110. But this number is very made up and not stable. It also interacts pretty strongly with my timelines for TAI, since TAI could massively accelerate technological progress.
Another data point comes from Robin Hanson in 2013, when he reported a guess of roughly 100-300 years until something seemingly roughly equivalent to advanced nanotechnology is developed.
Probability of advanced nanotechnology by 2040
We might be particularly interested in the probability that advanced nanotechnology arrives in the next couple of decades, since events further in the future seem generally harder to influence.
We might also be particularly interested in worlds where advanced nanotechnology is not preceded by TAI, because, for example, we might think that after TAI “everything goes crazy” and it’s hard to make useful plans for things that happen after that point. Personally, I think it’s reasonable to imagine that work on nanotechnology strategy will be much less useful if advanced nanotechnology is preceded by TAI, although I’m not at all confident about this and my view feels quite unstable.
I’d guess there’s something like a 1-2% chance that advanced nanotechnology arrives by 2040 and isn’t preceded by TAI.[25] As with my median timeline, this number is very made up and not stable.[26]
For the probability that advanced nanotechnology arrives by 2040 and is preceded by TAI, my current speculative guess is something like:
“16% chance of TAI by 2040”
“50% chance that advanced nanotechnology is feasible in principle”
“40% chance that advanced nanotechnology is developed between TAI and 2040, given that TAI arrives before 2040 and that advanced nanotechnology is feasible in principle”
= 3% chance that advanced nanotechnology arrives by 2040 and is preceded by TAI.
You might want to substitute your own probabilities for the arrival of TAI and/or the chance that TAI leads to the development of advanced nanotechnology.
Overall, then, adding the above probabilities implies that my guess is that there’s a 4-5% chance that advanced nanotechnology arrives by 2040. Again, this number is very made up and not stable.
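Laid out explicitly, the arithmetic behind these guesses looks like this (every input below is one of the rough, unstable guesses stated above, not an independent estimate):

```python
# Reproducing the rough probability arithmetic from the post's stated guesses.

p_tai_by_2040 = 0.16     # chance of TAI by 2040
p_feasible = 0.50        # chance advanced nanotechnology is feasible in principle
p_nano_after_tai = 0.40  # chance it arrives between TAI and 2040,
                         # given TAI by 2040 and feasibility

# Advanced nanotechnology by 2040, preceded by TAI:
p_preceded_by_tai = p_tai_by_2040 * p_feasible * p_nano_after_tai  # ~0.032, i.e. ~3%

# Adding the separate 1-2% guess for the not-preceded-by-TAI case
# gives the overall 4-5% figure:
p_total_low = 0.01 + p_preceded_by_tai   # ~4%
p_total_high = 0.02 + p_preceded_by_tai  # ~5%

print(f"overall chance by 2040: {p_total_low:.1%} to {p_total_high:.1%}")
```

Note that this treats the no-TAI and TAI-first cases as mutually exclusive, which matches how they’re set up above; substituting your own inputs for the TAI-related probabilities changes only the second term.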
Rate of progress and warning signs
I imagine that developing advanced nanotechnology would be a huge engineering challenge, and there would be a lengthy road to get there from where we are today. In the median case, I imagine that it would be extremely obvious for many years that this kind of work is being done: the field might look a bit like the present AI field, for example, with notable private-sector efforts alongside lots of work in academia. It seems very unlikely, though possible, that progress would be driven by some huge, secretive, Manhattan Project–style effort instead; but even in that case, it seems like informed people would probably know that something was up, for example by noticing that relevant academics had suddenly stopped publishing.
The main ways I can see the above picture being wrong are:
Technical progress just turns out to be way easier than it seems now. Maybe surprisingly quick progress along the “hard path” to APM (i.e. making progress using scanning probe microscopy) is the most likely way this could happen.
Pre-transformative AI dramatically speeds up the rate of progress in relevant fields.
Transformative AI dramatically speeds up technological progress (but then maybe “all bets are off” anyway).
It’s worth noting that if we’re focused on worlds where advanced nanotechnology arrives in the next 20 years, we should presumably focus more than we would otherwise on worlds where progress happens surprisingly quickly. This is because worlds where advanced nanotechnology arrives in the next 20 years will tend to be worlds where progress happens surprisingly quickly (for example, if there is a gradual ramp up in progress over the next 30 years, we obviously won’t have the technology in 20 years’ time).
Current landscape for nanotechnology strategy
I’m aware of little thoughtful nanotechnology strategy work currently being undertaken, and I don’t think there’s a very large amount of high-quality existing work.
The most notable public EA work in this area that I’m aware of is a 2015 Open Philanthropy report called Risks from Atomically Precise Manufacturing.
As far as I know, no-one in the EA community, other than me, is currently spending a significant fraction of their time thinking about this or is planning to do so. Due to other (potential) projects competing for my time, I’d guess I’ll spend something like 25% of my time on nanotechnology strategy-related things on average over the next 12 months, with a good chance (maybe 50%) that I spend hardly any time on it at all in that period. Over the next 5 years, maybe I’d guess I’ll spend 40% of my time on it on average.
Outside of EA, the Foresight Institute has historically thought about nanotechnology strategy questions, and they have perhaps 0.2-0.6 FTE working in this broad area currently. Other organisations have been active in this area, such as the Center for Responsible Nanotechnology and the Institute for Molecular Manufacturing,[27] but I’m not aware of relevant recent work. I think that a lot of the public work in this area has been of low quality, and overall I think the existing work falls a long way short of providing complete and high-quality coverage of the important questions within this area.
Prospects for making progress
The high-level goals of EA work in this area could be to reduce the chance of an existential catastrophe due to advanced nanotechnology, to use its development to reduce existential risk from other sources, or more broadly to achieve any attainable value related to its development.
Potentially valuable interventions might include:
Seeking to speed up particular technical research areas where that seems to promote safety.
Making policy recommendations related to regulating, monitoring, and understanding emerging advanced nanotechnology.
Positively steering the development of advanced nanotechnology by being a major early investor or founder.
We have some reasons to expect progress to be hard in this area: the technology appears likely to be many decades away, and the terrain for strategy work is poorly mapped out at present.
On the other hand, the fact that the world is not paying much attention to advanced nanotechnology gives altruistic actors a chance to make preparations now in order to, for example, be in a position to make impactful policy recommendations at critical moments later. In addition, the relative lack of exploration makes it hard to rule out the existence of valuable low-hanging fruit. And, regarding our understanding of the technology itself, advanced nanotechnology is more opaque than risky biotechnology, but perhaps compares favourably with artificial general intelligence (AGI), since APM is arguably a more plausible blueprint for advanced nanotechnology than present-day machine learning methods are for AGI.
Examples of concrete projects
One concrete project could be to consider what you would do if you knew that advanced nanotechnology was coming in 2032, seeking areas of consensus among EA-aligned individuals who are well informed about nanotechnology strategy. The results could inform what we should do now (given our actual timelines), perhaps pointing to a particular high-value intervention. The results could also help generate a plan to hold in reserve in case our advanced nanotechnology timelines were to dramatically shorten, increasing the chance that EAs successfully execute high-value interventions in the case of shortened timelines. In addition, the exercise would be practice for future strategic thinking, potentially increasing the chance that EAs successfully execute high-value interventions at some future time after a change of circumstances or further deliberation.
Another project could be to clarify what the key effects of complex APM that we want to forecast are; work out the technical requirements of these effects; and create forecasts using trend extrapolation, other outside-view considerations and methods, and inside-view judgements. These forecasts could help prioritise nanotechnology strategy research against work in other cause areas. That way, if advanced nanotechnology timelines turn out to be shorter than initially believed, more effort can be exerted in this area, increasing the chance that EAs and others successfully identify and execute high-value interventions. Forecasts of various key effects could also be helpful for directing work within nanotechnology strategy towards the most high-value sub-areas, perhaps increasing the chance that high-value interventions are found and successfully executed.
A third potential project could be to identify and monitor potential warning signs of surprising progress towards advanced nanotechnology, for example by identifying the most relevant areas of current research, the most important research groups, and key bottlenecks and potential breakthroughs. Similarly to the previous project, this monitoring could help prioritise nanotechnology strategy research against work in other cause areas, so that more resources are expended in this area if progress appears to be accelerating, increasing the chance that EAs and others successfully identify and execute high-value interventions.
How to prioritise nanotechnology strategy research
This section gives some arguments and considerations relevant for prioritising nanotechnology strategy research against other cause areas from a longtermist EA perspective. To skip to my personal bottom line view on this, see the final subsection of this section, My view on how longtermist EAs should be prioritising this work against other areas right now.
A case for nanotechnology strategy research
Pulling together the information presented in the previous sections, a rough, high-level argument for nanotechnology strategy research being highly valuable from a longtermist EA perspective could be the following:
Advanced nanotechnology could have dramatic effects, with both positive and negative potential implications for existential risk.
There seems to be a non-negligible (although low) probability that it will arrive within the next 20 years, even if it’s not preceded by TAI (my wild guess is that there’s a 1-2% probability of this).
There’s been very little high-quality work in this area.
It seems possible to make progress in this area.[28],[29] I’d also note that I think there’s a synergy between nanotechnology strategy and other areas like AI governance and biosecurity, which makes work in nanotechnology strategy more tractable and more valuable than it would be otherwise. Specifically, because these areas all concern trying to make the development of very powerful emerging technologies go well for the world, methodologies and findings seem likely to be transferable between these areas to some extent.
Could we wait until advanced nanotechnology is about to be developed?
As discussed in the earlier section Rate of progress and warning signs, my current guess is that there’s very likely to be a long, obvious-to-the-outside-world R&D process before advanced nanotechnology is developed. This pushes in favour of deprioritising this area until there are signs that progress towards advanced nanotechnology is speeding up.
However, this depends on the extent to which nanotechnology strategy work needs to be done sequentially versus in parallel. The more the work needs to happen sequentially, the more it's worth doing work now. I feel very uncertain about this, although it seems like the work is probably at least a bit sequential. For example, maybe it takes time to lay the conceptual foundations for a new topic, and presumably it takes time for people to get up to speed and start making contributions.
In addition, we might think that being one of the first to act in this space when progress starts to pick up is valuable. I’d guess there could be a fair amount of value here.
Overall, I think the likely slow and easy-to-detect ramp up towards advanced nanotechnology pushes in favour of spending relatively few resources now to do foundational work, while carefully monitoring progress and being ready to pivot to spending lots of EA effort in the area if that seems important.
Potential harms from nanotechnology strategy research
Nanotechnology strategy is a very complex area, and there are many potential harms from well-intentioned nanotechnology strategy work.
It currently seems to me that we don’t want to speed up progress towards advanced nanotechnology. Roughly speaking, I think this because i) the potential effects of the technology seem dramatic, but very uncertain (they could be very good or very bad), ii) it seems plausible that accelerating progress will be hard to reverse, and iii) we have the option of waiting and potentially gaining more information about the effects of the technology. It seems plausible that accelerating progress will be hard to reverse because, for example, visibly faster progress could generate more interest and a stronger research effort, which in turn sustains the rate of progress. (Though note that I still feel a bit confused about how harmful various forms of accelerating progress might be, including whether some forms would be harmful at all, and I’d like to think more carefully about this topic.)
Because accelerating progress seems bad, public talk about advanced nanotechnology that hypes the technology or otherwise generates interest in developing it could cause harm by speeding up progress towards it. And given how little attention the technology currently receives, it seems fairly easy to increase that attention notably.[30] Emphasising military applications seems particularly undesirable, since it might tend to push the technology's development towards dangerous applications.
In addition, findings from nanotechnology strategy research could, in some cases, represent information hazards that would inform efforts to speed up advanced nanotechnology development. Examples include findings from efforts to better understand advanced nanotechnology timelines by mapping out technical pathways, or from mapping out scenarios for the development of an advanced nanotechnology field. While it’s possible in principle to keep these findings private, there are advantages to public communication, and exchanging ideas with people interested in speeding up the development of advanced nanotechnology is often very helpful for doing nanotechnology strategy work. So the right policy here isn’t always obvious.
Because this is a relatively unexplored area, maybe there's also a risk that poor initial attempts at nanotechnology strategy research will "poison the well", doing long-term damage to the field's ability to make progress. This could happen if researchers develop a poor set of concepts, or if EA-aligned people communicate with policymakers in a poorly thought-out way that discourages them from engaging with EA-aligned people in the future.
There may also be a risk that nanotechnology strategy research leads to making policy recommendations that turn out to be harmful and hard to reverse. For example, this could occur if a policy recommendation seems beneficial after a shallow investigation, but enough thought would show it to be harmful.
Relatedly, some individual or group of nanotechnology strategy researchers might mistakenly come to believe that accelerating progress towards advanced nanotechnology is a worthwhile goal, and then act to accelerate progress, causing irreversible damage. (To be clear, I also think it’s possible that at some point we’ll correctly determine that accelerating advanced nanotechnology progress is a worthwhile goal.)
Mitigating the risks of harm
To reduce the risk from these potential harms, prospective nanotechnology strategy researchers require good judgement and a strong support network they can turn to for high-quality advice and feedback. Keeping the unilateralist's curse in mind also seems important.
Prospective funders in this area should keep these traits in mind when considering who to fund. In addition, funders could, for example, look for an assurance that nanotechnology strategy researchers won’t share their work outside a trusted circle without particular trusted individuals giving approval.
Why you might not think work in this area is high priority
Here are a couple of reasons you might not consider work in this area to be high priority from a longtermist EA perspective.
Firstly, you might think that it’s just too difficult to make progress towards concrete impact, because advanced nanotechnology is likely to be far off in time, and hard to analyse because its precise nature is very uncertain. Tangible interventions are notably absent right now.
I think this is a reasonable position, although I disagree with it. I think there are concrete projects that seem useful (see the earlier section Examples of concrete projects), I think that this area is too unexplored to be very sure that there isn’t valuable work here, and I think that in scenarios where advanced nanotechnology arrives in the next couple of decades we’ll be glad that work was done in the early 2020s.
Secondly, you might just be convinced that some other area is much more important from a longtermist EA perspective, so that you don’t think longtermist EA resources should be spent on nanotechnology strategy research. For example, maybe you have short AI timelines and think AI safety work is at least moderately tractable.
Again, I think this position is reasonable (either regarding the great importance of AI safety work, or some other area), although it’s not a position I hold — cause prioritisation is hard, and there’s lots of scope for reasonable people to disagree.
My view on how longtermist EAs should be prioritising this work against other areas right now
In this section I’ll briefly give some more concrete views on how people from the longtermist EA community should prioritise working on this compared to working in other areas. I haven’t thought about this a huge amount (and this kind of prioritisation is extremely difficult and also strongly depends on details that I’m brushing over), so these are extremely rough views and are liable to change in the near future. But reviewers of earlier drafts of this post were keen to see something like this, so I’m providing it here.
I’d guess that some small, non-zero fraction of EA resources should be going into nanotechnology strategy research right now. In particular, a rough guess might be that we want to move to having 2-3 people each spending at least 50% of their time on this in the next 3 years, and maybe get to 4 FTE in 5 years.
So I don’t think we should be piling a huge amount of resources into this, but the resource allocation I’m suggesting comes against a background of (it seems to me) something like 0-0.5 FTE of longtermist EAs thinking about this at any given time over the past 5 years. So this would represent a significant change from the status quo.
Another angle on this is my view on the following highly abstract case: if I had an aspiring researcher in front of me who seemed about an equally good fit for nanotechnology strategy research, biorisk research, AI safety research, and AI governance research, I’d rank nanotechnology strategy research a bit below the others. I’d rank the options in this way because I think we should be able to recruit enough people from the pool of people who are a better fit for nanotechnology strategy research than for research in other areas (but again, I feel very uncertain about all of this).
Some guesses at who might be a good fit for nanotechnology strategy research
Being broadly longtermist EA-aligned seems important for doing work in this area.
In addition, I’d guess that, very roughly speaking, the following sorts of people would be a particularly good fit for nanotechnology strategy research:
People with chemistry/physics/biology/materials science backgrounds (among others), and especially people with PhDs in those areas.
People who have done this kind of “strategy” thinking in other contexts, including in other EA areas like AI risk or ending factory farming.
People who are thoughtful, have good judgement, and are not likely to act unilaterally.
People who are willing to build the support network mentioned in Mitigating the risks of harm.
People willing to tackle something hard and relatively unexplored, and willing to go for higher-risk things on the basis of high expected impact.
People who are okay with doing work that might be less legible to the outside world because of infohazard concerns and the unpredictable nature of exploratory research.
Conclusion
Advanced nanotechnology could significantly increase or decrease existential risk, and might arrive in the next couple of decades, even without transformative AI. So far there’s been relatively little high-quality work aimed at making its development go well for the world. Doing nanotechnology strategy research now can help lay the foundations to increase the chance that the development of the technology goes well.
I would love to see more people considering working in this area. If you’re interested in learning more, and especially if you’re interested in trying out work in this area now or later in your career, please get in touch by sending me a private message on the Forum.
Acknowledgements
This post was written by me, Ben Snodin, in my capacity as a researcher at Rethink Priorities. I finished writing this post while at Rethink Priorities, and I did a lot of the work for it prior to joining Rethink Priorities, while employed as a Senior Research Scholar at the Future of Humanity Institute. Many thanks to James Wagstaff, Jennifer Lin, Ondrej Bajgar, Max Daniel, Daniel Eth, Ashwin Acharya, Lukas Finnveden, Aleš Flídr, Carl Shulman, Linch Zhang, Michael Aird, Jason Schukraft, and others for their helpful feedback, and to Katy Moore for copy editing. If you like Rethink Priorities’ work, you could consider subscribing to our newsletter. You can see more of our work here.
Appendix
Further reading
Update 2022-10-24: See here for a more comprehensive and up-to-date database of resources.
Some resources that are especially relevant for nanotechnology strategy research are:
Risks from Atomically Precise Manufacturing (2015) by Open Philanthropy.
Evaluating Future Nanotechnology: The Net Societal Impacts of Atomically Precise Manufacturing (2018), a paper from Steven Umbrello and Seth Baum.
These resources are also useful for context and technical understanding:
Bottleneck analysis: positional chemistry, a report from Adam Marblestone that includes a detailed introduction to “positional chemistry” (a concept that is closely related to advanced nanotechnology), a history of nanotechnology, and details of relevant past and present areas of R&D.
Richard Jones’s blog Soft Machines contains writing on advanced nanotechnology, including a 2007 explainer of debates around nanotechnology.
Eric Drexler’s books: Engines of Creation (1986), Unbounding the Future (1991), Nanosystems (1992), and Radical Abundance (2013).
Nano-solutions for the 21st century (2013), a report by Eric Drexler and Dennis Pamlin.
Kinematic Self-Replicating Machines (2004), a book by Robert Freitas and Ralph Merkle (note that I have only skimmed small parts of it).
The Foresight Institute website has relevant articles and talks, some of which are useful for gaining technical knowledge.
APM definitions in more detail
This section provides more detailed definitions for the core APM, complex APM, and consequential APM concepts introduced in the main text in Definitions and intuitions.
I define core APM as:
A system of nanoscale, atomically precise machines that can mechanically assemble small molecules into useful atomically precise products.
Where:
By nanoscale I mean “extremely small”, roughly of length 1-100nm in each dimension. For comparison, atoms often have a diameter of around 0.1-0.2nm. Note that a core APM system, while composed of nanoscale machines, might itself be larger than 100nm.
Atomically precise machines means machinery that has exactly the desired atomic structure. Similarly, atomically precise products means products that have exactly the desired atomic structure.
Mechanically assemble small molecules means (in this context): use nanoscale machines to control the motion of small molecules, using short-range intermolecular forces, such that the molecules form stable bonds with some partially constructed product.
We can describe a particular form of core APM that uses complex, very stiff components and performs many operations per second. In addition, we can imagine that the products from the nanoscale assembly system are used as inputs to slightly bigger (but otherwise very similar) assembly systems, which in turn assemble them into bigger products; and so on until you get to whatever size product you want (including everyday-scale things like laptops). Finally, we can slightly relax the definition to include systems where the assembly machines and products are only mostly atomically precise, and where the assembly method is only mostly mechanical assembly.
I label such a technology complex APM (the modifications to the core APM concept are in bold):
An extremely complex, intricate system of very stiff nanoscale, (mostly) atomically precise machines joined to progressively bigger assembly systems that operate in a vacuum and perform many operations per second to (mostly) mechanically assemble small molecules into (mostly) atomically precise products with high throughput, and with product sizes ranging from nanoscale to metre-scale and beyond.
Finally, Drexler claims that these APM systems could create a wide range of complex, very high performance products very cheaply and with very high throughput. In this spirit, I define consequential APM to refer to:
A complex APM technology that can create a range of complex, very high performance products for $1000/kg or less and with a throughput of 1kg every 3 days or less per kg of machinery.
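As a quick sanity check on how conservative this bar is, we can compare it with Drexler's stronger claims (a cost of $1/kg or less and a throughput of 1 kg every 3 hours or less per kg of machinery; see the footnote on Drexler's claims). The arithmetic below uses only these figures from the text:

```python
# Comparing the 'consequential APM' bar with Drexler's stronger claims.
# Figures are taken directly from the text and its footnotes.
bar_cost_per_kg = 1000       # $/kg, consequential APM definition
drexler_cost_per_kg = 1      # $/kg, Drexler's claim

bar_days_per_kg = 3          # days to produce 1 kg, per kg of machinery
drexler_hours_per_kg = 3     # hours to produce 1 kg, per kg of machinery

cost_margin = bar_cost_per_kg / drexler_cost_per_kg
throughput_margin = bar_days_per_kg * 24 / drexler_hours_per_kg

print(f"Cost bar is {cost_margin:.0f}x weaker than Drexler's claim")
print(f"Throughput bar is {throughput_margin:.0f}x weaker")
# -> Cost bar is 1000x weaker than Drexler's claim
# -> Throughput bar is 24x weaker
```

So the definition deliberately asks for three orders of magnitude less on cost and more than an order of magnitude less on throughput than Drexler argues is achievable.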
More intuitions for why APM might be possible and impactful
This section expands on the Intuitions for why APM might be possible and impactful described in the main text. As in the main text, these arguments are designed to give the reader some feeling for why you might believe these things rather than constituting cast-iron proofs.
See Drexler’s 1992 book, Nanosystems, for a treatment of the physics underlying nanoscale manufacturing systems. Chapter 2 covers the scaling of important physical properties with system size and is particularly relevant for intuitions about nanoscale manufacturing technology.
Intuitions for why atomic precision might be a natural goal
If you want to build high-performance systems and products on the nanoscale, it’s perhaps natural to build systems and products with atomic precision.
As mentioned in the main text, if you have small molecules as building blocks and you’re making products on the order of 1-100nm, you’re building things out of discrete building blocks with a width roughly 1/10th to 1/1000th of the width of the product you’re building. You may well then need atomic precision in order to build the product to the correct specification.
In addition, molecular building blocks are perfect copies (e.g. all oxygen molecules are the same[31]), which bond together in discrete ways to form a finite set of possible structures. So by insisting on atomic precision, you get perfectly faithful physical realisations of your design.[32]
Finally, given high-performance systems that are able to manufacture things with an extremely high degree of control, and if atomically precise products will often give significantly higher performance, atomically precise products will perhaps be an attractive target.
Intuitions for high-performance products
One reason we might expect complex APM to produce a very wide range of complex products is that building things by successively adding small molecules would seem to allow for huge flexibility in products. We see something similar with 3D printers today, which successively add material to a growing structure and can create a wide range of complex structures.
To give some intuition for the claim that atomically precise products can achieve very high performance, the semiconductor industry spends tens of billions of dollars annually on R&D to manufacture chips with increasingly finer-grained features, a pathway that ends with atomic precision (or close to it). (To be clear, I’d imagine that far more impressive products would be possible with flexible atomically precise fabrication.)
As mentioned in the main text, another angle is to consider that, with today’s technology, naturally evolved systems and devices often outperform present-day artificial ones on important dimensions. These evolved systems are certainly not atomically precise (although they contain some atomically precise components), but they show that we are currently some way from the maximum achievable performance along many dimensions. In addition, some people, including Drexler, have argued that evolved systems necessarily look different to designed ones, and that this doesn’t imply that designed systems must be inferior.[33] A highly flexible manufacturing capability that can exactly produce a vast array of complex products defined down to the last atom might seem able to meet and perhaps dramatically exceed the performance of evolved systems.
Intuitions for high operating speeds and high throughput
As mentioned in the main text, one reason to think that nanoscale machines might achieve high throughput is that these machines can in principle each operate very rapidly, because they are made of very stiff materials and each operation only needs to involve movements over very short distances.
As was also noted in the main text, another reason is that the machines are not too much bigger than the small molecules they are manipulating during the assembly process. This makes a high rate of production per gram of manufacturing system more plausible than in the case of fabrication using present-day atomic force microscopes, for example, where a machine weighing 0.1 grams (or more) performs assembly operations on single atoms weighing around 10⁻²³ grams.
In addition to these considerations, the performance of biological machines suggests that complex APM could produce products fast. Some bacterial cells can double in both cell count and total mass of cells roughly every 10 minutes. Cells are not themselves atomically precise, but they include nanoscale atomically precise components, so we can consider this as an example of atomically precise components producing their own mass in product (copies of themselves) in 10 minutes, albeit as part of a larger system that is acting to produce a copy of all of its components.[34],[35]
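A rough back-of-envelope check on these throughput intuitions (all numbers are the illustrative figures from the text, not measurements):

```python
# Back-of-envelope arithmetic for the throughput intuitions above.
# All inputs are rough illustrative figures from the text.

afm_mass_g = 0.1         # mass of an AFM assembly mechanism (text's figure)
atom_mass_g = 1e-23      # rough mass of a single atom (text's figure)

# Mass of machinery per unit mass of building block handled:
mass_ratio = afm_mass_g / atom_mass_g
print(f"AFM machine / atom mass ratio: {mass_ratio:.0e}")
# -> AFM machine / atom mass ratio: 1e+22

# A bacterial cell doubling its mass every 10 minutes produces its own
# mass in product (a copy of itself) every 10 minutes:
doublings_per_day = 24 * 60 / 10
print(f"Own-mass produced per day at a 10-minute doubling time: "
      f"{doublings_per_day:.0f}x")
# -> Own-mass produced per day at a 10-minute doubling time: 144x

# For comparison, the 'consequential APM' bar defined earlier is
# 1 kg of product per kg of machinery every 3 days:
print(f"Consequential APM bar: {1/3:.2f}x own mass per day")
```

The point of the comparison is just that biology already exceeds the consequential APM throughput bar by a factor of a few hundred, while AFM-style single-atom fabrication sits some 22 orders of magnitude away in machinery-to-payload mass ratio.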
Intuitions for low cost
As argued in the main text, high throughput could lead to cheap products from complex APM if the machinery uses cheap inputs; is itself cheap; and produces harmless, easily manageable waste.
We might expect the inputs to be cheap if they consist of readily available raw materials, with minimal processing required before they are passed to the APM system.
This might make sense if complex APM machinery is able to deal with minimally processed inputs, mostly requiring only that the right chemical elements are present in the inputs (not necessarily in the right ratios). This could be the case if the machinery was able to cheaply and efficiently do most of the required processing itself, which might seem plausible for complex APM machinery that is able to efficiently produce a wide range of atomically precise products through mechanical assembly.
Inputs containing the right chemical elements might be cheap if only commonly occurring elements are needed. One reason to think that only common elements are needed is that carbon, a relatively abundant element, seems to be an excellent building material: carbon forms diverse atomic-scale structures and has excellent physical properties (for example, it can exist as diamond, graphite, or carbon nanotubes, and diamond is extremely hard and stiff).
The complex APM machinery could be cheap because, once you know how to make it, perhaps you could use it to make lots more machinery, just as you could use it to make any other product.
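To illustrate why self-reproduction could make the machinery cheap, here is a hedged sketch assuming, purely for illustration, that machinery just meeting the consequential APM throughput bar devotes all of its output to building more machinery (so the stock doubles every 3 days), starting from a hypothetical 1-gram seed system:

```python
import math

# Illustrative assumption: machinery meeting the consequential APM bar
# (1 kg of product per kg of machinery every 3 days) spends all output
# on building more machinery, so the stock doubles every 3 days.
doubling_time_days = 3.0
initial_machinery_kg = 0.001    # hypothetical 1-gram seed system
target_machinery_kg = 1000.0    # one tonne

doublings = math.ceil(math.log2(target_machinery_kg / initial_machinery_kg))
days = doublings * doubling_time_days
print(f"{doublings} doublings, ~{days:.0f} days to go from 1 g to 1 tonne")
# -> 20 doublings, ~60 days to go from 1 g to 1 tonne
```

Under these (very strong) assumptions, a millionfold scale-up takes about two months, which is why the marginal cost of the machinery itself might plausibly become small once a first system exists.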
We might expect the complex APM machinery to produce harmless, easily manageable waste because, as with processing inputs, we might expect that if we can build machinery that is able to efficiently produce a wide range of atomically precise products through mechanical assembly, we can also build similar machinery that can efficiently process waste into a harmless and easily manageable form.
Finally, as mentioned in the main text, we can also consider that naturally evolved systems seem to often be significantly cheaper to manufacture than artificial ones, at least by a rough comparison of energy cost according to the quick investigation by Paul Christiano described in Simple evolution analogy.
Other potential effects of advanced nanotechnology
Other than the potential effects mentioned in the main text in the section Potential effects of advanced nanotechnology, some (perhaps) less important effects of advanced nanotechnology being developed might be:
An “extremely powerful surveillance and non-lethal control” scenario. Advanced nanotechnology might allow states to cheaply manufacture ubiquitous, centrally controlled drones the size of insects or small mammals to monitor and control the world’s population using non-lethal force.[36] This seems like a plausible outcome to me. This capability might lead to or facilitate value lock-in (probably a very bad outcome), but it also seems possible that it would reduce existential risk by mitigating “vulnerable world type-1” vulnerabilities.
Advanced nanotechnology as a TAI capability. TAI could lead to the rapid development of advanced nanotechnology. This could allow an agent-like AI to quickly create more powerful hardware and thereby gain the ability to rapidly transform the physical world.
Impacts on biorisk. Advanced nanotechnology, or technologies along the path to it, seem likely to make it easier to develop things that could increase or decrease biorisk, including deadly pathogens, DNA sequencing devices, and biosensing devices. It might also enable a highly robust, rapid, local capability to manufacture medical countermeasures in a scenario where a global biological catastrophe has destroyed supply chains and important infrastructure.[37]
Advanced neurotechnology. Advanced nanotechnology might allow us to monitor the brain at very high resolution, potentially enabling whole brain emulation or neuromorphic AI. It seems unclear whether this would be good or bad overall.
Cheap energy. Cheap, powerful manufacturing might enable the fabrication of cheap solar cells and cheap batteries that help to overcome intermittency in solar power, leading to very cheap solar power (although, naively, I’m unsure how large an effect this would be given that advanced nanotechnology wouldn’t on the face of it reduce land and labour costs).
Reduced risks from catastrophic climate change. With advanced nanotechnology, we might be able to return atmospheric CO2 concentrations to pre-industrial levels for a relatively small cost using cheap, high-performance direct air capture devices powered by cheap solar. This could reduce the risk of catastrophic climate change, and also reduce existential risk if we believe that climate change poses such a risk.
Improved resilience to global catastrophes. As mentioned above, advanced nanotechnology might provide a highly robust and powerful local capability to manufacture medical countermeasures during a global biological catastrophe. More generally, the technology could enable greater resilience to global catastrophes by broadly enabling local, rapid production of vital goods, equipment, and infrastructure without requiring pre-existing infrastructure or supply chains, and regardless of environmental damage. One example might be food production without relying on sunlight or global supply chains (see also resilient foods). This could reduce the chance that a global catastrophe precipitates an existential catastrophe.
Dramatic improvements in medicine. Aside from an expectation that cheap, powerful manufacturing would be useful for fabricating medical products (just as for most other products), advanced nanotechnology might be particularly useful for developing improved medical interventions. One high-level reason for thinking this is that humans are mostly made of cells, and cellular processes happen at the nanoscale, suggesting that artificial nanoscale devices might be particularly useful.
General wealth / economic growth. On the face of it, if we can make high-performance materials and devices very cheaply, people will on average be very wealthy compared to today (whether this is a deviation from trend GDP growth will, of course, depend on when and how the technology is developed).
Social effects. Advanced nanotechnology could have important social effects, especially if it arrives abruptly. For example, strong economic growth might generally promote cooperation, while rapid technological changes might cause social upheaval.
Altering the global balance of power. Advanced nanotechnology might change the global balance of power. For example, it might dramatically shorten supply chains by enabling highly efficient production with unrefined local materials, leading to a shift in the balance of geopolitical power.
Footnotes
- ↩︎
In my opinion, the public work with the most authoritative and detailed description of the APM concept is Drexler’s 2013 book, Radical Abundance (especially chapter 10, and parts of chapters 1 and 2).
- ↩︎
Drexler claims that (something similar to) complex APM would be able to manufacture products for $1/kg or less and with a throughput of 1kg every 3 hours or less per kg of machinery (for “$1/kg or less”, see, for example, Radical Abundance, 2013, p. 172; for “a throughput of 1kg every 3 hours or less per kg of machinery”, see Nanosystems, 1992, p. 1, 3rd bullet). I set a lower bar for my definition of consequential APM here because I think this lower bar is more than sufficient to imply an extremely impactful technology, while perhaps capturing a larger fraction of potential scenarios over the next few decades.
- ↩︎
The more certain we are that the development of complex APM naturally implies the development of consequential APM soon afterwards, the less useful it is to distinguish these concepts (I think this partly explains why they are often bundled together). But I’m uncertain enough to find the distinction useful.
- ↩︎
Someone who read a draft of this post mentioned they felt initially sceptical that it makes sense to think of ribosomes (or other nanoscale objects) as “machines”, and that they found the Wikipedia page on molecular machines helpful for reducing this scepticism.
- ↩︎
In addition to showing that such machines are feasible, this suggests that one approach to engineering nanoscale machines doing mechanical assembly might be to use biological machines as the starting point.
One approach that uses biological machines as the starting point is the approach I associate with synthetic biology, which involves making incremental changes to existing biological machines to achieve some goal (I’ll refer to this as the “synthetic biology approach” from now on, although I don’t know whether synthetic biologists would agree with this characterisation). This approach seems to be very challenging. My impression is that this is because the effect of small modifications is hard to predict, and because biological machines often stop working outside of their usual conditions. Still, I think some people see this as the best approach for developing something like advanced nanotechnology.
Another approach that uses biological machines as the starting point is the “soft path” approach to APM described by Drexler (see, for example, Appendix II of Radical Abundance). This approach also starts with engineering biological molecules such as proteins (or synthetic but somewhat similar molecules), but is quite different to the synthetic biology approach, because the biological molecules are used more as a building material than as active machines in their own right, and are generally completely removed from their biological context. This soft path approach seems like a more plausible path to me than the synthetic biology approach. So far the soft path approach has had far less effort directed to it than the synthetic biology approach, as far as I’m aware.
- ↩︎
I’m mostly relying on the analysis described in Paul Christiano’s Simple evolution analogy for this claim. From that document:
I think the typical pattern is for human artifacts to be 2-3 OOM [order of magnitude, i.e. a factor of 10] less efficient than their biological analogs, measured by “How much energy/mass is needed to achieve a given level of performance?”
For example, a top GPU is said to perform roughly 1-2 OOM fewer FLOPS per unit power than a human brain, and artificial photodetectors are said to require roughly 3-4 OOM as much power as a human eye to attain the same level of performance. Note that the analysis in that document is fairly rough, although I’d be very surprised if it turned out not to be the case that naturally evolved systems often outperform present-day ones on important dimensions.
- ↩︎
For example, we could imagine a system occupying a 1000 nm × 1000 nm × 1000 nm volume, and composed of nanoscale machines, that can perform one assembly operation at a time. We could fit 1,000,000,000,000 such systems into 1 cm³. (Atomic force microscopes are used today to manipulate atoms one at a time, and my impression, from briefly talking to someone who uses atomic force microscopes for their research, is that they are generally around 1 cm³ in size or larger (potentially much larger, depending on what you count as the atomic force microscope and what you count as its supporting infrastructure).)
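The packing figure in this footnote follows directly from the stated volumes; a quick arithmetic check (using the footnote’s illustrative 1000 nm cube as the system size):

```python
# How many 1000 nm cube assembly systems fit into 1 cm^3?
system_volume_nm3 = 1000 ** 3   # one system: 1000 nm x 1000 nm x 1000 nm = 1e9 nm^3
nm_per_cm = 10 ** 7             # 1 cm = 1e7 nm
cm3_in_nm3 = nm_per_cm ** 3     # 1 cm^3 = 1e21 nm^3
systems_per_cm3 = cm3_in_nm3 // system_volume_nm3
print(systems_per_cm3)          # 1000000000000, i.e. a trillion systems per cm^3
```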
- ↩︎
Though note that these high-frequency, parallel operations involving single molecules each make only a tiny amount of progress towards creating a structure of appreciable size, so overall we might expect these machines to make their own mass in product within, say, minutes, hours, or days, rather than, say, once per second or a billion times per second.
- ↩︎
According to Paul Christiano’s Simple evolution analogy : “Manufacturing costs differ by a further 2-4 OOM” (“OOM” stands for “order of magnitude”, i.e. a factor of 10).
- ↩︎
I haven’t given a large amount of thought to how advanced nanotechnology might deviate from consequential APM, and I can easily see my views changing significantly in the future. Naively, though, here are some ways I imagine the technology might deviate from consequential APM: it might not be the case that most of the assembly machinery is atomically precise; maybe assembly methods or physical phenomena other than mechanical positioning are very important; maybe important components involved in the assembly of nanoscale products are not themselves nanoscale; maybe the assembly machines would not be very stiff; or maybe “control the trajectories of small molecules” wouldn’t be a natural description of the machinery’s operation.
These alternatives to APM might often be things that are technically not consequential APM but are so similar that they can be treated as equivalent to consequential APM for most practical purposes. But perhaps many have substantial enough differences that the distinction turns out to matter for thinking about nanotechnology strategy.
- ↩︎
To give a rough quantification of my (very unstable) beliefs here: if we have advanced nanotechnology in 2040, I’d guess that in roughly 80% of cases the advanced nanotechnology doesn’t look like complex APM. This would imply that, in some relevant sense, advanced nanotechnology covers 5x as much of the space of technological possibilities as does APM.
- ↩︎
For example, we might sometimes be more interested in the question “how likely is it that advanced nanotechnology will be developed by 2040?” than the question “how likely is it that APM will be developed by 2040?”, because advanced nanotechnology has consequences as important as those of APM (or perhaps even more important), and answering the question for advanced nanotechnology might involve similar considerations but have a significantly different answer.
- ↩︎
This guess of a 70% probability is basically driven by: “It’s hard for me to imagine this not being a really big deal, but I haven’t thought about this that much, and maybe my imagination just isn’t very good.”
- ↩︎
For example, this transcript of an 80,000 Hours podcast with Carl Shulman quotes Shulman discussing the Soviets’ apparent willingness to develop bioweapons that they couldn’t protect their own people against (though presumably a weapon that could kill everyone would look less appealing than a weapon that kills some random fraction of the global population; and Soviet decision-makers might have felt that they could protect themselves through physical isolation, for example):
It’s hard to know exactly how much work they would put into pandemic things, because… With pandemic pathogens, they’re going to destroy your own population unless you have already made a vaccine for it.
And so the US eschewed weapons of that sort towards the end of its bioweapons program before it abandoned it entirely, on the theory that they only wanted weapons they could aim. But the Soviets did work on making smallpox more deadly, and defeat vaccines. So, there was interest in doing at least some of this ruin-the-world kind of bioweapons research.
- ↩︎
We might also expect that the TAI we get might be different if it’s developed using a huge amount of computational resources made available by advanced nanotechnology (other than the differences we’d expect to directly follow from earlier TAI). For example, maybe the TAI we get would be more likely to follow present-day paradigms, like stochastic gradient descent, which (very speculatively) could have safety implications.
- ↩︎
Although perhaps some new weapons other than grey goo would also involve nanoscale devices, and you might consider the components of new kinds of computers to be “novel nanoscale devices”.
- ↩︎
Here’s a quick attempt to brainstorm considerations that seem to be feeding into my views here: “Drexler has sketched a reasonable-looking pathway and endpoint”, “no-one has shown X isn’t feasible even though presumably some people tried”, “things are complicated and usually don’t turn out how you expect”, “no new physics is needed”, “X seems intuitively doable given my intuitions from molecular simulations”, “it’s hard to be sure of anything”, “trend-breaking tech rarely happens but does sometimes”.
- ↩︎
Richard Jones is a notable example of a highly credentialed person who seems to have engaged seriously with the question of the feasibility of APM. He seems to think that something resembling complex APM is feasible, although he seems sceptical about the feasibility of something with the kind of capabilities described by consequential APM. See, for example, Open Philanthropy’s A conversation with Richard Jones on September 30, 2014.
- ↩︎
I’d speculate that there might be a decent minority of now-senior researchers who entered the field around 1995-2005, excited by the vision for nanotechnology laid out in Drexler’s works and the creation of the National Nanotechnology Initiative in the US. Perhaps these researchers nowadays have a vague sense for what Drexler’s ideas are (and probably consider APM-like visions for nanotechnology to be too far in the future to be worth thinking about).
- ↩︎
Michael Nielsen has written a very nice, relatively quick introduction to scanning tunnelling microscopy (a type of scanning probe microscopy).
- ↩︎
My perception is that the hard path to APM is less promising than an alternative path called the “soft path” (see footnote 5). This perception mostly comes from what people around me seem to think, and those views in turn perhaps mostly come from Drexler’s view on this. I don’t have much of an inside view here myself; my impression is that deliberate hard path work has been occurring for many years (most notably at Zyvex) without much to show for it, but this seems like only weak evidence, partly because the level of investment has apparently been quite low.
- ↩︎
For a more thorough overview of recent (and less recent) R&D relevant for advanced nanotechnology, see Adam Marblestone’s Bottleneck analysis: positional chemistry. The most relevant sections are “Building blocks that emerged in the meantime”, and “Explicit work on positionally directed covalent chemistry”. (Note that the focus of the report is on a technology the author calls “positional chemistry”, which is different to advanced nanotechnology, but is closely related.)
- ↩︎
Although positional assembly with DNA suprastructures might soon be demonstrated, as a result of this grant, which seems to me to represent relevant progress.
- ↩︎
Of course, this lack of empirical evidence also pushes against high confidence that progress will be very fast if a large research effort emerges. But I expect people will already be inclined not to be confident that progress will be very fast if a large research effort emerges.
- ↩︎
To be clear, this number does not assume feasibility, unlike my median timeline estimate.
- ↩︎
As a somewhat independent data point, Daniel Eth, a former colleague of mine at the Future of Humanity Institute who has spent time thinking about APM, told me that he guesses a 1% probability for roughly the event “advanced nanotechnology arrives by 2040 and isn’t preceded by advanced AI”, where “advanced AI” is, roughly “AGI or transformative AI or CAIS or some future AI that is a similarly huge deal” (note that AI that dramatically speeds up technological progress doesn’t necessarily qualify as advanced AI). Daniel said, “I’d expect [the estimate would] move around a bit, but probably not more than one order of magnitude.”
- ↩︎
Here are some examples of work from these organisations:
Foresight Institute: the list of articles tagged molecular manufacturing includes lots of commentary on relevant (incremental) scientific advances, e.g. this commentary on a 2014 paper on manipulating atoms with AFM.
Center for Responsible Nanotechnology: Overview of Current Findings (NB my current weak impression from skimming a few articles is that the material on the Center for Responsible Nanotechnology website probably contains some inaccurate or misleading statements).
The IMM website says that Institute for Molecular Manufacturing members contributed to a 2007 report called Productive Nanosystems: A Technology Roadmap.
- ↩︎
One potential objection might be that points 1 or 2 just stem from general uncertainty about advanced nanotechnology due to a relative lack of attention on the relevant questions; for example, you might think that if we tried a bit harder to reduce our uncertainty, we would likely end up finding that there’s a sufficiently low probability that advanced nanotechnology arrives in the next 20 years without TAI that this area doesn’t seem worth investigating. A counterargument would be that the above is really an argument about which projects to prioritise in this area, rather than an argument for not thinking about this area at all. Similar comments apply to the uncertainty that is driving point 1. Of course, this counterargument only works if you think points 1 and 2 are reasonable positions given our current state of knowledge.
- ↩︎
Note that an assumption underlying this argument is that there’s not much value in doing nanotechnology strategy work to understand or prepare for scenarios where TAI precedes the development of advanced nanotechnology, because TAI makes everything go crazy such that it’s hard to plan for after that point. However, if you do think such work might be valuable, the case for nanotechnology strategy research looks stronger, because there’s a broader range of future scenarios where this work is relevant. (As mentioned in the main text, it seems reasonable to me to think that work on nanotechnology strategy will be much less useful if advanced nanotechnology is preceded by TAI, but I’m not confident in this and my view feels quite unstable.)
- ↩︎
Although note that, for example, Drexler has written several books on topics within advanced nanotechnology and the Foresight Institute runs a programme promoting the development of APM, so discussion of advanced nanotechnology would not happen against a background of complete silence on the topic.
- ↩︎
This isn’t strictly true because some molecules might contain different isotopes; but it doesn’t seem likely that the presence of different isotopes would usually matter much, and rare isotopes could in principle be filtered out if necessary.
- ↩︎
Provided that there are no errors in the assembly process that lead to incorrect building block placement or bonding. Products could be checked for errors and any errors could be corrected or the misformed products could be discarded; although it’s not clear to me how effective this would be in practice at eliminating errors.
- ↩︎
In Biological and Nanomechanical Systems: Contrasts in Evolutionary Capacity (1989), Drexler discusses how and why designed systems differ from evolved ones. These LessWrong posts also make relevant arguments, in the context of TAI: Building brain-inspired AGI is infinitely easier than understanding the brain and Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain.
- ↩︎
This 2016 Nature Methods article reports bacteria growing with a cell count doubling time of less than 10 minutes. I haven’t found numbers for the rate of total mass doubling, but I understand that cell mass remains roughly constant over successive generations, implying that total mass doubles at roughly the same rate as cell count.
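To illustrate what a sub-10-minute doubling time implies (assuming, as this footnote does, that total mass doubles at roughly the cell-count rate):

```python
# A 10-minute doubling time compounds very quickly.
doubling_time_min = 10
elapsed_min = 120                                   # two hours of growth
doublings = elapsed_min // doubling_time_min        # 12 doublings
growth_factor = 2 ** doublings
print(growth_factor)                                # 4096x the starting mass
```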
- ↩︎
Relatedly, Robert Freitas and Ralph Merkle report in Kinematic Self-Replicating Machines that ribosomes can produce their own mass in proteins in ~5-12 minutes (Ctrl+F for “If the bacterial ribosome” on the linked page).
- ↩︎
Drexler describes something along these lines in The Stealth Threat: An Interview with K. Eric Drexler.
- ↩︎
Though it’s not completely clear to me how useful the “advanced nanotechnology” framing is here, as opposed to, say, something like “advanced biotechnology”.
I found this post helpful, since lately I’ve been trying to understand the role of molecular nanotechnology in EA and x-risk discussions. I appreciate your laying out your thinking, but I think full-time effort here is premature.
This sounds astonishingly high to me (as does 1-2% without TAI). My read is that no research program active today leads to advanced nanotechnology by 2040. Absent an Apollo program, you’d need several serial breakthroughs from a small number of researchers. Echoing Peter McCluskey’s comment, there’s no profit motive or arms race to spur such an investment. I’d give even a megaproject slim odds—all these synthesis methods, novel molecules, assemblies, information and power management—in the span of three graduate student generations? Simulations are too computationally expensive and not accurate enough to parallelize much of this path. I’d put the chance below 1e-4, and that feels very conservative.
Scientists convince themselves that Drexler’s sketch is infeasible more often than one might think. But to someone at that point there’s little reason to pursue the subject further, let alone publish on it. It’s of little intrinsic scientific interest to argue an at-best marginal, at-worst pseudoscientific question. It has nothing to offer their own research program or their career. Smalley’s participation in the debate certainly didn’t redound to his reputation.
So there’s not much publication-quality work contesting Nanosystems or establishing tighter upper bounds on maximum capabilities. But that’s at least in part because such work is self-disincentivizing. Presumably some arguments people find sufficient for themselves wouldn’t go through in generality or can’t be formalized enough to satisfy a demand for a physical impossibility proof, but I wouldn’t put much weight on the apparent lack of rebuttals.
Thanks, I’d be interested to discuss more! I’ll give some reactions here for the time being.
(For context / slight warning on the quality of the below: I haven’t thought about this for a while, and in order to write the below I’m mostly relying on old notes + my current sense of whether I still agree with them.)
Maybe we don’t want to get into an AGI/TAI timelines discussion here (and I don’t have great insights to offer there anyway) so I’ll focus on the pre-TAI number.
I definitely agree that it seems like we’re not at all on track to get to advanced nanotechnology in 20 years, and I’m not sure I disagree with anything you said about what needs to happen to get there etc.
I’ll try to say some things that might make it clearer why we are currently giving different numbers here (though to be clear, as is hopefully apparent in the post, I’m not especially convinced about the number I gave).
I think getting to 99.99% confidence is pretty hard—like in the 0.001% fastest-development scenarios I feel like we’re far into “wow I made some very wrong assumptions I wasn’t even aware I was making” territory. (In general with prediction, I feel like in the 10% most extreme scenarios an assumption I thought was rock solid turns out to be untrue)
Apart from the “reluctance to be extremely confident in anything” thing:
I think the main scenario I have in mind for pre-TAI advanced nanotechnology by 2040 is one where some very powerful AI that isn’t powerful enough to count as TAI gets developed and speeds up (relevant parts of) science R&D a lot
I think there’s also some (very small) chance that advanced nanotechnology is much easier than it currently seems, since (maybe) we haven’t really tried yet. Either through roughly Drexler’s path, or through some other path.
I definitely agree with the points about incentives for people to rebut Drexler’s sketch, but I still think the lack of great rebuttals is some evidence here (I don’t think that represents a shift in my view—I guess I just didn’t go into enough detail in the post to get to this kind of nuance (it’s possible that was a mistake)).
Kind of reacting to both of the points you made / bits I quoted above: I think convincing me (or someone more relevant than me, like major EA funders etc) that the chance that advanced nanotechnology arrives by 2040 is less than 1e-4 would be pretty valuable. I don’t know if you’d be interested in working to try to do that, but if you were I’d potentially be very keen to support that. (Similarly for ~showing something like “near-infeasibility” for Drexler’s sketch.)
[2023-01-19 update: there’s now an expanded version of this comment here.]
Note: I’ve edited this comment after dashing it off this morning, mainly for clarity.
Sure, that all makes sense. I’ll think about spending some more time on this. In the meantime I’ll just give my quick reactions:
On reluctance to be extremely confident—I start to worry when considerations like this dictate that one give a series of increasingly specific/conjunctive scenarios roughly the same probability. I don’t expect a forum comment or blog post to get someone to such high confidence, but I don’t think it’s beyond reach.
We also have different expectations for AI, which may in the end make the difference.
I don’t expect machine learning to help much, since the kinds of structures in question are very far out of domain, and physical simulation has some intrinsic hardness problems.
I don’t think it’s correct to say that we haven’t tried yet.
Some of the threads I would pull on if I wanted to talk about feasibility, after a relatively recent re-skim:
We’ve done many simulations and measurements of nanoscale mechanical systems since 1992. How does Nanosystems hold up against those?
For example, some of the best-case bearings (e.g. multi-walled carbon nanotubes) seem to have friction worse than Drexler’s numbers by orders of magnitude. Why is that?
Edges also seem to be really important in nanoscale friction, but this is a hard thing to quantify ab initio.
I think there’s an argument using the Akhiezer limit on Qf products that puts tighter upper bounds on dissipation for stiff components, at least at “moderate” operating speeds. This is still a pretty high bound if it can be reached, but dissipation (and cooling) are generally weak points in Nanosystems.
I don’t recall discussion of torsional rigidity of components. I think you can get a couple orders of magnitude over flagellar motors with CNTs, but you run into trouble beyond that.
Nanosystems mainly considers mechanical properties of isolated components and their interfaces. If you look at collective motion of the whole, everything looks much worse. For example, stiff 6-axis positional control doesn’t help much if the workpiece has levered fluctuations relative to the assembler arm.
Similarly, in collective motion, non-bonded interfaces should be large contributors to phonon radiation and dissipation.
Due to surface effects, just about anything at the nanoscale can be piezoelectric/flexoelectric with a strength comparable to industrial workhorse bulk piezoelectrics. This can dramatically alter mechanical properties relative to the continuum approximation. (Sometimes in a favorable direction! But it’s not clear how accurate simulations are, and it’s hard to set up experiments.)
Current ab initio simulation methods are accurate only to within a few percent on “easy” properties like electric dipole moments (last I checked). Time-domain simulations are difficult to extend beyond picoseconds. What tolerances do you need to make reliable mechanisms?
In general I wouldn’t be surprised if a couple orders of magnitude in productivity over biological systems were physically feasible for typically biological products (that’s closer to my 1% by 2040 scenario). Broad-spectrum utility is much harder, as is each further step in energy efficiency or speed.
Nice, I don’t think I have much to add at the moment, but I really like + appreciate this comment!
Very helpful post!
If the typical solar cell has a thickness of 400 µm, a density of 2.3 kg/L, and an efficiency of 20%, then with 1000 W/m² of sunlight and a $1000/kg manufacturing cost, this works out to ~$5/W, which is significantly more expensive than current solar cells. However, some solar cells have much thinner active layers, so the cost could be lower. The cost of land is much less than the cost of the solar cells. Labor could still be significant. Batteries are already less than $1000 per kilogram, so the main question is how much better their performance would be.
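The arithmetic behind that ~$5/W figure can be reproduced step by step (all inputs are the assumptions stated in the comment above):

```python
# Back-of-envelope solar cell cost from the figures in the comment above.
thickness_m = 400e-6    # 400 micrometre cell thickness
density     = 2300.0    # kg/m^3 (i.e. 2.3 kg/L)
cost_per_kg = 1000.0    # $/kg manufacturing cost
insolation  = 1000.0    # W/m^2
efficiency  = 0.20

mass_per_m2   = thickness_m * density        # ~0.92 kg/m^2
cost_per_m2   = mass_per_m2 * cost_per_kg    # ~$920/m^2
power_per_m2  = insolation * efficiency      # 200 W/m^2
cost_per_watt = cost_per_m2 / power_per_m2   # ~$4.6/W, i.e. roughly $5/W
```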
Some have noted that even at the one-dollar-per-kilogram cost, manufacturing is a relatively small fraction of the economy, so making it much cheaper still would not help that much. However, if you could get truly self-replicating equipment that could draw minerals from the ground and get energy from the sun, an individual with a plot of land could produce a big house, lots of cars, lots of consumer goods, etc., making them very wealthy. If we only get to $1000 per kilogram, I don’t think it would change the economy very much, so the main question is how much higher the performance would be.
Thanks for posting this!
I have a few uninformed doubts about how likely this is (although you never claimed it was especially likely):
There are already millions of different types of self-replicating nanoscale machines out there (biological organisms), and none of them seem to have gotten close to turning the entire world into copies of themselves. So it seems pretty hard to make “gray goo,” even with APM.
These machines may be more capable than biological ones if they were very intelligent, but this scenario seems to already be covered by the “APM accelerates TAI” scenario.
On the other hand, maybe APM is much more dangerous than evolution because operators could do more than just local optimization.
Niche point: How much is that argument undermined by anthropic considerations? I suspect not very, because:
I’m pointing out that we don’t see near-catastrophe, rather than that we don’t see total catastrophe.
Our actions arguably matter much more if we haven’t gotten lucky.
As armchair ecology, there seem to be non-luck reasons why there hasn’t been biological “gray goo” (“green goo”?) (although, admittedly, manufactured machines might be able to get around these):
There’s a tradeoff between versatility and specialization—it’s hard to be most successful in all niches.
There’s competition, e.g., if a population is very large, predators multiply.
Organisms seem unable to have both explosive population growth and fast motion, since organisms that rely on eating other organisms for energy run out of food if their population explodes, while organisms that rely on the sun for energy can’t move quickly.
The offense-defense balance might not be so bad: as suggested by the point about biological predators, APM might create strong defensive capabilities, e.g., the capability to quickly identify dangerously replicating machines and then create targeted/specialized countermeasures.
Being really good at replicating within human bodies (naively) seems much easier than being good at replicating in any environment. But the former worry is ~bioweapons, which are already covered by another risk scenario you mention.
Developing or using “gray goo” might not be very strategically appealing.
Assuming it could be made, it’d be very self-destructive (and/or maybe could be retaliated against), so using it would be a terrible idea, and it’d be hard to make credible threats with it. In other words, it might not be a type-2a vulnerability (“safe first strike”) after all.
It might still be “safe first strike” vulnerability if there were great narrow-scope countermeasures that just one side could develop in advance and no secure second-strike capabilities, in which case these weapons would pose more local risk but less humanity-wide risk. (Or maybe more humanity-wide risk if first strike were just somewhat safe?)
I have some relevant knowledge. I was involved in a relevant startup 20 years ago, but haven’t paid much attention to this area recently.
My guess is that Drexlerian nanotech could probably be achieved in less than 10 years, but it would need on the order of a billion dollars spent on an organization that’s at least as competent as the Apollo program. As long as research is being done by a few labs with just a couple of researchers each, progress will likely continue to be too slow to need much attention.
It’s unclear what would trigger that kind of spending and that kind of collection of experts.
Profit motives aren’t doing much here, due to a combination of the long time to profitability and a low probability that whoever produces the first usable assembler will also produce one that’s good enough for a large market share. I expect that the first usable assembler will be fairly hard to use, and that anyone who can get a copy will use it to produce better versions. That means any company that sells assemblers will have many customers who experiment with ways to compete.
Maybe some of the new crypto or Tesla billionaires will be willing to put up with those risks, or maybe they’ll be deterred by the risks of nanotech causing a catastrophe.
Could a new cold war cause militaries to accelerate development? This seems like a medium-sized reason for concern.
What kind of nanotech safety efforts are needed?
I’m guessing the main need is for better think-tanks to advise politicians on military and political issues. That requires rather different skills than I or most EAs have.
There may be some need for technical knowledge on how to enforce arms control treaties.
There’s some need for more research into grey goo risks. I don’t think much has happened there since the ecophagy paper. Here’s some old discussion about that paper: Hal Finney, Eliezer, me, Hal Finney
Good to see nanotech and APM finally get some attention from the EA experts after 10 (15?) years of neglect!
We at NanoLab ( http://nanolabvr.com/ ) stand firmly behind an aggressive (but of course risk-aware) strategy of implementing APM for the benefit of humanity quickly and safely.
I want also to emphasize that the Bottleneck analysis report is top-notch and is probably a required reading for anyone interested in the topic (unless you already know all the background and who did what when), since it’s up to date and very thorough.
I just want to flag that, for reasons expressed in the post, I think it’s probably a bad idea to try to accelerate the implementation of APM at the moment. It would be better to first do more research and thinking on whether to accelerate it, and then to do so afterwards only if that appears useful.
And I also think it seems bad to “stand firmly behind” any “aggressive strategy” for accelerating powerful emerging technologies; I think there are many cases where accelerating such technologies is beneficial for the world, but one should probably always explicitly maintain some uncertainty about that and some openness to changing one’s mind.
I’d be open to debating this further, but I think basically I just agree with what’s stated in the post and I’m not sure which specific points you disagree with or would add. (It seems clear that you see the risks as lower and/or see the benefits as higher, but I’m not sure why.) Perhaps if I hear what you disagree with or would add, I could see if that changes my views or if I then have useful counterpoints tailored to your views.
(Though it’s also plausible I won’t have time for such a debate, and in any case some/many other people know more about this topic than me.)
Just noting that the Bottleneck analysis report is written in the first person, but I can’t see a name attached to it anywhere! Who is the author?
Adam Marblestone
😍
Though I am disappointed by the thrust of the author’s argument. Nanotech may be important, therefore longtermist EAs should not work on it, should not talk about it, and should only study it in secret, getting paid through some EA foundation to just sit and “strategize” about its risks. Improving the lives of billions of people with APM/nanotech is not valuable, saving billions of lives is not valuable, increasing man’s power over matter is not valuable, preventing civilizational collapse due to resource depletion/climate change is not valuable.
I am starting to think that longtermism may indeed be a cognitive cancer that is consuming parts of EA and transhumanism. Let’s hope I am not put on some kill list by well-meaning longtermists for this comment...
I strong-downvoted this comment. Given that, and that others have too (which I endorse), I want to mention that I’m happy to write some thoughts on why I did so if you want, since I imagine people new-ish to the EA Forum sometimes may not understand why they’re getting downvoted.
But in brief:
I thought this was a misleading/inaccurate and uncharitable reading of the post
I think that the “kill list” part of your comment feels wildly over-the-top/hyperbolic
Perhaps you meant it as light-hearted or a joke or something, but I think it’s not obvious that that’s the case without hearing your tone
More generally, I think it’s clearly not conducive to good discussion for someone to in any way imply their conversational partners may put them on a kill list. That’s not a good way to start a productive debate in which both sides are open to learning from each other and seeing if they want to change their views.
Less importantly, I also disagree with your view that it’s a good move at the moment to try to speed up advanced nanotechnology development.
But if you just stated you have that view, I’d probably not downvote and instead just leave a comment disagreeing.
And that’d certainly be the case if you stated that view but also indicated an openness to having your view changed (as I believe the post did), explained why you have your view in a way that sounds intended to inform rather than persuade, and ideally also attempted to summarise your understanding of the post’s argument or where you disagree with it. I think that’s a much better way to have a productive discussion.
For that reason, I didn’t downvote the parent comment, even though my current guess is that the strategy you’re endorsing there is a bad one from the perspective of safeguarding and improving the world & future.
As a moderator, I agree with Michael. The comment Michael’s replying to goes against Forum norms.
Great post!
I was thinking recently about nanotechnology as an x-risk, so it’s awesome you took the time to research this. Nanotechnology features heavily in early writings on x-risk and transhumanism, sometimes even being mentioned more prominently than AI risk. As I understand it, this coincided with a strong push of public and private investment in all things nanotech from around 2000 (although not always APM). From 2010 on, investment seems to have declined, and a lot of people in the space, like Eric Drexler and Nick Bostrom, redirected their attention to AI, leaving me wondering what happened to the field and how real a threat it remains.
Do you think the retreat of investment into nanotechnology reduced attention towards it as an EA cause area, compared to research into more well-funded (and hyped) technologies like AI?
Yeah, I think that progress in nanotech stuff has been very slow over the past 20 years, whereas progress in AI stuff has sped up a lot (and investment has increased a huge amount). Based on that, it seems reasonable to focus more on making the development of powerful AI go well for the world and to think less about nanotech, so I think this is at least part of the story.