I found this post helpful, since lately I’ve been trying to understand the role of molecular nanotechnology in EA and x-risk discussions. I appreciate your laying out your thinking, but I think full-time effort here is premature.
Overall, then, adding the above probabilities implies that my guess is that there’s a 4-5% chance that advanced nanotechnology arrives by 2040. Again, this number is very made up and not stable.
This sounds astonishingly high to me (as does 1-2% without TAI). My read is that no research program active today leads to advanced nanotechnology by 2040. Absent an Apollo program, you’d need several serial breakthroughs from a small number of researchers. Echoing Peter McCluskey’s comment, there’s no profit motive or arms race to spur such an investment. I’d give even a megaproject slim odds—all these synthesis methods, novel molecules, assemblies, information and power management—in the span of three graduate student generations? Simulations are too computationally expensive and not accurate enough to parallelize much of this path. I’d put the chance below 1e-4, and that feels very conservative.
Here’s a quick attempt to brainstorm considerations that seem to be feeding into my views here: “Drexler has sketched a reasonable-looking pathway and endpoint”, “no-one has shown X isn’t feasible even though presumably some people tried”
Scientists convince themselves that Drexler’s sketch is infeasible more often than one might think. But to someone at that point there’s little reason to pursue the subject further, let alone publish on it. It’s of little intrinsic scientific interest to argue an at-best marginal, at-worst pseudoscientific question. It has nothing to offer their own research program or their career. Smalley’s participation in the debate certainly didn’t redound to his reputation.
So there’s not much publication-quality work contesting Nanosystems or establishing tighter upper bounds on maximum capabilities. But that’s at least in part because such work is self-disincentivizing. Presumably some arguments people find sufficient for themselves wouldn’t go through in generality or can’t be formalized enough to satisfy a demand for a physical impossibility proof, but I wouldn’t put much weight on the apparent lack of rebuttals.
Thanks, would be interested to discuss more! I'll give some reactions here for the time being.
This sounds astonishingly high to me (as does 1-2% without TAI)
(For context / slight warning on the quality of the below: I haven’t thought about this for a while, and in order to write the below I’m mostly relying on old notes + my current sense of whether I still agree with them.)
Maybe we don’t want to get into an AGI/TAI timelines discussion here (and I don’t have great insights to offer there anyway) so I’ll focus on the pre-TAI number.
I definitely agree that it seems like we’re not at all on track to get to advanced nanotechnology in 20 years, and I’m not sure I disagree with anything you said about what needs to happen to get there etc.
I’ll try to say some things that might make it clearer why we are currently giving different numbers here (though to be clear, as is hopefully apparent in the post, I’m not especially convinced about the number I gave)
I think getting to 99.99% confidence is pretty hard—in the 0.01% fastest-development scenarios, I feel like we're far into "wow, I made some very wrong assumptions I wasn't even aware I was making" territory. (In general with prediction, I feel like in the 10% most extreme scenarios an assumption I thought was rock solid turns out to be untrue.)
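To make the "hidden assumptions cap your confidence" intuition concrete, here's a toy calculation (the numbers are illustrative assumptions of mine, not anything from the thread): if a forecast rests on several background assumptions, each with a small independent chance of being wrong, the chance that at least one fails puts a floor under how confident you can be.

```python
# Toy model: how hidden-assumption risk caps achievable confidence.
# The inputs (10 assumptions, each 99% solid) are purely illustrative.

def chance_some_assumption_fails(p_each: float, k: int) -> float:
    """Probability that at least one of k independent assumptions is wrong."""
    return 1.0 - (1.0 - p_each) ** k

floor = chance_some_assumption_fails(0.01, 10)
print(f"{floor:.3f}")  # -> 0.096, close to the "10% most extreme scenarios" intuition
```

Under these made-up inputs, even modest per-assumption doubt leaves roughly a 10% chance that some assumption fails, which is one way of gesturing at why four-nines confidence is hard to earn.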
Apart from the “reluctance to be extremely confident in anything” thing:
I think the main scenario I have in mind for pre-TAI advanced nanotechnology by 2040 is one where some very powerful AI that isn't powerful enough to count as TAI gets developed and speeds up (relevant parts of) science R&D a lot.
I think there's also some (very small) chance that advanced nanotechnology is much easier than it currently seems, since (maybe) we haven't really tried yet—whether via roughly Drexler's path or via some other path.
Scientists convince themselves that Drexler’s sketch is infeasible more often than one might think. But to someone at that point there’s little reason to pursue the subject further, let alone publish on it. It’s of little intrinsic scientific interest to argue an at-best marginal, at-worst pseudoscientific question. It has nothing to offer their own research program or their career. Smalley’s participation in the debate certainly didn’t redound to his reputation.
So there’s not much publication-quality work contesting Nanosystems or establishing tighter upper bounds on maximum capabilities. But that’s at least in part because such work is self-disincentivizing. Presumably some arguments people find sufficient for themselves wouldn’t go through in generality or can’t be formalized enough to satisfy a demand for a physical impossibility proof, but I wouldn’t put much weight on the apparent lack of rebuttals.
I definitely agree with the points about incentives for people to rebut Drexler's sketch, but I still think the lack of great rebuttals is some evidence here. (I don't think that represents a shift in my view; I guess I just didn't go into enough detail in the post to get to this kind of nuance. It's possible that was a mistake.)
Kind of reacting to both of the points you made / bits I quoted above: I think convincing me (or someone more relevant than me, like major EA funders) that the chance that advanced nanotechnology arrives by 2040 is less than 1e-4 would be pretty valuable. I don't know if you'd be interested in working to try to do that, but if you were I'd potentially be very keen to support that. (Similarly for ~showing something like "near-infeasibility" for Drexler's sketch.)
[2023-01-19 update: there’s now an expanded version of this comment here.]
Note: I’ve edited this comment after dashing it off this morning, mainly for clarity.
Sure, that all makes sense. I’ll think about spending some more time on this. In the meantime I’ll just give my quick reactions:
On reluctance to be extremely confident—I start to worry when considerations like this dictate that one give a series of increasingly specific/conjunctive scenarios roughly the same probability. I don’t expect a forum comment or blog post to get someone to such high confidence, but I don’t think it’s beyond reach.
We also have different expectations for AI, which may in the end make the difference.
I don’t expect machine learning to help much, since the kinds of structures in question are very far out of domain, and physical simulation has some intrinsic hardness problems.
I don’t think it’s correct to say that we haven’t tried yet.
Some of the threads I would pull on if I wanted to talk about feasibility, after a relatively recent re-skim:
We’ve done many simulations and measurements of nanoscale mechanical systems since 1992. How does Nanosystems hold up against those?
For example, some of the best-case bearings (e.g. multi-walled carbon nanotubes) seem to have friction worse than Drexler’s numbers by orders of magnitude. Why is that?
Edges also seem to be really important in nanoscale friction, but this is a hard thing to quantify ab initio.
I think there’s an argument using the Akhiezer limit on Qf products that puts tighter upper bounds on dissipation for stiff components, at least at “moderate” operating speeds. This is still a pretty high bound if it can be reached, but dissipation (and cooling) are generally weak points in Nanosystems.
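To make the Q·f point concrete: at a given operating frequency, a Q·f ceiling fixes the best achievable Q, and hence the fraction of stored elastic energy lost each cycle (2π/Q, by the definition of Q). Here's a back-of-envelope sketch; the ceiling value is a placeholder I'm assuming for illustration, not a sourced Akhiezer-limit number for any particular material.

```python
import math

# Back-of-envelope: per-cycle dissipation implied by a Q*f product bound.
# QF_LIMIT_HZ is an illustrative placeholder, not a measured material value.
QF_LIMIT_HZ = 1e13  # assumed Akhiezer-type Q*f ceiling

def energy_loss_fraction_per_cycle(f_hz: float, qf_limit_hz: float = QF_LIMIT_HZ) -> float:
    """Fraction of stored elastic energy dissipated each cycle at frequency f_hz."""
    q = qf_limit_hz / f_hz   # best achievable Q at this operating frequency
    return 2.0 * math.pi / q  # standard definition of quality factor

# At a "moderate" 1 GHz operating speed, Q ~ 1e4:
print(f"{energy_loss_fraction_per_cycle(1e9):.1e}")  # -> 6.3e-04
```

The point of the sketch is just that the bound tightens linearly with operating speed, so dissipation budgets that look comfortable at MHz speeds get much less comfortable at GHz speeds.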
I don’t recall discussion of torsional rigidity of components. I think you can get a couple orders of magnitude over flagellar motors with CNTs, but you run into trouble beyond that.
Nanosystems mainly considers mechanical properties of isolated components and their interfaces. If you look at collective motion of the whole, everything looks much worse. For example, stiff 6-axis positional control doesn’t help much if the workpiece has levered fluctuations relative to the assembler arm.
Similarly, in collective motion, non-bonded interfaces should be large contributors to phonon radiation and dissipation.
Due to surface effects, just about anything at the nanoscale can be piezoelectric/flexoelectric with a strength comparable to industrial workhorse bulk piezoelectrics. This can dramatically alter mechanical properties relative to the continuum approximation. (Sometimes in a favorable direction! But it’s not clear how accurate simulations are, and it’s hard to set up experiments.)
Current ab initio simulation methods are accurate only to within a few percent on “easy” properties like electric dipole moments (last I checked). Time-domain simulations are difficult to extend beyond picoseconds. What tolerances do you need to make reliable mechanisms?
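As a rough illustration of why a few percent matters (the error model and numbers here are my own assumptions, not from the thread): thermally driven error rates scale like a Boltzmann factor in the energy barrier, so a small relative error in an ab initio barrier becomes a large multiplicative uncertainty in predicted reliability.

```python
import math

KT_EV_300K = 0.02585  # k_B * T at 300 K, in eV

def error_rate(barrier_ev: float) -> float:
    """Boltzmann-factor estimate of per-operation thermal error rate."""
    return math.exp(-barrier_ev / KT_EV_300K)

barrier = 0.9          # eV; gives very roughly 1e-15 errors per operation
rel_sim_error = 0.05   # assume simulated barriers are only good to ~5%

nominal = error_rate(barrier)
pessimistic = error_rate(barrier * (1 - rel_sim_error))
print(f"{pessimistic / nominal:.0f}x")  # -> roughly a 6x swing in predicted error rate
```

So even under this simplified model, present-day simulation accuracy leaves something like an order of magnitude of slack in reliability predictions, before getting into the harder time-domain questions.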
In general I wouldn’t be surprised if a couple orders of magnitude in productivity over biological systems were physically feasible for typically biological products (that’s closer to my 1% by 2040 scenario). Broad-spectrum utility is much harder, as is each further step in energy efficiency or speed.
Nice, I don’t think I have much to add at the moment, but I really like + appreciate this comment!