Thanks for this thoughtful and detailed deep dive!
I think it misses the main cruxes though. Yes, some people (Drexler and young Yudkowsky) thought that ordinary human science would get us all the way to atomically precise manufacturing in our lifetimes. For the reasons you mention, that seems probably wrong.
But the question I’m interested in is whether a million superintelligences could figure it out in a few years or less. (If it takes them, say, 10 years or longer, then they’ll probably have better ways of taking over the world.) That’s the situation we’ll actually be facing.
To answer that question, we need to ask questions like:
(1) Is it even in principle possible? Is there some configuration of atoms that would be a general-purpose nanofactory, capable of making more of itself, that uses diamondoid rather than some other material? Or is there no such configuration?
Seems like the answer is “Probably, though not necessarily; it might turn out that the obstacles discussed are truly insurmountable. Maybe 80% credence.” If we remove the diamondoid criterion and allow it to be built of any material (but still require it to be dramatically more impressive and general-purpose / programmable than ordinary life forms), then I feel like the credence shoots up to 95%, the remaining 5% being model uncertainty.
(2) Is it practical for an entire galactic empire of superintelligences to build in a million years? (Conditional on 1, I think the answer to 2 is ‘of course.’)
(3) OK, conditional on the above, the question becomes what the limiting factor is—is it genius insights about clever binding processes or mini-robo-arm-designs exploiting quantum physics to solve the stickiness problems mentioned in this post? Is it mucking around in a laboratory performing experiments to collect data to refine our simulations? Is it compute & sim-algorithms, to run the simulations and predict what designs should in theory work? Genius insights will probably be pretty cheap to come by for a million superintelligences. I’m torn about whether the main constraint will be empirical data to fit the simulations, or compute to run the simulations.
(4) What’s our credence distribution over orders of magnitude of the following inputs: Genius, experiments, and compute, in each case assuming that it’s the bottleneck? Not sure how to think about genius, but it’s OK because I don’t think it’ll be the bottleneck. Our distributions should range over many orders of magnitude, and should update on our observation so far that however many experiments and simulations humans have done didn’t seem close to being enough.
I wildly guess something like 50% that we’ll see some sort of super-powerful nanofactory-like thing. I’m more like 5% that it consists of diamondoid in particular: there are so many different material designs, and even if diamondoid is viable and in some sense theoretically the best, the theoretical best probably takes several OOMs more inputs to achieve than something else that is merely good enough.
Hey, thanks for engaging. I saved the AGI theorizing for last because it’s the most inherently speculative: I am highly uncertain about it, and everyone else should be too.
But the question I’m interested in is whether a million superintelligences could figure it out in a few years or less. (If it takes them, say, 10 years or longer, then they’ll probably have better ways of taking over the world.) That’s the situation we’ll actually be facing.
I would dispute that “a million superintelligences exist and cooperate with each other to invent MNT” is a likely scenario, but even given that, my guess would still be no. The usual disclaimer that the following is all my personal guesses as a non-experimentalist and non-future-knower:
(1) Is it even in principle possible? Is there some configuration of atoms that would be a general-purpose nanofactory, capable of making more of itself, that uses diamondoid rather than some other material? Or is there no such configuration?
If we restrict to diamondoid, my credence would be very low, somewhere in the 0 to 10% range. The “diamondoid massively parallel builds diamondoid and everything else” process is intensely challenging: it only takes one unworkable step for the whole thing to be kaput, and we’ve already identified some potential problems (tips sticking together, hydrogen hitting, etc.). With all materials available, I think it’s very likely (above 95%) that something self-replicating that is more impressive than bacteria and viruses is possible, but I have no idea how impressive the limits of possibility are.
(2) Is it practical for an entire galactic empire of superintelligences to build in a million years? (Conditional on 1, I think the answer to 2 is ‘of course.’)
I’d agree that this is almost certain conditional on 1.
(3) OK, conditional on the above, the question becomes what the limiting factor is—is it genius insights about clever binding processes or mini-robo-arm-designs exploiting quantum physics to solve the stickiness problems mentioned in this post? Is it mucking around in a laboratory performing experiments to collect data to refine our simulations? Is it compute & sim-algorithms, to run the simulations and predict what designs should in theory work? Genius insights will probably be pretty cheap to come by for a million superintelligences. I’m torn about whether the main constraint will be empirical data to fit the simulations, or compute to run the simulations.
To be clear, all forms of bonds are “exploiting quantum physics”, in that they are low-energy configurations of electrons interacting with each other according to quantum rules. The answer to the sticky fingers problem, if there is one, will almost certainly involve the bonds we already know about, such as using weaker van der Waals forces to stick and unstick atoms, as I think is done in biology?
As for the limiting factor: in the case of the galactic empire with a million years, it would probably be a long search over a gargantuan set of materials and a gargantuan set of possible designs and approaches: identify ones that are theoretically promising, test them with computational simulations to whittle them down, and then experimentally create each remaining material and approach and test them all in turn. The galaxy-spanning civilization would be able to optimize each step to calculate what balance will be fastest overall.
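(To make the “balance” point concrete, here is the kind of toy bottleneck arithmetic I have in mind; every number below is an invented placeholder, not an estimate:)

```python
# Toy design funnel: screen candidates in simulation, then test the
# survivors experimentally. Which stage dominates the total time?
candidates  = 1e12   # size of the design space to screen (made up)
sim_filter  = 1e-3   # fraction surviving simulation (made up)
sims_per_yr = 1e9    # simulation throughput: compute-limited (made up)
exps_per_yr = 1e5    # experiment throughput: lab-limited (made up)

sim_years = candidates / sims_per_yr
exp_years = candidates * sim_filter / exps_per_yr
print(f"simulation: {sim_years:,.0f} yr, experiment: {exp_years:,.0f} yr")
print("bottleneck:", "experiment" if exp_years > sim_years else "simulation")
```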
The balance will be different at the galactic scale than at the human scale, because they would have orders of magnitude more compute available (including quantum computing), would have a galaxy’s worth of materials available, wouldn’t have to hide from people, etc. So you really have to ask about the actual scenario, not the galaxy.
In the actual scenario of super-AI trying to covertly build nanotech, the bottleneck would likely be experimental. The problem is a dilemma: if you have to rely on employing humans in a lab, then they go at human pace, and hence will not get the job done in a few years. If you try to eliminate the humans from the production process, you need to build a specialized automated lab… which also requires humans, and would probably take more than a few years.
What part of the scenario would you dispute? A million superintelligences will probably exist by 2030, IMO; the hard part is getting to superintelligence at all, not getting to a million of them (since you’ll probably have enough compute to make a million copies).
I agree that the question is about the actual scenario, not the galaxy. The galaxy is a helpful thought experiment though; it seems to have succeeded in establishing the right foundations: How many OOMs of various inputs (compute, experiments, genius insights) will be needed? Presumably a galaxy’s worth would be enough. What about a solar system? What about a planet? What about a million superintelligences and a few years? Asking these questions helps us form a credence distribution over OOMs.
And my point is that our credence distribution should be spread out over many OOMs, but since a million superintelligences would be capable of many more OOMs of nanotech research in various relevant dimensions than all humanity has been able to achieve thus far, it’s plausible that this would be enough. How plausible? Idk I’m guessing 50% or so. I just pulled that number out of my ass, but as far as I can tell you are doing the same with your numbers.
I didn’t say they’d be building it covertly. It would probably be significantly harder if covert; they wouldn’t be able to get as many OOMs. But they’d probably still get some.
I don’t think using humans would mean going at a human pace. The humans would just be used as actuators. I also think making a specialized automated lab might take less than a year, or else a couple years, not more than a few years. (For a million superintelligences with an obedient human nation of servants, that is)
A million superintelligences will probably exist by 2030
This is a very wild claim to throw out with no argumentation to back it up. Cotra puts a 15% chance on transformative AI by 2036, and I find his assumptions incredibly optimistic about AI arrival. (Also worth noting that transformative AI and superintelligence are not the same thing.) The other thing I dispute is that a million superintelligences would cooperate. They would presumably have different goals and interests: surely at least some of them would betray the others’ plan in exchange for a leg-up from humanity.
For a million superintelligences with an obedient human nation of servants, that is
You don’t think some of the people of the “obedient nation” are gonna tip anyone off about the nanotech plan? Unless you think the AIs have some sort of mind-control powers, in which case why the hell would they need nanotech?
I said IMO. In context it was unnecessary for me to justify the claim, because I was asking whether or not you agreed with it.
I take it that not only do you disagree, you agree it’s the crux? Or don’t you? If you agree it’s the crux (i.e. you agree that probably a million cooperating superintelligences with an obedient nation of humans would be able to make some pretty awesome self-replicating nanotech within a few years) then I can turn to the task of justifying the claim that such a scenario is plausible. If you don’t agree, and think that even such a superintelligent nation would be unable to make such things (say, with >75% credence), then I want to talk about that instead.
(Re: people tipping off, etc.: I’m happy to say more on this but I’m going to hold off for now since I don’t want to lose the main thread of the conversation.)
Btw Ajeya Cotra is a woman and uses she/her pronouns :)
With all materials available, I think it’s very likely (above 95%) that something self-replicating that is more impressive than bacteria and viruses is possible, but I have no idea how impressive the limits of possibility are.
Much of the (purported) advantage of diamondoid mechanisms is that they’re (meant to be) stiff enough to operate deterministically with atomic precision. Without that, you’re likely to end up much closer to biological systems—transport is more diffusive, the success of any step is probabilistic, and you need a whole ecosystem of mechanisms for repair and recycling (meaning the design problem isn’t necessarily easier). For anything that doesn’t specifically need self-replication for some reason, it’ll be hard to beat (e.g.) flow reactors.
I broadly endorse this reply and have mostly shifted to trying to talk about “covalently bonded” bacteria, since using the term “diamondoid” (tightly covalently bonded CHON) causes people to panic about the lack of currently known mechanosynthesis pathways for tetrahedral carbon lattices.
I broadly endorse this reply and have mostly shifted to trying to talk about “covalently bonded” bacteria
This terminology is actually significantly worse, because it makes it almost impossible for anyone to follow up on your claims. Covalent bonds are the most common type of bond in organic chemistry, and thus all existing bacteria have them in ridiculous abundance. So claiming the new technology will be “covalently bonded” does not distinguish it from existing bacteria in the slightest.
To correct some other weird definitions you made in your very short reply: “tetrahedral carbon lattice” is literally the exact same thing as diamond. The scientific definition of diamondoid is also not “tightly covalently bonded CHON”; it specifically refers to hydrocarbon variants of adamantane, which are not tetrahedral (I discussed this in the post). Also, the new technology probably would not fit the definition of “bacteria”, except in a metaphorical sense.
Now, I’m assuming you mean something like “in places where existing bacteria use weak van der Waals forces to stick together, the new tech will use stronger covalent bonds instead”. If you have specific research you are referring to, I would be interested in reading it, because, again, you have made googling the subject impossible.
My problem here would be that in a lot of cases, you actually want the forces to be weak. If you want to assemble and reassemble things, sticking them together and breaking them apart, forces that are too strong will make life significantly harder (this was the subject of the theoretical study I looked at). There is a reason bricklayers don’t coat their gloves in superglue when working with bricks.
As for the “tetrahedral carbon”: if you are aware of other dedicated research efforts for mechanosynthesis that I have missed in my post, I would be genuinely interested in reading up on them. I did my best to look, and to highlight the ones I could find, in my extensively researched article, which I’m not sure you actually read.
What if he just said “Some sort of super-powerful nanofactory-like thing?”
He’s not citing some existing literature that shows how to do it, but rather citing some existing literature which should make it plausible to a reasonable judge that a million superintelligences working for a year could figure out how to do it. (If you dispute the plausibility of this, what’s your argument? We have an unfinished exchange on this point elsewhere in this comment section. Seems you agree that a galaxy full of superintelligences could do it; I feel like it’s pretty plausible that if a galaxy of superintelligences could do it, a mere million also could do it.)
What if he just said “Some sort of super-powerful nanofactory-like thing?”
I would vastly prefer this phrasing, because it would be an accurate relaying of his beliefs, and would not involve the use of scientific terms that are at best misleading and at worst active misinformation.
As for the “millions of superintelligences”, one of my main cruxes is that I do not think we will have millions of superintelligences in my lifetime. We may have lots of AGI, but I do not believe that AGI=superintelligence. Also, I think that if a few superintelligences come into existence they may prevent others from being built out of self-preservation. These points are probably out of scope here though.
I don’t think a million superintelligences could invent nanotech in a year with only the available resources on Earth. Unlike the galaxy, there is limited computational power available on Earth, and limited everything else as well. I do not think the sheer scale of experimentation required could be assembled in a year without having already invented nanotech. The galaxy situation is fundamentally misleading.
Lastly, I think even if nanotech is invented, it will probably end up being disappointing or limited in some way. This tends to be the case with all technologies. Did anyone predict that we would build an AI that could easily pass a simple Turing test, yet be unable to multiply large numbers together? Hypothetical technologies get to be perfect in our minds, but as something actually gets built, it accumulates shortcomings and weaknesses from the inevitable brushes with engineering.
Cool. Seems you and I are mostly agreed on terminology then.
Yeah we definitely disagree about that crux. You’ll see. Happy to talk about it more sometime if you like.
Re: galaxy vs. earth: The difference is one of degree, not kind. In both cases we have a finite amount of resources and a finite amount of time with which to do experiments. The proper way to handle this, I think, is to smear out our uncertainty over many orders of magnitude. E.g. the first OOM gets 5% of our probability mass, the second OOM gets 5% of the remaining probability mass, and so forth. Then we look at how many OOMs of extra research and testing (compared to what humans have done) a million ASIs would be able to do in a year, compare it to how many OOMs extra (beyond that level) a galaxy’s worth of ASIs would be able to do in many years, and crunch the numbers.
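A minimal sketch of that calculation (the 5%-per-OOM prior and the OOM counts below are illustrative assumptions, not estimates):

```python
# Toy model: P(success) as a function of how many OOMs of research
# inputs (beyond what humanity has done so far) an actor can muster.
# Prior: each successive OOM of required input gets 5% of the
# remaining probability mass, i.e. a geometric distribution.
P_PER_OOM = 0.05  # assumed prior mass per OOM

def p_success(ooms_available: float) -> float:
    """P(required OOMs <= ooms_available) under the geometric prior."""
    return 1 - (1 - P_PER_OOM) ** ooms_available

# Hypothetical input levels, purely for illustration:
for label, ooms in [("million ASIs, one year", 4),
                    ("planet of ASIs, decades", 8),
                    ("galaxy, a million years", 25)]:
    print(f"{label}: {p_success(ooms):.0%}")
```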
It really seems to me like the galaxy thing is just going to mislead, rather than elucidate. I can make my judgements about a system where one planet is converted into computronium, one planet contains a store of every available element, one planet is tiled completely with experimental labs doing automated experiments, etc. But the results of that hypothetical won’t scale down to what we actually care about. For example, it wouldn’t account for the infrastructure that needs to be built to assemble any of those components in bulk.
If someone wants to try their hand at modelling a more earthly scenario, I’d be happy to offer my insights. Remember, this development of nanotech has to predate the AI taking over the world, or else the whole exercise is pointless. You could look at something like “AI blackmails the dictator of a small country into starting a research program” as a starting point.
Personally, I don’t think there is very much you can be certain about, beyond: “this problem is extremely fucking hard”, and “humans aren’t cracking this anytime soon”. I think building the physical infrastructure required to properly do the research in bulk could easily take more than a year on its own.
I agree with the claims “this problem is extremely fucking hard” and “humans aren’t cracking this anytime soon” and I suspect Yudkowsky does too these days.
I disagree that nanotech has to predate taking over the world; that wasn’t an assumption I was making or a conclusion I was arguing for at any rate. I agree it is less likely that ASIs will make nanotech before takeover than that they will make nanotech while still on earth.
I like your suggestion to model a more earthly scenario but I lack the energy and interest to do so right now.
My closing statement is that I think your kind of reasoning would have been consistently wrong had it been used in the past—e.g. in 1600 you would have declared so many things to be impossible on the grounds that you didn’t see a way for the natural philosophers and engineers of your time to build them. Things like automobiles, flying machines, moving pictures, thinking machines, etc. It was indeed super difficult to build those things, it turns out—‘impossible’ relative to the R&D capabilities of 1600 -- but R&D capabilities improved by many OOMs, and the impossible became possible.
I disagree that nanotech has to predate taking over the world; that wasn’t an assumption I was making or a conclusion I was arguing for at any rate. I agree it is less likely that ASIs will make nanotech before takeover than that they will make nanotech while still on earth.
Sorry, to be clear, I wasn’t actually making a prediction as to whether nanotech predates AI takeover. My point is that these discussions are in the context of the question “can nanotech be used to defeat humanity”. If AI can only invent nanotech after defeating humanity, that’s interesting but has no bearing on the question.
I also lack the energy or interest to do the modelling, so we’ll have to leave it there.
My closing statement is that I think your kind of reasoning would have been consistently wrong had it been used in the past—e.g. in 1600 you would have declared so many things to be impossible on the grounds that you didn’t see a way for the natural philosophers and engineers of your time to build them. Things like automobiles, flying machines, moving pictures, thinking machines, etc. It was indeed super difficult to build those things, it turns out—‘impossible’ relative to the R&D capabilities of 1600 -- but R&D capabilities improved by many OOMs, and the impossible became possible.
My closing rebuttal: I have never stated that I am certain that nanotech is impossible. I have only stated that it could be impossible, impractical, or disappointing, and that the timelines for development are long, and would remain so even with the advent of AGI.
If I had stated in 1600 that flying machines, moving pictures, thinking machines, etc. were at least 100 years off, I would have been entirely correct and accurate. And for every great technological change that turned out to be real and transformative, there are a hundred great ideas that turned out to be prohibitively expensive, or impractical, or just plain not workable. And as for the ones that did work out, and did transform the world: it almost always took a long time to build them, once we had the ability to. And even then they started out shitty as hell, and took a long, long time to become as flawless as they are today.
I’m not saying new tech can’t change the world, I’m just saying it can’t do it instantly.
Thanks for discussing with me!
(I forgot to mention an important part of my argument, oops. You wouldn’t have said “at least 100 years off”; you would have said “at least 5000 years off”, because you are anchoring to recent-past rates of progress rather than looking at how rates of progress increase over time and extrapolating. (This is just an analogy / data point, not the key part of my argument, but look at GWP growth rates as a proxy for tech progress rates: according to this, GWP doubling time was something like 600 years back then, whereas it’s more like 20 years now. So 1.5 OOMs faster.) Saying “at least a hundred years off” in 1600 would be like saying “at least 3 years off” today. Which I think is quite reasonable.)
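(Spelling out that arithmetic, using the rough doubling times above:)

```python
import math

# GWP doubling times as a proxy for tech-progress rates
# (the rough figures cited in the comment above).
doubling_1600 = 600  # years
doubling_now = 20    # years

speedup = doubling_1600 / doubling_now  # 30x faster progress
print(f"speedup: {speedup:.0f}x (~{math.log10(speedup):.1f} OOMs)")

# "At least 100 years off" in 1600, rescaled to today's progress rate:
print(f"equivalent claim today: {100 / speedup:.1f} years off")  # ~3.3
```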
That argument does make more sense, although it still doesn’t apply to me, as I would never confidently state a 5000-year forecast, due to the inherent uncertainty of long-term predictions. (My estimates for nanotech also come with high uncertainty, for the record.)
no worries, I enjoyed the debate!