Hey, thanks for engaging. I saved the AGI theorizing for last because it’s the most inherently speculative: I am highly uncertain about it, and everyone else should be too.
But the question I’m interested in is whether a million superintelligences could figure it out in a few years or less, since that’s the situation we’ll actually be facing. (If it takes them, say, 10 years or longer, then they’ll probably have better ways of taking over the world.)
I would dispute that “a million superintelligences exist and cooperate with each other to invent MNT” is a likely scenario, but even granting it, my guess would still be no. The usual disclaimer applies: the following is all my personal guessing as a non-experimentalist and non-future-knower:
(1) Is it even in principle possible? Is there some configuration of atoms that would be a general-purpose nanofactory, capable of making more of itself, that uses diamondoid rather than some other material? Or is there no such configuration?
If we restrict to diamondoid, my credence would be very low, somewhere in the 0 to 10% range. The “diamondoid massively parallel builds diamondoid and everything else” process is intensely challenging: only one step needs to be unworkable for the whole thing to be kaput, and we’ve already identified some potential problems (tips sticking together, stray hydrogen getting in the way, etc.). With all materials available, my credence is very high (above 95%) that something self-replicating and more impressive than bacteria and viruses is possible, but I have no idea how impressive the limits of possibility are.
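To put toy numbers on the one-unworkable-step point (the step count and per-step credences below are invented for illustration, not estimates):

```python
# Toy calculation, not a real estimate: if a mechanosynthesis pipeline
# needs every one of N distinct step types to be workable, the chance the
# whole pipeline works is the product of the per-step chances (assuming
# independence), so even mild doubt per step sinks the conjunction fast.

def p_pipeline_works(per_step_credences):
    """Probability that every step is workable, assuming independence."""
    p = 1.0
    for credence in per_step_credences:
        p *= credence
    return p

# e.g. ten required step types, each 80% likely to be workable:
print(p_pipeline_works([0.8] * 10))  # ~0.107: the conjunction is fragile
```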
(2) Is it practical for an entire galactic empire of superintelligences to build in a million years? (Conditional on 1, I think the answer to 2 is ‘of course.’)
I’d agree that this is almost certain conditional on 1.
(3) OK, conditional on the above, the question becomes what the limiting factor is. Is it genius insights about clever binding processes or mini-robo-arm designs exploiting quantum physics to solve the stickiness problems mentioned in this post? Is it mucking around in a laboratory, performing experiments to collect data to refine our simulations? Is it compute and simulation algorithms, to run the simulations and predict which designs should work in theory? Genius insights will probably be pretty cheap to come by for a million superintelligences. I’m torn about whether the main constraint will be empirical data to fit the simulations, or compute to run them.
To be clear, all forms of bonding are “exploiting quantum physics”, in that they are low-energy configurations of electrons interacting with each other according to quantum rules. The answer to the sticky-fingers problem, if there is one, will almost certainly involve the bonds we already know about, such as using weaker van der Waals forces to stick and unstick atoms, as I think is done in biology?
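A back-of-the-envelope comparison of why weak contacts can both stick and unstick (the single-contact well depth below is an assumed representative value, not a measured one):

```python
# Rough comparison: van der Waals contacts are weak enough that
# room-temperature thermal motion can break them, which is what makes
# stick-then-unstick attachment thermally reversible.

K_B = 1.380649e-23   # Boltzmann constant, J/K
N_A = 6.02214076e23  # Avogadro constant, 1/mol

T = 300.0                          # room temperature, K
kT_molar = K_B * T * N_A / 1000.0  # thermal energy, kJ/mol (~2.5)
vdw_contact = 1.0                  # assumed single-contact well depth, kJ/mol

print(f"thermal energy at {T:.0f} K: {kT_molar:.2f} kJ/mol")
print(f"assumed vdW contact: {vdw_contact:.1f} kJ/mol -> comparable, so reversible")
```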
As for the limiting factor: in the case of a million years of superintelligences, it would probably be a long search over a gargantuan set of materials and a gargantuan set of possible designs and approaches: identify the ones that are theoretically promising, whittle them down with computational simulations, and then experimentally create each material and each approach and test them all in turn. The galactic empire would be able to optimize each step, calculating what balance would be fastest overall.
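To make “optimize the balance” concrete, here is a toy search funnel; every pass rate, per-candidate cost, and the starting pool size are invented for illustration:

```python
# Toy search funnel: candidates flow through theory screening, then
# simulation, then experiment; each stage has a pass rate and a
# per-candidate cost, and the cost per surviving design is what a
# planner would tune the balance of stages to minimize.

stages = [
    # (name, pass_rate, cost per candidate in arbitrary units)
    ("theory screen", 1e-3, 1),
    ("simulation",    1e-2, 100),
    ("experiment",    1e-1, 10_000),
]

candidates = 1e9  # assumed size of the initially sampled design space
total_cost = 0.0
for name, pass_rate, cost in stages:
    total_cost += candidates * cost
    candidates *= pass_rate
    print(f"{name}: {candidates:,.0f} survivors, cumulative cost {total_cost:.3g}")
```

Shifting candidates between the cheap simulation stage and the expensive experiment stage is exactly the balance such a civilization would tune.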
The balance will be different at the galactic scale than at the human scale, because they would have orders of magnitude more compute available (including quantum computing), would have a galaxy’s worth of materials available, wouldn’t have to hide from people, etc. So you really have to ask about the actual scenario, not the galaxy.
In the actual scenario of a super-AI trying to covertly build nanotech, the bottleneck would likely be experimental. The problem is a dilemma: if you rely on employing humans in a lab, they go at human pace, and hence will not get the job done in a few years. If you try to eliminate the humans from the production process, you need to build a specialized automated lab… which also requires humans, and would probably take more than a few years.
What part of the scenario would you dispute? A million superintelligences will probably exist by 2030, IMO; the hard part is getting to superintelligence at all, not getting to a million of them (since you’ll probably have enough compute to make a million copies).
I agree that the question is about the actual scenario, not the galaxy. The galaxy is a helpful thought experiment though; it seems to have succeeded in establishing the right foundations: How many OOMs of various inputs (compute, experiments, genius insights) will be needed? Presumably a galaxy’s worth would be enough. What about a solar system? What about a planet? What about a million superintelligences and a few years? Asking these questions helps us form a credence distribution over OOMs.
And my point is that our credence distribution should be spread out over many OOMs, but since a million superintelligences would be capable of many more OOMs of nanotech research in various relevant dimensions than all humanity has been able to achieve thus far, it’s plausible that this would be enough. How plausible? Idk I’m guessing 50% or so. I just pulled that number out of my ass, but as far as I can tell you are doing the same with your numbers.
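For concreteness, here is a minimal sketch of that credence-over-OOMs reasoning; the distribution’s center, its spread, and the achievable-effort number are all made up (chosen so the answer comes out at the 50% I guessed):

```python
# Toy version of "spread your credence over OOMs": put a wide
# distribution over log10(research effort required) and ask how much
# probability mass lies at or below the effort the AIs could supply.

from statistics import NormalDist

# Credence over required effort, in OOMs beyond all human nanotech
# research to date: centered at +4 OOMs with sd = 3 OOMs to reflect
# deep uncertainty. Both parameters are invented for illustration.
required_ooms = NormalDist(mu=4.0, sigma=3.0)

achievable_ooms = 4.0  # assumed extra OOMs a million superintelligences deliver

print(f"P(enough) ~ {required_ooms.cdf(achievable_ooms):.0%}")  # 50% by construction
```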
I didn’t say they’d be building it covertly. It would probably be significantly harder if covert; they wouldn’t be able to get as many OOMs, but they’d probably still get some.
I don’t think using humans would mean going at a human pace. The humans would just be used as actuators. I also think making a specialized automated lab might take less than a year, or at most a couple of years, not more than a few. (For a million superintelligences with an obedient human nation of servants, that is.)
A million superintelligences will probably exist by 2030
This is a very wild claim to throw out with no argumentation to back it up. Cotra puts a 15% chance on transformative AI by 2036, and I find his assumptions about AI arrival incredibly optimistic. (Also worth noting that transformative AI and superintelligence are not the same thing.) The other thing I dispute is that a million superintelligences would cooperate. They would presumably have different goals and interests: surely at least some of them would betray the others’ plan in exchange for a leg-up from humanity.
For a million superintelligences with an obedient human nation of servants, that is
You don’t think some of the people in the “obedient nation” are gonna tip anyone off about the nanotech plan? Unless you think the AIs have some sort of mind-control powers, in which case why the hell would they need nanotech?
I said IMO. In context it was unnecessary for me to justify the claim, because I was asking whether or not you agreed with it.
I take it that not only do you disagree, you agree it’s the crux? Or don’t you? If you agree it’s the crux (i.e. you agree that probably a million cooperating superintelligences with an obedient nation of humans would be able to make some pretty awesome self-replicating nanotech within a few years), then I can turn to the task of justifying the claim that such a scenario is plausible. If you don’t agree, and think that even such a superintelligent nation would be unable to make such things (say, with >75% credence), then I want to talk about that instead.
(Re: people tipping off, etc.: I’m happy to say more on this but I’m going to hold off for now since I don’t want to lose the main thread of the conversation.)
With all materials available, my credence is very high (above 95%) that something self-replicating and more impressive than bacteria and viruses is possible, but I have no idea how impressive the limits of possibility are.
Much of the (purported) advantage of diamondoid mechanisms is that they’re (meant to be) stiff enough to operate deterministically with atomic precision. Without that, you’re likely to end up much closer to biological systems—transport is more diffusive, the success of any step is probabilistic, and you need a whole ecosystem of mechanisms for repair and recycling (meaning the design problem isn’t necessarily easier). For anything that doesn’t specifically need self-replication for some reason, it’ll be hard to beat (e.g.) flow reactors.
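Toy numbers for the probabilistic-steps point (the per-step success rate and the step count are both invented):

```python
# If each placement step only succeeds with probability p, raw yield
# collapses with structure size, which is why some detect-and-redo
# (repair/recycling) machinery becomes necessary.

p = 0.999          # assumed per-step success probability
n_steps = 100_000  # assumed placement steps per finished structure

raw_yield = p ** n_steps   # yield with no error correction at all
retry_steps = n_steps / p  # expected steps if each failure is detected and
                           # redone (1/p attempts per step, geometric dist.)

print(f"raw yield: {raw_yield:.1e}")  # ~3.5e-44, essentially zero
print(f"expected steps with detect-and-retry: {retry_steps:,.0f}")
```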
Btw Ajeya Cotra is a woman and uses she/her pronouns :)