What part of the scenario would you dispute? A million superintelligences will probably exist by 2030, IMO; the hard part is getting to superintelligence at all, not getting to a million of them (since you’ll probably have enough compute to make a million copies)
I agree that the question is about the actual scenario, not the galaxy. The galaxy is a helpful thought experiment though; it seems to have succeeded in establishing the right foundations: How many OOMs of various inputs (compute, experiments, genius insights) will be needed? Presumably a galaxy’s worth would be enough. What about a solar system? What about a planet? What about a million superintelligences and a few years? Asking these questions helps us form a credence distribution over OOMs.
And my point is that our credence distribution should be spread out over many OOMs, but since a million superintelligences would be capable of many more OOMs of nanotech research in various relevant dimensions than all humanity has been able to achieve thus far, it’s plausible that this would be enough. How plausible? Idk, I’m guessing 50% or so. I just pulled that number out of my ass, but as far as I can tell you are doing the same with your numbers.
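To make the “credence distribution over OOMs” framing concrete, here’s a toy sketch (every number in it is made up purely for illustration, not a claim): assign a probability to each candidate number of additional OOMs of research input needed, then ask how much of that probability mass a given scenario’s achievable OOMs would cover.

```python
# Toy sketch of a credence distribution over OOMs. All numbers here are
# hypothetical placeholders, not estimates anyone has defended.

# Made-up prior: the required number of extra OOMs is somewhere in
# 1..10, spread uniformly (0.1 credence on each value).
credence = {k: 0.1 for k in range(1, 11)}

def p_success(ooms_achievable):
    """Credence that the scenario supplies at least the required OOMs."""
    return sum(p for k, p in credence.items() if k <= ooms_achievable)

# If (hypothetically) a million superintelligences buy ~5 extra OOMs
# over what humanity has achieved so far, this toy prior gives ~50%:
print(round(p_success(5), 2))  # -> 0.5
```

The point of the sketch is just that the headline probability falls out mechanically once you commit to a distribution over required OOMs and an estimate of achievable OOMs; the disagreement is entirely about those two inputs.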
I didn’t say they’d covertly be building it. It would probably be significantly harder if covert; they wouldn’t be able to get as many OOMs. But they’d probably still get some.
I don’t think using humans would mean going at a human pace. The humans would just be used as actuators. I also think making a specialized automated lab might take less than a year, or else a couple years, not more than a few years. (For a million superintelligences with an obedient human nation of servants, that is)
A million superintelligences will probably exist by 2030
This is a very wild claim to throw out with no argumentation to back it up. Cotra puts a 15% chance on transformative AI by 2036, and I find his assumptions about AI arrival incredibly optimistic. (Also worth noting that transformative AI and superintelligence are not the same thing.) The other thing I dispute is that a million superintelligences would cooperate. They would presumably have different goals and interests: surely at least some of them would betray the others’ plan for a leg-up from humanity.
For a million superintelligences with an obedient human nation of servants, that is
You don’t think some of the people of the “obedient nation” are gonna tip anyone off about the nanotech plan? Unless you think the AIs have some sort of mind-control powers, in which case why the hell would they need nanotech?
I said IMO. In context it was unnecessary for me to justify the claim, because I was asking whether or not you agreed with it.
I take it that not only do you disagree, you agree it’s the crux? Or don’t you? If you agree it’s the crux (i.e. you agree that probably a million cooperating superintelligences with an obedient nation of humans would be able to make some pretty awesome self-replicating nanotech within a few years) then I can turn to the task of justifying the claim that such a scenario is plausible. If you don’t agree, and think that even such a superintelligent nation would be unable to make such things (say, with >75% credence), then I want to talk about that instead.
(Re: people tipping off, etc.: I’m happy to say more on this but I’m going to hold off for now since I don’t want to lose the main thread of the conversation.)
Btw Ajeya Cotra is a woman and uses she/her pronouns :)