It really seems to me like the galaxy thing is just going to mislead, rather than elucidate. I can make my judgements about a system where one planet is converted into computronium, one planet contains a store of every available element, one planet is tiled completely with experimental labs doing automated experiments, etc. But the results of that hypothetical won’t scale down to what we actually care about. For example, it wouldn’t account for the infrastructure that needs to be built to assemble any of those components in bulk.
If someone wants to try their hand at modelling a more earthly scenario, I’d be happy to offer my insights. Remember, the development of nanotech has to predate the AI taking over the world, or else the whole exercise is pointless. You could look at something like “AI blackmails the dictator of a small country into starting a research program” as a starting point.
Personally, I don’t think there is very much you can be certain about, beyond: “this problem is extremely fucking hard”, and “humans aren’t cracking this anytime soon”. I think building the physical infrastructure required to properly do the research in bulk could easily take more than a year on its own.
I agree with the claims “this problem is extremely fucking hard” and “humans aren’t cracking this anytime soon” and I suspect Yudkowsky does too these days.
I disagree that nanotech has to predate taking over the world; that wasn’t an assumption I was making or a conclusion I was arguing for at any rate. I agree it is less likely that ASIs will make nanotech before takeover than that they will make nanotech while still on earth.
I like your suggestion to model a more earthly scenario but I lack the energy and interest to do so right now.
My closing statement is that I think your kind of reasoning would have been consistently wrong had it been used in the past—e.g. in 1600 you would have declared so many things to be impossible on the grounds that you didn’t see a way for the natural philosophers and engineers of your time to build them. Things like automobiles, flying machines, moving pictures, thinking machines, etc. It was indeed super difficult to build those things, it turns out—‘impossible’ relative to the R&D capabilities of 1600—but R&D capabilities improved by many OOMs, and the impossible became possible.
Sorry, to be clear, I wasn’t actually making a prediction as to whether nanotech predates AI takeover. My point is that these discussions are in the context of the question “can nanotech be used to defeat humanity”. If AI can only invent nanotech after defeating humanity, that’s interesting but has no bearing on the question.
I also lack the energy or interest to do the modelling, so we’ll have to leave it there.
My closing rebuttal: I have never stated that I am certain that nanotech is impossible. I have only stated that it could be impossible, impractical, or disappointing, and that the timelines for development are large, and would remain so even with the advent of AGI.
If I had stated in 1600 that flying machines, moving pictures, thinking machines, etc. were at least 100 years off, I would have been entirely correct and accurate. And for every great technological change that turned out to be real and transformative, there are a hundred great ideas that turned out to be prohibitively expensive, or impractical, or just plain not workable. And as for the ones that did work out, and did transform the world: it almost always took a long time to build them, once we had the ability to. And even then they started out shitty as hell, and took a long, long time to become as polished as they are today.
I’m not saying new tech can’t change the world, I’m just saying it can’t do it instantly.
(I forgot to mention an important part of my argument, oops—you wouldn’t have said “at least 100 years off”; you would have said “at least 5000 years off”, because you are anchoring to recent-past rates of progress rather than looking at how rates of progress increase over time and extrapolating. (This is just an analogy / data point, not the key part of my argument, but look at GWP growth rates as a proxy for tech progress rates: according to this, GWP doubling time was something like 600 years back then, whereas it’s more like 20 years now. So 1.5 OOMs faster.) Saying “at least a hundred years off” in 1600 would be like saying “at least 3 years off” today. Which I think is quite reasonable.)
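(The doubling-time arithmetic in the parenthetical above can be checked with a quick sketch. The 600-year and 20-year figures are the comment’s own rough numbers, not independently sourced:)

```python
import math

# Rough GWP doubling times quoted in the comment above (assumed figures):
doubling_1600 = 600.0  # years, circa 1600
doubling_now = 20.0    # years, today

speedup = doubling_1600 / doubling_now  # how much faster progress is now
ooms = math.log10(speedup)              # expressed in orders of magnitude

# Rescale "100 years at 1600's rate" to today's rate of progress:
equivalent_today = 100.0 / speedup

print(f"speedup: {speedup:.0f}x (~{ooms:.1f} OOMs)")
print(f"100 years at 1600's rate ~ {equivalent_today:.1f} years today")
```

A 30x speedup is about 1.5 OOMs, and 100 years divided by that speedup is roughly 3 years, matching the figures in the comment.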
That argument does make more sense, although it still doesn’t apply to me, as I would never confidently state a 5000-year forecast, due to the inherent uncertainty of long-term predictions. (My estimates for nanotech are also highly uncertain, for the record.)
Thanks for discussing with me!
no worries, I enjoyed the debate!