This is factoring in massive transformative AI speedups! I’m guessing you didn’t actually read it? The whole point of the story is an intelligence explosion going very wrong.
I did read your scenario. I’m guessing you didn’t read my articles? I’m closely tracking the use of AI in materials science, and the technical barriers to things like nanotechnology.
“AI” is not a magic word that makes technical advances appear out of nowhere. There are fundamental physical limits to what you can realistically model with finite computing resources, and the technical hurdles to Drexlerian nanotech are absurdly high. Making advances in something like nanotech requires extensive experimentation in dedicated labs. The AI does not have nanotech with which to build those labs, and it takes humans more than a year to build them.
I usually try to avoid the word “impossible” when talking about speculative scenarios… but with a one-year time limit, the scenario you have written is impossible.
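To make the modelling-limits point concrete, here is a rough back-of-envelope sketch (illustrative numbers only, assuming the brute-force route of exact state-vector simulation of a quantum system; real methods trade accuracy for tractability): the memory required grows exponentially with system size, so even modest strongly correlated systems are out of reach for classical computation.

```python
# Rough illustration: memory needed to store the full state vector of a
# quantum system with n two-level degrees of freedom (brute-force exact
# simulation). This is only one modelling route, but it shows the kind of
# exponential wall that finite computing resources run into.

def state_vector_bytes(n: int, bytes_per_amplitude: int = 16) -> int:
    """2**n complex amplitudes at 16 bytes each (complex128)."""
    return bytes_per_amplitude * 2 ** n

for n in (30, 50, 80):
    pb = state_vector_bytes(n) / 1e15  # petabytes
    print(f"n = {n:3d}: {pb:.3g} PB")

# n =  30: 1.72e-05 PB  (~17 GB, feasible on one machine)
# n =  50: 18 PB        (more memory than the largest supercomputers today)
# n =  80: 1.93e+10 PB  (far beyond all storage on Earth)
```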
I think you are assuming the limits of intelligence are at roughly “human genius level”? It’s not about what is practically possible from today’s human research standpoint; it’s about what the theoretical limits are. I took some care to ground the scenario in those limits.
Re experimentation, I include the sentence:
“Analysis of real-time sensor data on a vast scale – audio, video, robotics; microscopy of all types; particle accelerators, satellites and space probes – had allowed it to reverse engineer a complete understanding of the laws of nature.”
If you think this is impossible, then I could add something like “On matters where it had less than ideal certainty, and where there was insufficient relevant data, it arranged for experiments to be run to verify and refine its theories”, without materially affecting the plausibility or outcome of the scenario.
It’s not humans doing the inventing!