As someone who studied materials science, I enjoyed this post and appreciated the effort you spent on making technical work legible for laypeople.
As a general comment, I would like to see a technical/mechanistic breakdown of other threat models for how AI could cause doom very soon – I would be surprised if this were the only example of a theoretical threat that is practically very unlikely, or bottlenecked, for engineering reasons.
I also would like to see such breakdowns, but I think you are drawing the wrong conclusions from this example.
Just because Yudkowsky’s first guess, as an amateur, about how to make nanotech didn’t pan out doesn’t mean that nanotech is impossible for a million superintelligences working for a year. In fact it’s very little evidence. When there are a million superintelligences, they will surely be able to produce many technological marvels very quickly, and for each such marvel, if you had asked Yudkowsky to speculate about how to build it, he would have failed. (Similarly, people in the 19th century could not have correctly guessed how to build the technological marvels produced in the 20th century, yet those marvels still happened, and someone in the 19th century could have predicted that many of them would happen despite not being able to guess how – e.g. heavier-than-air flight.)