Perhaps it’s missing from the summary, but there is trivially a much stronger argument that doesn’t seem addressed here.
Humans must be pretty close to the stupidest possible things that could design things smarter than them.
This is especially true in the domain of scientific R&D, where we only have even our minimal level of capability because it turns out that intelligence generalizes from e.g. basic tool use and social modeling to other domains.
We know that we can pretty reliably create systems that are superhuman in various domains once we figure out a proper training regime for those domains: AlphaZero is vastly superhuman at chess/go/etc., GPT-3 is superhuman at next-token prediction (to say nothing of GPT-4 or subsequent systems), and so on.
The nature of intelligent search processes is to route around bottlenecks. The argument re: bottlenecks proves too much, and doesn’t even seem to stand up historically. Why didn’t bottlenecks stymie superhuman capabilities in the domains where we’ve already achieved them?
Humanity, today, could[1] embark on a moderately expensive project to enable wide-scale genomic selection for intelligence, which within a single generation would probably produce a substantial number of humans smarter than any who’ve ever lived. Humans are not exactly advantaged in their ability to iterate here, compared to AI.
The general shape of Thorstad’s argument doesn’t really make it clear what sort of counterargument he would admit as valid. Like, yes, humans have not (yet) kicked off any process of obvious, rapid, recursive self-improvement. That is indeed evidence that it might take humans a few decades after they invent computing technology to do so. What evidence, short of us stumbling into the situation under discussion, would be convincing?
[1] Social and political bottlenecks do exist, but the technology is pretty straightforward.