I think the fundamental assumption that aligned AGI would cause dramatic economic growth is simply wrong. Like super duper wrong.
It’s important to differentiate AGI from both superintelligence and God. Most EAs are imagining an omnipotent and omniscient being… not AGI.
Is being a high-IQ person in Russia or India that useful? Why, then, do their smart people migrate to the US? Because the US can put that intelligence to work (Sundar Pichai), while at home they might have become a mid-level bureaucrat (Sundar Pichai’s dad).
Say you create an AGI, and it comes out and tells you, “I’ve solved fusion, let’s build plants”… and you go, “Cool bro, need He-3? It’s on the moon, and it costs too much to get there. Figure out how to get there cheaper first.”
So there are physical, logistical, real limits to what can be achieved in the physical world with lots of intelligence. Economic limits too. And there won’t be just one AGI; there are likely to be multiple. At the very least, latency means one on Earth and one on Mars, and I suspect latency will dictate multiple on Earth as well.
So we are likely to find that bottlenecks other than intelligence limit economic growth, and these will have to be worked through. And since AGI, again, is not omniscient, it will take time to figure things out. Tell it to solve aging, and it comes back asking for 100 years of compute time… which is just not feasible.
This is what weirds me out about Yud and other EAs… it’s clearly a religious belief that we are creating an omnipotent being, rather than a perfectly ordinary intelligent creature that is still limited by the availability of data, compute, data storage, network latency, etc.