I find that 57% figure very difficult to believe; even 10% would be a stretch.
Having intelligent labor that can be quickly produced in factories (by companies that have been able to increase output by millions of times over decades), and that can do tasks including improving the efficiency of robots (already cheap relative to humans wherever we have the AI to direct them, and that's before reaping economies of scale from producing billions of them) and of solar panels (which already have energy payback times on the order of one year in sunny areas), along with abundant untapped energy resources orders of magnitude greater than what our current civilization taps on Earth (and a billionfold greater for the Solar System), makes it very difficult to make the "AGI but no TAI" world coherent.
Cyanobacteria can double in 6-12 hours under good conditions, and mice can grow their population more than 10,000x in a year. So machinery can be made to replicate quickly, and trillions of von Neumann-equivalent researcher-years (but with AI advantages) can move us further toward that from existing technology.
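As a rough back-of-the-envelope check of those replication rates (my own arithmetic, just to make the orders of magnitude concrete):

$$10{,}000\times \text{ per year} \approx 2^{13.3} \;\Rightarrow\; \text{roughly one doubling every } \tfrac{365}{13.3} \approx 27 \text{ days (mice)}$$

$$\text{a 6-12 h doubling time} \;\Rightarrow\; \tfrac{8760 \text{ h/yr}}{12 \text{ h}} \approx 730 \text{ to } 1460 \text{ potential doublings per year (cyanobacteria, resource-unlimited)}$$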
I predict that cashing out the given reasons into detailed descriptions will result in inconsistencies or very implausible requirements.
Thanks for these comments and for the chat earlier!
It sounds like, to you, AGI means roughly “human minds but better”* (maybe that’s the case for everyone who’s thought deeply about this topic, I don’t know). On the other hand, the definition I used here, “AI that can perform a significant fraction of cognitive tasks as well as any human and for no more money than it would cost for a human to do it”, falls well short of that on at least some reasonable interpretations. I definitely didn’t mean to use an unusually weak definition of AGI here (I was partly basing it on this seemingly very weak definition from LessWrong, i.e. “a machine capable of behaving intelligently over many domains”), but maybe I did.
Under at least some interpretations of “AI that can perform a significant fraction of cognitive tasks as well as any human and for no more money than it would cost for a human to do it”, you don’t (as I understand it) think that AGI strongly implies TAI; but my impression is that you don’t think AGI under this definition is the right thing to analyse.
Given your AGI definition, I probably want to give a significantly larger probability to “AGI implies TAI” than I did in this post (though on an inside view I’m probably not in “90% seems on the low end” territory, having not thought about this enough to have that much confidence).
I probably also want to push my AGI timelines back at least a bit (e.g. by checking what AGI definitions my outside-view sources were using; though I didn’t do this very thoroughly in the first place, so the update might not be very large).
*I probably missed some nuance here, please feel free to clarify if so.