If you have a technical understanding of current AIs, do you truly believe there are any major obstacles left? The kind of problems that AGI companies could not reliably tear down with their resources? If you do, say so in the comments.
I’ve just completed a master’s degree in ML, though not in deep learning. I’m very sure there are still major obstacles to AGI that will not be overcome in the next 5 years, nor in the next 20. Primary among them is robust handling of out-of-distribution (OOD) situations.
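A minimal sketch of the kind of OOD failure being described, assuming a toy two-blob dataset and scikit-learn's LogisticRegression (both are illustrative assumptions, not anything from the original comment): on inputs shifted away from the training distribution, accuracy collapses to chance while the model's reported confidence stays near 1.

```python
# Toy illustration (hypothetical data): a classifier trained on one input
# distribution, evaluated on a shifted distribution. Accuracy drops to
# roughly chance, but the model remains highly confident.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# In-distribution training data: two well-separated Gaussian blobs.
X_train = np.vstack([rng.normal(-2, 1, (500, 2)), rng.normal(2, 1, (500, 2))])
y_train = np.array([0] * 500 + [1] * 500)

clf = LogisticRegression().fit(X_train, y_train)

# Out-of-distribution test data: same labels, but inputs shifted far from
# anything seen during training.
X_ood = np.vstack([rng.normal(-2, 1, (500, 2)),
                   rng.normal(2, 1, (500, 2))]) + np.array([0.0, 25.0])
y_ood = y_train

acc = clf.score(X_ood, y_ood)
mean_conf = clf.predict_proba(X_ood).max(axis=1).mean()
print(f"OOD accuracy:        {acc:.2f}")   # near 0.5 (chance)
print(f"Mean max confidence: {mean_conf:.2f}")  # near 1.0 despite being wrong
```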
Look at self-driving cars as an example. They were a test case for AI companies, requiring much less than AGI to succeed, and so far they have failed despite billions in investment. We went from hearing about fleets of self-driving cars that would be on the market in 2021 or 2022 to estimates that now lean towards decades from now.
I will publicly predict now that there will be no AGI in the next 20 years. I expect significant achievements will be made, but only in areas where large amounts of relevant training data exist or can be easily generated. AI will also struggle to catch on in areas like healthcare, where erroneous outputs cause serious damage and lawsuits.
I will also predict that there may be a “stall” in AI progress in a few years, once the low-hanging-fruit problems have been picked off and the remaining problems, like self-driving cars, aren’t well suited to the current strengths of AI.
just so we’re clear: self-driving cars are, in fact, one of the key factors pushing timelines down, and they’ve also done some pretty impressive work on non-killeveryone-proof safety which may be useful as hunch seeds for ainotkilleveryoneism.
they’re not the only source of interesting research, though.
also, I don’t think most of us who expect agi soon expect reliable agi soon. I certainly don’t expect reliability to come early at all by default.
Aren’t there self-driving cars on the road in a few cities now? (Cruise and maybe Zoox, if I recall correctly).