I was specifically responding to your assertion that no one should be able to confidently say X. World-class experts like Ilya Sutskever and Demis Hassabis do confidently say X, and they're on the bullish, optimistic end of the spectrum of AI capabilities and AGI forecasts; they're among the public figures in AI that people cite when they want to make an argument for near-term AGI. I was only directly responding to that narrow point.
It doesn't really have anything to do with different specific definitions of AGI. I'm not sure Sutskever and Hassabis even define AGI the same way, for example. It's just that both of them have said what they think it will take to get to AGI, which is exactly the thing you said no one should be able to confidently say.
On your more general argument that AGI, or something close enough to AGI, is obviously going to be developed soon or has already been developed: no, I don't agree. To quickly boil down the main cruxes of my counterargument: current AI isn't that useful for much, and there are a lot of thorny research problems that people have already been banging their heads against for years, and that we need to make progress on, to make AI more useful.
But I was just trying to respond to your narrow point about no one being able to confidently say X. I wasn't trying to open up a general debate about near-term AGI (let alone about regulating the generative AI systems that currently exist). If you're eager to have that debate, though, I'd be happy to have it in the comments of another post (e.g. any of the ones I've written on the topic, such as the two I just linked to).