I’ll note that I don’t think any of his arguments are good:
It’s easy to discount “true understanding” as the alternative. But I don’t see why “pattern matching isn’t enough” should translate to “true understanding is needed” rather than just to “something else, which we can’t pinpoint, is needed”.
Which is why I’m way more convinced by Gary Marcus’ examples than by e.g. Scott Alexander. I don’t think they need to be able to describe “true understanding” to demonstrate that current AI is far from human capabilities.
I also don’t really see what makes the track record of those who do think it’s possible with the current paradigm any more impressive.
Breakthroughs may take less time than the model predicts. They may also take more: for example, if much, much better knowledge of the human brain proves necessary, or if other advances in the field are tied together with them.
even if some even more efficient paradigm takes over in the coming years, that could make AGI arrive even sooner, rather than later, than we expect.
Only if it comes before the “due date”.
I’ll clarify that I do expect some form of transformative AI this century, and that I am worried about safety; I’m actually looking for work in the area! But I’m trying to red-team other people who wrote about this because I want to distill the actual (and currently unclear to me) reasons I should expect this, rather than relying on my deference to high-status figures in the movement.
Which is why I’m way more convinced by Gary Marcus’ examples than by e.g. Scott Alexander. I don’t think they need to be able to describe “true understanding” to demonstrate that current AI is far from human capabilities.
My impression is that this debate is mostly people talking past each other. Gary Marcus will often say something to the effect of, “Current systems are not able to do X”. The other side will respond with, “But current systems will be able to do X relatively soon.” People will act like these statements contradict, but they do not.
I recently asked Gary Marcus to name a set of concrete tasks he thinks deep learning systems won’t be able to do in the near-term future. Along with Ernie Davis, he replied with a set of mostly vague and difficult-to-operationalize tasks, collectively constituting AGI, which he thought won’t be accomplished by the end of 2029 (with no probability attached).
While I can forgive people for being a bit vague, I’m not impressed by the examples Gary Marcus offered. All of the tasks seem like the type of thing that could easily be conquered by deep learning given enough trial and error, even if the 2029 deadline is too aggressive. I have yet to see anyone (either Gary Marcus or anyone else) name a credible, specific reason why deep learning will fail in the coming decades. Why exactly, for example, do we think it will stop short of being able to write books (when it can already write essays), or stop short of writing 10,000 lines of code (when it can already write 30)?
Now, some critiques of deep learning seem right: it’s currently too data-hungry, and large training runs are very costly, for example. But of course, these objections only tell us that there might be some even more efficient paradigm that brings us AGI sooner. They’re not a good reason to expect AGI to be centuries away.
Thanks.