Daniel said “I would say that there’s like maybe a 30% or 40% chance that something like this is true, and that the current paradigm basically peters out over the next few years.”
It might have been Carl on the Dwarkesh podcast, but I couldn’t easily find a transcript. But I’ve heard from several others (maybe Paul Christiano?) that they put a 10-40% chance on AGI taking much longer (or even being impossible), either because the current paradigm doesn’t get us there, or because we can’t keep scaling compute exponentially as fast as we have over the last decade once it becomes a significant fraction of GDP.
Yes, Daniel Kokotajlo did say that, but then he also said that if that happens, all the problems will be solved fairly quickly anyway (within 5-10 years), so AGI will only be delayed from maybe 2030 to 2035, or something like that.
Overall, I find his approach to this question to be quite dismissive of possibilities or scenarios other than near-term AGI and overzealous in his belief that either scaling or sheer financial investment (or utterly implausible scenarios about AI automating AI research) will assuredly solve all roadblocks on the way to AGI in very short order. This is not really a scientific approach, but just hand-waving conceptual arguments and overconfident gut intuition.
So, I give Kokotajlo credit for considering this idea in the first place (which is a bit like giving a proponent of the covid lab leak hypothesis credit for at least considering that the virus could have originated naturally), but because he doesn’t really think the consequences of even fundamental problems with the current AI paradigm would end up being particularly significant, I don’t give him credit for a particularly good or wise consideration of the issue.
I’d be very interested in seeing the discussions of these topics from Carl Shulman and/or Paul Christiano that you’re remembering. I am curious to know how deeply they reckon with this uncertainty. Do they mostly dismiss it and hand-wave it away like Kokotajlo? Or do they take it seriously?
In the latter case, it could be helpful for me because I’d have someone else to cite when I’m making the argument that these fundamental, paradigm-level considerations around AI need to be taken seriously when trying to forecast AGI.
Here are some probability distributions from a couple of them.
Thanks. Do they actually give probability distributions for deep learning being the wrong paradigm for AGI, or anything similar to that?
It looks like Ege Erdil said 50% for that question, or something close to it.
Ajeya Cotra said much less than 50%, but she didn’t say how much less.
I didn’t see Daniel Kokotajlo give a number in that post, but we do have the 30-40% number he gave above, on the 80,000 Hours Podcast.
The probability distributions shown in the graphs at the top of the post are only an indirect proxy for that question. For example, even though Kokotajlo’s percentage is 30-40%, he still thinks a failure of the current paradigm would most likely only delay AGI by 5-10 years, so that 30-40% barely shows up in his overall timeline distribution.
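To illustrate why the headline graphs are only an indirect proxy, here’s a quick back-of-the-envelope simulation. All the numbers are made up for illustration (they are not anyone’s actual forecast): a ~35% chance that the paradigm peters out, combined with AGI arriving only ~5 years later in that branch, barely moves the overall timeline distribution.

```python
# Back-of-the-envelope mixture of two hypothetical AGI-arrival distributions:
# one where the current paradigm scales to AGI, one where it peters out and
# AGI is delayed by roughly 5-10 years. All numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

p_paradigm_fails = 0.35                        # ~30-40% chance the paradigm peters out
arrival_if_works = rng.normal(2030, 2, n)      # hypothetical timeline if it scales to AGI
arrival_if_fails = rng.normal(2036, 4, n)      # hypothetical timeline if it peters out

fails = rng.random(n) < p_paradigm_fails
arrival = np.where(fails, arrival_if_fails, arrival_if_works)

print(f"Median arrival (mixture): {np.median(arrival):.0f}")
print(f"Median arrival (paradigm works only): {np.median(arrival_if_works):.0f}")
```

With these made-up inputs, the overall median only shifts by about a year or two, which is why a 30-40% credence in “the paradigm peters out” can be nearly invisible in a headline timeline graph.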
I’m just looking at the post very briefly and not reading the whole thing, so I might have missed the key parts you’re referring to.
Here’s another example of someone in the LessWrong community thinking that LLMs won’t scale to AGI.
Was there another example before this? Steven Byrnes commented on one of my posts from October and we had an extended back-and-forth, so I’m a little bit familiar with his views.