I genuinely don't know how to answer these polls. I find it much easier to think about a shorter timeframe like the next 20 years (although even just that is hard enough) rather than try to predict the future over a timespan of 275+ years.
I find it much easier to say that the creation of AGI (specifically as I define it here, since some people even call o3 "AGI") is extremely unlikely by 2035 (i.e., much less than a 0.01%, or 1 in 10,000, chance), let alone the Singularity.
("Crazy" seems hazy, to the point that it probably needs to be decomposed into multiple different questions to become a true forecast, although I can respect asking people for a vague vibe as a casual exercise, even though it won't resolve unambiguously in retrospect.)
My problem with trying to put a median year on AGI is that I have absolutely no idea how to do that. If science and technology continue, indefinitely, to make progress in the sort of way they have for the last 100-300 years, then it seems inevitable that humans will eventually invent AGI. Maybe there's a chance it's unattainable for reasons most academics and researchers interested in the subject don't currently anticipate.
For instance, one estimate is that to rival the computation of a single human brain, a computer would need to consume somewhere between 300 times and 300 billion times as much electricity as the entire United States currently does. If that estimate is accurate, and if building AGI requires that much computation and that much energy, then, at the very least, AGI is far less attainable than even some relatively pessimistic and cautious academics and researchers might have guessed. Imagine the amount of scientific and technological progress required to produce that much energy, or to make computation commensurately more energy efficient.
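To get a feel for that scale, here is a rough back-of-the-envelope calculation. A minimal sketch in Python, where the US consumption and world generation figures are my own round-number assumptions for illustration, not values from the estimate itself:

```python
# Back-of-the-envelope scale check for the brain-computation estimate above.
# ASSUMPTION: annual US electricity consumption is roughly 4,000 TWh and
# world generation is roughly 30,000 TWh; both are illustrative round figures.
US_CONSUMPTION_TWH = 4_000
WORLD_GENERATION_TWH = 30_000

low_multiple, high_multiple = 300, 300e9  # the estimate's stated range

low_twh = low_multiple * US_CONSUMPTION_TWH
high_twh = high_multiple * US_CONSUMPTION_TWH

print(f"Low end:  {low_twh:.1e} TWh/year "
      f"(~{low_twh / WORLD_GENERATION_TWH:.0f}x current world generation)")
print(f"High end: {high_twh:.1e} TWh/year "
      f"(~{high_twh / WORLD_GENERATION_TWH:.1e}x current world generation)")
```

Under those assumptions, even the low end of the estimate is around 40 times all the electricity the world currently generates, which is what makes the estimate so daunting.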
Let's assume, for the sake of argument, that computation and energy are not issues, and it's just about solving the research problems. I just went on random.org and randomly generated a number between 1 and 275, to represent the range of years asked in this polling question. The result I got was 133 years. 133 years from now is 2158. So, can I do better than that? Can I guess a median year that's more likely to be accurate, or at least more likely to be close, than a random number generator? Do I have a better methodology than using random.org? Why should I think so? This is a fundamental question, and it underlies this whole polling exercise, as well as most if not all forecasting related to AGI. For instance, is there any scientific or historical evidence that anyone has ever been able to predict when scientific research problems would be solved, or when fundamentally new technologies would be developed, with any sort of accuracy at all? If so, where's the evidence? Let's cite it to motivate these exercises. If not, why should we think we're in a different situation now, where we're better able to tell how the future will unfold?
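For what it's worth, that random baseline is trivial to reproduce. Here is a minimal sketch using Python's random module in place of random.org, assuming the poll's horizon starts from 2025 (consistent with an offset of 133 years landing on 2158):

```python
import random

# Reproducing the random-baseline "forecast" above: draw a uniformly
# random year-offset over the poll's 275-year horizon.
# ASSUMPTION: the range starts from the current year, taken here as 2025.
CURRENT_YEAR = 2025

offset = random.randint(1, 275)  # inclusive on both ends, like random.org
print(f"Random-baseline AGI year: {CURRENT_YEAR + offset}")
```

Any serious forecasting methodology ought to be able to beat this one-liner; the question is whether we have any evidence that one can.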
When I think about forecasting when the fundamental science and technology problems prerequisite to building AGI will be solved, the mental picture I have of the long-term future is of a thick fog: I can see clearly only a little distance in front of me, can see foggily a little bit further, and after that everything descends into completely opaque gray-white mist. Is 2158 the median year? I tried random.org again. I got 58, which would be 2083. Which year is more likely to be the median, 2083 or 2158? Are they equally likely to be the median? I have no idea how to answer these questions. For all I know, they might be fundamentally impossible to answer. The physicist David Deutsch makes the argument (e.g., in this video at 6:25) that we can't predict the content of scientific knowledge we don't yet have, since predicting the content would be equivalent to knowing it now, and we don't yet know it. This makes sense to me.
We don't yet know what the correct theory of intelligence is. We don't know the content of that theory. The theory that human-like intelligence is just current-generation deep neural networks scaled up 1,000x would imply a near-term median year for AGI. Other theories of intelligence would imply something else. If the specific micro-architecture of the whole human brain is what's required for human-like intelligence (or general intelligence), then AGI is probably quite far away, since we don't yet know that micro-architecture and don't yet have the tools in neuroscience to find it out. Even if we did know it, reconstructing it in a simulation would pose its own set of scientific and technological challenges. Since we don't know what the correct theory of intelligence is, we don't know how hard it will be to build an intelligence like our own using computers, and therefore we can't predict when it will happen.
My personal view that AGI is extremely unlikely (much less than 0.01% likely) before the end of 2035 comes from my beliefs that 1) human-like intelligence is definitely not current-gen deep neural networks scaled up 1,000x, 2) the correct theory of intelligence is not something nearly so simple or easy (e.g., if AGI could have been solved by symbolic AI, it probably would have been solved a long time ago), and 3) it's extremely unlikely that all the necessary scientific discoveries and technological breakthroughs, from fundamental theory to practical implementation, will happen within the next ~9 years. Scientists, philosophers, and AI researchers have been trying to understand the fundamental nature of intelligence for a long time. The foundational research for deep learning goes back around 40 years, and it built on research that's even older. Today, if you listen to ambitious AI researchers like Yann LeCun, Richard Sutton, François Chollet, and Jeff Hawkins, each is confident in a research roadmap to AGI, but they are four completely different roadmaps based on completely different ideas. So it's not as if the science and philosophy of intelligence is converging toward any particular theory or solution.
That's a long, philosophical answer to this quick poll question, but I believe that's the crux of the whole matter.