I don’t think I understand the structure of this estimate, or else I might understand and just be skeptical of it. Here are some quick questions and points of skepticism.
Starting from the top, you say:
We estimate optimistically that there is a 60% chance that all the fundamental algorithmic improvements needed for AGI will be developed on a suitable timeline.
This section appears to be an estimate of all-things-considered feasibility of transformative AI, and draws extensively on evidence about how lots of things go wrong in practice when implementing complicated projects. But then in subsequent sections you talk about how even if we “succeed” at this step there is still a significant probability of failing because the algorithms don’t work in a realistic amount of time.
Can you say what exactly you are assigning a 60% probability to, and why it’s getting multiplied with ten other factors? Are you saying that there is a 40% chance that by 2043 AI algorithms couldn’t yield AGI no matter how much serial time and compute they had available? (It seems surprising to claim that even by 2023!) Presumably not that, but what exactly are you giving a 60% chance?
(ETA: after reading later sections more carefully I think you might be saying 60% chance that our software is about as good as nature’s, and maybe implicitly assuming there is a ~0% chance of being significantly better than that or building TAI without that? I’m not sure if that’s right though, if so it’s a huge point of methodological disagreement. I’ll return to this point later.)
In section 2 you say:
Transformative AGI by 2043 depends critically on the development of non-sequential reinforcement learning training methods with no real human analogue.
And give this a 40% probability. I don’t think I understand this claim or its justification. (This is related to my uncertainty about what your “60%” in the last section was referring to.)
It seems to me that if you had human-like learning you would be able to produce transformative AGI by 2043:
In fact it looks like human-like learning would enable AI to learn human-level physical skills:
10 years is sufficient for humans to learn most physical skills from scratch, and you are talking about 20 year timelines. So why is the serial time for learning even a candidate blocker?
Humans learn new physical skills (including e.g. operating unfamiliar machinery) within tens of hours. This requires transfer from other things humans have learned, but those tasks are not always closely related (e.g. I learn to drive a car based on experience walking) and AI systems will have access to transfer from tasks that seem if anything more similar (e.g. prediction of the relevant physical environments, predictions of expert behavior in similar domains, closed-loop behavior in a wide range of simulated environments, closed-loop behavior on physical tasks with shorter timescales, behavior in virtual environments...).
We can easily run tens of thousands of copies of AI systems in parallel. Existing RL is massively parallelizable. Human evolution gives no evidence about the difficulty of parallelizing learning in this way. Based on observations of human learning it seems extremely likely to me that parallelization 10,000 fold can reduce serial time by at least 10x (which is all that is needed). Extrapolations of existing RL algorithms seem to suggest serial requirements more like 10,000 episodes, with almost all of the compute used to run a massive number of episodes in parallel, which would be 1 year even for a 1-hour task. It seems hard to construct physical tasks that don’t provide rich feedback after even shorter horizons than 1 hour (and therefore suitable for a gradient descent step given enough parallel samples) so this seems pretty conservative.
Regardless of learning physical tasks, humans are able to learn to do R&D after 20 years of experience. AI systems operate at 10x speed and most environments relevant to hardware and software R&D can be sped up by at least 10x. So it seems like AI systems could be human-level at a wide range of tasks, sufficient to accelerate further AI progress, even if they just used non-parallelized human learning over 2 years. If you really thought physical tasks were somehow impossibly difficult (which I don’t think is justified) then this becomes the dominant path to AGI. This is particularly important because multiple of your later points also seem to rest on the distinctive difficulty of automating physical tasks, which should just shift your probability further and further to an explosion of automated R&D which drives automation of physical labor.
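The serial-time arithmetic behind the parallelization point above can be sketched in a few lines (all numbers are the rough figures from that argument, not measurements):

```python
# Rough figures from the argument above; none of these are measured values.
serial_episodes = 10_000       # extrapolated serial requirement for existing RL
episode_hours = 1.0            # task horizon with rich feedback

serial_years = serial_episodes * episode_hours / (24 * 365)
print(f"serial wall-clock time: {serial_years:.2f} years")   # about 1.1 years

# A 10,000-fold parallelization only needs to buy ~10x serial compression
# relative to the ~10 years humans take to learn most physical skills:
human_learning_years = 10
implied_compression = human_learning_years / serial_years
print(f"required serial speedup vs humans: {implied_compression:.0f}x")
```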
I think you are disagreeing with these claims, but I’m not sure about that. For example, you mention parallelizable learning but seem to give it <10% probability despite the fact that it is the overwhelmingly dominant paradigm in current practice and you don’t say anything about why it might not work.
(This isn’t super relevant to my mainline view, since in fact I think AI is much worse at learning quickly than humans and will likely be transformative way before reaching parity. This is related to the general point about being unnecessarily conjunctive, but here I’m just trying to understand and express disagreement with the particular path you lay out and the probabilities you assign.)
In section 3 you say:
Software and hardware efficiencies combine to surpass current computation cost efficiency, and/or the efficiency of the human brain, by at least five orders of magnitude.
I think you claim that each synapse firing event requires about 1-10 million floating point operations (with some error bars), and that there is only a 16% chance that computers will be able to do enough compute for $25/hour.
This is probably the part of the report I am most skeptical of:
How do you square this with our experience in AI so far? Overall you seem to think it is possible that AI will be as effective as brains but unlikely to be much better. But if a biological neuron is probably ten million times more efficient than an artificial neuron, then aren’t we already much better than biology in tons of domains? Is there any task for which performance can be quantified and where you think this estimate provides a sane guideline to the inference-time compute required to solve the task? Shouldn’t you be putting significant probability on our algorithms being radically better than biology in many important ways?
Replicating the human visual cortex should take millions of times more compute than we have ever used, yet we can match human performance on a range of quantifiable perceptual tasks and are making rapid progress, and I’m actually not aware of tasks where it’s even plausible that we are 6 orders of magnitude away.
Learned policies for robotic control using only hundreds of thousands of neurons already seem to reach comparable competence to insects, but you should expect it to be significantly worse than a nematode. Aren’t you surprised to observe successful grasping and walking?
Traditional control systems like those used by Boston Dynamics seem to produce more competent motor control than small animals despite using amounts of compute close to 1 flop per synapse. You focus on ML, but I don’t know why—isn’t classical control a more reasonable point of comparison to small animals that have algorithms designed directly by evolution rather than learned in a lifetime, and doesn’t your argument very strongly predict that it should be impossible?
Qualitatively it’s hard to compare GPT-3 to humans, but just to be clear you are saying that it should behave like a brain with ~1000 neurons. This is at least surprising (e.g. I think it would have led to big misses if it had been used to make any qualitative predictions), and to me casts doubt on a story where you can’t get transformative AI using less than the analog of a hundred billion neurons.
Your biological analysis seems to hinge on the assertion that precise simulation of neurons is necessary to get similar levels of computational utility (and even from there the analysis is pretty conservative, e.g. by assuming that you need to perform that very expensive computation thousands of times a second). I don’t personally consider this plausible, and I think the main argument given for it is “if not, why would we have all these proteins?”, which I don’t find persuasive (since synapses are under a huge number of important constraints and serve many important functions beyond implementing computationally complex functions at inference time). I’ve seen zero candidates for useful purposes for such an incredible amount of local computation with negligible quantities of long-distance communication, and there are very few examples of human-designed computations structured in this way / it seems to involve an extremely implausible model of what neurons are doing (apparently some nearly-embarrassingly parallelizable task with work concentrated in individual neurons?). I don’t really want to argue with this at length, but I want to flag that you are very confident about it and it drives a large part of your estimate, whereas something like 50-50 seems more appropriate even before updating on the empirical success of ML.
In general you seem to be making the case very unnecessarily conjunctive—you are asking how likely it is that we will find algorithms as good as the brain, and then also build computers that operate at the Landauer limit (as you are apparently confident the brain does), and then also deploy AI in a way that is competitive at a $25/hour price point, and so on. But in fact one of these areas can outperform your benchmark (and if you are right in this section, then it’s definitely the case that we are radically more efficient than biology on many tasks already!), and it seems like you are dropping a lot of probability by ignoring that possibility. It’s like asking about the probability that a sum of 5 normal distributions will be above the mean, and estimating it’s 1/2^5 because each of 5 normal distributions needs to be above its mean.
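The normal-distribution analogy is easy to check numerically; here is a minimal simulation (mine, purely for illustration):

```python
import numpy as np

# P(each of 5 independent normals exceeds its mean) is (1/2)^5,
# but P(their SUM exceeds its mean) is 1/2: shortfalls in one term can be
# compensated by the others, which is the point of the analogy.
rng = np.random.default_rng(0)
samples = rng.standard_normal((1_000_000, 5))

p_all_above = (samples > 0).all(axis=1).mean()   # close to 1/32, i.e. ~0.031
p_sum_above = (samples.sum(axis=1) > 0).mean()   # close to 0.5

print(p_all_above, p_sum_above)
```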
(ETA: this criticism of section 3 is unfair: you do discuss the prospect of much better than human performance in the 2-page section “On the computational intensity of AGI,” and indeed this plays a completely central role in your bottom line estimate. But I’m still left wondering what the earlier 60% and 40% (and all the other numbers!) are supposed to represent, given that you are apparently putting all the work of “maybe humans will design efficient algorithms that are as good as the brain” in this section. You also don’t really discuss existing experience, where your estimates already appear to be many orders of magnitude off in domains where it is easiest to make comparisons between biology and ML (like vision or classical control) and where I don’t see how to argue we aren’t already 1000x better than biology using your 10 million flops per synapse number. Aside from me disagreeing with your mean, you describe these as conservative error bars since they put 20% probability on 1000x improvements over biology, but I think that’s really not the case given that it includes uncertainty about the useful compute done by the brain (where you already disagree by >>3 OOMs with plausible estimates) as well as algorithmic progress (where 1000x improvements over 20 years seem common both within software and ML).)
I’ll stop here rather than going on to sections 4+, though I think I have a lot to object to along similar lines (primarily that the story is being made unreasonably conjunctive).
Overall your estimation strategy looks crazy to me and I’m skeptical of the implicit claim that this kind of methodology would perform well in historical examples. That said, if this sort of methodology actually does work well in practice then I think that trumps some a priori speculation and would be an important thing for me to really absorb. Your personal forecasting successes seem like a big part of the evidence for that, so it might be helpful to understand what kinds of predictions were involved and how methodologically analogous they are. Superficially it looks like the SciCast technology forecasting tournament is by far the most relevant; is there a pointer to the list of questions (other info like participants and list of predictions would also be awesome if available)? Or do you think one of the other items is more relevant?
Excellent comment; thank you for engaging in such detail. I’ll respond piece by piece. I’ll also try to highlight the things you think we believe but don’t actually believe.
Section 1: Likelihood of AGI algorithms
“Can you say what exactly you are assigning a 60% probability to, and why it’s getting multiplied with ten other factors? Are you saying that there is a 40% chance that by 2043 AI algorithms couldn’t yield AGI no matter how much serial time and compute they had available? (It seems surprising to claim that even by 2023!) Presumably not that, but what exactly are you giving a 60% chance?”
Yes, we assign a 40% chance that we don’t have AI algorithms by 2043 capable of learning to do nearly any human task with realistic amounts of time and compute. Some things we probably agree on:
Progress has been promising and investment is rising.
Obviously the development of AI that can do AI research more cheaply than humans could be a huge accelerant, with the magnitude depending on the value-to-cost ratio. Already GPT-4 is accelerating my own software productivity, and future models over the next twenty years will no doubt be leagues better (as well as more efficient).
Obviously slow progress in the past is not great evidence of slow progress in the future, as any exponential curve shows.
But as we discuss in the essay, 20 years is not a long time, much easier problems are taking longer, and there’s a long track record of AI scientists being overconfident about the pace of progress (counterbalanced, to be sure, by folks on the other side who were overconfident that certain things would never be achieved and subsequently were). These factors give us pause, so while we agree it’s likely we’ll have algorithms for AGI by 2043, we’re not certain of it, which is why we forecast 60%. We think forecasts higher than 60% are completely reasonable, but we personally struggle to justify anything near 100%.
Incidentally, I’m puzzled by your comment and others that suggest we might already have algorithms for AGI in 2023. Perhaps we’re making different implicit assumptions of realistic compute vs infinite compute, or something else. To me, it feels clear we don’t have the algorithms and data for AGI at present.
“(ETA: after reading later sections more carefully I think you might be saying 60% chance that our software is about as good as nature’s, and maybe implicitly assuming there is a ~0% chance of being significantly better than that or building TAI without that? I’m not sure if that’s right though, if so it’s a huge point of methodological disagreement. I’ll return to this point later.)”
Lastly, no, we emphatically do not assume a ~0% chance that AGI will be smarter than nature’s brains. That feels like a ridiculous and overconfident thing to believe, and it pains me that we gave this impression. Already GPT-4 is smarter than me in ways, and as time goes on, the number of ways AI is smarter than me will undoubtedly grow.
Section 2: Likelihood of fast reinforcement training
10 years is sufficient for humans to learn most physical skills from scratch, and you are talking about 20 year timelines. So why is the serial time for learning even a candidate blocker?
Agree—if we had AGI today, this would not be a blocker. This becomes a greater and greater blocker the later AGI is developed. E.g., if AGI is developed in 2038, we’d have only 4 years to train it to do nearly every human task. So this factor is heavily entangled with the timeline on which AGI is developed.
(And obviously the development of AGI is not going to be a clean line crossed in a particular year, but the idea is the same even applied to AGI systems developed gradually and unevenly.)
We can easily run tens of thousands of copies of AI systems in parallel. Existing RL is massively parallelizable. Human evolution gives no evidence about the difficulty of parallelizing learning in this way. Based on observations of human learning it seems extremely likely to me that parallelization 10,000 fold can reduce serial time by at least 10x (which is all that is needed). Extrapolations of existing RL algorithms seem to suggest serial requirements more like 10,000 episodes, with almost all of the compute used to run a massive number of episodes in parallel, which would be 1 year even for a 1-hour task. It seems hard to construct physical tasks that don’t provide rich feedback after even shorter horizons than 1 hour (and therefore suitable for a gradient descent step given enough parallel samples) so this seems pretty conservative.
Agree on nearly everything here. I think the crux on which we differ is that we think interaction with the real world will be a substantial bottleneck (and therefore being able to run 10,000 parallel copies may not save us).
As I mentioned to Zach below:
With AlphaZero in particular, fast reinforcement training is possible because (a) the game state can be efficiently modeled by a computer and (b) the reward can be efficiently computed by a computer.
In contrast, look at a task like self-driving. Despite massive investment, our self-driving AIs are learning more slowly than human teenagers. Part of the reason for this is that conditions (a) and (b) no longer hold. First, our simulations of reality are imperfect, and therefore fleets must be deployed to drive millions of miles. Second, calculating reward functions (i.e., “this action causes a collision”) is expensive and typically requires human supervision (e.g., test drivers, labelers), as the actual reward (e.g., a real-life collision) is even more expensive to acquire. This bottleneck of expensive feedback is partly why we can’t just throw more GPUs at the problem and learn self-driving overnight in the way we can with Go.
To recap, we can of course parallelize a million self-driving car AIs and have them drive billions of miles in simulation. But that only works to the extent that (a) our simulations reflect reality and (b) we have the compute resources to do so. And so real self-driving car companies are spending billions on fleets and human supervision in order to gather the necessary data. In general, if an AGI cannot easily and cheaply simulate reality, it will have to learn from real-world interactions. And to the extent that it needs to learn from interactions with the consequences of its earlier actions, that training will need to be sequential.
This isn’t super relevant to my mainline view, since in fact I think AI is much worse at learning quickly than humans and will likely be transformative way before reaching parity.
Agreed. Our expectation is that early AGIs will be expensive and uneven. If they end up being incredibly sample efficient, then this task will be much easier than we’ve forecasted.
In general, I’m pretty open to updating higher here. I don’t think there are any insurmountable barriers here, but I have a sense that this will be both hard to do (as self-driving illustrates) and unlikely to be done (as all sorts of tasks not currently automated illustrate). My coauthor is a bit more negative on this factor than me and may chime in with his own thoughts later.
I personally struggle to imagine how an AlphaZero-like algorithm would learn to become the world’s best swim instructor via massively parallelized reinforcement learning on children, but that may well be a failure of my imagination. Certainly one route is massively parallelized RL to become excellent at AI R&D, then massively parallelized RL to become excellent at many tasks, and then quickly transferring that understanding to teaching children to swim, without any children ever drowning.
Section 3: Operating costs
Here, I think you ascribe many beliefs to us which we do not hold, and I apologize for not being clearer. I’ll start by emphasizing what we do not believe.
Overall you seem to think it is possible that AI will be as effective as brains but unlikely to be much better.
We do not believe this.
AI is already vastly better than human brains at some tasks, and the number of tasks on which AI is superhuman will rise with time. We do expect that early AGIs will be expensive and uneven, as all earliest versions of a technology are. And then they will improve from there.
just to be clear you are saying that [GPT-3] should behave like a brain with ~1000 neurons.
We do not believe this.
Your biological analysis seems to hinge on the assertion that precise simulation of neurons is necessary to get similar levels of computational utility
We do not believe this.
build computers that operate at the Landauer limit (as you are apparently confident the brain does)
We do not believe this. We do not believe that brains operate at the Landauer limit, nor do we believe computers will operate at this limit by 2043.
Incidentally, I studied the Landauer limit deeply during my physics PhD and could write an essay on the many ways it’s misinterpreted, but will save that for another day. :)
It’s like asking about the probability that a sum of 5 normal distributions will be above the mean, and estimating it’s 1/2^5 because each of 5 normal distributions needs to be above its mean.
We do not believe this.
To multiply these probabilities together, one cannot multiply their unconditional expectations; rather, one must multiply their cascading conditional probabilities. You may disagree with our probabilities, but our framework specifically addresses this point. Our unconditional probabilities are far lower for some of these events, because we believe they will be rapidly accelerated conditional on progress in AGI.
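To make the distinction concrete, here is a toy sketch; the numbers are invented for illustration and are not our actual forecasts:

```python
# Chain rule: P(A and B) = P(A) * P(B | A), not P(A) * P(B).
p_a = 0.6            # P(A): e.g., AGI algorithms are developed in time
p_b_given_a = 0.4    # P(B | A): the next step succeeds, GIVEN the first did
p_b_uncond = 0.2     # unconditional P(B): lower, since B is unlikely without A

joint_correct = p_a * p_b_given_a   # 0.24: what the framework multiplies
joint_naive = p_a * p_b_uncond      # 0.12: understates the joint probability
print(joint_correct, joint_naive)
```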
Forecasting credentials
Your personal forecasting successes seem like a big part of the evidence for that, so it might be helpful to understand what kinds of predictions were involved and how methodologically analogous they are. Superficially it looks like the SciCast technology forecasting tournament is by far the most relevant; is there a pointer to the list of questions (other info like participants and list of predictions would also be awesome if available)? Or do you think one of the other items is more relevant?
Honestly, I wouldn’t put too much weight on my forecasting success. It’s mostly a mix of common sense, time invested, and luck. I do think it reflects a decent mental model of how the world works, which leads to decent calibration for what’s 3% likely vs 30% likely. The main reason I mention it in the paper is just to help folks realize that we’re not wackos predicting 1% because we “really feel” confident. In many other situations (e.g., election forecasting, sports betting, etc.) I often find myself on the humble and uncertain side of the fence, trying to warn people that the world is more complicated and unpredictable than their gut is telling them. Even here, I consider our component forecasts quite uncertain, ranging from 16% to 95%. It’s precisely our uncertainty about the future which leads to a small product of 0.4%. (From my point of view, you are staking out a much higher confidence position in asserting that AGI algorithms are very likely to be developed and that rapid self-improvement is very likely.)
Example SciCast questions included:
What will be the highest reported efficiency of a perovskite photovoltaic cell by date X
What will be the volume of deployed solar in the USA by date X
At the Brazil World Cup, how far will the paraplegic exoskeleton kick the ball for the opening kickoff
Will Amazon offer drone delivery by date X
Will physicists discover Y by date X
Most forecasts related to scientific discoveries and technological inventions and had timescales of months to years.
Conclusion
From your comment, I think the biggest crux between us is the rate of AI self-improvement. If the rate is lower, the world may look like what we’re envisioning. If the rate is higher, progress may take off in a way not well predicted by current trends, and the world may look more like what you’re envisioning. This causes our conditional probabilities to look too low and too independent, from your point of view. Do you think that’s a fair assessment?
Lastly, can I kindly ask what your cascading conditional probabilities would be in our framework? (Let’s hold the framework constant for this question, even if you prefer another.)
If you disagree with our admittedly imperfect guesses, we kindly ask that you supply your own preferred probabilities (or framework modifications). It’s easier to tear down than build up, and we’d love to hear how you think this analysis can be improved.
Incidentally, I’m puzzled by your comment and others that suggest we might already have algorithms for AGI in 2023. Perhaps we’re making different implicit assumptions of realistic compute vs infinite compute, or something else. To me, it feels clear we don’t have the algorithms and data for AGI at present.
I would guess that more or less anything done by current ML can be done by ML from 2013 but with much more compute and fiddling. So it’s not at all clear to me whether existing algorithms are sufficient for AGI given enough compute, just as it wasn’t clear in 2013. I don’t have any idea what makes this clear to you.
Given that I feel like compute and algorithms mostly trade off, hopefully it’s clear why I’m confused about what the 60% represents. But I’m happy for it to mean something like: it makes sense at all to compare AI performance vs brain performance, and expect them to be able to solve a similar range of tasks within 5-10 orders of magnitude of the same amount of compute.
But as we discuss in the essay, 20 years is not a long time, much easier problems are taking longer, and there’s a long track record of AI scientists being overconfident about the pace of progress (counterbalanced, to be sure, by folks on the other side who are overconfident about things that would not be achieved and subsequently were).
If 60% is your estimate for “possible with any amount of compute,” I don’t know why you think that anything is taking a long time. We just don’t get to observe how easy problems are if you have plenty of compute, and it seems increasingly clear that weak performance is often explained by limited compute. In fact, even if 60% is your estimate for “doable with similar compute to the brain,” I don’t see why you are updating from our failure to do tasks with orders of magnitude less compute than a brain (even before considering that you think individual neurons are incredibly potent).
Section 2: Likelihood of fast reinforcement training
I still don’t fully understand the claims being made in this section. I guess you are saying that there’s a significant chance that the serial time requirements will be large and that will lead to a large delay? Like maybe you’re saying something like: a 20% chance that it will add >20 years of delay, a 30% chance of 10-20 years of delay, a 40% chance of 1-10 years of delay, a 10% chance of <1 year of delay?
In addition to not fully understanding the view, I don’t fully understand the discussion in this section or why it’s justifying this probability. It seems like if you had human-level learning (as we are conditioning on from sections 1+3) then things would probably work in <2 years unless parallelization is surprisingly inefficient. And even setting aside the comparison to humans, such large serial bottlenecks aren’t really consistent with any evidence to date. And setting aside any concrete details, you are already assuming we have truly excellent algorithms and so there are lots of ways people could succeed. So I don’t buy the number, but that may just be a disagreement.
You seem to be leaning heavily on the analogy to self-driving cars but I don’t find that persuasive—you’ve already postulated multiple reasons why you shouldn’t expect them to have worked so far. Moreover, the difficulties there also just don’t seem very similar to the kind of delay from serial time you are positing here, they seem much more closely related to “man we don’t have algorithms that learn anything like humans.”
Section 3: Operating costs
I think I’ve somehow misunderstood this section.
It looks to me like you are trying to estimate the difficulty of automating tasks by comparing to the size of brains of animals that perform the task (and in particular human brains). And you are saying that you expect it to take about 1e7 flops for each synapse in a human brain, and then define a probability distribution around there. Am I misunderstanding what’s going on here or is that a fair summary?
(I think my comment about GPT-3 = small brain isn’t fair, but the reverse direction seems fair: “takes a giant human brain to do human-level vision” --> “takes 7 orders of magnitude larger model to do vision.” If that isn’t valid, then why is “takes a giant human brain to do job X” --> “takes 7 orders of magnitude larger model to automate job X” valid? Is it because you are considering the worst-case profession?)
Your biological analysis seems to hinge on the assertion that precise simulation of neurons is necessary to get similar levels of computational utility
We do not believe this.
I don’t think I understand where your estimates come from, unless we are just disagreeing about the word “precise.” You cite the computational cost of learning a fairly precise model of a neuron’s behavior as an estimate for the complexity per neuron. You also talk about some low level dynamics without trying to explain why they may be computationally relevant. And then you give pretty confident estimates for the useful computation done in a brain. Could you fill in the missing steps in that estimate a bit more, both for the mean (of 1e6 per neuron*spike) and for the standard deviation of the log (which seems to be about ~1 oom)?
build computers that operate at the Landauer limit (as you are apparently confident the brain does)
I think I misunderstood your claims somehow.
I think you are claiming that the brain does 1e20-1e21 flops of useful computation. I don’t know exactly how you are comparing between brains and floating point operations. A floating point operation is more like 1e5 bit erasures today and is necessarily at least 16 bit erasures at fp16 (and your estimates don’t allow for large precision reductions e.g. to 1 bit arithmetic). Let’s call it 1.6e21 bit erasures per second, I think quite conservatively?
I might be totally wrong about the Landauer limit, but I made this statement by looking at Wikipedia which claims 3e-21 J per bit erasure at room temperature. So if you multiply that by 1.6e21 bit erasures per second, isn’t that 5 W, nearly half the power consumption of the brain?
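If I have the arithmetic right, it can be checked in a few lines (using the exact k_B·T·ln 2 bound rather than the rounded 3e-21 J figure):

```python
import math

k_B = 1.380649e-23                    # Boltzmann constant, J/K
T = 300.0                             # room temperature, K
landauer = k_B * T * math.log(2)      # about 2.9e-21 J per bit erasure

brain_flops = 1e20                    # low end of the claimed useful compute
erasures_per_flop = 16                # minimum for fp16 arithmetic
erasures_per_s = brain_flops * erasures_per_flop   # 1.6e21 per second

power_w = landauer * erasures_per_s   # implied minimum dissipation
print(f"{power_w:.1f} W")             # roughly 4.6 W
```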
Is there a mistake somewhere in there? Am I somehow thinking about this differently from you?
To multiply these probabilities together, one cannot multiply their unconditional expectations; rather, one must multiply their cascading conditional probabilities. You may disagree with our probabilities, but our framework specifically addresses this point. Our unconditional probabilities are far lower for some of these events, because we believe they will be rapidly accelerated conditional on progress in AGI.
I understand this, but the same objection applies for normal distributions being more than 0. Talking about conditional probabilities doesn’t help.
Are you saying that e.g. a war between China and Taiwan makes it impossible to build AGI? Or that serial time requirements make AGI impossible? Or that scaling chips means AGI is impossible? It seems like each of these just makes it harder. These are factors you should be adding up. Some things can go wrong and you can still get AGI by 2043. If you want to argue you can’t build AGI if something goes wrong, that’s a whole different story. So multiplying probabilities (even conditional probabilities) for none of these things happening doesn’t seem right.
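A toy model of the “adding up” point (the delay distributions here are invented purely for illustration):

```python
import numpy as np

# Each of 5 obstacles adds a random delay; the plan fails only if the
# TOTAL delay exceeds the slack, not if any single delay exceeds a quota.
rng = np.random.default_rng(1)
delays = rng.exponential(scale=2.0, size=(1_000_000, 5))  # mean 2 years each
slack_years = 15.0

p_additive = (delays.sum(axis=1) <= slack_years).mean()

# Conjunctive treatment: demand every obstacle fit its own 3-year budget.
p_conjunctive = (delays <= slack_years / 5).mean(axis=0).prod()

print(p_additive, p_conjunctive)   # additive success is much more likely
```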
Lastly, can I kindly ask what your cascading conditional probabilities would be in our framework? (Let’s hold the framework constant for this question, even if you disagree with it.)
I don’t know what the events in your decomposition refer to well enough to assign them probabilities:
I still don’t know what “algorithms for AGI” means. I think you are somehow ignoring compute costs, but if so I don’t know on what basis you are making any kind of generalization from our experience with the difficulty of designing extremely fast algorithms. In most domains algorithmic issues are ~the whole game and that seems true in AI as well.
I don’t really know what “invent a way for AGI to learn faster than humans” means, as distinct from the estimates in the next section about the cost of AGI algorithms. Again, are you trying to somehow abstract out the compute costs of learning here? Then my probabilities are very high, but uninteresting.
Taken on its own, it seems like the third probability (“AGI inference costs drop below $25/hr (per human equivalent)”) implies the conclusion. So I assume you are doing something where you say “Ignoring increases in demand and the possibility of supply chain disruptions and...” or something like that? So the forecasts you are making about compute prices aren’t unconditional forecasts?
I don’t know what level of cheap, quality robots you refer to. The quality of robotics needed to achieve transformative AI depends completely on the quality of your AI. For powerful AI it can be done with existing robot bodies, for weak AI it would need wildly superhuman bodies, at intermediate levels it can be done if humanoid robots cost millions of dollars each. And conversely the previous points aren’t really defined unless you specify something about the robotic platform. I assume you address this in the section but I think it’s going to be hard to define enough that I can give a number.
I don’t know what massively scaling chips means—again, it seems like this just depends crucially on how good your algorithms are. It feels more like you should be estimating multiple numbers and then seeing the probability that the product is large enough to be impactful.
I don’t know what “avoid derailment” means. It seems like these are just factors that affect the earlier estimates, so I guess the earlier quantities were supposed to be something like “the probability of developing AGI given that nothing weird happens in the world”? Or something? But weird stuff is guaranteed to be happening in the world. I feel like this is the same deal as above, you should be multiplying out factors.
From your comment, I think the biggest crux between us is the rate of AI self-improvement. If the rate is lower, the world may look like what we’re envisioning. If the rate is higher, progress may take off in a way not well predicted by current trends, and the world may look more like what you’re envisioning. This causes our conditional probabilities to look too low and too independent, from your point of view. Do you think that’s a fair assessment?
I think this seems right.
In particular, it seems like some of your estimates make more sense to me if I read them as saying “Well there will likely exist some task that AI systems can’t do.” But I think such claims aren’t very relevant for transformative AI, which would in turn lead to AGI.
By the same token, if the AIs were looking at humans they might say “Well there will exist some tasks that humans can’t do” and of course they’d be right, but the relevant thing is the single non-cherry-picked variable of overall economic impact. The AIs would be wrong to conclude that humans have slow economic growth because we can’t do some tasks that AIs are great at, and the humans would be wrong to conclude that AIs will have slow economic growth because they can’t do some tasks we are great at. The exact comparison is only relevant for assessing things like complementarity, which make large impacts happen strictly more quickly than they would otherwise.
(This might be related to me disliking AGI though, and then it’s kind of on OpenPhil for asking about it. They could also have asked about timelines to 100000x electricity production and I’d be making broadly the same arguments, so in some sense it must be me who is missing the point.)
I do think it reflects a decent mental model of how the world works, which leads to decent calibration for what’s 3% likely vs 30% likely. The main reason I mention it in the paper is just to help folks realize that we’re not wackos predicting 1% because we “really feel” confident. In many other situations (e.g., election forecasting, sports betting, etc.) I often find myself on the humble and uncertain side of the fence, trying to warn people that the world is more complicated and unpredictable than their gut is telling them.
That makes sense, and I’m ready to believe you have more calibrated judgments on average than I do. I’m also in the business of predicting a lot of things, but not as many and not with nearly as much tracking and accountability. That seems relevant to the question at hand, but still leaves me feeling very intuitively skeptical about this kind of decomposition.
Are you saying that e.g. a war between China and Taiwan makes it impossible to build AGI? Or that serial time requirements make AGI impossible? Or that scaling chips means AGI is impossible?
C’mon Paul—please extend some principle of charity here. :)
You have repeatedly ascribed silly, impossible beliefs to us and I don’t know why (to be fair, in this particular case you’re just asking, not ascribing). Genuinely, man, I feel bad that our writing has either (a) given the impression that we believe such things or (b) given the impression that we’re the type of people who’d believe such things.
Like, are these sincere questions? Is your mental model of us that there’s genuine uncertainty over whether we’ll say “Yes, a war precludes AGI” vs. “No, a war does not preclude AGI”?
To make it clear: No, of course a war between China and Taiwan does not make it impossible to build AGI by 2043. As our essay explicitly says.
Some things can go wrong and you can still get AGI by 2043. If you want to argue you can’t build AGI if something goes wrong, that’s a whole different story. So multiplying probabilities (even conditional probabilities) for none of these things happening doesn’t seem right.
To make it clear: our forecasts are not the odds of wars, pandemics, and depressions not occurring. They are the odds of wars, pandemics, and depressions not delaying AGI beyond 2043. Most wars, most pandemics, and most depressions will not delay AGI beyond 2043, we think. Our methodology is to forecast only the most severe events, and then assume a good fraction won’t delay AGI. As our essay explicitly says.
We probably forecast higher odds of delay than you, because our low likelihoods of TAGI mean that TAGI, if developed, is likeliest to be developed nearer to the end of the period, without many years of slack. If TAGI is easy, and can be developed early or with plenty of slack, then it becomes much harder for these types of events to derail TAGI.
My point in asking “Are you assigning probabilities to a war making AGI impossible?” was to emphasize that I don’t understand what 70% is a probability of, or why you are multiplying these numbers. I’m sorry if the rhetorical question caused confusion.
My current understanding is that 0.7 is basically just the ratio (Probability of AGI before thinking explicitly about the prospect of war) / (Probability of AGI after thinking explicitly about prospect of war). This isn’t really a separate event from the others in the list, it’s just a consideration that lengthens timelines. It feels like it would also make sense to list other considerations that tend to shorten timelines.
(I do think disruptions and weird events tend to make technological progress slower rather than faster, though I also think they tend to pull tiny probabilities up by adding uncertainty.)
A floating point operation is more like 1e5 bit erasures today and is necessarily at least 16 bit erasures at fp16 (and your estimates don’t allow for large precision reductions e.g. to 1 bit arithmetic). Let’s call it 1.6e21 bit erasures per second, I think quite conservatively?
I don’t follow you here.
Why is a floating point operation 1e5 bit erasures today?
Why does an fp16 operation necessitate 16 bit erasures? As an example, if we have two 16-bit registers (A, B) and we do a multiplication to get (A, A*B), where are the 16 bits of information loss?
(In any case, no real need to reply to this. As someone who has spent a lot of time thinking about the Landauer limit, my main takeaway is that it’s more irrelevant than often supposed, and I suspect getting to the bottom of this rabbit hole is not going to yield much for us in terms of TAGI timelines.)
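As a side note on the reversibility question above: a quick numerical check (my own illustration, not from either commenter) shows that rounding does destroy some information in an fp16 multiply, so keeping (A, A*B) does not always let you recover B — though the loss is far less than 16 bits:

```python
import numpy as np

# Fix A and sweep B over all 1024 fp16 values in [1, 2); each value
# 1 + k/1024 is exactly representable in fp16 (10 mantissa bits).
a = np.float16(0.9)
b = (1.0 + np.arange(1024) / 1024.0).astype(np.float16)

products = a * b              # correctly rounded fp16 multiplies
unique = np.unique(products)

# Pigeonhole: ~910 of these products land in [1.0, 1.8), which contains
# only ~819 representable fp16 values, so collisions are guaranteed.
print(len(b), len(unique))    # fewer unique outputs than inputs
```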
In particular, it seems like some of your estimates make more sense to me if I read them as saying “Well there will likely exist some task that AI systems can’t do.” But I think such claims aren’t very relevant for transformative AI, which would in turn lead to AGI.
By the same token, if the AIs were looking at humans they might say “Well there will exist some tasks that humans can’t do” and of course they’d be right, but the relevant thing is the single non-cherry-picked variable of overall economic impact. The AIs would be wrong to conclude that humans have slow economic growth because we can’t do some tasks that AIs are great at, and the humans would be wrong to conclude that AIs will have slow economic growth because they can’t do some tasks we are great at. The exact comparison is only relevant for assessing things like complementarity, which make large impacts happen strictly more quickly than they would otherwise.
(This might be related to me disliking AGI though, and then it’s kind of on OpenPhil for asking about it. They could also have asked about timelines to 100000x electricity production and I’d be making broadly the same arguments, so in some sense it must be me who is missing the point.)
Yep. We’re using the main definition supplied by Open Philanthropy, which I’ll paraphrase as “nearly all human work at human cost or less by 2043.”
If the definition was more liberal, e.g., AGI as smart as humans, or AI causing world GDP to rise by >100%, we would have forecasted higher probabilities. We expect AI to get wildly more powerful over the next decades and wildly change the face of human life and work. The public is absolutely unprepared. We are very bullish on AI progress, and we think AI safety is an important, tractable, and neglected problem. Creating new entities with the potential to be more powerful than humanity is a scary, scary thing.
I don’t know what level of cheap, quality robots you refer to. The quality of robotics needed to achieve transformative AI depends completely on the quality of your AI. For powerful AI it can be done with existing robot bodies, for weak AI it would need wildly superhuman bodies, at intermediate levels it can be done if humanoid robots cost millions of dollars each.
Interesting—this is perhaps another good crux between us.
My impression is that existing robot bodies are not good enough to do most human jobs, even if we had human-level AGI today. Human bodies self-repair, need infrequent maintenance, last decades, have multi-modal high bandwidth sensors built in, and are incredibly energy efficient.
One piece of evidence for this is how rare tele-operated robots are. There are plenty of generally intelligent humans around the world who would be happy to control robots for $1/hr, and yet they are not being employed to do so.
I didn’t mean to imply that human-level AGI could do human-level physical labor with existing robotics technology; I was using “powerful” to refer to a higher level of competence. I was using “intermediate levels” to refer to human-level AGI, and assuming it would need cheap human-like bodies.
Though mostly this seems like a digression. As you mention elsewhere, the bigger crux is that it seems to me like automating R&D would radically shorten timelines to AGI and be amongst the most important considerations in forecasting AGI.
(For this reason I don’t often think about AGI timelines, especially not for this relatively extreme definition. Instead I think about transformative AI, or AI that is as economically impactful as a simulated human for $X, or something along those lines.)
I don’t know what “avoid derailment” means. It seems like these are just factors that affect the earlier estimates, so I guess the earlier quantities were supposed to be something like “the probability of developing AGI given that nothing weird happens in the world”? Or something?
Bingo. We didn’t take the time to articulate it fully, but yeah you got it. We think it makes it easier to forecast these things separately rather than invisibly smushing them together into a smaller set of factors.
But weird stuff is guaranteed to be happening in the world. I feel like this is the same deal as above, you should be multiplying out factors.
We are multiplying out factors. Not sure I follow you here.
I don’t know what massively scaling chips mean—again, it seems like this just depends crucially on how good your algorithms are. It feels more like you should be estimating multiple numbers and then seeing the probability that the product is large enough to be impactful.
Agree 100%. Our essay does exactly this, forecasting over a wide range of potential compute needs, before taking an expected value to arrive at a single summary likelihood.
Sounds like you think we should have ascribed more probability to lower ranges, which is a totally fair disagreement.
It looks to me like you are trying to estimate the difficulty of automating tasks by comparing to the size of brains of animals that perform the task (and in particular human brains). And you are saying that you expect it to take about 1e7 flops for each synapse in a human brain, and then define a probability distribution around there. Am I misunderstanding what’s going on here or is that a fair summary?
Pretty fair summary. 1e6, though, not 1e7. And honestly I could be pretty easily persuaded to go a bit lower by arguments such as:
Max firing rate of 100 Hz is not the informational content of the channel (that buys maybe 1 OOM)
Maybe a smaller DNN could be found, but wasn’t
It might take a lot of computational neurons to simulate the I/O of a single synapse, but it also probably takes a lot of synapses to simulate the I/O of a single computational neuron
Dropping our estimate by 1-2 OOMs would increase step 3 by 10-20 percentage points. It wouldn’t have much effect on later estimates, as they are already conditional on success in step 3.
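For reference, here is how a per-synapse flop figure scales to a whole-brain estimate (the ~1e14 synapse count is my assumed order of magnitude, not a number from this thread):

```python
# Rough reconstruction of how the per-synapse estimate yields a brain-level
# flop figure. The 1e14 synapse count is an assumed order of magnitude.
synapses = 1e14
flop_per_synapse_per_sec = 1e6      # the central estimate discussed above

brain_flops = synapses * flop_per_synapse_per_sec
print(brain_flops)                  # 1e20 flop/s, the low end of the 1e20-1e21 range

# Dropping the per-synapse figure by 1-2 OOMs, as entertained above:
print(synapses * 1e5, synapses * 1e4)   # 1e19 and 1e18 flop/s
```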
In addition to not fully understanding the view, I don’t fully understand the discussion in this section or why it’s justifying this probability. It seems like if you had human-level learning (as we are conditioning on from sections 1+3) then things would probably work in <2 years unless parallelization is surprisingly inefficient.
Maybe, but maybe not, which is why we forecast a number below 100%.
For example, it is very very rare to ever see a CEO hired with <2 years of experience, even if they are very intelligent and have read a lot of books and have watched a lot of interviews. Some reasons might be irrational or irrelevant, but surely some of it is real. A CEO job requires a large constellation of skills practiced and refined over many years. E.g., relationship building with customers, suppliers, shareholders, and employees.
For an AGI to be installed as CEO of a corporation in under two years, human-level learning would not be enough—it would need to be superhuman in its ability to learn. Such superhuman learning could come from simulation (e.g., modeling and simulating how a potential human partner would react to various communication styles), come from parallelization (e.g., being installed as a manager in 1,000 companies and then compiling and sharing learnings across copies), or from something else.
I agree that skills learned from reading or thinking or simulating could happen very fast. Skills requiring real-world feedback that is expensive, rare, or long-delayed would progress more slowly.
You seem to be missing the possibility of superhuman learning coming from superhuman sample efficiency, in the sense of requiring less feedback to acquire skills.
Including actively experimenting in useful directions more effectively.
Nope, we didn’t miss the possibility of AGIs being very sample efficient in their learning. We just don’t think it’s certain, which is why we forecast a number below 100%. Sounds like your estimate is higher than ours; however, that doesn’t mean we missed the possibility.
I would guess that more or less anything done by current ML can be done by ML from 2013 but with much more compute and fiddling. So it’s not at all clear to me whether existing algorithms are sufficient for AGI given enough compute, just as it wasn’t clear in 2013. I don’t have any idea what makes this clear to you.
What’s an algorithm from 2013 that you think could yield AGI, if given enough compute? What would its inputs, outputs, and training look like? You’re more informed than me here and I would be happy to learn more.
I’m not sure I buy ’2013 algorithms are literally enough’, but it does seem very likely to me that in practice you get AGI very quickly (<2 years) if you give out GPUs which have (say) 10^50 FLOPS. (These GPUs are physically impossible, but I’m just supposing this to make the hypothetical easier. In particular, 2013 algorithms don’t parallelize very well and I’m just supposing this away.)
And, I think 2023 algorithms are literally enough with this amount of FLOP (perhaps with 90% probability).
For a concrete story of how this could happen, let’s imagine training a model with around 10^50 FLOP to predict all human data ever produced (say represented as uncompressed bytes and doing next token prediction) and simultaneously training with RL to play every game ever. We’ll use the largest model we can get with this flop budget, probably well over 10^25 parameters. Then, you RL on various tasks, prompt the AI, or finetune on some data (as needed).
This can be done with either 2013 or 2023 algorithms. I’m not sure if it’s enough with 2013 algorithms (in particular, I’d be worried that the AI would be extremely smart but the elicitation technology wasn’t there to get the AI to do anything useful). I’d put success with 2013 algos and this exact plan at 50%. It seems likely enough with 2023 algorithms (perhaps 80% chance of success).
In 2013 this would look like training an LSTM. Deep RL was barely developed, but did exist.
In 2023 this looks similar to GPT4 but scaled way up and trained on all source of data and trained to play games etc.
Let me replay my understanding to you, to see if I understand. You are predicting that...
IF:
we gathered all files stored on hard drives
...decompressed them into streams of bytes
...trained a monstrous model to predict the next chunk in each stream
...and also trained it to play every winnable computer game ever made
THEN:
You are 50% confident we’d get AGI* using 2013 algos
You are 80% confident we’d get AGI* using 2023 algos
WHERE:
*AGI means AI that is general; i.e., able to generalize to all sorts of data way outside its training distribution. Meaning:
It avoids overfitting on the data despite its massive parameter count. E.g., not just memorizing every file or brute forcing all the exploitable speedrunning bugs in a game that don’t generalize to real-world understanding.
It can learn skills and tasks that are barely represented in the computer dataset but that real-life humans are nonetheless able to quickly understand and learn due to their general world models
It can be made to develop planning, reasoning, and strategy skills not well represented by next-token prediction (e.g., it would learn how to write a draft, reflect on it, and edit it, even though it’s never been trained to do that and has only been optimized to append single tokens in sequence)
It simultaneously avoids underfitting due to any regularization techniques used to avoid the above overfitting problems
ASSUMING:
We don’t train on data not stored on computers
We don’t train on non-computer games (but not a big crux if you want to posit high fidelity basketball simulations, for example)
We don’t train on games without win conditions (but not a big crux, as most have them)
Is this a correct restatement of your prediction?
And are your confidence levels for this resulting in AGI on the first try? Within ten tries? Within a year of trial and error? Within a decade of trial and error?
(Rounding to the nearest tenth of a percent, I personally am 0.0% confident we’d get AGI on our first try with a system like this, even with 10^50 FLOPS.)
This seems like a pretty good description of this prediction.
Your description misses needing a finishing step of doing some RL, prompting, and generally finetuning on the task of interest (similar to GPT4). But this isn’t doing much of the work, so it’s not a big deal. Additionally, this sort of finishing step wasn’t really developed in 2013, so it seems less applicable to that version.
I’m also assuming some iteration on hyperparameters and data manipulation etc. in keeping with the techniques used in the respective time periods. So, ‘first try’ isn’t doing that much work here because you’ll be iterating a bit in the same way that people generally iterate a bit (but you won’t be doing novel research).
My probabilities are for the ‘first shot’ but after you do some preliminary experiments to verify hyper-params etc. And with some iteration on the finetuning. There might be a non-trivial amount of work on the finetuning step also, I don’t have a strong view here.
It’s worth noting that I think that GPT5 (with finetuning and scaffolding, etc.) is perhaps around 2% likely to be AGI. Of course, you’d need serious robotic infrastructure and much larger pool of GPUs to automate all labor.
My general view is ‘if the compute is there, the AGI will come’. I’m going out on more of a limb with this exact plan and I’m much less confident in the plan than in this general principle.
Here are some example reasons why I think my high probabilities are plausible:
The training proposal I gave is pretty close to how models like GPT4 are trained. These models are pretty general and are quite strategic etc. Adding more FLOP makes a pretty big qualitative difference.
It doesn’t seem to me like you have to generalize very far for this to succeed. I think existing data trains you to do basically everything humans can do. (See GPT4 and prompting)
Even if this proposal is massively inefficient, we’re throwing an absurd amount of FLOP at it.
It seems like the story for why humans are intelligent looks reasonably similar to this story: have big, highly functional brains, learn to predict what you see, train to achieve various goals, generalize far. Perhaps you think human intelligence is very unlikely ex ante (<0.04% likely).
Am I really the only person who thinks it’s a bit crazy that we use this blobby comment thread as if it’s the best way we have to organize disagreement/argumentation for audiences? I feel like we could almost certainly improve by using, e.g., a horizontal flow as is relatively standard in debate.[1]
With a generic example below:
To be clear, the commentary could still incorporate non-block/prose text.
Alternatively, people could use something like Kialo.com. But surely there has to be something better than this comment thread, in terms of 1) ease of determining where points go unrefuted, 2) ease of quickly tracing all responses in specific branches (rather than having to skim through the entire blob to find any related responses), and 3) seeing claims side-by-side, rather than having to scroll back and forth to see the full text. (Quoting definitely helps with this, though!)
How hard do you suppose it might be to use an AI to scrub the comments and generate something like this? It may be worth doing manually for some threads, even, but it’s easier to get people to adopt if the debate already exists and only needs tweaking. There may even already exist software that accepts text as input and outputs a Kialo-like debate map (thank you for alerting me that Kialo exists, it’s neat).
Over the past few months I have occasionally tried getting LLMs to do some tasks related to argument mapping, but I actually don’t think I’ve tried that specifically, and probably should. I’ll make a note to myself to try here.
But I don’t think we could have predicted people would dive into the comments like this. Usually comments get minimal engagement. There’s a LessWrong debate format for posts, but that’s usually with a moderator and such. This seems spontaneous.
Are you referring to this format on LessWrong? If so I can’t say I’m particularly impressed, as it still seems to suffer from the problems of linear dialogue vs. a branching structure (e.g., it is hard to see where points have been dropped, and it is harder to trace specific lines of argument). But I don’t recall seeing this, so thanks for the flag.
As for “I don’t think we could have predicted people…”, that’s missing my point(s). I’m partially saying “this comment thread seems like it should be a lesson/example of how text-blob comment-threads are inefficient in general.” However, even in this specific case Paul knew that he was laying out a multi-pronged criticism, and if the flow format existed he could have presented his claims that way, to make following the debate easier—assuming Ted would reply.
Ultimately, it just seems to me like it would be really logical to have a horizontal flow UI,[1] although I recognize I am a bit biased by my familiarity with such note taking methods from competitive debate.
In theory it need not be as strictly horizontal as I lay out; it could be a series of vertically nested claims, kept largely within one column—where the idea is that instead of replying to the entire comment you can just reply to specific blocks in the original comment (e.g., accessible in a drop down at the end of a specific argument block rather than the end of the entire comment).
I don’t know. As someone who was/still is quite good at debating and connected to debating communities I would find a flow-centric comment thread bothersome and unhelpful for reading the dialogues. I quite like internet comments as is in this UI.
I find this strange/curious. Is your preference more a matter of “Traditional interfaces have good features that a flowing interface would lack“ (or some other disadvantage to switching) or “The benefits of switching to a flowing interface would be relatively minor”?
For example on the latter, do you not find it more difficult with the traditional UI to identify dropped arguments? Or suppose you are fairly knowledgeable about most of the topics but there’s just one specific branch of arguments you want to follow: do you find it easy to do that? (And more on the less-obvious side, do you think the current structure disincentivizes authors from deeply expanding on branches?)
On the former, I do think that there are benefits to having less-structured text (e.g., introductions/summaries and conclusions) and that most argument mapping is way too formal/rigid with its structure, but I think these issues could be addressed in the format I have in mind.
I asked others at the debater/EA intersection and they agreed with my line of reasoning that it would be contrived and lead to poorly structured arguments. I can elaborate if you really want, but I hesitate to spend time writing this out because I’m behind on work and don’t think it’ll have any impact on anything, to be honest.
I don’t think I understand the structure of this estimate, or else I might understand and just be skeptical of it. Here are some quick questions and points of skepticism.
Starting from the top, you say:
We estimate optimistically that there is a 60% chance that all the fundamental algorithmic improvements needed for AGI will be developed on a suitable timeline.
This section appears to be an estimate of all-things-considered feasibility of transformative AI, and draws extensively on evidence about how lots of things go wrong in practice when implementing complicated projects. But then in subsequent sections you talk about how even if we “succeed” at this step there is still a significant probability of failing because the algorithms don’t work in a realistic amount of time.
Can you say what exactly you are assigning a 60% probability to, and why it’s getting multiplied with ten other factors? Are you saying that there is a 40% chance that by 2043 AI algorithms couldn’t yield AGI no matter how much serial time and compute they had available? (It seems surprising to claim that even by 2023!) Presumably not that, but what exactly are you giving a 60% chance?
(ETA: after reading later sections more carefully I think you might be saying 60% chance that our software is about as good as nature’s, and maybe implicitly assuming there is a ~0% chance of being significantly better than that or building TAI without that? I’m not sure if that’s right though, if so it’s a huge point of methodological disagreement. I’ll return to this point later.)
In section 2 you say:
Transformative AGI by 2043 depends critically on the development of non-sequential reinforcement learning training methods with no real human analogue.
And give this a 40% probability. I don’t think I understand this claim or its justification. (This is related to my uncertainty about what your “60%” in the last section was referring to.)
It seems to me that if you had human-like learning you would be able to produce transformative AGI by 2043:
In fact it looks like human-like learning would enable AI to learn human-level physical skills:
10 years is sufficient for humans to learn most physical skills from scratch, and you are talking about 20 year timelines. So why is the serial time for learning even a candidate blocker?
Humans learn new physical skills (including e.g. operating unfamiliar machinery) within tens of hours. This requires transfer from other things humans have learned, but those tasks are not always closely related (e.g. I learn to drive a car based on experience walking) and AI systems will have access to transfer from tasks that seem if anything more similar (e.g. prediction of the relevant physical environments, predictions of expert behavior in similar domains, closed-loop behavior in a wide range of simulated environments, closed-loop behavior on physical tasks with shorter timescales, behavior in virtual environments...).
We can easily run tens of thousands of copies of AI systems in parallel. Existing RL is massively parallelizable. Human evolution gives no evidence about the difficulty of parallelizing learning in this way. Based on observations of human learning it seems extremely likely to me that parallelization 10,000 fold can reduce serial time by at least 10x (which is all that is needed). Extrapolations of existing RL algorithms seem to suggest serial requirements more like 10,000 episodes, with almost all of the compute used to run a massive number of episodes in parallel, which would be 1 year even for a 1-hour task. It seems hard to construct physical tasks that don’t provide rich feedback after even shorter horizons than 1 hour (and therefore suitable for a gradient descent step given enough parallel samples) so this seems pretty conservative.
Regardless of learning physical tasks, humans are able to learn to do R&D after 20 years of experience. AI systems operate at 10x speed and most environments relevant to hardware and software R&D can be sped up by at least 10x. So it seems like AI systems could be human-level at a wide range of tasks, sufficient to accelerate further AI progress, even if they just used non-parallelized human learning over 2 years. If you really thought physical tasks were somehow impossibly difficult (which I don’t think is justified) then this becomes the dominant path to AGI. This is particularly important because multiple of your later points also seem to rest on the distinctive difficulty of automating physical tasks, which should just shift your probability further and further to an explosion of automated R&D which drives automation of physical labor.
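The serial-time arithmetic in the parallelization point above can be replayed directly:

```python
# Serial wall-clock time implied by ~10,000 serial episodes when each
# episode is a 1-hour task (figures as stated in the point above).
serial_episodes = 10_000
hours_per_episode = 1

total_hours = serial_episodes * hours_per_episode
years = total_hours / (24 * 365)
print(round(years, 2))    # ~1.14 years, i.e. roughly "1 year even for a 1-hour task"
```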
I think you are disagreeing with these claims, but I’m not sure about that. For example, you mention parallelizable learning but seem to give it <10% probability despite the fact that it is the overwhelmingly dominant paradigm in current practice and you don’t say anything about why it might not work.
(This isn’t super relevant to my mainline view, since in fact I think AI is much worse at learning quickly than humans and will likely be transformative way before reaching parity. This is related to the general point about being unnecessarily conjunctive, but here I’m just trying to understand and express disagreement with the particular path you lay out and the probabilities you assign.)
In section 3 you say:
I think you claim that each synapse firing event requires about 1-10 million floating point operations (with some error bars), and that there is only a 16% chance that computers will be able to do enough compute for $25/hour.
This is probably the part of the report I am most skeptical of:
How do you square this with our experience in AI so far? Overall you seem to think it is possible that AI will be as effective as brains but unlikely to be much better. But if a biological neuron is probably ten million times more efficient than an artificial neuron, then aren’t we already much better than biology in tons of domains? Is there any task for which performance can be quantified and where you think this estimate provides a sane guideline to the inference-time compute required to solve the task? Shouldn’t you be putting significant probability on our algorithms being radically better than biology in many important ways?
Replicating the human visual cortex should take millions of times more compute than we have ever used, yet we can match human performance on a range of quantifiable perceptual tasks and are making rapid progress, and I’m actually not aware of tasks where it’s even plausible that we are 6 orders of magnitude away.
Learned policies for robotic control using only hundreds of thousands of neurons already seem to reach comparable competence to insects, but on your estimate you should expect them to be significantly worse than a nematode. Aren’t you surprised to observe successful grasping and walking?
Traditional control systems like those used by Boston Dynamics seem to produce more competent motor control than small animals despite using amounts of compute close to 1 flop per synapse. You focus on ML, but I don’t know why—isn’t classical control a more reasonable point of comparison to small animals that have algorithms designed directly by evolution rather than learned in a lifetime, and doesn’t your argument very strongly predict that it should be impossible?
Qualitatively it’s hard to compare GPT-3 to humans, but just to be clear you are saying that it should behave like a brain with ~1000 neurons. This is at least surprising (e.g. I think would have led to big misses if it had been used to make any qualitative predictions), and to me casts doubt on a story where you can’t get transformative AI using less than the analog of a hundred billion neurons.
Your biological analysis seems to hinge on the assertion that precise simulation of neurons is necessary to get similar levels of computational utility (and even from there the analysis is pretty conservative, e.g. by assuming that you need to perform a very expensive computation thousands of times a second). I don’t personally consider this plausible, and I think the main argument given for it is “if not, why would we have all these proteins?”, which I don’t find persuasive (since synapses are under a huge number of important constraints and serve many important functions beyond implementing computationally complex functions at inference time). I’ve seen zero candidates for useful purposes for such an incredible amount of local computation with negligible quantities of long-distance communication, and there are very few examples of human-designed computations structured in this way / it seems to involve an extremely implausible model of what neurons are doing (apparently some nearly-embarrassingly-parallelizable task with work concentrated in individual neurons?). I don’t really want to argue with this at length, but want to flag that you are very confident about it and it drives a large part of your estimate, whereas something like 50-50 seems more appropriate even before updating on the empirical success of ML.
In general you seem to be making the case unnecessarily conjunctive—you are asking how likely it is that we will find algorithms as good as the brain, and then also build computers that operate at the Landauer limit (as you are apparently confident the brain does), and then also deploy AI in a way that is competitive at a $25/hour price point, and so on. But in fact any one of these areas can outperform your benchmark (and if you are right in this section, then we are definitely already radically more efficient than biology on many tasks!), and it seems like you are dropping a lot of probability by ignoring that possibility. It’s like asking for the probability that a sum of 5 normal distributions will be above the sum of their means, and estimating it as 1/2^5 because each of the 5 normal distributions needs to be above its own mean.
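That last point can be checked with a quick simulation (a generic sketch, not anything from the report):

```python
import random

# Probability that the sum of 5 independent normals exceeds the sum of
# their means. By symmetry the answer is exactly 1/2, whereas requiring
# each term to beat its own mean gives the conjunctive estimate 1/2^5.
random.seed(0)
trials = 200_000
hits = sum(
    1 for _ in range(trials)
    if sum(random.gauss(0.0, 1.0) for _ in range(5)) > 0.0
)
print(f"P(sum > sum of means) ~ {hits / trials:.3f}")  # close to 0.500
print(f"conjunctive estimate: {0.5 ** 5:.5f}")         # 0.03125
```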
(ETA: this criticism of section 3 is unfair: you do discuss the prospect of much better than human performance in the 2-page section “On the computational intensity of AGI,” and indeed this plays a completely central role in your bottom line estimate. But I’m still left wondering what the earlier 60% and 40% (and all the other numbers!) are supposed to represent, given that you are apparently putting all the work of “maybe humans will design efficient algorithms that are as good as the brain” in this section. You also don’t really discuss existing experience, where your estimates already appear to be many orders of magnitude off in domains where it is easiest to make comparisons between biology and ML (like vision or classical control) and where I don’t see how to argue we aren’t already 1000x better than biology using your 10 million flops per synapse number. Aside from me disagreeing with your mean, you describe these as conservative error bars since they put 20% probability on 1000x improvements over biology, but I think that’s really not the case given that it includes uncertainty about the useful compute done by the brain (where you already disagree by >>3 OOMs with plausible estimates) as well as algorithmic progress (where 1000x improvements over 20 years seem common both within software and ML).)
I’ll stop here rather than going on to sections 4+, though I think I have a lot to object to along similar lines (primarily that the story is being made unreasonably conjunctive).
Overall your estimation strategy looks crazy to me and I’m skeptical of the implicit claim that this kind of methodology would perform well in historical examples. That said, if this sort of methodology actually does work well in practice then I think that trumps some a priori speculation and would be an important thing for me to really absorb. Your personal forecasting successes seem like a big part of the evidence for that, so it might be helpful to understand what kinds of predictions were involved and how methodologically analogous they are. Superficially it looks like the SciCast technology forecasting tournament is by far the most relevant; is there a pointer to the list of questions (other info like participants and list of predictions would also be awesome if available)? Or do you think one of the other items is more relevant?
Excellent comment; thank you for engaging in such detail. I’ll respond piece by piece. I’ll also try to highlight the things you think we believe but don’t actually believe.
Section 1: Likelihood of AGI algorithms
Yes, we assign a 40% chance that we don’t have AI algorithms by 2043 capable of learning to do nearly any human task with realistic amounts of time and compute. Some things we probably agree on:
Progress has been promising and investment is rising.
Obviously the development of AI that can do AI research more cheaply than humans could be a huge accelerant, with the magnitude depending on the value-to-cost ratio. Already GPT-4 is accelerating my own software productivity, and future models over the next twenty years will no doubt be leagues better (as well as more efficient).
Obviously slow progress in the past is not great evidence of slow progress in the future, as any exponential curve shows.
But as we discuss in the essay, 20 years is not a long time, much easier problems are taking longer, and there’s a long track record of AI scientists being overconfident about the pace of progress (counterbalanced, to be sure, by folks on the other side who were overconfident that certain things would not be achieved, which subsequently were). These factors give us pause, so while we agree it’s likely we’ll have algorithms for AGI by 2043, we’re not certain of it, which is why we forecast 60%. We think forecasts higher than 60% are completely reasonable, but we personally struggle to justify anything near 100%.
Incidentally, I’m puzzled by your comment and others that suggest we might already have algorithms for AGI in 2023. Perhaps we’re making different implicit assumptions of realistic compute vs infinite compute, or something else. To me, it feels clear we don’t have the algorithms and data for AGI at present.
Lastly, no, we emphatically do not assume a ~0% chance that AGI will be smarter than nature’s brains. That feels like a ridiculous and overconfident thing to believe, and it pains me that we gave this impression. Already GPT-4 is smarter than me in ways, and as time goes on, the number of ways AI is smarter than me will undoubtedly grow.
Section 2: Likelihood of fast reinforcement training
Agree—if we had AGI today, this would not be a blocker. This becomes a greater and greater blocker the later AGI is developed. E.g., if AGI is developed in 2038, we’d have only 5 years to train it to do nearly every human task. So this factor is heavily entangled with the timeline on which AGI is developed.
(And obviously the development of AGI is not going to be a clean line crossed in a particular year, but the idea is the same even applied to AGI systems developed gradually and unevenly.)
Agree on nearly everything here. I think the crux on which we differ is that we think interaction with the real world will be a substantial bottleneck (and therefore being able to run 10,000 parallel copies may not save us).
As I mentioned to Zach below:
To recap, we can of course parallelize a million self-driving car AIs and have them drive billions of miles in simulation. But that only works to the extent that (a) our simulations reflect reality and (b) we have the compute resources to do so. And so real self-driving car companies are spending billions on fleets and human supervision in order to gather the necessary data. In general, if an AGI cannot easily and cheaply simulate reality, it will have to learn from real-world interactions. And to the extent that it needs to learn from interactions with the consequences of its earlier actions, that training will need to be sequential.
Agreed. Our expectation is that early AGIs will be expensive and uneven. If they end up being incredibly sample efficient, then this task will be much easier than we’ve forecasted.
In general, I’m pretty open to updating higher here. I don’t think there are any insurmountable barriers here; but I have a sense that this will be both hard to do (as self-driving illustrates) and unlikely to be done (as all sorts of tasks not currently automated illustrate). My coauthor is a bit more negative on this factor than me and may chime in with his own thoughts later.
I personally struggle to imagine how an AlphaZero-like algorithm would learn to become the world’s best swim instructor via massively parallelized reinforcement learning on children, but that may well be a failure of my imagination. Certainly one route is massively parallelized RL to become excellent at AI R&D, then massively parallelized RL to become excellent at many tasks, and then quickly transferring that understanding to teaching children to swim, without any children ever drowning.
Section 3: Operating costs
Here, I think you ascribe many beliefs to us which we do not hold, and I apologize for not being clearer. I’ll start by emphasizing what we do not believe.
We do not believe this.
AI is already vastly better than human brains at some tasks, and the number of tasks on which AI is superhuman will rise with time. We do expect that early AGIs will be expensive and uneven, as all earliest versions of a technology are. And then they will improve from there.
We do not believe this.
We do not believe this.
We do not believe this. We do not believe that brains operate at the Landauer limit, nor do we believe computers will operate at this limit by 2043.
Incidentally, I studied the Landauer limit deeply during my physics PhD and could write an essay on the many ways it’s misinterpreted, but will save that for another day. :)
We do not believe this.
To multiply these probabilities together, one cannot multiply their unconditional probabilities; rather, one must multiply their cascading conditional probabilities. You may disagree with our probabilities, but our framework specifically addresses this point. Our unconditional probabilities are far lower for some of these events, because we believe they will be rapidly accelerated conditional on progress in AGI.
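The distinction being drawn here is just the chain rule, which a tiny sketch makes concrete (made-up illustrative numbers, not our actual estimates):

```python
# Chain rule vs. naive independence, with made-up illustrative numbers.
p_a = 0.6            # P(A): AGI algorithms developed in time
p_b_uncond = 0.4     # P(B): cheap enough compute, ignoring A entirely
p_b_given_a = 0.7    # P(B | A): higher, since progress on A accelerates B

naive = p_a * p_b_uncond       # wrongly treats A and B as independent
chained = p_a * p_b_given_a    # P(A and B) by the chain rule
print(f"{naive:.2f} vs {chained:.2f}")  # 0.24 vs 0.42
```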
Forecasting credentials
Honestly, I wouldn’t put too much weight on my forecasting success. It’s mostly a mix of common sense, time invested, and luck. I do think it reflects a decent mental model of how the world works, which leads to decent calibration for what’s 3% likely vs 30% likely. The main reason I mention it in the paper is just to help folks realize that we’re not wackos predicting 1% because we “really feel” confident. In many other situations (e.g., election forecasting, sports betting, etc.) I often find myself on the humble and uncertain side of the fence, trying to warn people that the world is more complicated and unpredictable than their gut is telling them. Even here, I consider our component forecasts quite uncertain, ranging from 16% to 95%. It’s precisely our uncertainty about the future which leads to a small product of 0.4%. (From my point of view, you are staking out a much higher confidence position in asserting that the development of AGI algorithms is very likely and that rapid self-improvement is very likely.)
As for SciCast, here’s at least one publication that resulted from the project: https://ieeexplore.ieee.org/abstract/document/7266786
Example questions (from memory) included:
What will be the highest reported efficiency of a perovskite photovoltaic cell by date X
What will be the volume of deployed solar in the USA by date X
At the Brazil World Cup, how far will the paraplegic exoskeleton kick the ball for the opening kickoff
Will Amazon offer drone delivery by date X
Will physicists discover Y by date X
Most forecasts related to scientific discoveries and technological inventions and had timescales of months to years.
Conclusion
From your comment, I think the biggest crux between us is the rate of AI self-improvement. If the rate is lower, the world may look like what we’re envisioning. If the rate is higher, progress may take off in a way not well predicted by current trends, and the world may look more like what you’re envisioning. This causes our conditional probabilities to look too low and too independent, from your point of view. Do you think that’s a fair assessment?
Lastly, can I kindly ask what your cascading conditional probabilities would be in our framework? (Let’s hold the framework constant for this question, even if you prefer another.)
I would guess that more or less anything done by current ML can be done by ML from 2013 but with much more compute and fiddling. So it’s not at all clear to me whether existing algorithms are sufficient for AGI given enough compute, just as it wasn’t clear in 2013. I don’t have any idea what makes this clear to you.
Given that I feel like compute and algorithms mostly trade off, hopefully it’s clear why I’m confused about what the 60% represents. But I’m happy for it to mean something like: it makes sense at all to compare AI performance vs brain performance, and expect them to be able to solve a similar range of tasks within 5-10 orders of magnitude of the same amount of compute.
If 60% is your estimate for “possible with any amount of compute,” I don’t know why you think that anything is taking a long time. We just don’t get to observe how easy problems are if you have plenty of compute, and it seems increasingly clear that weak performance is often explained by limited compute. In fact, even if 60% is your estimate for “doable with similar compute to the brain,” I don’t see why you are updating from our failure to do tasks with orders of magnitude less compute than a brain (even before considering that you think individual neurons are incredibly potent).
I still don’t fully understand the claims being made in this section. I guess you are saying that there’s a significant chance that the serial time requirements will be large and that will lead to a large delay? Like maybe you’re saying something like: a 20% chance that it will add >20 years of delay, a 30% chance of 10-20 years of delay, a 40% chance of 1-10 years of delay, a 10% chance of <1 year of delay?
In addition to not fully understanding the view, I don’t fully understand the discussion in this section or why it’s justifying this probability. It seems like if you had human-level learning (as we are conditioning on from sections 1+3) then things would probably work in <2 years unless parallelization is surprisingly inefficient. And even setting aside the comparison to humans, such large serial bottlenecks aren’t really consistent with any evidence to date. And setting aside any concrete details, you are already assuming we have truly excellent algorithms and so there are lots of ways people could succeed. So I don’t buy the number, but that may just be a disagreement.
You seem to be leaning heavily on the analogy to self-driving cars but I don’t find that persuasive—you’ve already postulated multiple reasons why you shouldn’t expect them to have worked so far. Moreover, the difficulties there also just don’t seem very similar to the kind of delay from serial time you are positing here, they seem much more closely related to “man we don’t have algorithms that learn anything like humans.”
I think I’ve somehow misunderstood this section.
It looks to me like you are trying to estimate the difficulty of automating tasks by comparing to the size of brains of animals that perform the task (and in particular human brains). And you are saying that you expect it to take about 1e7 flops for each synapse in a human brain, and then define a probability distribution around there. Am I misunderstanding what’s going on here or is that a fair summary?
(I think my comment about GPT-3 = small brain isn’t fair, but the reverse direction seems fair: “it takes a giant human brain to do human-level vision” --> “it takes a model 7 orders of magnitude larger to do vision.” If that isn’t valid, then why is “it takes a giant human brain to do job X” --> “it takes a model 7 orders of magnitude larger to automate job X” valid? Is it because you are considering the worst-case profession?)
I don’t think I understand where your estimates come from, unless we are just disagreeing about the word “precise.” You cite the computational cost of learning a fairly precise model of a neuron’s behavior as an estimate for the complexity per neuron. You also talk about some low level dynamics without trying to explain why they may be computationally relevant. And then you give pretty confident estimates for the useful computation done in a brain. Could you fill in the missing steps in that estimate a bit more, both for the mean (of 1e6 per neuron*spike) and for the standard deviation of the log (which seems to be about ~1 oom)?
I think I misunderstood your claims somehow.
I think you are claiming that the brain does 1e20-1e21 flops of useful computation. I don’t know exactly how you are comparing between brains and floating point operations. A floating point operation is more like 1e5 bit erasures today and is necessarily at least 16 bit erasures at fp16 (and your estimates don’t allow for large precision reductions e.g. to 1 bit arithmetic). Let’s call it 1.6e21 bit erasures per second, I think quite conservatively?
I might be totally wrong about the Landauer limit, but I made this statement by looking at Wikipedia which claims 3e-21 J per bit erasure at room temperature. So if you multiply that by 1.6e21 bit erasures per second, isn’t that 5 W, nearly half the power consumption of the brain?
Is there a mistake somewhere in there? Am I somehow thinking about this differently from you?
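Spelled out, the arithmetic I'm doing is just this (the flop rate and bits-per-flop are the assumptions stated above; the per-bit energy is k_B T ln 2 at room temperature):

```python
import math

# Landauer bound at room temperature: k_B * T * ln(2) joules per bit erased.
k_B = 1.380649e-23            # Boltzmann constant, J/K
T = 300.0                     # room temperature, K
e_bit = k_B * T * math.log(2)
print(f"{e_bit:.2e} J per bit erasure")   # ~2.87e-21 J

# Assumptions from the discussion: ~1e20 flop/s of useful brain computation
# (the low end of the claimed range), at a conservative 16 bit erasures per
# fp16 operation.
flops = 1e20
bits_per_flop = 16
power_w = flops * bits_per_flop * e_bit
print(f"{power_w:.1f} W")     # ~4.6 W, a large fraction of the brain's ~20 W
```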
I understand this, but the same objection applies for normal distributions being more than 0. Talking about conditional probabilities doesn’t help.
Are you saying that e.g. a war between China and Taiwan makes it impossible to build AGI? Or that serial time requirements make AGI impossible? Or that scaling chips means AGI is impossible? It seems like each of these just makes it harder. These are factors you should be adding up. Some things can go wrong and you can still get AGI by 2043. If you want to argue you can’t build AGI if something goes wrong, that’s a whole different story. So multiplying probabilities (even conditional probabilities) for none of these things happening doesn’t seem right.
I don’t know what the events in your decomposition refer to well enough to assign them probabilities:
I still don’t know what “algorithms for AGI” means. I think you are somehow ignoring compute costs, but if so I don’t know on what basis you are making any kind of generalization from our experience with the difficulty of designing extremely fast algorithms. In most domains algorithmic issues are ~the whole game and that seems true in AI as well.
I don’t really know what “invent a way for AGI to learn faster than humans” means, as distinct from the estimates in the next section about the cost of AGI algorithms. Again, are you trying to somehow abstract out compute costs of learning here? Then my probabilities are very high but uninteresting.
Taken on its own, it seems like the third probability (“AGI inference costs drop below $25/hr (per human equivalent)”) implies the conclusion. So I assume you are doing something where you say “Ignoring increases in demand and the possibility of supply chain disruptions and...” or something like that? So the forecast you are making about compute prices aren’t unconditional forecasts?
I don’t know what level of cheap, quality robots you refer to. The quality of robotics needed to achieve transformative AI depends completely on the quality of your AI. For powerful AI it can be done with existing robot bodies, for weak AI it would need wildly superhuman bodies, at intermediate levels it can be done if humanoid robots cost millions of dollars each. And conversely the previous points aren’t really defined unless you specify something about the robotic platform. I assume you address this in the section but I think it’s going to be hard to define enough that I can give a number.
I don’t know what massively scaling chips mean—again, it seems like this just depends crucially on how good your algorithms are. It feels more like you should be estimating multiple numbers and then seeing the probability that the product is large enough to be impactful.
I don’t know what “avoid derailment” means. It seems like these are just factors that affect the earlier estimates, so I guess the earlier quantities were supposed to be something like “the probability of developing AGI given that nothing weird happens in the world”? Or something? But weird stuff is guaranteed to be happening in the world. I feel like this is the same deal as above, you should be multiplying out factors.
I think this seems right.
In particular, it seems like some of your estimates make more sense to me if I read them as saying “Well there will likely exist some task that AI systems can’t do.” But I think such claims aren’t very relevant for transformative AI, which would in turn lead to AGI.
By the same token, if the AIs were looking at humans they might say “Well there will exist some tasks that humans can’t do” and of course they’d be right, but the relevant thing is the single non-cherry-picked variable of overall economic impact. The AIs would be wrong to conclude that humans have slow economic growth because we can’t do some tasks that AIs are great at, and the humans would be wrong to conclude that AIs will have slow economic growth because they can’t do some tasks we are great at. The exact comparison is only relevant for assessing things like complementarity, which make large impacts happen strictly more quickly than they would otherwise.
(This might be related to me disliking AGI though, and then it’s kind of on OpenPhil for asking about it. They could also have asked about timelines to 100000x electricity production and I’d be making broadly the same arguments, so in some sense it must be me who is missing the point.)
That makes sense, and I’m ready to believe you have more calibrated judgments on average than I do. I’m also in the business of predicting a lot of things, but not as many and not with nearly as much tracking and accountability. That seems relevant to the question at hand, but still leaves me feeling very intuitively skeptical about this kind of decomposition.
C’mon Paul—please extend some principle of charity here. :)
You have repeatedly ascribed silly, impossible beliefs to us and I don’t know why (to be fair, in this particular case you’re just asking, not ascribing). Genuinely, man, I feel bad that our writing has either (a) given the impression that we believe such things or (b) given the impression that we’re the type of people who’d believe such things.
Like, are these sincere questions? Is your mental model of us that there’s a genuine uncertainty over whether we’ll say “Yes, a war precludes AGI” vs “No, a war does not preclude AGI”?
To make it clear: No, of course a war between China and Taiwan does not make it impossible to build AGI by 2043. As our essay explicitly says.
To make it clear: our forecasts are not the odds of wars, pandemics, and depressions not occurring. They are the odds of wars, pandemics, and depressions not delaying AGI beyond 2043. Most wars, most pandemics, and most depressions will not delay AGI beyond 2043, we think. Our methodology is to forecast only the most severe events, and then assume a good fraction won’t delay AGI. As our essay explicitly says.
We probably forecast higher odds of delay than you, because our low likelihoods of TAGI mean that TAGI, if developed, is likeliest to be developed nearer to the end of the period, without many years of slack. If TAGI is easy, and can be developed early or with plenty of slack, then it becomes much harder for these types of events to derail TAGI.
My point in asking “Are you assigning probabilities to a war making AGI impossible?” was to emphasize that I don’t understand what 70% is a probability of, or why you are multiplying these numbers. I’m sorry if the rhetorical question caused confusion.
My current understanding is that 0.7 is basically just the ratio (Probability of AGI before thinking explicitly about the prospect of war) / (Probability of AGI after thinking explicitly about prospect of war). This isn’t really a separate event from the others in the list, it’s just a consideration that lengthens timelines. It feels like it would also make sense to list other considerations that tend to shorten timelines.
(I do think disruptions and weird events tend to make technological progress slower rather than faster, though I also think they tend to pull tiny probabilities up by adding uncertainty.)
I don’t follow you here.
Why is a floating point operation 1e5 bit erasures today?
Why does a fp16 operation necessitate 16 bit erasures? As an example, if we have two 16-bit registers (A, B) and we do a multiplication to get (A, A*B), where is the 16 bits of information loss?
(In any case, no real need to reply to this. As someone who has spent a lot of time thinking about the Landauer limit, my main takeaway is that it’s more irrelevant than often supposed, and I suspect getting to the bottom of this rabbit hole is not going to yield much for us in terms of TAGI timelines.)
Yep. We’re using the main definition supplied by Open Philanthropy, which I’ll paraphrase as “nearly all human work at human cost or less by 2043.”
If the definition was more liberal, e.g., AGI as smart as humans, or AI causing world GDP to rise by >100%, we would have forecasted higher probabilities. We expect AI to get wildly more powerful over the next decades and wildly change the face of human life and work. The public is absolutely unprepared. We are very bullish on AI progress, and we think AI safety is an important, tractable, and neglected problem. Creating new entities with the potential to be more powerful than humanity is a scary, scary thing.
Interesting—this is perhaps another good crux between us.
My impression is that existing robot bodies are not good enough to do most human jobs, even if we had human-level AGI today. Human bodies self-repair, need infrequent maintenance, last decades, have multi-modal high bandwidth sensors built in, and are incredibly energy efficient.
One piece of evidence for this is how rare tele-operated robots are. There are plenty of generally intelligent humans around the world who would be happy to control robots for $1/hr, and yet they are not being employed to do so.
I didn’t mean to imply that human-level AGI could do human-level physical labor with existing robotics technology; I was using “powerful” to refer to a higher level of competence. I was using “intermediate levels” to refer to human-level AGI, and assuming it would need cheap human-like bodies.
Though mostly this seems like a digression. As you mention elsewhere, the bigger crux is that it seems to me like automating R&D would radically shorten timelines to AGI and be amongst the most important considerations in forecasting AGI.
(For this reason I don’t often think about AGI timelines, especially not for this relatively extreme definition. Instead I think about transformative AI, or AI that is as economically impactful as a simulated human for $X, or something along those lines.)
Bingo. We didn’t take the time to articulate it fully, but yeah you got it. We think it makes it easier to forecast these things separately rather than invisibly smushing them together into a smaller set of factors.
We are multiplying out factors. Not sure I follow you here.
Agree 100%. Our essay does exactly this, forecasting over a wide range of potential compute needs, before taking an expected value to arrive at a single summary likelihood.
Sounds like you think we should have ascribed more probability to lower ranges, which is a totally fair disagreement.
Pretty fair summary. 1e6, though, not 1e7. And honestly I could be pretty easily persuaded to go a bit lower by arguments such as:
Max firing rate of 100 Hz is not the informational content of the channel (that buys maybe 1 OOM)
Maybe a smaller DNN could be found, but wasn’t
It might take a lot of computational neurons to simulate the I/O of a single synapse, but it also probably takes a lot of synapses to simulate the I/O of a single computational neuron
Dropping our estimate by 1-2 OOMs would increase step 3 by 10-20 percentage points. It wouldn’t have much effect on later estimates, as they are already conditional on success in step 3.
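For concreteness, here is how that parameter propagates into a brain-compute estimate (my reconstruction; the synapse count and average firing rate are generic ballpark figures, not numbers from the essay):

```python
# Sensitivity of the brain-compute estimate to the flops-per-synapse-firing
# parameter. Synapse count and average firing rate are ballpark assumptions.
synapses = 1e14
avg_firing_hz = 1.0   # average rate; the 100 Hz figure is a maximum
for flops_per_firing in (1e4, 1e5, 1e6):
    total = synapses * avg_firing_hz * flops_per_firing
    print(f"{flops_per_firing:.0e} flops/firing -> {total:.0e} flop/s")
# 1e6 flops/firing gives ~1e20 flop/s, the low end of the 1e20-1e21 range
# discussed earlier; dropping 1-2 OOMs lands at 1e18-1e19 flop/s.
```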
Maybe, but maybe not, which is why we forecast a number below 100%.
For example, it is very rare to see a CEO hired with <2 years of experience, even if they are very intelligent, have read a lot of books, and have watched a lot of interviews. Some reasons for this might be irrational or irrelevant, but surely some of it is real. A CEO job requires a large constellation of skills practiced and refined over many years, e.g., relationship building with customers, suppliers, shareholders, and employees.
For an AGI to be installed as CEO of a corporation in under two years, human-level learning would not be enough; it would need to be superhuman in its ability to learn. Such superhuman learning could come from simulation (e.g., modeling and simulating how a potential human partner would react to various communication styles), from parallelization (e.g., being installed as a manager in 1,000 companies and then compiling and sharing learnings across copies), or from something else.
I agree that skills learned from reading or thinking or simulating could happen very fast. Skills requiring real-world feedback that is expensive, rare, or long-delayed would progress more slowly.
You seem to be missing the possibility that superhuman learning comes from superhuman sample efficiency, in the sense of requiring less feedback to acquire skills, including actively experimenting in useful directions more effectively.
Nope, we didn’t miss the possibility of AGIs being very sample efficient in their learning. We just don’t think it’s certain, which is why we forecast a number below 100%. Sounds like your estimate is higher than ours; however, that doesn’t mean we missed the possibility.
What’s an algorithm from 2013 that you think could yield AGI, if given enough compute? What would its inputs, outputs, and training look like? You’re more informed than me here and I would be happy to learn more.
I’m not sure I buy ’2013 algorithms are literally enough’, but it does seem very likely to me that in practice you get AGI very quickly (<2 years) if you give out GPUs which have (say) 10^50 FLOPS. (These GPUs are physically impossible, but I’m just supposing this to make the hypothetical easier. In particular, 2013 algorithms don’t parallelize very well and I’m just supposing this away.)
And, I think 2023 algorithms are literally enough with this amount of FLOP (perhaps with 90% probability).
For a concrete story of how this could happen, let’s imagine training a model with around 10^50 FLOP to predict all human data ever produced (say represented as uncompressed bytes and doing next token prediction) and simultaneously training with RL to play every game ever. We’ll use the largest model we can get with this FLOP budget, probably well over 10^25 parameters. Then, you RL on various tasks, prompt the AI, or finetune on some data (as needed).
This can be done with either 2013 or 2023 algorithms. I’m not sure if it’s enough with 2013 algorithms (in particular, I’d be worried that the AI would be extremely smart but the elicitation technology wasn’t there to get the AI to do anything useful). I’d put success with 2013 algos and this exact plan at 50%. It seems likely enough with 2023 algorithms (perhaps 80% chance of success).
In 2013 this would look like training an LSTM. Deep RL was barely developed, but did exist.
In 2023 this looks similar to GPT4 but scaled way up and trained on all source of data and trained to play games etc.
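To see why a 10^50 FLOP budget supports a model “well over 10^25 parameters,” here is a back-of-envelope sketch using the common C ≈ 6·N·D approximation for dense training compute. The approximation, the token count for “all human data,” and the epoch count are all my assumptions for illustration, not figures from the thread:

```python
# Back-of-envelope: largest model trainable under a 1e50 FLOP budget.
# Uses the rough rule of thumb C ~ 6 * N * D
# (training FLOP ~ 6 * parameters * tokens seen).
# All inputs below are illustrative guesses, not claims from the thread.

C = 1e50        # hypothetical FLOP budget from the thought experiment
tokens = 1e15   # rough guess at tokens in all human-produced data
epochs = 100    # repeat the dataset many times

D = tokens * epochs   # total tokens seen during training
N = C / (6 * D)       # parameter count that exhausts the budget
print(f"{N:.1e} parameters")  # ~1.7e32 for these inputs
```

Even with generous token counts and many epochs, the budget is so enormous that the implied parameter count dwarfs 10^25, which is why the bottleneck in this hypothetical is algorithms and elicitation rather than model size.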
Let me play my understanding back to you, to see if I understand. You are predicting that...
IF:
we gathered all files stored on hard drives
...decompressed them into streams of bytes
...trained a monstrous model to predict the next chunk in each stream
...and also trained it to play every winnable computer game ever made
THEN:
You are 50% confident we’d get AGI* using 2013 algos
You are 80% confident we’d get AGI* using 2023 algos
WHERE:
*AGI means AI that is general; i.e., able to generalize to all sorts of data way outside its training distribution. Meaning:
It avoids overfitting on the data despite its massive parameter count. E.g., not just memorizing every file or brute forcing all the exploitable speedrunning bugs in a game that don’t generalize to real-world understanding.
It can learn skills and tasks that are barely represented in the computer dataset but that real-life humans are nonetheless able to quickly understand and learn due to their general world models
It can be made to develop planning, reasoning, and strategy skills not well represented by next-token prediction (e.g., it would learn how to write a draft, reflect on it, and edit it, even though it’s never been trained to do that and has only been optimized to append single tokens in sequence)
It simultaneously avoids underfitting due to any regularization techniques used to avoid the above overfitting problems
ASSUMING:
We don’t train on data not stored on computers
We don’t train on non-computer games (but not a big crux if you want to posit high fidelity basketball simulations, for example)
We don’t train on games without win conditions (but not a big crux, as most have them)
Is this a correct restatement of your prediction?
And are your confidence levels for this resulting in AGI on the first try? Within ten tries? Within a year of trial and error? Within a decade of trial and error?
(Rounding to the nearest tenth of a percent, I personally am 0.0% confident we’d get AGI on our first try with a system like this, even with 10^50 FLOPS.)
This seems like a pretty good description of this prediction.
Your description misses needing a finishing step of doing some RL, prompting, and generally finetuning on the task of interest (similar to GPT4). But this isn’t doing much of the work, so it’s not a big deal. Additionally, this sort of finishing step wasn’t really developed in 2013, so it seems less applicable to that version.
I’m also assuming some iteration on hyperparameters and data manipulation etc. in keeping with the techniques used in the respective time periods. So, ‘first try’ isn’t doing that much work here because you’ll be iterating a bit in the same way that people generally iterate a bit (but you won’t be doing novel research).
My probabilities are for the ‘first shot’ but after you do some preliminary experiments to verify hyper-params etc. And with some iteration on the finetuning. There might be a non-trivial amount of work on the finetuning step also, I don’t have a strong view here.
It’s worth noting that I think that GPT5 (with finetuning and scaffolding, etc.) is perhaps around 2% likely to be AGI. Of course, you’d need serious robotic infrastructure and a much larger pool of GPUs to automate all labor.
My general view is ‘if the compute is there, the AGI will come’. I’m going out on more of a limb with this exact plan and I’m much less confident in the plan than in this general principle.
Here are some example reasons why I think my high probabilities are plausible:
The training proposal I gave is pretty close to how models like GPT4 are trained. These models are pretty general and are quite strategic etc. Adding more FLOP makes a pretty big qualitative difference.
It doesn’t seem to me like you have to generalize very far for this to succeed. I think existing data trains you to do basically everything humans can do. (See GPT4 and prompting)
Even if this proposal is massively inefficient, we’re throwing an absurd amount of FLOP at it.
It seems like the story for why humans are intelligent looks reasonably similar to this story: have big, highly functional brains, learn to predict what you see, train to achieve various goals, generalize far. Perhaps you think human intelligence is very unlikely ex-ante (<0.04% likely).
Am I really the only person who thinks it’s a bit crazy that we use this blobby comment thread as if it’s the best way we have to organize disagreement/argumentation for audiences? I feel like we could almost certainly improve by using, e.g., a horizontal flow as is relatively standard in debate.[1]
With a generic example below:
To be clear, the commentary could still incorporate non-block/prose text.
Alternatively, people could use something like Kialo.com. But surely there has to be something better than this comment thread, in terms of 1) ease of determining where points go unrefuted, 2) ease of quickly tracing all responses in specific branches (rather than having to skim through the entire blob to find any related responses), and 3) seeing claims side-by-side, rather than having to scroll back and forth to see the full text. (Quoting definitely helps with this, though!)
(Depending on the format: this is definitely standard in many policy debate leagues.)
How hard do you suppose it might be to use an AI to scrub the comments and generate something like this? It may be worth doing manually for some threads, even, but it’s easier to get people to adopt if the debate already exists and only needs tweaking. There may even already exist software that accepts text as input and outputs a Kialo-like debate map (thank you for alerting me that Kialo exists, it’s neat).
Over the past few months I have occasionally tried getting LLMs to do some tasks related to argument mapping, but I actually don’t think I’ve tried that specifically, and probably should. I’ll make a note to myself to try here.
But I don’t think we could have predicted people would dive into the comments like this. Usually comments have minimal engagement. There’s a LessWrong debate format for posts, but that’s usually with a moderator and such. This seems spontaneous.
Are you referring to this format on LessWrong? If so, I can’t say I’m particularly impressed, as it still seems to suffer from the problems of linear dialogue vs. a branching structure (e.g., it is hard to see where points have been dropped, and it is harder to trace specific lines of argument). But I don’t recall seeing this, so thanks for the flag.
As for “I don’t think we could have predicted people…”, that’s missing my point(s). I’m partially saying “this comment thread seems like it should be a lesson/example of how text-blob comment-threads are inefficient in general.” However, even in this specific case Paul knew that he was laying out a multi-pronged criticism, and if the flow format existed he could have presented his claims that way, to make following the debate easier—assuming Ted would reply.
Ultimately, it just seems to me like it would be really logical to have a horizontal flow UI,[1] although I recognize I am a bit biased by my familiarity with such note taking methods from competitive debate.
In theory it need not be as strictly horizontal as I lay out; it could be a series of vertically nested claims, kept largely within one column—where the idea is that instead of replying to the entire comment you can just reply to specific blocks in the original comment (e.g., accessible in a drop down at the end of a specific argument block rather than the end of the entire comment).
I don’t know. As someone who was/still is quite good at debating and connected to debating communities I would find a flow-centric comment thread bothersome and unhelpful for reading the dialogues. I quite like internet comments as is in this UI.
I find this strange/curious. Is your preference more a matter of “Traditional interfaces have good features that a flowing interface would lack“ (or some other disadvantage to switching) or “The benefits of switching to a flowing interface would be relatively minor”?
For example on the latter, do you not find it more difficult with the traditional UI to identify dropped arguments? Or suppose you are fairly knowledgeable about most of the topics but there’s just one specific branch of arguments you want to follow: do you find it easy to do that? (And more on the less-obvious side, do you think the current structure disincentivizes authors from deeply expanding on branches?)
On the former, I do think that there are benefits to having less-structured text (e.g., introductions/summaries and conclusions) and that most argument mapping is way too formal/rigid with its structure, but I think these issues could be addressed in the format I have in mind.
I asked others at the debate/EA intersection and they agreed with my line of reasoning that it would be contrived and lead to poorly structured arguments. I can elaborate if you really want, but I hesitate to spend time writing this out because I’m behind on work and, to be honest, don’t think it’ll have any impact on anything.