I'm curious: What do you think are the rough odds that an invasion of Taiwan increases the likelihood of TAGI by 2043?
Maybe 20% that it increases the likelihood? Higher if war starts by 2030 or so, and near 0% if it starts in 2041 (but maybe >0% if it starts in 2042?). What number would you put on it, and how would you update your model if that number changed?
However, we feel somewhat more comfortable with our predictions prior to scaled, cheap AGI. Like, if it takes 3e30–3e35 operations to train an early AGI, then I don't think we can condition on that AGI accelerating us towards construction of the resources needed to generate 3e30–3e35 operations. It would be putting the cart before the horse.
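For a rough sense of the scale those numbers imply, here is a back-of-the-envelope sketch; the accelerator throughput and utilization figures are illustrative assumptions on my part, not numbers from the essay:

```python
# Back-of-the-envelope scale of 3e30-3e35 operations.
# The throughput and utilization numbers are illustrative assumptions.
SECONDS_PER_YEAR = 3.15e7
gpu_flop_per_s = 1e15      # assumed sustained FLOP/s for a modern accelerator
utilization = 0.4          # assumed average utilization

flop_per_gpu_year = gpu_flop_per_s * utilization * SECONDS_PER_YEAR  # ~1.3e22

for total_ops in (3e30, 3e35):
    print(f"{total_ops:.0e} ops ~ {total_ops / flop_per_gpu_year:.1e} GPU-years")
# Even the low end comes out to roughly 2e8 GPU-years under these assumptions,
# which is why the fabs and power plants themselves become the constraint.
```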
What we can (and try to) condition on are potential predecessors to that AGI; e.g., improved narrow AI or expensive human-level AGI. Both of those we have experience with today, which gives us more confidence that we won't get an insane productivity explosion in the physical construction of fabs and power plants.
I think what you're saying here is, "yes, we condition on such a world, but even in such a world these things won't be true for all of 2023-2043, but mainly only towards the latter years in that range". Is that right?
I agree to some extent, but as you wrote, "transformative AGI is a much higher bar than merely massive progress in AI": I think in a lot of those previous years we'll still have AI doing lots of work to speed up R&D and carry out lots of other economically useful tasks. Like, we know in this world that we're headed for AGI in 2043 or even earlier, so we should be seeing really capable and useful AI systems already in 2030 and 2035 and so on.
Maybe you think the progression from today's systems to potentially-transformative AGI will be discontinuous or something like that, with lots of progress (on algorithms, hardware, robotics, etc.) happening near the end?
I think in a lot of those previous years we'll still have AI doing lots of work to speed up R&D and carry out lots of other economically useful tasks. Like, we know in this world that we're headed for AGI in 2043 or even earlier, so we should be seeing really capable and useful AI systems already in 2030 and 2035 and so on.
Maybe you think the progression from today's systems to potentially-transformative AGI will be discontinuous or something like that, with lots of progress (on algorithms, hardware, robotics, etc.) happening near the end?
No, I actually fully agree with you. I don't think progress will be discontinuous, and I do think we will see increasingly capable and useful systems by 2030 and 2035 that accelerate rates of progress.
I think where we may differ is that:
I think the acceleration will likely be more "in line" than "out of line" with the exponential acceleration we already see from improving computer tools and specifically LLM computer tools (e.g., GitHub Copilot, GPT-4). Already a software engineer today is many multiples more productive (by some metrics) than a software engineer in the 90s.
I think that tools that, say, cheaply automate half of work, or expensively automate 100% of work, probably won't lead to wild, extra orders of magnitude levels of progress. OpenAI has what, 400 employees?
Scenario one: If half their work was automated, OK, now those 400 people could do the work of 800 people. That's great, but honestly I don't think it's path-breaking. And sure, that's only the first-order effect. If half the work was automated, we'd of course elastically start spending much more on the cheap automated half. But on the other hand, there would be diminishing returns, and for every step that becomes free, we just hit bottlenecks in the hard-to-automate parts. Even in the limit of cheap AGI, those AGIs may be limited by the GPUs they have to experiment on. Labor becoming free just means capital is the constraint.
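A minimal sketch of this bottleneck intuition, in the spirit of Amdahl's law (the 50% split is the scenario above; the speedup on the automated half is a free parameter):

```python
# Effective speedup when only a fraction of the work is automated (Amdahl's law).
def effective_speedup(automated_fraction: float, automation_speedup: float) -> float:
    """Overall speedup when `automated_fraction` of the work gets
    `automation_speedup`x faster and the rest stays at 1x."""
    remaining = 1 - automated_fraction
    return 1 / (remaining + automated_fraction / automation_speedup)

# Half the work automated, even made arbitrarily fast, caps out at 2x overall:
print(effective_speedup(0.5, 10))    # ~1.82x
print(effective_speedup(0.5, 1e9))   # ~2.0x -- bottlenecked by the other half
```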
Scenario two: Or, suppose we have human-cost human-level AGIs. I'm not convinced that would, to first order, change much either. There are millions of smart people on earth who aren't working on AI research now. We could hire them, but we don't. We're not limited by brains. We're limited by willingness to spend. So even if we invent human-cost human-level brains, it actually doesn't change much, because that wasn't the constraint. (Of course, this is massively oversimplifying, and obviously human-cost human-level AGIs would be a bigger deal than human workers because of their ability to be rapidly improved and copied. But I hope it nevertheless conveys why I think AGI will need to get close to transformative levels before growth really explodes.)
Overall, where it feels like I'm differing from some folks here is that I think higher levels of AI capability will be needed before we get wild self-improvement takeoff. I don't think it will come early, because even if we get massive automation due to uneven AI, we'll still be bottlenecked by the things it's bad at. I acknowledge this is a pretty squishy argument and I find it difficult to quantify and articulate, so I think it's quite reasonable to disagree with me here. In general, though, I think we've seen a long history of things being harder to automate than we thought (e.g., self-driving, radiology, etc.). It will be exciting to see what happens!
Already a software engineer today is many multiples more productive (by some metrics) than a software engineer in the 90s.
Do you have any material on this? It sounds plausible to me but I couldn't find anything with a quick search.
I think that tools that, say, cheaply automate half of work, or expensively automate 100% of work, probably won't lead to wild, extra orders of magnitude levels of progress.
Supposing you take "progress" to mean something like GDP per capita or AI capabilities as measured on various benchmarks, I agree that it probably won't (though I wouldn't completely rule it out). But also, I don't think progress would need to jump by OOMs for the chances of a financial crisis large enough to derail transformative AGI to be drastically reduced. (To be clear, I don't think drastic self-improvement is necessary for this, and I expect to see something more like increasingly sophisticated versions of "we use AI to automate AI research/engineering".)
I also think it's pretty likely that, if there is a financial crisis in these worlds, AI progress isn't noticeably impacted. If you look at papers published in various fields, patent applications, adoption of various IT technologies, numbers of researchers per capita: none of these things seem to slow down in the wake of financial crises. Same thing for AI: I don't see any derailment from financial crises when looking at model sizes (both in terms of parameters and training compute), dataset sizes, or chess program Elo.
Maybe capital expenditure will decrease, and that might only start being really important once SOTA models are extremely expensive, but on the other hand: if there's anything in these worlds you want to keep investing in, it's probably the technology that's headed towards full-blown AGI? Maybe I think 1 in 10 financial crises would substantially derail transformative AGI in these worlds, but it seems you think it's more like 1 in 2.
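To illustrate why that gap matters, here is a small sketch of how the conditional estimate feeds into an overall multiplicative factor; the crisis probability below is a placeholder of mine, not a number from the essay:

```python
# How "P(derail | financial crisis)" feeds into an overall survival factor.
# p_crisis is a purely illustrative placeholder.
p_crisis = 0.5

for p_derail_given_crisis in (0.1, 0.5):   # "1 in 10" vs "1 in 2"
    factor = 1 - p_crisis * p_derail_given_crisis
    print(f"P(derail | crisis) = {p_derail_given_crisis}: survival factor = {factor:.2f}")
# 0.95 vs 0.75 -- a sizeable gap once multiplied with the other factors in the chain.
```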
Scenario one: If half their work was automated, OK, now those 400 people could do the work of 800 people. That's great, but honestly I don't think it's path-breaking. And sure, that's only the first-order effect. If half the work was automated, we'd of course elastically start spending much more on the cheap automated half. But on the other hand, there would be diminishing returns, and for every step that becomes free, we just hit bottlenecks in the hard-to-automate parts. Even in the limit of cheap AGI, those AGIs may be limited by the GPUs they have to experiment on. Labor becoming free just means capital is the constraint.
Yeah, but why only focus on OAI? In this world we have AIs that cheaply automate half of work. That seems like it would have immense economic value and promise, enough to inspire massive new investments in AI companies.
Scenario two: Or, suppose we have human-cost human-level AGIs. I'm not convinced that would, to first order, change much either. There are millions of smart people on earth who aren't working on AI research now. We could hire them, but we don't. We're not limited by brains. We're limited by willingness to spend. So even if we invent human-cost human-level brains, it actually doesn't change much, because that wasn't the constraint. (Of course, this is massively oversimplifying, and obviously human-cost human-level AGIs would be a bigger deal than human workers because of their ability to be rapidly improved and copied. But I hope it nevertheless conveys why I think AGI will need to get close to transformative levels before growth really explodes.)
Ah, I think we have a crux here. I think that, if you could hire a human-level AGI for the same price as a human, that would indeed change things a lot. I'd reckon the AGI would have a 3-4x productivity boost from being able to work 24/7, and would be perfectly obedient, wouldn't be limited to working in a single field, could more easily transfer knowledge to other AIs, could be backed up and/or replicated, wouldn't need an office or a fun work environment, and could be "hired" or "fired" ~instantly without difficulty, etc.
That feels somehow beside the point, though. I think in any such scenario, there's also going to be very cheap AIs with sub-human intelligence that would have broad economic impact too.
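For what it's worth, the ~3-4x figure above roughly follows from hours alone (a quick sketch; the human work-week number is an assumption of mine):

```python
# Rough source of a ~3-4x productivity multiple from continuous operation.
hours_per_week_agi = 24 * 7     # an AGI instance never clocks out
hours_per_week_human = 45       # assumed effective human work-week
print(hours_per_week_agi / hours_per_week_human)   # ~3.7x, before any other advantages
```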
Do you have any material on this? It sounds plausible to me but I couldn't find anything with a quick search.
Nope, it's just an unsubstantiated guess based on seeing what small teams can build today vs 30 years ago. Also based on the massive improvement in open-source libraries and tooling compared to then. Today's developers can work faster at higher levels of abstraction compared to folks back then.
In this world we have AIs that cheaply automate half of work. That seems like it would have immense economic value and promise, enough to inspire massive new investments in AI companies....
Ah, I think we have a crux here. I think that, if you could hire a human-level AGI for the same price as a human, that would indeed change things a lot. I'd reckon the AGI would have a 3-4x productivity boost from being able to work 24/7, and would be perfectly obedient, wouldn't be limited to working in a single field, could more easily transfer knowledge to other AIs, could be backed up and/or replicated, wouldn't need an office or a fun work environment, and could be "hired" or "fired" ~instantly without difficulty, etc.
That feels somehow beside the point, though. I think in any such scenario, there's also going to be very cheap AIs with sub-human intelligence that would have broad economic impact too.
Absolutely agree. AI and AGI will likely provide immense economic value even before the threshold of transformative AGI is crossed.
Still, supposing that AI research today:
is a 50–50 mix of capital and labor
faces diminishing returns
and has elastic demand
...then even a 4x labor productivity boost may not be all that path-breaking when you zoom out enough. Things will speed up, surely, but they might not create transformative AGI overnight. Even AGI researchers will need time and compute to do their experiments.
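One toy way to make this concrete, assuming a Cobb-Douglas production function with the 50–50 split above (the functional form and numbers are my illustration, not from the essay):

```python
# Toy Cobb-Douglas production function with equal capital and labor shares:
# Y = K**0.5 * (A * L)**0.5, where A is labor productivity.
def output(capital: float, labor: float, labor_productivity: float) -> float:
    return capital ** 0.5 * (labor_productivity * labor) ** 0.5

baseline = output(capital=1.0, labor=1.0, labor_productivity=1.0)
boosted = output(capital=1.0, labor=1.0, labor_productivity=4.0)  # 4x labor boost, fixed capital
print(boosted / baseline)   # 2.0 -- a doubling of output, not an overnight takeoff
```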