I'm curious: What do you think are the rough odds that an invasion of Taiwan increases the likelihood of TAGI by 2043?
Maybe 20% that it increases the likelihood? Higher if war starts by 2030 or so, and near 0% if it starts in 2041 (but maybe >0% if it starts in 2042?). What number would you put on it, and how would you update your model if that number changed?
However, we feel somewhat more comfortable with our predictions prior to scaled, cheap AGI. Like, if it takes 3e30–3e35 operations to train an early AGI, then I don't think we can condition on that AGI accelerating us towards construction of the resources needed to generate 3e30–3e35 operations. It would be putting the cart before the horse.
What we can (and try to) condition on are potential predecessors to that AGI; e.g., improved narrow AI or expensive human-level AGI. Both of those we have experience with today, which gives us more confidence that we won't get an insane productivity explosion in the physical construction of fabs and power plants.
I think what you're saying here is, "yes, we condition on such a world, but even in such a world these things won't be true for all of 2023-2043, but mainly only towards the latter years in that range". Is that right?
I agree to some extent, but as you wrote, "transformative AGI is a much higher bar than merely massive progress in AI": I think in a lot of those previous years we'll still have AI doing lots of work to speed up R&D and carry out lots of other economically useful tasks. Like, we know in this world that we're headed for AGI in 2043 or even earlier, so we should be seeing really capable and useful AI systems already in 2030 and 2035 and so on.
Maybe you think the progression from today's systems to potentially-transformative AGI will be discontinuous or something like that, with lots of progress (on algorithms, hardware, robotics, etc.) happening near the end?
I think in a lot of those previous years we'll still have AI doing lots of work to speed up R&D and carry out lots of other economically useful tasks. Like, we know in this world that we're headed for AGI in 2043 or even earlier, so we should be seeing really capable and useful AI systems already in 2030 and 2035 and so on.
Maybe you think the progression from today's systems to potentially-transformative AGI will be discontinuous or something like that, with lots of progress (on algorithms, hardware, robotics, etc.) happening near the end?
No, I actually fully agree with you. I don't think progress will be discontinuous, and I do think we will see increasingly capable and useful systems by 2030 and 2035 that accelerate rates of progress.
I think where we may differ is that:
I think the acceleration will likely be more "in line" than "out of line" with the exponential acceleration we already see from improving computer tools and specifically LLM computer tools (e.g., GitHub Copilot, GPT-4). Already a software engineer today is many multiples more productive (by some metrics) than a software engineer in the 90s.
I think that tools that, say, cheaply automate half of work, or expensively automate 100% of work, probably won't lead to wild, extra orders of magnitude levels of progress. OpenAI has what, 400 employees?
Scenario one: If half their work was automated, ok, now those 400 people could do the work of 800 people. That's great, but honestly I don't think it's path-breaking. And sure, that's only the first-order effect. If half the work was automated, we'd of course elastically start spending much more on the cheap automated half. But on the other hand, there would be diminishing returns, and for every step that becomes free, we just hit bottlenecks in the hard-to-automate parts. Even in the limit of cheap AGI, those AGIs may be limited by the GPUs they have to experiment on. Labor becoming free just means capital is the constraint.
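To make the bottleneck intuition concrete, here is a minimal sketch (my own illustration with made-up fractions and speedups, not something from the dialogue) of an Amdahl's-law-style model in which the share of work that stays manual caps the overall speedup:

```python
# Toy Amdahl's-law-style model of the bottleneck argument above.
# All fractions and speedups are illustrative assumptions, not data.

def overall_speedup(automated_fraction: float, automation_speedup: float) -> float:
    """Speedup of the whole pipeline when only part of the work is accelerated."""
    manual_share = 1.0 - automated_fraction
    automated_share = automated_fraction / automation_speedup
    return 1.0 / (manual_share + automated_share)

# Automating half the work, even "for free", roughly doubles throughput
# (400 people doing the work of ~800), but not much more:
print(overall_speedup(0.5, 1_000.0))  # ~2.0

# Even with 90% of tasks automated, the remaining manual 10% caps gains near 10x:
print(overall_speedup(0.9, 1_000.0))  # ~10.0
```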
Scenario two: Or, suppose we have human-cost human-level AGIs. I'm not convinced that would, to first order, change much either. There are millions of smart people on earth who aren't working on AI research now. We could hire them, but we don't. We're not limited by brains. We're limited by willingness to spend. So even if we invent human-cost human-level brains, it actually doesn't change much, because that wasn't the constraint. (Of course, this is massively oversimplifying, and obviously human-cost human-level AGIs would be a bigger deal than human workers because of their ability to be rapidly improved and copied. But I hope it nevertheless conveys why I think AGI will need to get close to transformative levels before growth really explodes.)
Overall, where it feels like I'm differing from some folks here is that I think higher levels of AI capability will be needed before we get wild self-improvement takeoff. I don't think it will be early, because even if we get massive automation due to uneven AI, we'll still be bottlenecked by the things it's bad at. I acknowledge this is a pretty squishy argument and I find it difficult to quantify and articulate, so I think it's quite reasonable to disagree with me here. In general, though, I think we've seen a long history of things being harder to automate than we thought (e.g., self-driving, radiology, etc.). It will be exciting to see what happens!
Already a software engineer today is many multiples more productive (by some metrics) than a software engineer in the 90s.
Do you have any material on this? It sounds plausible to me but I couldn't find anything with a quick search.
I think that tools that, say, cheaply automate half of work, or expensively automate 100% of work, probably won't lead to wild, extra orders of magnitude levels of progress.
Supposing you take "progress" to mean something like GDP per capita or AI capabilities as measured on various benchmarks, I agree that it probably won't (though I wouldn't completely rule it out). But also, I don't think progress would need to jump by OOMs for the chances of a financial crisis large enough to derail transformative AGI to be drastically reduced. (To be clear, I don't think drastic self-improvement is necessary for this, and I expect to see something more like increasingly sophisticated versions of "we use AI to automate AI research/engineering".)
I also think it's pretty likely that, if there is a financial crisis in these worlds, AI progress isn't noticeably impacted. If you look at papers published in various fields, patent applications, adoption of various IT technologies, or numbers of researchers per capita, none of these things seem to slow down in the wake of financial crises. Same thing for AI: I don't see any derailment from financial crises when looking at model sizes (both in terms of parameters and training compute), dataset sizes, or chess program Elo.
Maybe capital expenditure will decrease, and that might only start being really important once SOTA models are extremely expensive, but on the other hand: if there's anything in these worlds you want to keep investing in, it's probably the technology that's headed towards full-blown AGI? Maybe I think 1 in 10 financial crises would substantially derail transformative AGI in these worlds, but it seems you think it's more like 1 in 2.
Scenario one: If half their work was automated, ok, now those 400 people could do the work of 800 people. That's great, but honestly I don't think it's path-breaking. And sure, that's only the first-order effect. If half the work was automated, we'd of course elastically start spending much more on the cheap automated half. But on the other hand, there would be diminishing returns, and for every step that becomes free, we just hit bottlenecks in the hard-to-automate parts. Even in the limit of cheap AGI, those AGIs may be limited by the GPUs they have to experiment on. Labor becoming free just means capital is the constraint.
Yeah, but why only focus on OAI? In this world we have AIs that cheaply automate half of work. That seems like it would have immense economic value and promise, enough to inspire massive new investments in AI companies.
Scenario two: Or, suppose we have human-cost human-level AGIs. I'm not convinced that would, to first order, change much either. There are millions of smart people on earth who aren't working on AI research now. We could hire them, but we don't. We're not limited by brains. We're limited by willingness to spend. So even if we invent human-cost human-level brains, it actually doesn't change much, because that wasn't the constraint. (Of course, this is massively oversimplifying, and obviously human-cost human-level AGIs would be a bigger deal than human workers because of their ability to be rapidly improved and copied. But I hope it nevertheless conveys why I think AGI will need to get close to transformative levels before growth really explodes.)
Ah, I think we have a crux here. I think that, if you could hire, for the same price as a human, a human-level AGI, that would indeed change things a lot. I'd reckon the AGI would have a 3-4x productivity boost from being able to work 24/7, and would be perfectly obedient, wouldn't be limited to working in a single field, could more easily transfer knowledge to other AIs, could be backed up and/or replicated, wouldn't need an office or a fun work environment, could be "hired" or "fired" ~instantly without difficulty, etc.
That feels somehow beside the point, though. I think in any such scenario, there's also going to be very cheap AIs with sub-human intelligence that would have broad economic impact too.
Do you have any material on this? It sounds plausible to me but I couldn't find anything with a quick search.
Nope, it's just an unsubstantiated guess based on seeing what small teams can build today vs. 30 years ago. Also based on the massive improvement in open-source libraries and tooling compared to then. Today's developers can work faster at higher levels of abstraction compared to folks back then.
In this world we have AIs that cheaply automate half of work. That seems like it would have immense economic value and promise, enough to inspire massive new investments in AI companies....
Ah, I think we have a crux here. I think that, if you could hire, for the same price as a human, a human-level AGI, that would indeed change things a lot. I'd reckon the AGI would have a 3-4x productivity boost from being able to work 24/7, and would be perfectly obedient, wouldn't be limited to working in a single field, could more easily transfer knowledge to other AIs, could be backed up and/or replicated, wouldn't need an office or a fun work environment, could be "hired" or "fired" ~instantly without difficulty, etc.
That feels somehow beside the point, though. I think in any such scenario, there's also going to be very cheap AIs with sub-human intelligence that would have broad economic impact too.
Absolutely agree. AI and AGI will likely provide immense economic value even before the threshold of transformative AGI is crossed.
Still, supposing that AI research today:
is a 50-50 mix of capital and labor,
faces diminishing returns,
and has elastic demand,
...then even a 4x labor productivity boost may not be all that path-breaking when you zoom out enough. Things will speed up, surely, but they likely won't create transformative AGI overnight. Even AGI researchers will need time and compute to do their experiments.
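As a rough illustration of why those three assumptions blunt a 4x labor boost, here is a minimal sketch using a Cobb-Douglas-style production function (the functional form, the 0.5 exponents, and the numbers are my assumptions for illustration, not something stated in the dialogue):

```python
# Toy Cobb-Douglas model: research output depends on both capital (compute,
# fabs, power) and labor, each with diminishing returns. The 0.5 exponent
# encodes the assumed 50-50 capital/labor split.

def research_output(capital: float, effective_labor: float, capital_share: float = 0.5) -> float:
    return (capital ** capital_share) * (effective_labor ** (1.0 - capital_share))

baseline = research_output(capital=1.0, effective_labor=1.0)

# A 4x labor productivity boost with capital held fixed only ~doubles output:
print(research_output(1.0, 4.0) / baseline)  # 2.0

# Capturing the full 4x requires scaling capital (GPUs, fabs, power) 4x as well:
print(research_output(4.0, 4.0) / baseline)  # 4.0
```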