A software engineer today is already many times more productive (by some metrics) than a software engineer in the 90s.
Do you have any material on this? It sounds plausible to me but I couldn't find anything with a quick search.
I think that tools that, say, cheaply automate half of work, or expensively automate 100% of work, probably won't lead to wild, extra-orders-of-magnitude levels of progress.
Supposing you take "progress" to mean something like GDP per capita or AI capabilities as measured on various benchmarks, I agree that it probably won't (though I wouldn't completely rule it out). But also, I don't think progress would need to jump by OOMs for the chances of a financial crisis large enough to derail transformative AGI to be drastically reduced. (To be clear, I don't think drastic self-improvement is necessary for this, and I expect to see something more like increasingly sophisticated versions of "we use AI to automate AI research/engineering".)
I also think it's pretty likely that, if there is a financial crisis in these worlds, AI progress isn't noticeably impacted. If you look at papers published in various fields, patent applications, adoption of various IT technologies, numbers of researchers per capita: none of these things seem to slow down in the wake of financial crises. Same thing for AI: I don't see any derailment from financial crises when looking at model sizes (both in terms of parameters and training compute), dataset sizes, or chess program Elo.
Maybe capital expenditure will decrease, and that might only start being really important once SOTA models are extremely expensive. But on the other hand: if there's anything in these worlds you want to keep investing in, it's probably the technology that's headed towards full-blown AGI? Maybe I think 1 in 10 financial crises would substantially derail transformative AGI in these worlds, but it seems you think it's more like 1 in 2.
Scenario one: If half their work was automated, OK, now those 400 people could do the work of 800 people. That's great, but honestly I don't think it's path-breaking. And sure, that's only the first-order effect. If half the work was automated, we'd of course elastically start spending way more on the cheap automated half. But on the other hand, there would be diminishing returns, and for every step that becomes free, we just hit bottlenecks in the hard-to-automate parts. Even in the limit of cheap AGI, those AGIs may be limited by the GPUs they have to experiment on. Labor becoming free just means capital is the constraint.
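The bottleneck intuition above can be made concrete with an Amdahl-style calculation (my sketch and my numbers, not from the thread): when only part of the workload is accelerated, overall throughput is capped by the part that is not.

```python
# Amdahl-style sketch: overall speedup when only a fraction of the
# workload is automated (sped up), and the rest proceeds at 1x.
def overall_speedup(automated_fraction, automation_speedup):
    """Total speedup of the whole workload."""
    remaining = 1.0 - automated_fraction
    return 1.0 / (remaining + automated_fraction / automation_speedup)

# Automating half the work, even at effectively infinite speed,
# at most doubles throughput:
print(overall_speedup(0.5, 1e9))  # ~2.0
# A merely 10x speedup on that half yields even less:
print(overall_speedup(0.5, 10))  # ~1.82
```

This is why "half of work becomes free" maps to roughly a 2x ceiling rather than an order of magnitude, absent second-order effects.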
Yeah, but why only focus on OAI? In this world we have AIs that cheaply automate half of work. That seems like it would have immense economic value and promise, enough to inspire massive new investments in AI companies.
Scenario two: Or, suppose we have human-cost, human-level AGIs. I'm not convinced that would, to first order, change much either. There are millions of smart people on Earth who aren't working on AI research now. We could hire them, but we don't. We're not limited by brains. We're limited by willingness to spend. So even if we invent human-cost, human-level brains, it actually doesn't change much, because that wasn't the constraint. (Of course, this is massively oversimplifying, and obviously human-cost, human-level AGIs would be a bigger deal than human workers because of their ability to be rapidly improved and copied. But I hope it nevertheless conveys why I think AGI will need to get close to transformative levels before growth really explodes.)
Ah, I think we have a crux here. I think that, if you could hire, for the same price as a human, a human-level AGI, that would indeed change things a lot. I'd reckon the AGI would have a 3-4x productivity boost from being able to work 24/7, and would be perfectly obedient, wouldn't be limited to working in a single field, could more easily transfer knowledge to other AIs, could be backed up and/or replicated, wouldn't need an office or a fun work environment, could be "hired" or "fired" ~instantly without difficulty, etc.
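For what it's worth, the 3-4x figure roughly checks out as a back-of-the-envelope (my arithmetic, not the commenter's), just from hours worked:

```python
# Back-of-the-envelope for the "3-4x from working 24/7" claim above.
hours_per_week_continuous = 24 * 7   # an AGI that never stops: 168 hours
hours_per_week_human = 40            # assumed standard human work week
print(hours_per_week_continuous / hours_per_week_human)  # 4.2
```

That's a bit over 4x from uptime alone, before counting any of the other advantages listed.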
That feels somehow beside the point, though. I think in any such scenario, there are also going to be very cheap AIs with sub-human intelligence that would have broad economic impact too.
Do you have any material on this? It sounds plausible to me but I couldn't find anything with a quick search.
Nope, it's just an unsubstantiated guess based on seeing what small teams can build today vs 30 years ago. Also based on the massive improvement in open-source libraries and tooling compared to then. Today's developers can work faster, at higher levels of abstraction, than folks back then.
In this world we have AIs that cheaply automate half of work. That seems like it would have immense economic value and promise, enough to inspire massive new investments in AI companies...
Ah, I think we have a crux here. I think that, if you could hire, for the same price as a human, a human-level AGI, that would indeed change things a lot. I'd reckon the AGI would have a 3-4x productivity boost from being able to work 24/7, and would be perfectly obedient, wouldn't be limited to working in a single field, could more easily transfer knowledge to other AIs, could be backed up and/or replicated, wouldn't need an office or a fun work environment, could be "hired" or "fired" ~instantly without difficulty, etc.
That feels somehow beside the point, though. I think in any such scenario, there are also going to be very cheap AIs with sub-human intelligence that would have broad economic impact too.
Absolutely agree. AI and AGI will likely provide immense economic value even before the threshold of transformative AGI is crossed.
Still, supposing that AI research today is:
a 50/50 mix of capital and labor,
subject to diminishing returns,
and facing elastic demand,
...then even a 4x labor productivity boost may not be all that path-breaking when you zoom out enough. Things will speed up, surely, but they probably won't create transformative AGI overnight. Even AGI researchers will need time and compute to do their experiments.
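Under roughly those assumptions, a toy Cobb-Douglas calculation (my sketch; the thread doesn't specify a production function) shows why a 4x labor boost alone falls well short of 4x output:

```python
# Toy model: Cobb-Douglas output Y = K^0.5 * L^0.5, matching the
# "50/50 mix of capital and labor" above; compute (K) is held fixed.
def output(capital, labor, capital_share=0.5):
    """Cobb-Douglas production with the given capital share."""
    return capital ** capital_share * labor ** (1.0 - capital_share)

baseline = output(capital=1.0, labor=1.0)
boosted = output(capital=1.0, labor=4.0)  # 4x effective labor, same compute
print(boosted / baseline)  # 2.0: a 4x labor boost only doubles output
```

With capital fixed, the labor boost enters at the 0.5 exponent, so 4x labor yields only 4^0.5 = 2x output, which is the "time and compute" constraint in miniature.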