I take your overall point to be that the static optimization problem may not be properly specified. For example, costs may not be linear in labor because of adjustment costs to growing very quickly, or costs may not be linear in compute because of bulk discounting. Moreover, these non-linear costs may be changing over time (e.g., adjustment costs might only matter in 2021–2024, when OpenAI and Anthropic have been scaling labor aggressively). I agree that this would bias the estimate of σ. Given the data we have, there should be some way to at least partially deal with this (e.g., by adding lagged labor as a control). I’ll have to think about it more.
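To make the lagged-labor idea concrete, here is a minimal sketch on synthetic data (the data-generating process, the true σ = 0.5, and the lag control are all illustrative assumptions, not our actual estimation setup). Under CES cost minimization the first-order conditions give ln(L/K) = σ ln(a/b) − σ ln(w/r), so the coefficient on ln(w/r) identifies −σ, and lagged labor can be added as a control to absorb adjustment-cost dynamics:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
sigma_true = 0.5  # assumed for the synthetic example

# Synthetic relative factor prices ln(w/r)
ln_wr = rng.normal(0.0, 1.0, T)

# CES first-order condition: ln(L/K) = sigma*ln(a/b) - sigma*ln(w/r) + noise
ln_LK = 0.3 - sigma_true * ln_wr + rng.normal(0.0, 0.05, T)

# Lagged labor ratio as a control for adjustment-cost dynamics
ln_LK_lag = np.concatenate([[0.0], ln_LK[:-1]])

# OLS of ln(L/K) on [1, ln(w/r), lagged ln(L/K)]
X = np.column_stack([np.ones(T), ln_wr, ln_LK_lag])
beta, *_ = np.linalg.lstsq(X, ln_LK, rcond=None)
sigma_hat = -beta[1]
print(f"estimated sigma: {sigma_hat:.3f}")
```

In this toy setup the lag control is harmless because prices are i.i.d.; the interesting case is when adjustment costs make today's labor depend on yesterday's, where omitting the lag would bias the coefficient on ln(w/r).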
On some of the smaller comments:
wages/r_{research} is around 0.28 (maybe you have better data here)
The best data we have is The Information’s report that OpenAI spent $700M on salaries and $1000M on research compute in 2024, so wL/rK = 0.7 (assuming you meant wL/rK rather than w/r).
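The arithmetic behind that ratio, taking the reported figures at face value (the assumption is that salaries and research-compute spend map cleanly onto wL and rK):

```python
# The Information's reported 2024 figures (assumed to map onto wL and rK)
wL = 700e6    # salaries
rK = 1000e6   # research compute
ratio = wL / rK
print(ratio)  # 0.7
```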
The whole industry is much larger now, and the elasticity of substitution might not be constant; if so, this is worrying, because to predict whether there’s a software-only singularity we’ll need to extrapolate over more orders of magnitude of growth and over the human labor → AI labor transition.
I agree. σ might not be constant over time, which is a problem both for estimation/extrapolation and for predicting what an intelligence explosion might look like. For example, if σ falls over time, then we may get a foom for a bit until σ falls below 1, and then it fizzles. I’ve been thinking about writing something up about this.
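A toy numerical illustration of why σ crossing below 1 matters (the CES form is standard, but the equal share parameter and the normalized fixed compute stock are arbitrary assumptions): with compute held fixed, a CES research aggregate grows without bound in AI labor when σ > 1, but is capped by the fixed input when σ < 1, which is the fizzle mechanism.

```python
def ces(S, K, sigma, a=0.5):
    """CES aggregate of AI labor (proportional to software level S) and compute K."""
    if abs(sigma - 1.0) < 1e-9:            # Cobb-Douglas limit at sigma = 1
        return S ** a * K ** (1 - a)
    rho = (sigma - 1.0) / sigma
    return (a * S ** rho + (1 - a) * K ** rho) ** (1.0 / rho)

K = 1.0  # fixed compute stock (normalized)
# Evaluate the aggregate as software level S grows over several orders of magnitude
results = {sigma: [ces(S, K, sigma) for S in (1.0, 1e3, 1e6)] for sigma in (2.0, 0.5)}
print(results[2.0])  # keeps growing: abundant AI labor substitutes for fixed compute
print(results[0.5])  # saturates near a ceiling set by the fixed compute stock
```

With σ = 0.5 the aggregate approaches a finite cap as S → ∞, so software progress alone stalls; with σ = 2 it grows roughly linearly in S, so progress can self-sustain.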
Are you planning follow-up work, or is there other economic data we could theoretically collect that could give us higher confidence estimates?
Yes, although we haven’t decided yet what is most useful to follow up on. Very short-term, there is trying to accommodate non-linear pricing. Of course, data on what non-linear pricing looks like would be helpful (e.g., how Nvidia bulk discounts).
We may also try to estimate ϕ with the data we have.
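On the bulk-discounting point above, here is one way to see why non-linear pricing breaks the linear-cost assumption, using a hypothetical constant-elasticity discount schedule (both the functional form and the discount parameter are made up for illustration): if total compute spend is C(K) = p₀K^(1−d), then the marginal price is (1 − d) times the average price, so plugging observed average prices into the first-order conditions misstates the price that actually governs the factor choice.

```python
p0, d = 1.0, 0.2  # hypothetical list price and bulk-discount parameter

def total_cost(K):
    # Hypothetical constant-elasticity discount: spend grows less than linearly in K
    return p0 * K ** (1 - d)

K = 1e4
avg_price = total_cost(K) / K
eps = 1e-3
marginal_price = (total_cost(K + eps) - total_cost(K)) / eps  # numerical derivative
print(marginal_price / avg_price)  # close to 1 - d
```

The ratio is exactly 1 − d analytically, so with this schedule using average prices would overstate the relevant marginal price of compute by a constant factor.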
Thanks for the insightful comment.