However, in practice, I expect that the native labor share of income would decline almost in proportion to natives' share of the total population.
Again, I’m assuming that the AIs won’t get this money. In an efficient market, almost everything AIs do gets done essentially for “free”, without the AIs themselves earning money. This is similar to how most automation works.
If AIs do get the money, things would be completely different from my expectations. In that case, though, I’d imagine that tech might move much more slowly, unless these AIs engaged in some extreme race to the bottom, being willing to do a lot of work incredibly cheaply. I’m really not sure how to price the marginal supply curve for AIs.
That’s not what I meant. I expect the human labor share to decline to near-zero levels even if AIs don’t own their own labor.
In the case AIs are owned by humans, their wages will accrue to their owners, who will be humans. In this case, aggregate human wages will likely be small relative to aggregate capital income (i.e., GDP that is paid to capital owners, including people who own AIs).
In the case AIs own their own labor, I expect aggregate human wages will be both small compared to aggregate AI wages, and small compared to aggregate capital income.
In both cases, I expect the total share of GDP paid out as human wages will be small. (Which is not to say humans will be doing poorly. You can enjoy high living standards even without high wages: rich retirees do that all the time.)
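To make the arithmetic behind this claim concrete, here is a minimal toy model (all numbers invented for illustration, not drawn from the discussion above): hold aggregate human wages fixed in absolute terms, let AI-produced output accrue to capital owners, and watch the human wage share of GDP shrink toward zero as AI output grows.

```python
# Toy model with illustrative numbers only: the human wage share of GDP
# as AI-produced output grows. Human wages are held constant in absolute
# terms; all AI output is assumed to accrue to capital owners.

def human_wage_share(human_wages: float, ai_output: float) -> float:
    """Fraction of GDP paid out as human wages."""
    gdp = human_wages + ai_output
    return human_wages / gdp

human_wages = 100  # arbitrary units, held fixed
for ai_output in [0, 100, 1_000, 10_000]:
    share = human_wage_share(human_wages, ai_output)
    print(f"AI output {ai_output:>6}: human wage share = {share:.1%}")
```

The point of the sketch is that the wage *share* collapses (100% → 50% → 9% → 1%) even though human wages never fall in absolute terms, which is why a near-zero labor share is compatible with humans doing well materially.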
I imagine part of the question is “how monopolistic will these conditions be?” If there’s one monopoly, it would charge a ton, and I’d expect it to quickly dominate the entire world.
If there’s “perfect competition”, I’d expect AI services to be far cheaper.
Right now LLMs seem much closer to “perfect competition” to me: companies are losing money selling them (I’m quite sure of this). I’m not sure what to expect going forward. I assume that people won’t allow one or two companies to simply start owning the entire economy, but it is a possibility. (At that point, this is basically a Decisive Strategic Advantage.)
All that said, I don’t imagine the period I’m describing lasting very long. Once humans can be simulated well, and we really reach TAI++, most bets are off. It seems really tough to have a great model of that world, outside of “humans basically split up the light cone by dividing the sources of production, which will basically be AIs”[1]
I agree that humans will basically stop being useful at that point.
But if that point is far away (40-90 years), that could be enough time for many humans to accumulate a lot of money/capital in the meantime.

[1] “Split up” could mean “The CCP gets all of it”
Basically, I naively expect there to be some period where we have a lot of AI but humans are still getting paid a lot, followed by some point where human pay stops altogether (unless weird lock-in happens).
Maybe one good forecasting question is something like: “How much future wealth will be owned by AIs themselves, at different points in time?” I’d guess the answer is likely to be either roughly 0% (as with most automation) or roughly 100% (AI Takeover, though in that case it’s not clear how to define the market).