Quickly:

> absent consumer preferences for human-specific services, or regulations barring AIs from doing certain tasks—AIs will be ~perfectly substitutable for human labor.

This clause is doing a lot of work. Functionally, I expect both of those factors (consumer preferences and regulations) to be a very large deal for a while.
Perhaps you can expand on this point. I personally don’t think there are many economic services for which I would strongly prefer a human perform them compared to a functionally identical service produced by an AI. I have a hard time imagining paying >50% of my income on human-specific services if I could spend less money to obtain essentially identical services from AIs, and thereby greatly expand my consumption as a result.
However, if we are counting the value of interpersonal relationships (which are not usually counted in economic statistics), then I agree the claim is more plausible. Nonetheless, this also seems somewhat unimportant when talking about things like whether humans would win a war with AIs.
> AIs would collectively have far more economic power than humans.

I mean, only if we treat them as individuals with their own property rights.
In this context, it doesn’t matter that much whether AIs have legal property rights, since I was talking about whether AIs will collectively be more productive and powerful than humans. This distinction is important because, if there is a war between humans and AIs, I expect their actual productive abilities to be more important than their legal share of income on paper, in determining who wins the war.
But I agree that, if humans retain their property rights, then they will likely be economically more powerful than AIs in the foreseeable future by virtue of their ownership over capital (which could include both AIs and more traditional forms of physical capital).
> if there is a war between humans and AIs, I expect their actual productive abilities to be more important than their legal share of income on paper, in determining who wins the war.
I definitely agree that humans would very much lose at some point of AI capability. I was instead just trying to discuss the economic situation in cases where AIs don't rebel.
> I personally don’t think there are many economic services for which I would strongly prefer a human perform them compared to a functionally identical service produced by an AI
I think that even small bottlenecks would eventually become a large deal. If 0.1% of a process is done by humans, but the rest gets automated and done for ~free, then that 0.1% is what gets paid for.
For example, solar panels have gotten far cheaper, but solar installations haven't gotten much cheaper, which negates much of the impact of cheaper panels past a point.
So wherever human work can't be automated, I think there might be a lot more human work being done, or at least a lot more pay available for it.
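To make the bottleneck point concrete, here's a toy sketch in Python (all numbers are made up; this is just the fixed-proportions arithmetic, not a model of any real process):

```python
# Toy model of the bottleneck claim (hypothetical numbers): a process
# combines automatable steps and human steps in fixed proportions, so
# the human step's cost dominates total spending as automation gets cheap.

human_cost = 1.0  # cost of the small human-performed portion (stays fixed)

for automated_cost in [999.0, 100.0, 10.0, 1.0, 0.01]:
    total_cost = human_cost + automated_cost
    human_share = human_cost / total_cost
    print(f"automated portion costs {automated_cost:>7.2f} -> "
          f"human share of spending: {human_share:.1%}")

# The human share climbs from ~0.1% toward ~100% as the automated
# portion's cost falls toward zero.
```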
> I think that even small bottlenecks would eventually become a large deal. If 0.1% of a process is done by humans, but the rest gets automated and done for ~free, then that 0.1% is what gets paid for.
I agree with this in theory, but in practice I expect these bottlenecks to be quite insignificant in both the short and the long run.
We can compare to an analogous case in which we open up the labor market to foreigners (i.e., allowing them to immigrate into our country). In theory, preferences for services produced by natives could end up implying that, no matter how many people immigrate to our country, natives will always command the majority of aggregate wages. However, in practice, I expect that the native labor share of income would decline almost in proportion to their share of the total population.
In the immigration analogy, the reason why native workers would see their aggregate share of wages decline is essentially the same as the reason why I expect the human labor share to decline with AI: foreigners, like AIs, can learn to do our jobs about as well as we can do them. In general, it is quite rare for people to have strong preferences about who produces the goods and services they buy, relative to their preferences about the functional traits of those goods and services (such as their physical quality and design).
(However, the analogy is imperfect, of course, because immigrants tend to be both consumers and producers, and therefore their preferences impact the market too—whereas you might think AIs will purely be producers, with no consumption preferences.)
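To show the arithmetic behind that claim, here's a toy sketch in Python (the workforce numbers are made up; the only assumption doing work is that wages roughly equalize under near-perfect substitution):

```python
# Toy sketch of the near-perfect-substitutes case (hypothetical numbers):
# if AI workers and human workers earn the same wage, the human share of
# aggregate wages simply tracks the human share of the total workforce.

human_workers = 4e9   # assumed number of human workers
wage = 1.0            # common wage under (near-)perfect substitution

for ai_workers in [0.0, 4e9, 40e9, 400e9]:
    human_wages = wage * human_workers
    total_wages = wage * (human_workers + ai_workers)
    print(f"{ai_workers:.0e} AI workers -> human wage share: "
          f"{human_wages / total_wages:.1%}")

# Human aggregate wages decline roughly in proportion to the human share
# of the combined (human + AI) workforce, as in the immigration analogy.
```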
> However, in practice, I expect that the native labor share of income would decline almost in proportion to their share of the total population.
Again, I'm assuming that the AIs won't get this money. In an efficient market, almost everything AIs do basically gets done for "free", without the AIs themselves earning money. This is similar to how most automation works.
If AIs do get the money, things would be completely different from my expectations. In that case, though, I'd imagine that tech might move much more slowly, unless these AIs engaged in some extreme race to the bottom and were willing to do a lot of work incredibly cheaply. I'm really not sure how to price the marginal supply curve for AI labor.
> Again, I'm assuming that the AIs won't get this money. In an efficient market, almost everything AIs do basically gets done for "free", without the AIs themselves earning money. This is similar to how most automation works.
That’s not what I meant. I expect the human labor share to decline to near-zero levels even if AIs don’t own their own labor.
In the case where AIs are owned by humans, their wages will accrue to their owners, who will be humans. In this case, aggregate human wages will likely be small relative to aggregate capital income (i.e., GDP that is paid to capital owners, including people who own AIs).
In the case where AIs own their own labor, I expect aggregate human wages will be both small compared to aggregate AI wages and small compared to aggregate capital income.
In both cases, I expect the total share of GDP paid out as human wages will be small. (Which is not to say humans will be doing poorly. You can enjoy high living standards even without high wages: rich retirees do that all the time.)
> In the case where AIs are owned by humans, their wages will accrue to their owners, who will be humans.
I imagine part of the question is "how monopolistic will these conditions be?" If there's a single monopoly, it could charge a ton, and I'd expect it to quickly dominate the entire world.
If there's "perfect competition", I'd expect AI labor to be far cheaper.
Right now, LLMs seem much closer to "perfect competition" to me; companies are losing money selling them (I'm quite sure of this). I'm not sure what to expect going forward. I assume that people won't allow one or two companies to just start owning the entire economy, but it is a possibility. (That would basically be a Decisive Strategic Advantage at that point.)
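As a toy illustration of why market structure matters so much here (standard textbook monopoly pricing with a made-up linear demand curve and cost, not a claim about actual AI markets):

```python
# Toy contrast between monopoly and competitive pricing (the demand
# curve and costs are hypothetical; this is just the standard result).

a, b = 100.0, 1.0  # assumed linear demand: p = a - b*q
mc = 2.0           # assumed marginal cost of a unit of AI labor

competitive_price = mc              # competition drives price to cost
monopoly_q = (a - mc) / (2 * b)     # monopolist sets MR = a - 2*b*q = mc
monopoly_price = a - b * monopoly_q

print(f"competitive price: {competitive_price:.2f}")  # 2.00
print(f"monopoly price:    {monopoly_price:.2f}")     # 51.00
```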
All that said, I don't imagine the period I'm describing lasting very long. Once humans can be simulated well, and we really reach TAI++, lots of bets are off. It seems really tough to have a good model of that world, beyond "humans basically split up the light cone by dividing the sources of production, which will basically be AIs".[1]
I agree that humans will basically stop being useful at that point.
But if that point is far away (40–90 years), that could be enough time for many humans to make a lot of money/capital in the interim.

[1] "Split up" could mean "the CCP gets all of it".
Basically, I naively expect there to be some period where we have a lot of AI but humans are still getting paid a lot, followed by some point where humans just stop getting paid altogether (unless weird lock-in happens).
Maybe one good forecasting question is something like: "How much future wealth will be owned by AIs themselves, at different points in time?" I'd guess the answer is likely to be either roughly 0% (as with most automation) or roughly 100% (AI takeover, though in that case it's not clear how to define the market).