I think this argument is interesting. Maybe this is my neoclassical-econ bias speaking, but I’m more skeptical of automation displacing human labor (as I’ve said in this shortform). It’s not clear to me that AI firms will have economic incentives to produce general AIs as opposed to more narrow AIs, and I think mass technological unemployment is less likely without general AI.
Thanks for the comment! I think endnotes 12 and 13, within my cave of endnotes, may partly address this concern.
I don’t think the prediction that the labor share will fall in the future depends on (a) the assumption that the amount of work to be done in the economy is constant, (b) the assumption that automation is currently reducing the demand for labor, or (c) the assumption that individual AI systems will tend to have highly general capabilities. I do agree that the first two assumptions are wrong. I also think the third assumption is very plausibly wrong, in line with some of the analysis in Reframing Superintelligence.
I think the prediction only depends on the assumption that, in the future, it will become unnecessary (and comparatively more expensive) to hire human workers to produce goods and services. I find this assumption really plausible. The human brain is ultimately just a physical thing, so there’s no fundamental physical reason why (at least in aggregate) human-made machines couldn’t perform all of the same tasks that the brain is capable of.[1] I also think it’s likely that engineers will eventually be able to make these kinds of machines; seemingly, the vast majority of AI researchers expect this to happen eventually. There are also, I think, very strong economic incentives to make and use these machines. If a business or state can produce goods and services more cheaply or effectively, by escaping the need to hire human workers, then it will typically want to do this. Any group that continues to pay for a lot of unnecessary human workers will be at a disadvantage.
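To make the mechanism explicit, here’s one minimal way to write it down (my notation, not anything from the post itself), assuming machines are perfect substitutes for human labor in every task:

$$\text{labor share} \;=\; \frac{wL}{wL + rK}$$

where $w$ is the wage, $L$ the amount of labor hired, $r$ the rental rate of capital, and $K$ the capital stock. A profit-maximizing firm keeps hiring humans only while $w \le c$, where $c$ is the cost of getting the same task output from machines; once $c < w$ for every task, labor demand collapses, $L \to 0$, and the labor share falls toward zero.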
This prediction is consistent with the observation that, historically, automation has tended to increase overall demand for labor. When one domain becomes highly automated, this tends to increase the demand for labor in complementary domains (including domains that did not previously exist) that are not highly automated. My understanding is that this dynamic explains why automation has mainly driven wages up for the past couple hundred years. But the dynamic seems to break down once there are no longer any complementary automation-resistant domains.
For example: Suppose we live in a cheese-and-bread economy. People like eating cheese sandwiches, but don’t like eating cheese on its own. It then seems like completely automating cheese production (using machines that are more efficient than humans) will tend to increase demand for workers to staff bread factories. Automating both cheese and bread production, though, seems like it would pretty much eliminate the demand for labor. If either factory has an extra ten thousand dollars to spare, then (seemingly) it has no incentive to spend that money paying a human worker a living wage, rather than spending it on capital that will increase output by a larger amount.[2]
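To gesture at the arithmetic, here’s a minimal sketch of the cheese-and-bread example (all numbers invented for illustration; only the comparison matters):

```python
# Toy version of the cheese-and-bread economy above. Made-up numbers;
# the only stipulation carried over from the example is that machines
# are more efficient than humans.

BUDGET = 10_000          # the factory's spare ten thousand dollars
WAGE = 10_000            # annual living wage for one human worker
MACHINE_COST = 10_000    # price of one machine

HUMAN_OUTPUT = 1_000     # units per year from one worker
MACHINE_OUTPUT = 5_000   # units per year from one machine

def best_use_of_budget(budget):
    """Compare output from hiring a worker vs. buying machines."""
    output_from_worker = (budget // WAGE) * HUMAN_OUTPUT
    output_from_machines = (budget // MACHINE_COST) * MACHINE_OUTPUT
    return "hire a worker" if output_from_worker > output_from_machines else "buy a machine"

print(best_use_of_budget(BUDGET))  # -> "buy a machine"
```

On these numbers the answer is “buy a machine” in the cheese factory and the bread factory at once, so there’s no complementary domain left for displaced workers to move into.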
My thought process here is largely based on my memory of this paper and this paper. I’m not an economist, though, so I’m curious whether you or anyone else reading this comment thinks there’s a significant gap or mistake in this analysis.
As a caveat, in some cases, people might intrinsically prefer for certain goods or services to be provided by humans. For example, people might naturally prefer to watch human athletes, talk to human therapists, listen to sermons by human religious leaders, etc. Human labor could also become a kind of status good in its own right; paying people to do things could become the future equivalent of buying rare paintings or NFTs. As a more direct and ominous analogy, my impression is that slaves used to be a really common status/luxury good for elites in lots of different parts of the world; maybe free human workers could play a similar social role in the future.
This would prevent the labor share from going to zero, even if AI systems can (at the physical level) do everything that human workers can do. But I’d find it kind of surprising if this kind of work was enough to maintain very high labor force participation. It also seems like, if all remaining work was in this category, then we should still be worried about democracy. If military operations, law enforcement, the production of nearly all physical stuff, etc., were all highly and effectively automated, then that would still seem to undercut a lot of the hypothesized economic basis for democracy.
I don’t think comparative advantage arguments ultimately help here. At the same time, though, I also don’t feel like I have a great grasp of how to apply them to capital-labor substitution.
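That said, here’s the toy case that makes me doubtful (entirely my own invented numbers, not something from the papers above): comparative advantage tells a human which task to specialize in, but it doesn’t put a floor under the wage once machines can be replicated cheaply.

```python
# Toy illustration (invented numbers) of why comparative advantage need
# not rescue wages once machines are cheaply reproducible.

machine_hourly_cost = 1.0      # dollars to run one machine for an hour
machine_output_per_hour = 100  # units per machine-hour
human_output_per_hour = 2      # units per human-hour

# With machines in elastic supply, competition pushes the product price
# down toward the machines' marginal cost:
price_per_unit = machine_hourly_cost / machine_output_per_hour  # $0.01

# The most an employer would pay a human is the market value of what the
# human produces at that price:
max_human_wage = human_output_per_hour * price_per_unit
print(f"${max_human_wage:.2f}/hour")  # -> $0.02/hour
```

The human still has a comparative advantage somewhere, and trade still happens; it just happens at a wage pinned near the machines’ cost of doing the same work.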
Thanks for your very thorough response! I’m going to try to articulate my reasons for being skeptical based on what I understand about AI and econ (although I’m not an expert in either). And I’ll definitely read the papers you linked when I have more time.
“The human brain is ultimately just a physical thing, so there’s no fundamental physical reason why (at least in aggregate) human-made machines couldn’t perform all of the same tasks that the brain is capable of.”
I agree that it’s theoretically possible to build AGI; as I like to put it, it’s a no-brainer (pun very much intended).
But I think that replicating the capabilities of the human brain will be very expensive. Even if algorithmic improvements drive down the amount of compute needed for ML training and inference, I would expect narrow AI systems to be cheaper and easier to train than more general ones at any given point in time. If you wanted to automate 3 different tasks, you would train 3 separate ML systems to do each of them, because you could develop them independently of each other. Whereas if you tried to train a single AI system to do all of them, I think it would be more complicated to ensure that it reaches the same performance as the collection of narrow AI systems, and it would require more compute.
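Here’s a back-of-envelope sketch of that claim, under one loud assumption: that the compute needed to train a single model to master n tasks grows superlinearly in n (the exponent ALPHA below is invented for illustration, not an empirical estimate):

```python
# Narrow-vs-general training cost, under an assumed superlinear scaling.

ALPHA = 1.5              # assumed scaling exponent (> 1); not empirical
COMPUTE_PER_TASK = 1.0   # compute to train one narrow, single-task model

def narrow_systems_cost(n_tasks):
    # n single-task models, developed and trained independently
    return n_tasks * COMPUTE_PER_TASK

def general_system_cost(n_tasks):
    # one model trained to match the narrow systems on all n tasks
    return COMPUTE_PER_TASK * n_tasks ** ALPHA

for n in (1, 3, 10):
    print(n, narrow_systems_cost(n), round(general_system_cost(n), 1))
# 1 1.0 1.0
# 3 3.0 5.2
# 10 10.0 31.6
```

With ALPHA > 1 the general model is the more expensive route for any n > 1; if ALPHA turned out to be at or below 1, the comparison would flip.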
Also, if you wanted a general intelligence (whether a human or machine) to do tasks that require <insert property of general intelligence>, I think it would be cheaper to hire humans, up to a point. This is partly because, until AGI is commercially viable, the process of developing and maintaining AI systems necessarily involves human labor. Machine intelligence scales because computation does, but I think it would be unlikely to scale enough to make machine labor more cost-effective than human labor in all cases.
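As a rough break-even framing (again, all quantities invented): hiring humans stays cheaper whenever the full cost of machine labor, including the human labor needed to develop and maintain the system, exceeds the wage bill it replaces.

```python
# Break-even sketch with invented numbers.

wage_bill = 60_000          # annual cost of one human worker replaced
inference_cost = 5_000      # annual compute cost to run the system
maintenance_labor = 80_000  # annual cost of the engineers who build and
                            # maintain the system (human labor, for now)
workers_replaced = 1        # for a niche task, this number can be small

machine_cost_per_worker = (inference_cost + maintenance_labor) / workers_replaced
print("automate" if machine_cost_per_worker < wage_bill else "hire humans")
# -> "hire humans"; at workers_replaced = 2 the answer flips, which is
# why automation pays off mainly at scale.
```

This is why I’d expect human labor to stay competitive in low-volume niches even while it loses out in high-volume ones.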
I do think that AGI depressing human wages to the point of mass unemployment is a tail risk that society should watch for, and that it would lead to humans losing control of society through enfeeblement, but I don’t think it’s a necessary outcome of further AI development.