Thanks for your very thorough response! I’m going to try to articulate my reasons for being skeptical based on what I understand about AI and econ (although I’m not an expert in either). And I’ll definitely read the papers you linked when I have more time.
> The human brain is ultimately just a physical thing, so there’s no fundamental physical reason why (at least in aggregate) human-made machines couldn’t perform all of the same tasks that the brain is capable of.
I agree that it’s theoretically possible to build AGI; as I like to put it, it’s a no-brainer (pun very much intended).
But I think that replicating the capabilities of the human brain will be very expensive. Even if algorithmic improvements drive down the amount of compute needed for ML training and inference, I would expect narrow AI systems to be cheaper and easier to train than more general ones at any given point in time. If you wanted to automate three different tasks, you would train three separate ML systems, one per task, because you could develop them independently of each other. Whereas if you tried to train a single AI system to do all three, I think it would be harder to ensure that it matches the performance of the collection of narrow systems, and it would require more compute.
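To make that concrete, here’s a toy back-of-envelope sketch. Every number in it is an assumption I’m making up purely for illustration, not an empirical estimate; the point is just the structure of the comparison:

```python
# Toy comparison of training costs: three narrow models vs. one generalist.
# All numbers are illustrative assumptions, not empirical estimates.

N_TASKS = 3
FLOPS_PER_NARROW_MODEL = 1e21  # assumed compute to train one specialist
GENERALIST_OVERHEAD = 1.5      # assumed per-task overhead for a shared model

narrow_total = N_TASKS * FLOPS_PER_NARROW_MODEL
general_total = N_TASKS * FLOPS_PER_NARROW_MODEL * GENERALIST_OVERHEAD

print(f"Three specialists: {narrow_total:.2e} FLOPs")
print(f"One generalist:    {general_total:.2e} FLOPs")

# Under these assumptions the generalist costs more; it only wins if
# transfer between tasks pushes GENERALIST_OVERHEAD below 1.0.
```

The crux is whether shared representations push that overhead factor below 1; my skepticism amounts to betting that, for most bundles of tasks, they won’t.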
Also, if you wanted a general intelligence (whether a human or a machine) to do tasks that require <insert property of general intelligence>, I think it would be cheaper to hire humans, up to a point. This is partly because, until AGI is commercially viable, developing and maintaining AI systems necessarily involves human labor. Machine intelligence scales because computation does, but I think it’s unlikely to scale enough to make machine labor more cost-effective than human labor in all cases.
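The same kind of toy arithmetic applies here. A minimal break-even sketch, again with every parameter assumed purely for the sake of argument:

```python
# Break-even sketch: machine labor beats human labor only when its all-in
# hourly cost (inference plus amortized human development/maintenance labor)
# drops below the human wage. All parameters are assumed for illustration.

HUMAN_WAGE = 30.0             # assumed $/hour for the task
INFERENCE_COST = 5.0          # assumed $/hour of compute per machine "worker"
MAINTENANCE_COST = 500_000.0  # assumed $/year of engineering to keep it running
HOURS_AUTOMATED = 10_000.0    # assumed task-hours automated per year

machine_cost_per_hour = INFERENCE_COST + MAINTENANCE_COST / HOURS_AUTOMATED
print(f"Machine: ${machine_cost_per_hour:.2f}/hr vs. human: ${HUMAN_WAGE:.2f}/hr")

break_even = MAINTENANCE_COST / (HUMAN_WAGE - INFERENCE_COST)
print(f"Break-even volume: {break_even:,.0f} task-hours/year")

# At low automation volumes the fixed human-labor cost dominates, which is
# the "cheaper to hire humans, up to a point" claim in cost terms.
```

As volume grows, the fixed cost amortizes away, which is exactly why the advantage only holds “up to a point.”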
I do think that AGI depressing human wages to the point of mass unemployment is a tail risk that society should watch for, and that it could lead to humans losing control of society through enfeeblement, but I don’t think it’s a necessary outcome of further AI development.