Thanks for your thoughtful engagement! Chalmers made a similar point during our interview (that socialist societies would also experience strong pressures to build AGI).
I tried to describe the landscape as it exists right now, without making many claims about what would likely be true under a totally different economic/political system. That being said, I do think it’s interesting that the leading labs are all corporations.
If you look at firms in a market economy as profit-maximizing agents, and governments as agents trying to balance many interests (stability, economic growth, geopolitical/military advantage, popular support, international respect, and so on), then I think it's easier to see why firms are pursuing AGI far more aggressively: by decreasing the cost of labor via automation, a firm can dramatically increase its profitability. For a government, AGI may boost economic growth and geopolitical/military advantage at the expense of stability and popular support.
And if you look at existential risk from AI as an externality, governments are more likely to bear the costs of mitigating that kind of risk, whereas firms are more likely to pass them on to the broader society.
I’ve seen some claims that the CCP is less interested in AGI and more interested in narrow applications, like machine vision, facial recognition, and natural language processing, all of which can help shore up its power long term. I haven’t gone deep into this yet. I’ll dig into the China links you sent later.