The two-tiered society
On AI and Jobs: How to Make AI Work With Us, Not Against Us With Daron Acemoglu
Here is Claude.ai’s summary of Daron Acemoglu’s main ideas from the podcast:
Historically, major productivity improvements from new technologies haven’t always translated into benefits for workers. It depends on how the technologies are used and who controls them.
There are concerns that AI could further exacerbate inequality and create a “two-tiered society” if the benefits accrue mainly to a small group of capital owners and highly skilled workers. Widespread prosperity is not automatic.
We should aim for “machine usefulness”—AI that augments and complements human capabilities—rather than just “machine intelligence” focused on automating human tasks. But the latter is easier to monetize.
Achieving an AI future that benefits workers broadly will require changing incentives—through the tax system, giving workers more voice, government funding for human-complementary AI research, reforming business models, and effective regulation.
Some amount of “steering” of AI development through policy is needed to avoid suboptimal social outcomes, but this needs to be balanced against maintaining innovation and progress. Regulation should be a “soft touch.”
An “AI disruption reduction act,” akin to climate legislation, may be needed to massively shift incentives in a more pro-worker, pro-social direction before AI further entrenches a problematic trajectory. But some temporary slowdown in AI progress as a result may be an acceptable tradeoff.
The prospect of a two-tiered socioeconomic order looks very realistic to me, and it is scary.
On the one hand, this order won’t be as static as feudal or caste systems: politicians and technologists will surely create (at least formal) systems for vertical mobility from the lower tier (people who just live off UBI) to the higher tier (politicians, business leaders, chief scientists, and capital and land owners).
On the other hand, in feudal and caste systems, people in all tiers had a role in the societal division of labour from which they could derive a sense of usefulness, purpose, and self-respect. This will be more challenging for the “have-nots” in the future AI world. Not only will their labour not be valued by the economy, their family roles will also be eroded: teacher for their own kids (why would kids respect them if AI is vastly more intelligent, empathetic, ethical, etc.?), lover for their spouse (cf. VR sex), bread-winner (everyone is on UBI, including their spouse and kids). And this assumes they will have a family at all, which is increasingly rare, whereas in feudal and caste societies most people were married and had kids.
Vertical mobility institutions will likely grow rather dysfunctional as well, akin to the education systems in East Asia, where the youth are totally deprived of childhood and early adulthood in the cutthroat competition for a limited number of cushy positions at corporations, or the academic tenure track in the US. If the first 30 years of people’s lives are a battle for a spot in the “higher tier” of society, it will be very challenging for them to then switch to a totally different mindset of meditative, non-competitive living: arts, crafts, gardening, etc.
Although many people point out the dysfunctionality of positional power institutions such as current academia, governments, and corporations, the alternative “libertarian” spin on social mobility in the age of AI is not obviously better: if AI enables very high leverage in business, social, or media entrepreneurship, the resulting frenzy may be too intense for the entrepreneurs, their customers, or both.
Response approaches
I’m not aware of anything that looks to me like a comprehensive and feasible alternative vision to the two-tiered society (if you know of one, please let me know).
Daron Acemoglu proposes five economic and political responses that sound like they could at least help steer the economy and society towards some alternative place, without knowing what that place is (which in itself is not a problem: on the contrary, treating any particular alternative vision as a likely target would be a gross mistake and a disregard for unknown unknowns):
Tax reforms to favour employment rather than automation
Foster labour’s voice for a better balance of power within companies
A federal agency that provides seed funding and subsidies for human-complementary AI technologies and business models. Subsidies are needed because “machine usefulness” is not as competitive as “machine intelligence/automation”, at least within the current financial system and economic fabric.
Reforming business models, e.g., a “digital ad tax” that would change the incentives of media platforms such as Meta or TikTok and improve mental health
Effective but “soft touch” regulation of AI development
This all sounds good to me, but it is not enough. We also need other political responses (cf. The Collective Intelligence Project), and new design ideas in the methodology of human–AI cooperation, social engineering (cf. Game B), and psychology, at a minimum.
If you know of interesting research in any of these directions, or other directions I missed that could help us reach a non-tiered society, please comment.
While not providing anything like a solution to the central issue here, I want to note that it is likely to be the middle classes that get hollowed out first: human labour for all kinds of physical tasks is likely to remain valued for longer than labour for various kinds of desk-based tasks, because scaling up and deploying robotics to replace physical work would take significant time, whereas scaling up the automation of desk-based tasks can be relatively quick.
I think I mostly agree with this and would like to add a question / some confusion I personally have with these future scenarios:
A lot of (left-ish) spaces talk about how humans are used for their labor and how they’d like us to be “free from work”, while also opposing progress in AI because “it means people lose their jobs”. For the same reason, the first two of Acemoglu’s points that you mention as responses seem short-sighted to me (or maybe I’m missing something).
In a world where AI is able to take over broad parts of work and prosperity for all is an attainable goal, shouldn’t our main goals be:
Find a new definition of meaning for humans (this relates back to the OP’s point about all roles being eroded)
One reason people oppose “unemployment through AI” seems to be that a lot of them derive meaning from their work (even if they don’t like their jobs, it’s at least something)
Ensure sufficient redistribution of wealth (e.g., through UBI)
I’m curious to hear what I’m missing and also looking forward to some more resources on this!
Take care of the second goal (redistribution) and the first (meaning) will take care of itself. The reason people fear unemployment is that they fear poverty. If the economy is producing incredible amounts of wealth, and there are robust distributive policies giving everyone access to that wealth, I would expect people to be much happier than they are today. If people have the positive liberty to hang out with their friends, travel, learn new skills, go to restaurants, etc., they’ll do it. There are myriad ways that people will find to be “useful” and “valued” outside of the workplace. They can derive meaning from their relationships or their creative pursuits.
First, it doesn’t look politically feasible to me to “take care” of redistribution in the global context without also tackling all the other aspects that Acemoglu mentions, and the further aspects that I mention. Redistribution among Americans only (cf. Sam Altman’s proposal) would create another kind of two-tiered society: Americans and everyone else.
Second, I see the major issue as this: at the moment, people are too culturally conditioned (and to some degree hard-wired) to play the social status game, cf. Girardian mimetic theory. If we imagine a world where everyone is as serene, all-loving, and non-competitive as Mahatma Gandhi, then of course job displacement would go fine. But what we actually have is people competing for zero-sum status: in politics, business, and media. Some “losers” in this game do fine (learn new skills and go to restaurants), but a huge portion of them become depressed, struggle in their personal lives, abuse substances and food, etc.
A large-scale rewiring of society towards non-competition should be possible, but it has to be accompanied precisely by the economic measures and business-model innovation (cf. the Maven social network, which has no likes or followers) that I discuss, because psychological and social engineering won’t succeed outside of the economic context.
The two measures you quoted may indeed be “short-sighted”, or maybe they could (if successful, which Acemoglu himself is very doubtful about) send the economy and society onto a somewhat different trajectory, with rather different eventualities (including in terms of meaning) than if these measures were not applied.
I agree that developing new ideas in the social, psychological, and philosophical domains (the domain of meaning, which may also be regarded as part of “psychology”) is essential. But they can only be successful in the context of the current technological, social, and economic reality (which may itself be “set in motion” by other economic and political measures).
For example, a lot of people currently seem to derive their meaning in life from blogging on social media. I can relatively easily imagine this becoming a dominant source of meaning for most of the world’s population. Without judging whether this is a “good” or “bad” source of meaning in some grand scheme of things, or what its effects are, discussing it seriously is contingent on the existence of social media platforms and their embeddedness in society and the economy.