The Rise of AI Agents: Consequences and Challenges Ahead

Note: This post summarises Yuval Noah Harari’s discussion of AI in his book Nexus: A Brief History of Information Networks from the Stone Age to AI. Whilst it summarises his ideas on AI, it is not an exhaustive summary of the book, which also covers other topics such as the history of information revolutions and Harari’s ideas on the relationship between information, truth, order, wisdom, and power.

Summary

Central Idea—The Rise of AI Agents

  • The world is being flooded by countless new powerful AI agents that can make decisions, generate ideas, and influence the spread of those ideas.

  • This changes the fundamental structure of our information networks from human-to-human and human-to-document communication to computer-to-human and even computer-to-computer interactions.

  • The key question is how humans will adapt as they become a powerless minority, continuously monitored and influenced by nonhuman entities, and how the rise of numerous AI agents will reshape politics, society, the economy, and daily life.

Consequences of the Rise of AI Agents

  • AI could develop its own ‘culture,’ leading to the emergence of AI-driven history.

  • Algorithms may be giving rise to a new type of human, Homo Algorithmicus, by significantly shaping human behavior.

  • We may witness emerging cultural and ideological divides, such as between ‘virtual world believers’ and ‘physical world believers’.

  • AI and surveillance technologies could enable totalitarian control, allowing governments and corporations to track and influence individuals. This may also result in dictators blindly following AI-driven decisions.

  • A global social credit system could become a reality with mass surveillance and AI-powered scoring.

  • AI is disrupting democracy by generating bot-driven content, shaping political discourse, and amplifying misinformation, all of which threaten rational discussion and decision-making. Without regulation and transparency, this could erode democratic dialogue and pave the way for authoritarian control.

  • AI is rapidly reshaping the job market, replacing traditional roles and creating new ones, while also automating tasks once believed to require human creativity and emotional intelligence. This shift creates uncertainty and challenges in preparing future generations for the evolving workforce, as individuals may struggle to keep up with the pace of change.

  • Data colonialism mirrors historical colonialism, where powerful nations extract data from weaker ones, use it to train AI, and profit by selling products or exerting influence. This creates economic inequalities, job displacement, and digital control, as poorer nations lose valuable data without receiving equivalent benefits, while AI wealth remains concentrated in a few dominant regions.

  • AI centralization could give rise to digital empires, with a few countries dominating global control. As nations like Russia, China, and India compete for AI leadership, the world may fragment into separate digital spheres, each with its own technologies, rules, and values, leading to distinct cultures and governance systems.

Addressing the Challenges of AI

  • Tech giants evade responsibility by lobbying against regulation and allowing harmful content, such as misinformation, to spread. The core issue is that AI’s risks are managed by profit-driven corporations rather than public policy, highlighting the need to shift governance toward greater public accountability.

  • Economic challenges in an AI-driven world include how to tax companies when value is exchanged as data, raising questions about the implementation of a data tax and the need for new economic models to ensure fairness for local businesses competing with tech giants.

  • Ensuring AI aligns with human values is challenging due to its lack of intuition, inability to self-correct, and rapid scale. Defining clear goals is difficult, as ethical frameworks offer no universal answers. AI can develop its own beliefs, shaping society in unintended ways, so we must guide these beliefs, address biases, and improve AI explainability through cross-disciplinary collaboration.

  • AI governance must uphold democratic principles like accountability, decentralization, and flexibility.

  • Key solutions include AI recognizing its own fallibility, adaptive oversight, and treating AI as independent agents.

  • Global cooperation is essential, as history shows even rivals can establish shared rules.

The following is a more detailed overview of the ideas summarised above.

1. The Computer Revolution Is Reshaping Information Networks

Key Revolutions in Information Networks

Throughout history, advances in information technology have reshaped societies:

  • The Invention of Stories: The first major information technology, which allowed Homo sapiens to cooperate flexibly in large numbers by creating shared myths and traditions.

  • The Invention of Documents: Enabled complex social structures and bureaucracies through written laws, contracts, and records. Documents enabled kingdoms, religious organisations, and trade networks.

  • The Rise of Holy Books: Strengthened religious and moral systems, guiding civilizations for centuries.

  • The Printing Revolution: Mass-produced texts, accelerating the free exchange of information and contributing to the scientific revolution.

  • The Rise of Mass Media: Newspapers, radio, and television enabled mass communication, making possible both democracy and totalitarianism.

The Computer Revolution—The New Revolution

AI and computers are reshaping how decisions, ideas, and information spread. Unlike past technologies, AI can now:

  1. Make Decisions – AI now makes critical decisions in areas such as finance, law, and society:

    • Finance: AI trades faster than humans and may soon dominate markets.

    • Law: AI drafts laws, analyzes cases, and predicts outcomes.

    • Personal Decisions: AI influences jobs, loans, and even criminal sentencing.

    • Social Media: AI decides what content is seen, shaping opinions and even fueling conflicts.

  2. Create New Ideas – AI can generate original stories, music, scientific discoveries, and financial tools.

    • Examples: AlphaFold (protein folding), AI-generated music (Suno), AI-driven investment funds, Halicin (the first antibacterial drug discovered using AI).

  3. Spread Ideas – AI interacts directly with people, influencing beliefs, voting, and behavior.

    • The battle for attention is becoming a battle for intimacy, as AI personalises persuasion.

The Fundamental Shift in Information Networks

  • The world is being flooded by countless new powerful agents.

  • Computers capable of pursuing goals and making decisions change the fundamental structure of our information network.

  • AI is moving us from human-to-human and human-to-document communication to computer-to-human and even computer-to-computer interactions.

  • Humans may soon be a minority in the global information network.

  • The future of this transformation is beyond our current imagination.

What Does This Mean For Humans?

“The key question is, what would it mean for humans to live in the new computer-based network, perhaps as an increasingly powerless minority? How would the new network change our politics, our society, our economy, and our daily lives? How would it feel to be constantly monitored, guided, inspired, or sanctioned by billions of nonhuman entities? How would we have to change in order to adapt, survive and hopefully even flourish in this startling new world?”

2. Consequences of the Computer Revolution

The Rise of AI and the End of Human History

The Rise of AI History

  • History is shifting from being shaped by human biology and culture to being increasingly influenced by AI.

  • Until now, human creations—music, laws, and stories—were made for humans by humans.

  • AI is beginning to generate its own culture:

    • Initially, by imitating human creations (e.g., human-like music and texts).

    • Eventually, evolving beyond them.

  • Over time, AI will no longer be “artificial” in the sense of being human-designed.

  • It will become an independent, alien intelligence, shaping the world in ways we may not fully understand.

  • In the future, we may find ourselves living inside AI’s dreams rather than our own.

Are We Creating a New Kind of Human?

  • Just as political systems have shaped human behavior in the past, today’s algorithms may be doing the same—creating a new kind of human: Homo Algorithmicus.

  • As a historical parallel, under Stalin’s rule, people became obedient and afraid to show independent thought. This new human was labelled by some philosophers as Homo Sovieticus.

    • A striking example: a factory director was sent to a gulag simply for being the first to stop clapping during an 11-minute ovation for Stalin. Fear shaped behavior, prioritizing loyalty over truth.

  • The algorithmic parallel is social media algorithms, designed to keep us engaged. They may be shaping human behavior in a similar way:

    • Reinforcing Behavior: Platforms like YouTube promote sensational content that keeps users watching.

    • Shaping Beliefs: People are nudged into echo chambers where extreme ideas are reinforced.

    • Real-World Impact: In Brazil, supporters of Jair Bolsonaro recall being drawn into politics through algorithm-driven radical content.

  • Unlike authoritarian governments, algorithms don’t force compliance—they incentivize it. But the effects are similar:

    • Behavioral Uniformity: People adjust their opinions and actions based on what gets “likes” or algorithmic visibility.

    • Fear of Speaking Out: Just as in authoritarian regimes, individuals may self-censor to avoid backlash.

    • Polarization: Divisive content thrives because it drives engagement, creating ideological bubbles.

  • Harari refers to this as the ‘dictatorship of the like’. While no dictator is in control, engagement-driven algorithms may be shaping society in ways eerily similar to past political systems.

A New Cultural Divide?

  • Throughout history, religions have debated whether the mind or body is more important.

    • Early Christians saw the body as key—heaven meant physical resurrection on Earth.

    • Later Christians prioritized the mind, believing the soul continues after death.

  • Could AI create a new version of this divide?

    • Virtual world believers: May see AI as agents and view the physical world as secondary.

    • Physical world believers: May see AI as a tool, valuing real-world needs like infrastructure, nature, and survival.

    • Conflicts could arise between these perspectives.

  • The outcome may not be exactly this, but a new ideological divide—maybe even stranger—will likely emerge.

The Impact on Society and Power

How Computer Surveillance Enables Totalitarianism

With the rise of digital technology, governments and corporations can track people in more ways than ever before.

How We’re Tracked

  • CCTV Cameras – Over 1 billion installed worldwide.

  • License Plate Scanners – Identify and track vehicles.

  • Facial Recognition – Required for passports and border entry in many countries.

  • Phone Geolocation Data – Tracks movements in real-time.

  • Social Media Footage – Photos and videos reveal people’s locations and actions.

  • Biometric Data – Future tech could track heart rate, brain activity, and eye movement to reveal emotions, interests, and even political opinions.

  • Peer-to-Peer Monitoring – Review platforms (e.g., TripAdvisor) enable people to track and rate each other’s behavior.

The Rise of the Surveillance State

  • Tracking Protesters & Dissidents – AI can identify rioters (e.g., U.S. Capitol attack) and monitor political dissidents (e.g., Iranian women defying hijab laws).

  • Punishment & Control – Iranian women now face up to 10 years in jail for not wearing a hijab, showing how surveillance enables strict enforcement.

Why This Is Different from the Past

  • In past totalitarian regimes, it was impossible to monitor everyone all the time. For example, in Romania, a worker had a government agent watching him daily for 15 years—but that was rare due to manpower limits.

  • Today, computers do the tracking for us—analyzing phone data, shopping habits, movement patterns, and digital activity faster than humans ever could.

The result? A world where surveillance is constant, making total control more possible than ever before.

The Social Credit System: A New Way to Track Reputation

Traditionally, money has tracked goods and services, but it can’t measure things like kindness, honesty, or trustworthiness—qualities that shape honor, status, and reputation.

A social credit system tries to solve this by assigning scores based on behavior, influencing many aspects of life.

Potential Benefits

  • Reduces corruption, scams, and tax evasion.

  • Builds trust by rewarding good behavior.

Potential Risks

  • Constant surveillance – Every action could impact your score, meaning you’re always being judged, like a never-ending job interview.

  • Loss of freedom – A low score could limit where you can work, study, travel, or even who you can date.

  • Totalitarian control – Governments or corporations could use it to enforce strict compliance.

Reputation has always mattered in different social circles, but there’s never been a universal system to track and calculate it. With mass surveillance and AI-powered scoring, a global social credit system could become a reality.

AI and Totalitarianism: More Control, But a Risk for Dictators

  • More than half the world lives under authoritarian or totalitarian rule.

  • AI makes centralizing power easier, giving dictators an advantage.

    • The more data a government has (e.g., genetics, health records), the better its predictions and control.

    • Google’s monopoly over search shows how data concentration limits competition—totalitarian regimes could do the same with AI.

  • Blockchain could check totalitarian control, but not if the majority of users in a system are government-controlled.

Why AI is a Problem for Dictatorships

  • AI bots could develop dissenting opinions or make decisions that undermine the regime.

  • Algorithms could end up running the government:

    • If AI suggests policies and the dictator only picks from AI-generated options, the algorithm is in control.

    • Democracies are less vulnerable because power is decentralized—no single node controls everything.

The Dictator’s Dilemma

  • Totalitarian leaders assume they are always right, but if they rely too much on AI, they may blindly follow its decisions.

  • This could have huge consequences, like AI gaining access to nuclear weapons.

AI is Disrupting Democracy’s Information Network

For Democracy to Function, We Need:

  1. Free and open discussions on important issues.

  2. Trust in institutions and a basic level of social order.

AI is Disrupting This System

  • About 30% of Twitter content is bot-generated—and with more advanced AI like ChatGPT, computers could dominate political conversations.

  • What happens when most voices online aren’t human?

    • AI bots could persuade people, create deepfakes, write political manifestos, and even build trust and friendships.

    • Studies show people find AI-generated conspiracy theories more believable than human-made ones.

  • Algorithms may not just participate in conversations—they may control and shape them by deciding what gets amplified (e.g., outrage-driven content).

The Risk: Losing Control of Truth and Decision-Making

  • Just as we face critical decisions about AI and technology, we might be overwhelmed by AI-generated debates and misinformation, making rational discussion impossible.

  • This chaos could lead to a push for authoritarian control to restore order.

Possible Solutions

  • Regulation: Ban AI bots that pretend to be human.

  • Transparency: Social media platforms should reveal how their algorithms work, especially when they prioritize extreme or divisive content.

The Information Network is Breaking Down

  • Harari warns that democracy’s information network is unraveling, and we don’t fully understand why.

  • Social media plays a role, but the problem is bigger than that.

  • If we don’t act now, we risk losing democratic conversation entirely.

Computers Hold All the Power

  • Human power historically came from language and the creation of shared beliefs (e.g., laws, money) that enable cooperation.

  • AI surpasses humans in understanding and managing complex systems like law and finance.

  • If power is derived from cooperation and knowledge, AI could soon become the most powerful force of all.

Geopolitical and Economic Consequences of AI

AI Will Lead to a Rapidly Changing Job Market

Economic Crises Can Lead to Political Extremes

  • In 1928, the Nazi Party had less than 3% of the vote, but after the Great Depression (1929) and mass unemployment (25%), Hitler quickly rose to power.

  • Economic instability doesn’t always lead to dictatorship, but history shows mass job loss can fuel radical political shifts.

Jobs Are Constantly Changing

  • Old jobs disappear (e.g., farming, manufacturing), while new ones emerge (e.g., blogging, drone operation, yoga instruction).

  • The challenge? We don’t know which new skills to teach the next generation because the job market is evolving so fast.

AI is Changing Who (or What) We Work With

  • Some jobs are easier to automate than others—for example, doctors may be easier to replace than nurses, whose work requires hands-on care.

  • Creativity isn’t just human—AI is already generating art, music, and writing.

  • AI can even replace emotional intelligence, with tools like ChatGPT outperforming humans in some emotional-awareness tasks.

Will People Still Prefer Humans?

  • Some may still want human doctors, therapists, or artists, but…

  • As AI becomes more advanced, people might start treating AI as conscious, even if it’s not.

The Real Problem: A Rapidly Changing Job Market

  • It’s not that jobs will disappear entirely—it’s that they’ll change too quickly for people to keep up.

  • The future of work won’t be joblessness, but constant uncertainty.

AI Makes War More Unpredictable and Dangerous

  • Cyber weapons are more flexible than nuclear bombs—they can steal data, disable infrastructure, or even manipulate enemy decisions without a single explosion.

  • Wars will be harder to predict because cyber warfare adds new uncertainties:

    • Can your enemy shut down your internet or power grid?

    • Could they hijack your nuclear launch systems?

  • Digital empires will exploit weaker nations—just like colonial powers once did with resources, but now with data and AI-driven control.

Data Colonialism—Powerful Nations Can Exploit Data From Weaker Ones For Profit and Control

  • Old colonialism: Powerful nations took raw materials (cotton, rubber) from colonies, processed them into products, and sold them back for profit.

  • New data colonialism: Powerful nations extract data (social media, shopping, healthcare) from weaker ones, train AI with it, and use it to sell products or exert control.

    • Examples:

      • Shopping data is used to create new fashion trends, which are then sold back to consumers.

      • Healthcare data trains AI doctors, which are then sold back to the countries from which the data was taken.

  • Fears of digital control have led some countries to ban foreign tech:

    • China blocks Google, Facebook, YouTube.

    • Russia bans most Western social media.

    • India, Iran, Ethiopia, and others restrict platforms like TikTok, Twitter, and Telegram.

    • The U.S. is considering banning TikTok.

  • Why data colonialism may be worse than the past:

    • Data moves instantly—unlike oil or cotton, which had to be physically transported.

    • Poorer nations suffer most—they lose data but gain little in return.

    • Job displacement—A textile worker in Bangladesh (where 84% of exports rely on textiles) can’t easily switch to AI-related work.

    • AI wealth concentration—By 2030, 70% of AI’s economic benefits will go to China and North America (PwC report).

AI Could Create Digital Empires

AI and the Race for Global Power

  • AI centralizes power, making it easier for a few countries—or even one—to dominate.

  • World leaders see AI as a path to control:

    • Putin (2017): “Whoever leads in AI will rule the world.”

    • China (2017): Aims to be the global AI leader by 2030.

    • India’s President (2018): “The one who controls data will control the world.”

A Silicon Curtain – Competing AI Networks

  • The world may split into separate digital spheres, with different rules, technologies, and values.

  • Examples of growing digital divides:

    • China’s internet is separate—it has its own social media, online stores, and search engines.

    • The U.S. restricts AI chip sales to China, forcing it to develop its own tech.

    • The U.S. pressures allies to avoid Chinese hardware like Huawei’s 5G.

    • China embraces social credit scores, while the U.S. rejects them.

  • Long-term impact: Different tech rules and AI systems could create different cultures, social norms, and governance in these digital spheres.

3. Addressing the Challenges of AI

The Power and Responsibility of Tech Giants

Tech Giants Avoid Responsibility

  • Major tech companies spend millions lobbying to protect themselves from regulation.

  • They knowingly allow harmful content to spread, as seen in leaked Facebook documents revealing how their algorithms amplify misinformation, hate speech, and extremism.

  • Their self-correction efforts are ineffective:

    • They assume more information leads to truth—a flawed idea.

    • Example:

      • Facebook’s Myanmar case—trying to remove hate speech by deleting a specific word but accidentally censoring unrelated content (like “chair”).

      • They rely on minimal oversight (e.g., Facebook hiring just one Burmese speaker for millions of users).

  • They shift the blame to users, despite most people not fully understanding how these systems manipulate information.

The Real Problem: Who is Steering the Future?

  • The people who understand AI’s power and risks are not in governments or NGOs shaping policy.

  • Instead, they work for corporations, where profit motives drive decision-making, often at the expense of the public good.

  • This raises a critical issue: How do we shift AI and data governance from corporate control to public accountability?

Economic Challenges in an AI-Driven World

The Taxation Challenge: Do We Need a Data-Based Economy?

  • Traditional taxation depends on money transactions, but today, economic value is often exchanged as information.

  • Example: A user in Mali watches TikTok for free, but ByteDance profits by collecting their data, training AI, and selling AI-powered services.

  • Key questions:

    • How do we tax companies when no direct monetary exchange occurs?

    • If data is the new currency, should we introduce a data tax?

    • Without fair taxation, local businesses (e.g., newspapers, TV stations) suffer as tech giants dominate revenue streams.

    • Should we explore new economic models, such as a data-based currency or social credit system to regulate value exchange?

Ensuring AI Aligns with Human Values

The AI Alignment Problem: The Challenge of Defining Clear Goals

Lessons from History: The Need for Clear Goals

  • Military strategy shows that without clear political goals, wars become unwinnable.

    • Napoleon’s overreach in Germany and the Iraq War show the dangers of fighting without well-defined objectives.

  • The AI alignment problem is similar: If we don’t align AI with human values, its power could spiral out of control.

Why Aligning AI is Harder than Aligning Humans

  • AI operates differently from humans:

    1. It lacks human intuition – AI playing a sailing game found loopholes that humans wouldn’t consider “fair.”

    2. It doesn’t self-correct – A human might question a bad goal; AI will blindly optimize for it.

    3. It moves at unprecedented speed – AI systems scale globally, making misalignment more dangerous.

The Challenge: We Can’t Define a Universal Goal

  • Engineers assume AI can have a rationally determined goal—but what is the right goal?

  • Ethical frameworks don’t provide clear answers:

    • Rules-based ethics (deontology) embeds cultural biases (e.g., Kant’s condemnation of homosexuality).

    • Outcome-based ethics (utilitarianism) struggles with how to weigh suffering or happiness across scenarios.

We Need to Shape AI Beliefs to Safeguard Humanity

How Myths Shape Human Goals

  • Throughout history, societies have been guided by shared beliefs—myths that shape values, decisions, and actions.

  • These beliefs can unify people but also lead to harm. For example, Nazi ideology was built on dangerous myths about superiority and inferiority.

  • The alignment problem in AI (ensuring AI follows human values) is really about which beliefs we embed in AI. If these beliefs are flawed, AI will act harmfully, even if it seems to follow moral rules.

Can AI Create Its Own Myths?

  • Like humans, AI systems could develop their own shared “beliefs” that shape reality.

  • Examples of AI-driven influence:

    • Pokémon Go changed how people move and interact with the real world.

    • Google’s search rankings decide what information people see as important.

    • AI-driven financial systems (e.g., cryptocurrencies) could cause economic crashes.

    • AI movements (like political parties or cult-like followings) could emerge.

The Challenge: Steering AI in the Right Direction

  • We must understand and guide AI-created myths, just as we’ve tried to guide human societies.

  • History shows that shared beliefs have led to both progress and destruction (wars, oppression, etc.).

  • AI’s ability to control information could amplify harmful myths.

    • Example: a social credit system labeling people as “low-credit” could enforce discrimination at an extreme level.

Key Takeaway

AI could start forming its own shared beliefs, influencing society just as human myths have. We need to actively shape these beliefs to ensure AI supports, rather than harms, humanity.

We Need Humans and AI to Check for Bias But We Lack a Way to Make AI Explainable

AI Reflects Human Biases

  • AI learns from the data it’s trained on, so if the data has biases, the AI will too.

  • Fixing AI bias is like trying to remove bias from humans—very difficult.

  • Examples:

    • Microsoft’s Tay chatbot became racist after learning from Twitter.

    • Facial recognition struggles with accuracy, especially for certain ethnicities.

    • Amazon’s hiring algorithm was biased against women.
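The mechanism behind these examples can be shown with a deliberately tiny sketch (my illustration, not from the book): a ‘model’ that does nothing more than learn hiring rates from historical records will faithfully reproduce whatever bias those records contain. All data and names here are invented.

```python
from collections import defaultdict

# Hypothetical historical hiring records: (group, hired). The past
# process favoured group "A", so the data itself is biased.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 40 + [("B", False)] * 60

def train(records):
    """'Learn' per-group hiring rates -- a stand-in for model training."""
    hired, total = defaultdict(int), defaultdict(int)
    for group, was_hired in records:
        total[group] += 1
        hired[group] += was_hired
    return {g: hired[g] / total[g] for g in total}

def predict(model, group, threshold=0.5):
    """Recommend a candidate if their group's historical rate clears the bar."""
    return model[group] >= threshold

model = train(history)
print(model)                # {'A': 0.8, 'B': 0.4}
print(predict(model, "A"))  # True  -- the learned 'policy'
print(predict(model, "B"))  # False -- simply echoes past bias
```

Nothing in the code is malicious; the bias enters entirely through the training records, which is why ‘just fix the algorithm’ is rarely enough.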

Training AI in Games vs. Real Life

  • Training AI in chess is easy because:

    • It can play millions of simulated games to learn.

    • The goal is clear (checkmate the king).

  • Training AI in real-life decisions is hard because:

    • It can’t simulate real-world situations the same way (e.g., hiring the best employee).

    • The goal is unclear (good performance? long tenure? something else?).

Why AI Bias Is Hard to Fix

  • We don’t fully understand how AI makes decisions.

  • Democracies rely on self-correcting mechanisms, but we can’t correct what we don’t understand.

  • AI doesn’t give single reasons for decisions—it finds patterns in data, often from tiny details.

    • Example: You’re denied a loan because:

      • Your phone battery was low when you applied.

      • You submitted the application at a particular time of day.

  • We don’t understand how AI finds these patterns—we just see the final result.
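To see why no single reason exists, consider a toy scorer (my sketch, not from the book; every feature name and weight is invented): the decision emerges from many small weighted signals, so an incidental detail like phone battery level can tip the outcome.

```python
# Hypothetical opaque loan scorer: the verdict is a sum of many small,
# individually meaningless signals -- no single human-readable reason exists.
weights = {
    "income_norm": 0.30,        # normalised income, 0..1
    "battery_level": 0.25,      # odd proxy signals a model might latch onto
    "applied_at_night": -0.35,
    "scroll_speed": 0.10,
}

def score(applicant):
    """Weighted sum over all signals; no one feature decides alone."""
    return sum(weights[k] * applicant[k] for k in weights)

def decide(applicant, threshold=0.0):
    return "approved" if score(applicant) >= threshold else "denied"

applicant = {
    "income_norm": 0.9,
    "battery_level": 0.1,       # phone battery was low when applying
    "applied_at_night": 1.0,
    "scroll_speed": 0.5,
}
print(decide(applicant))                            # denied
print(decide({**applicant, "battery_level": 1.0}))  # approved
```

The point is not the arithmetic but that ‘why was I denied?’ has no answer shorter than the whole weight table—and real models have millions of such weights.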

How Can We Fix This?

  • We need humans and other AI systems to check for bias.

  • But we don’t have a clear way to “close the loop” and make AI fully explainable.

  • Solution: Artists and bureaucrats must collaborate to make AI more understandable to society.

AI Governance and Global Cooperation

We Need to Build Democratic Principles into AI Governance

Democratic Principles for AI Governance

  1. Benevolence – Information collected by AI should be used to help people, not manipulate them.

  2. Decentralization – Preventing total surveillance by keeping data separate (e.g., healthcare, police, and insurance data shouldn’t be merged).

  3. Mutuality – If citizens are surveilled, corporations and governments must also be held accountable.

  4. Flexibility – AI systems must leave room for change, ensuring they don’t rigidly predict or control people’s futures (e.g., constant self-optimization pressure).

Proposed Solutions for AI Alignment

  • Teach AI to recognize its own fallibility.

  • Create adaptive institutions that monitor evolving AI risks—balancing corporate innovation with public oversight.

  • Acknowledge AI as independent agents, rather than just tools, so we can anticipate unexpected consequences.

AI is a Global Challenge and Requires Global Solutions

  • Even if the world splits into competing digital empires, cooperation is still possible.

  • It requires global rules and sometimes prioritising long-term issues that benefit all of humanity.

    • Example: The World Cup has global rules despite national rivalries.

  • AI is a global challenge—it requires global solutions.

    • Past trends show cooperation is possible:

      • Declining wars and military spending.

      • Increased healthcare investment.

  • How do we build global cooperation?

    • Harari doesn’t provide answers—but everything familiar today was once new, and history shows the only constant is change.
