Are we dropping the ball on Recommendation AIs?

TL;DR: This post is a two-page introduction to the risks associated with recommendation AI. The negative externalities of recommendation AI seem neglected, and there may be comparatively effective work to be done on improving governance and deploying better recommendation algorithms that keep the world on a saner track. You might consider this a viable option if your AI timelines are over 10 years.
One aspect not discussed here is why having good recommendation AIs could be incredibly beneficial. Improved recommendation AI could enhance collective epistemics and encourage people to engage with important issues, ultimately advancing AI safety and many other important topics. It could be key to fostering a flourishing civilization.
Epistemic status: I am still fairly new to this topic and might be wrong in important ways, but I’m curious what you think. I’m interested in constructive feedback and open to revising my ideas.
Thanks to Lê Nguyên Hoang, co-founder of Tournesol, for his comments and contributions to this post.
Introduction
Recommendation AIs are deeply integrated into our daily lives. Although they are often considered valuable tools for personalizing our online experience, they also present risks. These AIs can significantly affect individuals and society as a whole. In 2017, YouTube estimated that 70% of the roughly one billion hours of video humanity consumes on YouTube each day was driven by its AI recommendations. Since 2016, there have been more views on YouTube than searches on Google.[1]
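To get a sense of the scale involved, here is a back-of-the-envelope calculation. This is a rough sketch assuming the widely cited figure of roughly one billion watch-hours per day; the numbers are illustrative, not precise.

```python
# Rough scale of recommendation-driven watch time on YouTube (toy estimate).
# ASSUMPTION: ~1 billion watch-hours per day, 70% of which is recommendation-driven.
daily_hours = 1e9
recommended_share = 0.70

recommended_hours_per_year = daily_hours * recommended_share * 365
person_years_per_year = recommended_hours_per_year / (24 * 365)

print(f"{recommended_hours_per_year:.2e} recommendation-driven hours per year")
print(f"about {person_years_per_year:,.0f} person-years of attention per year")
```

Under these assumptions, recommendations steer on the order of tens of millions of person-years of attention every year.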
Recommendation AIs can be argued to be an existential risk amplifier, by reducing the quality of our information space, weakening democratic institutions, amplifying mistrust and hate, silencing priority topics and facilitating propaganda activities.[2]
“We’re running the largest psychological experiment in history, with billions of subjects, and no control group.” — Tristan Harris
Deterioration of Democracy
The proper functioning of democracies relies on access to quality information and quality deliberation. However, recommendation AIs favor certain information at the expense of other information; for example, maximizing engagement often leads to polarization. This might be weakening the quality of democracies worldwide.
Details on the Deterioration of Democracy
Many studies have highlighted a correlation between heavy social media use and increased political polarization. Recommendation AIs, by prioritizing provocative and emotionally charged content, help create waves of hate in which users are primarily exposed to viewpoints that reinforce their disdain for opposing beliefs.[3]
The Facebook Files revealed that Facebook’s 2018 algorithm changes favored divisive and controversial content, which in turn incentivized politicians and other public figures to adopt more extreme positions to maintain visibility and engagement online.[4] This phenomenon shows how AIs, by amplifying polarizing content, can influence the production of information and its widespread diffusion at the expense of balanced democratic debate.
Numerous reports (V-Dem, IDEA) identify the years of massive social media adoption (around 2012) as a phase transition, after which democracies worldwide have declined. This overall threat to self-governance can be regarded as a catastrophic risk for human flourishing. Although direct causality is difficult to establish, the influence of recommendation AIs on this phenomenon deserves serious attention.
Why is this important for AGI safety? I don’t want to see the quality of democracy in the US and other countries that are moving towards AGI deteriorate any further.
Geopolitical and Conflict Risks
Recommendation AIs also have geopolitical implications, amplifying hate speech or promoting narratives that incite violence. They have contributed to exacerbating international tensions and destabilizing entire regions.
Examples of Geopolitical and Conflict Risks
In Myanmar, Facebook’s AIs were accused of amplifying hate speech against the Rohingya, a Muslim minority. During the Rohingya genocide, more than 700,000 refugees fled abroad, and the number of deaths is estimated at between 25,000 and 43,000. Amnesty International documented that Facebook’s systems not only failed to stop the spread of these hateful messages but sometimes promoted them, thus aggravating the situation.[5]
On a global level, the case of the leading democracies, especially the USA, is particularly concerning. It includes the rise of QAnon-sympathizing political candidates and the Capitol riot. Meanwhile, throughout Europe, there have been numerous far-right mobilizations. Finally, in the last few years, large-scale wars have emerged even in more developed regions of the world (Ukraine, Lebanon). Concern about civil war or World War III has reached a historic level (currently 30% on Metaculus), and today’s weaponry is far more destructive than it was in 1939.
Why is this important for AGI safety? Because increasing these tensions could increase the chances of an AI race between countries.
Mute News
We can distinguish between “fake news” and “mute news.” While false information attracts a lot of media attention, an even deeper problem lies in the lack of visibility of important topics.[6] Current AIs favor divisive and emotionally charged content at the expense of essential subjects.
Examples of Mute News
As an example, the IPCC report was published at the same time as Lionel Messi’s transfer to PSG, and as a result it received virtually no visibility. In many cases, the informational crisis has more to do with this lack of access to important information than with fake news.
Another example is the ethics and safety of AI systems. While enthusiasm for ChatGPT’s impressive capabilities and Midjourney’s images spread widely through recommendation AIs, significantly less attention was given to the risks these systems raise, and even less to the laws they violate, even when this was acknowledged by their own creators. Similarly, climate change, cybersecurity, recommendation AIs, and AI safety are unlikely to be addressed properly if the attention they receive (through recommendation AIs and classical media) vanishes.
Recommendation AIs are the main mechanism that could bring these important topics and unknown unknowns to our attention.
Why is this important for AGI safety? Because the fact that most people and policymakers don’t know much about AI risks is a huge bottleneck.
Malicious Exploitation of AIs
Recommendation AIs are currently widely exploited by malicious actors. These actors can manipulate AIs to bury certain information under a mass of content or to promote specific narratives, with the aim of manipulating public opinion, spreading propaganda, or destabilizing countries by encouraging certain ideologies.
Examples of Malicious Exploitation of AIs
For example, the French agency Viginum has documented cases of coordinated pro-Russian propaganda networks, as in the “Portal Kombat” report. These networks exploit social media AIs to amplify their message and influence public debate.
Strikingly, Facebook itself reports having removed some 30 billion fake accounts. On many platforms, most accounts should be expected to be fake. These accounts can be used to give an initial boost to propaganda-aligned content, tricking recommendation AIs into believing that this content triggers significant engagement, which may then make it go viral. Another example is the Instagram influencer David Michigan, who is suspected of having purchased millions of fake subscribers to boost his online business. Such attacks are known as poisoning attacks in AI safety.
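As a toy illustration of why this kind of poisoning works, consider a naive ranker that treats all engagement as a quality signal. This is a deliberately simplified sketch with made-up names and numbers, not any platform’s actual ranking logic.

```python
# Toy model of a poisoning attack on an engagement-based ranker.
# All names and numbers are hypothetical; real platforms use far richer signals.
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    organic_engagement: int       # interactions from real users
    fake_engagement: int = 0      # interactions injected by bot accounts

    @property
    def score(self) -> int:
        # A naive ranker cannot distinguish organic from fake engagement.
        return self.organic_engagement + self.fake_engagement

feed = [
    Item("in-depth report", organic_engagement=120),
    Item("propaganda clip", organic_engagement=30),
]

def top_item(items) -> str:
    return max(items, key=lambda item: item.score).name

print("before attack:", top_item(feed))   # -> in-depth report
feed[1].fake_engagement = 500             # cheap bot-driven boost
print("after attack: ", top_item(feed))   # -> propaganda clip
```

A relatively small amount of fake early engagement is enough to flip the ranking, after which real users’ attention does the rest.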
Overall, there is a very wide and active online disinformation industry, derived from the older SEO industry. Numerous actors exist even in democratic countries, such as Cambridge Analytica (UK), Eliminalia (Spain), Team Jorge (Israel), Avisa Partners (France), and Alp Service (Switzerland). China is suspected of paying 2 million individuals to post online in support of its soft power.
Why is this important for AGI safety? I don’t think this point is that important for AGI safety. I might be wrong.
How Could Recommendation AIs Become Beneficial?
Initiatives are underway to ensure that recommendation AIs become tools serving the common good.
In Europe, regulations such as GDPR, the Digital Services Act (DSA), the Digital Markets Act (DMA) and the AI Act lay the foundations for responsible use of these AIs. These regulations aim to protect user data, limit the power of large platforms, and ensure increased transparency in the functioning of these AIs.
The example of Taiwan is particularly inspiring. In 2014, Taiwan initiated a transition to digital democracy, in which digital technology is governed democratically by citizens. This approach has created a model where digital tools, including recommendation AIs, are aligned with the values and needs of society. Remarkably, over the last decade, Taiwan is the only country that has drastically improved democratically, moving from a flawed democracy with little popular trust in the government to a model that the people want to defend. This is arguably strong evidence that the transition to digital democracy is both tractable and extremely effective.[7]
Another initiative is that of the Tournesol[8] non-profit, a participatory research project that aims to develop democratic recommendation AIs. Unlike current AIs that are optimized to maximize engagement, Tournesol proposes a transparent, robust alignment solution based on contributors’ reported judgments of what ought to be recommended more on YouTube.[9]
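To make “alignment on contributors’ judgments” concrete, here is a minimal sketch of how pairwise comparisons could be aggregated into per-video scores, using a toy Bradley-Terry-style fit. This is only an illustration of the general idea, with made-up video names; Tournesol’s actual aggregation method is more sophisticated and more robust to manipulation.

```python
# Toy aggregation of contributors' pairwise judgments into per-video scores.
# A minimal Bradley-Terry-style fit; not Tournesol's actual algorithm.
import math
from collections import defaultdict

# Each pair means: a contributor judged `winner` should be recommended over `loser`.
comparisons = [
    ("science_explainer", "clickbait_rant"),
    ("science_explainer", "celebrity_gossip"),
    ("clickbait_rant", "celebrity_gossip"),
    ("science_explainer", "clickbait_rant"),
]

scores = defaultdict(float)

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

# Gradient ascent on the log-likelihood of the observed comparisons.
learning_rate = 0.1
for _ in range(200):
    for winner, loser in comparisons:
        p_win = sigmoid(scores[winner] - scores[loser])
        gradient = 1.0 - p_win  # large when the model underrates the winner
        scores[winner] += learning_rate * gradient
        scores[loser] -= learning_rate * gradient

for video, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{video:20s} {score:+.2f}")
```

The key design point is that the objective is defined by explicit human judgments of what should be recommended, rather than by whatever maximizes watch time.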
So, are we dropping the ball?
Few systems today have as pervasive an influence on the collective psyche as recommendation AI.
Here are a few bottlenecks that might make improvements more challenging than they initially appear:
Inherent trade-off? Is there a trade-off between better content (in terms of epistemological quality) and engagement? I’m not sure. For example, Kurzgesagt manages to be both highly engaging and (mostly) epistemologically sound.
Not neglected? Many people are already discussing fake news and social media issues, but I don’t believe the problem of recommendation systems is saturated with quality research. On the contrary, in the case of the YouTube algorithm, I’m not aware of any non-profit currently working on this besides Tournesol.[10]
Not urgent compared to X-Risks? Perhaps. I think timelines longer than 10 years for the development of superintelligence might allow enough time for changes in recommendation AIs to have a meaningful impact on society. And even if you estimate a 20% probability of AI-related existential risks, it still seems valuable to improve the state of society for the remaining 80%.
Overall, I’m tempted to say that yes, we are probably dropping the ball.
“Garbage in, garbage out” — Someone in Machine Learning[11]
Footnotes

[1] Additional statistics on YouTube can be found here or in this book.
[2] Bad recommendation AIs → bad epistemics → misinformed and misaligned politicians → catastrophic decisions with respect to transformative AI. This would be one possible causal chain towards more x-risk.
[3] “it is not isolation from opposing views that drives polarization but precisely the fact that digital media bring us to interact outside our local bubble. When individuals interact locally, the outcome is a stable plural patchwork of cross-cutting conflicts. By encouraging nonlocal interaction, digital media drive an alignment of conflicts along partisan lines, thus effacing the counterbalancing effects of local heterogeneity. The result is polarization, even if individual interaction leads to convergence.” From a paper that attempts to model polarization dynamics.
[4] “The result of that, it turns out that what gets the most comments is really divisive, outrageous stuff, especially stuff that provokes political anger.” —source. Another summary of the Facebook leak is available on Wikipedia.
[5] “Meta uses engagement-based algorithmic systems to power Facebook’s news feed, ranking, recommendation and groups features, shaping what is seen on the platform. Meta profits when Facebook users stay on the platform as long as possible, by selling more targeted advertising. The display of inflammatory content – including that which advocates hatred, constituting incitement to violence, hostility and discrimination – is an effective way of keeping people on the platform longer. As such, the promotion and amplification of this type of content is key to the surveillance-based business model of Facebook.” (source)
[6] This raises the question of what is important. Maybe you are a libertarian and would say, “What is important is what people choose to watch.” But I think that even with this definition, there is a difference between preferences and volition, the latter being chosen in a much more mindful way. Better recommendation AI could enable users to watch content they truly want to watch upon reflection, which is very different from merely optimizing for immediate preferences. And if, upon reflection, people genuinely want to eat fast food, then so be it.
[7] For example, Pol.is is an opinion-mapping tool that uses machine learning to identify areas of consensus and disagreement among participants. Unlike traditional recommendation AIs, which can amplify polarization, Pol.is is designed to highlight points of agreement, thus fostering more constructive debate; discussions are organized in several phases (proposal, discussion, reflection, decision), allowing for an orderly and transparent progression of the debate. (A rough sketch of this kind of consensus analysis is given after this footnote.)
Taiwan’s democratic renaissance over the past decade stands out as a rare success story in a world where many democracies have faltered. The catalyst for this transformation was the 2014 Sunflower Student Movement, which sparked a shift from an imperfect democracy vulnerable to Chinese influence and corruption to a model of democratic governance.
Central to this evolution was Taiwan’s commitment to democratizing the digital sphere. Under the leadership of figures like Audrey Tang, who became Digital Minister, Taiwan invested heavily in democratic digital technologies. These initiatives included innovative reforms to enhance government transparency and citizen participation through digital tools.
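For readers curious about the mechanics, the kind of analysis Pol.is performs can be sketched roughly as follows (a simplified illustration with made-up votes, not Pol.is’s actual code): participants’ agree/disagree votes form a matrix, participants are clustered into opinion groups, and statements that all groups tend to agree with are surfaced as consensus.

```python
# Sketch of Pol.is-style consensus detection on a tiny, made-up vote matrix.
# Rows = participants, columns = statements; +1 agree, -1 disagree, 0 pass.
import numpy as np
from sklearn.cluster import KMeans

votes = np.array([
    #  S1  S2  S3
    [  1,  1, -1],
    [  1,  1, -1],
    [  1, -1,  1],
    [  1, -1,  1],
])

# Cluster participants into opinion groups
# (Pol.is also reduces dimensionality before clustering).
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(votes)

# A statement is consensus-like if every opinion group agrees with it on average.
for s in range(votes.shape[1]):
    group_means = [float(votes[groups == g, s].mean()) for g in np.unique(groups)]
    label = "consensus" if min(group_means) > 0 else "divisive"
    print(f"statement {s + 1}: per-group agreement {group_means} -> {label}")
```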
[8] “Tournesol” means “sunflower” in English.
[9] Numerous mathematical, sociological and philosophical problems have been identified by the project, some of which are well defined, and have been argued to be central to any collaborative AI alignment problem.
[10] Another organisation that might qualify is the Mozilla Foundation: Mozilla has been advocating for transparency and ethical approaches in technology, including research into how recommendation systems work. They have conducted studies on YouTube’s recommendation algorithm and its role in promoting harmful content, and they have launched initiatives like the YouTube Regrets project, which collects stories from users who were led down undesirable recommendation “rabbit holes.”
[11] Hint: this does not only apply to ML models.