Hi Mark, thanks for writing this post. I have only given your linked paper and the 80k episode transcript a cursory read, but my impression is that Tristan’s main worry (as I understand it) and your analysis are not incompatible:
Tristan and parts of broader society fear that users discover radicalizing content through the recommendation algorithm. According to your paper, the algorithm does not favour such content and might even be actively biased against, e.g., conspiracy content.
Again, I am not terribly familiar with the whole discussion, but so far I have not seen the point made clearly enough that both these claims can be true: the algorithm could show less “radicalizing” content than an unbiased algorithm would, but even these fewer recommendations could be enough to radicalize viewers compared to a baseline where the algorithm recommends no such content at all. Thus, YouTube could be accused of not “doing enough”.
Your own paper cites this paper, which argues that there is a clear pattern of viewership migration from moderate “Intellectual Dark Web” channels to alt-right content, based on an analysis of user comments. Despite the limitation your paper mentions of relying only on user comments, I think that commenting users are still a valid subset of all users, that their movement towards more radical content needs to be explained, and that the recommendation algorithm is certainly a plausible explanation. Since you have doubts about this hypothesis, may I ask what you think are likelier ways these users were radicalized?
One way to test the role of the recommendation algorithm could be to redo the analysis of the user movement data for comments left after the change to the recommendation algorithm. If the movement is basically the same despite fewer recommendations of radical content, that is evidence that the recommendations never played a role, as you argue in this post. If, however, the movement towards alt-right or radical content is reduced, it is reasonable to conclude that recommendations played a role in the past and, by extension, could still play a (smaller) role now. A rough sketch of what such a comparison might look like follows.
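To make that proposal concrete, here is a minimal sketch in Python/pandas of the before/after comparison. Everything here is hypothetical: the file name, column names, the `ALGO_CHANGE` date, and the category labels are placeholders standing in for the actual comment dataset and channel classification.

```python
import pandas as pd

# Hypothetical comment data: one row per comment, with the commenting user,
# the category of the channel commented on, and the comment timestamp.
comments = pd.read_csv("comments.csv", parse_dates=["timestamp"])
# assumed columns: user_id, channel_category (e.g. "IDW", "alt-right"), timestamp

ALGO_CHANGE = pd.Timestamp("2019-01-25")  # placeholder date for the recsys change


def migration_rate(df: pd.DataFrame, source: str, target: str) -> float:
    """Fraction of users whose first comment was on a `source` channel
    and who later also commented on a `target` channel."""
    first_cat = df.sort_values("timestamp").groupby("user_id")["channel_category"].first()
    source_users = first_cat[first_cat == source].index
    movers = df[df["user_id"].isin(source_users) & (df["channel_category"] == target)]
    return movers["user_id"].nunique() / max(len(source_users), 1)


before = comments[comments["timestamp"] < ALGO_CHANGE]
after = comments[comments["timestamp"] >= ALGO_CHANGE]

print("IDW -> alt-right before change:", migration_rate(before, "IDW", "alt-right"))
print("IDW -> alt-right after change: ", migration_rate(after, "IDW", "alt-right"))
```

If the rate stays roughly the same across the two periods, that would favour your reading; if it drops substantially after the change, recommendations likely played some role.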
I agree you can still criticize YouTube, even if they are recommending conspiracy content less than a “view-neutral” algorithm would. My main disagreement is with the facts: Tristan is representing YouTube as a radicalization pipeline caused by the influence of recommendations. Let’s say that YouTube is more radicalizing than a no-recommendation system all things considered, because users were sure to click on radical content whenever it appeared. In that case you would describe radicalization as demand from users, rather than a radicalization rabbit hole caused by a manipulative algorithm. I’m open to this possibility, and I wouldn’t give this much pushback if that were what was being described.
The “Auditing Radicalization Pathways on YouTube” paper is clever in the way it uses comments to get at real-world movement. But that paper doesn’t tell us much, given that a) they didn’t analyse movement from right to left (movement in one direction only tells you about churn, but nothing directional; see the toy illustration below) and b) they didn’t share their data.
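For what it’s worth, here is a toy illustration of the churn point, with made-up numbers (not from the paper):

```python
# Toy numbers (entirely made up) to show why one-way migration counts are
# ambiguous: counting only IDW -> alt-right movement cannot distinguish a
# directional "pipeline" from ordinary back-and-forth churn between communities.
idw_commenters = 10_000
alt_right_commenters = 2_000

idw_to_altright = 500   # IDW commenters later seen commenting on alt-right channels
altright_to_idw = 450   # alt-right commenters later seen commenting on IDW channels

print(f"IDW -> alt-right: {idw_to_altright / idw_commenters:.1%}")        # 5.0%
print(f"alt-right -> IDW: {altright_to_idw / alt_right_commenters:.1%}")  # 22.5%
# Reporting only the first number looks like a pipeline; measuring the reverse
# direction shows that, per capita, movement the other way can be even stronger.
```

Only with both directions (and ideally net flow) can you tell a directional pipeline apart from churn.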
The best I have seen is this study, which uses real-world web usage from a representative sample of users to get at the real behaviour of users who are clicking on recommendations. They are currently re-doing the analysis with better classifications, so we will see what happens.
Still doesn’t fully answer your question, though. To get at the real influence of recommendations you would need to do actual experiments, something only YouTube can really do right now. Or a third party would somehow have to be allowed to provide a YouTube recsys.
My suspicions about radicalization that leads to real-world violence mainly concern things outside the influence of algorithms: disillusionment, experience of malevolence, and grooming by terrorist or other ideologically violent religious/political groups.