80k hrs #88 - Response to criticism

I’m a regular listener of the 80k hrs podcast. Our paper came up on the episode with Tristan Harris, and Rob encouraged responses on this forum, so here I go.

Update: apologies, but this post will only make sense if you listen to this episode of the 80k hrs podcast.


Conflict can be an effective tactic for good

I have a mini Nassim Taleb inside me that I let out for special occasions 😠. I’m sometimes rude to Tristan, Kevin Roose and others. It’s not just because Tristan is worried about the possible negative impacts of social media (I’m not against that at all). It is because he has been one of the most influential people in building a white-hot moral panic, and he frequently bends the truth for the cause.

One thing he gets right is that triggering a high-reach person into conflict with you gives your message more attention. Even if they don’t reply, you are more likely to be boosted by their detractors as well. This underdog advantage isn’t “hate”, and the small edge it gives is massively outweighed by their institutional status, finances and social proof. To play by gentleman’s rules is to their advantage: it curtails the tools at my disposal for making bullshit as costly as possible.

I acknowledge there are some negative costs to this (e.g. polluting the information commons with avoidable conflict), and good people can disagree about whether the tradeoff is worth it. But I believe it is.

Were Tristan and the tech media successful in improving YouTube’s recommendation algorithm?

I’ll give this win to Tristan and Roose. I believe YouTube did respond to this pressure when, in early 2019, it reduced recommendations to conspiracy and borderline content, and this was better overall, though not great.

But YouTube was probably never what they described: a recommendation rabbit hole to radicalization. And if it was, there was never strong evidence to support it.

YouTube’s recommendation algorithm has always boosted recent, highly watched videos, and has gone through three main phases (illustrated with a toy sketch after the list):

Clickbait Phase: Favoured high click-through rates on video thumbnails. This meant thumbnails were very “tabloidy” and edgy, and frequently misrepresented the content of the video. But no one ever showed that this pushed users down an extremist rabbit hole; they just asserted it, or offered very weak evidence.

View-Neutral Phase: Favoured videos that people watched more of, and rated highly after watching. This was a big improvement in recommendation quality. YouTube hadn’t yet started putting its thumb on the scales, so recommendations largely matched each video’s share of views.

Authoritative Phase: Favours traditional media, especially highly partisan cable news, with very few recommendations to conspiracy and borderline content. This was announced in early 2019 and deployed in April 2019.
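
To make the difference between these phases concrete, here is a minimal, purely illustrative sketch of how the same three videos could rank under each objective. Every field name, number and the “authority” boost is invented for illustration; this is not YouTube’s actual ranking logic.

```python
# Purely illustrative toy, not YouTube's code: three hypothetical scoring functions
# standing in for the three phases described above. All weights and numbers are made up.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    click_through_rate: float   # clicks / impressions on the thumbnail
    avg_watch_fraction: float   # share of the video viewers actually watch
    post_watch_rating: float    # 0-1 satisfaction score after watching
    is_authoritative: bool      # e.g. a traditional media outlet

def clickbait_phase_score(v: Video) -> float:
    # Phase 1: ranking dominated by thumbnail click-through rate.
    return v.click_through_rate

def view_neutral_phase_score(v: Video) -> float:
    # Phase 2: watch time and post-watch satisfaction, no editorial thumb on the scale.
    return v.avg_watch_fraction * v.post_watch_rating

def authoritative_phase_score(v: Video) -> float:
    # Phase 3: same signals, plus a large (hypothetical) boost for "authoritative" sources.
    boost = 2.0 if v.is_authoritative else 1.0
    return view_neutral_phase_score(v) * boost

videos = [
    Video("SHOCKING conspiracy!!", 0.30, 0.20, 0.30, False),
    Video("In-depth history documentary", 0.05, 0.70, 0.90, False),
    Video("Cable news panel", 0.10, 0.50, 0.70, True),
]

for scorer in (clickbait_phase_score, view_neutral_phase_score, authoritative_phase_score):
    ranked = sorted(videos, key=scorer, reverse=True)
    print(scorer.__name__, "->", [v.title for v in ranked])
```

With these made-up numbers, the clickbait objective ranks the conspiracy video first, the view-neutral objective puts the in-depth documentary on top, and the authority boost pushes the cable news panel ahead of both.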

Tristan regularly represents today’s algorithm as a radicalization rabbit hole. His defence, that critics are being unfair because the algorithm changed after he made the critique, doesn’t hold up. He made no effort to clarify this in The Social Dilemma (released Jan 2020) or in his appearances about it, and he hasn’t updated his talking points. For example, speaking on the Joe Rogan podcast in October 2020 he said: “no matter what I start with, what is it going to recommend next. So if you start with a WW2 video, YouTube recommends a bunch of holocaust denial videos”.

What’s the problem with scapegoating the algorithm and encouraging draconian platform moderation?

Tristan’s hyperbole sets the stage for drastic action. Draconian solutions to misdiagnosed problems will probably have unintended consequences that are worse than doing nothing. I wrote about this in regard to the QAnon crackdown:

  • Demand for partisan conspiracy content is strong and will be supplied by the internet one way or another. Moderation is driving a big movement towards free-speech platforms, which (due to selection effects) are intense bubbles of far-right and conspiracy content.

  • Content moderation is building grievances that will not be easily placated. YouTube’s moderation removes more than 1,000x as many right-leaning videos as left-leaning ones. On the current trajectory, political content may end up largely separated into tribal platforms.

  • Scapegoating the scary algorithm, or adopting a puritanical approach to moderation, works against more practical and effective actions.

The anonymous user limitation of YouTube studies

It’s technically quite difficult to analyse the YouTube algorithm in a way that includes personalization. Our study was the most rigorous and comprehensive look at the recommendation algorithm’s political influence at the time, despite the limitation of collecting non-personalized recommendations. To take the results at face value, you need to assume this will “average out” to about the same influence once aggregated. I think it’s an open question, but it’s reasonable to assume the results will be in the same ballpark.
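
As a purely hypothetical illustration of what “averaging out” means here: from non-personalized crawls you can only estimate each channel’s overall share of recommendation impressions, and the face-value reading assumes personalization shifts individual sessions without moving those aggregate shares much. The data shape and channel names below are invented.

```python
from collections import Counter

# Hypothetical (from_channel, to_channel) recommendation impressions collected from
# anonymous (non-personalized) crawls. Real datasets would have millions of rows.
anonymous_recs = [
    ("news_a", "news_a"), ("news_a", "news_b"), ("news_b", "news_a"),
    ("hobby", "news_a"), ("hobby", "fringe"), ("news_b", "news_b"),
]

def recommendation_share(recs):
    """Share of all recommendation impressions pointing at each destination channel."""
    counts = Counter(to_channel for _, to_channel in recs)
    total = sum(counts.values())
    return {channel: n / total for channel, n in counts.items()}

print(recommendation_share(anonymous_recs))
# -> {'news_a': 0.5, 'news_b': 0.33..., 'fringe': 0.16...}
# The assumption is that these aggregate shares approximate the average influence of
# personalized recommendations across real users; the anonymous data itself can't verify that.
```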

In my experience, critics who point to this as a flaw, or as a reason to ignore the results, are inconsistent in their skepticism. The metrics Tristan uses in this podcast (e.g. “recommended flat Earth videos hundreds of millions of times”) are based on Guillaume Chaslot’s data, which is also based on anonymous recommendations. I am also skeptical of those figures:
- These figures are much higher than what we see, and Chaslot is not transparent about how they were calculated.
- Chaslot’s data is based on the API, which gives distorted recommendations compared to our method of scraping the website (which is much closer to real-world behaviour).

The quality of research in this space is improving quickly. The most recent study uses real-world user traffic to estimate how people follow recommendations from videos; a very promising approach once some issues are fixed.

We have been collecting personalized recommendations since November. We are analysing the results and will present them on transparency.tube and in a paper in the coming months. I hope Tristan and other prominent people will start updating the way they talk about YouTube based on the best and latest research. If they continue to misdiagnose the problems, the fervour for solutions they whip up will be misdirected.

What are the most effective ways to address problems from social media?

My focus is narrowly on the mechanics of YouTube’s platform, but here is my intuition-driven grab bag of the ideas I find most promising for reducing the harms of social media:

  • Building or popularising apps/extensions that people can use to steer their own future behaviour towards their higher-order desires. The kinds of apps Rob suggested are great, and some were new to me (e.g. Inbox When Ready, Todobook, Freedom, News Feed Eradicator).

  • Platform nudges, like adjusting recommendations and adding information banners on misinformation, paired with research into the effectiveness of these interventions.

  • Grassroots efforts to cool down partisanship, like Braver Angels, with research to measure their impact.

  • And for the easy one 😅: addressing corruption and decay in institutions (e.g. academia, media, politics), lack of state capacity, low growth, negative polarization and income inequality. Just fix those things and social media will be much less toxic.

Rob did some really good background research and gently pushed back in the right places. This is the best interview with Tristan I have listened to.