[Question] Looking for collaborators after the latest 80k podcast with Tristan Harris

Dear EAs,

After the interview with Tristan Harris, Rob asked EAs to do their own research on aligning recommender systems and short-term AI risks. A few weeks ago I posted a short question about donating against these risks. Since then I have listened to the 2.5-hour podcast and read several posts on the topic on this forum (https://forum.effectivealtruism.org/posts/E4gfMSqmznDwMrv9q/are-social-media-algorithms-an-existential-risk, https://forum.effectivealtruism.org/posts/xzjQvqDYahigHcwgQ/aligning-recommender-systems-as-cause-area, https://forum.effectivealtruism.org/posts/ptrY5McTdQfDy8o23/short-term-ai-alignment-as-a-priority-cause).

I would love to collaborate with others on collecting more in-depth arguments, including some back-of-the-envelope calculations / Fermi estimates of the possible scale of the problem. I have spent a few hours drafting a structure with some next steps (below). In particular, I think we should focus on the first argument, since the mental-health argument looks shaky in light of recent research.
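To make the Fermi-estimate work stream concrete, here is a minimal sketch of the kind of back-of-the-envelope calculation I have in mind, written in Python. Every input value is a placeholder I invented purely to show the structure; none of these numbers are actual estimates, and replacing them with sourced figures would be part of the project.

```python
# Minimal Fermi-estimate sketch for the "possible scale of the problem".
# All inputs below are placeholders chosen only to illustrate the structure;
# they are not real estimates and would each need a sourced, defensible range.

users = 4e9                    # placeholder: people using recommender-driven feeds
hours_per_user_per_day = 2.5   # placeholder: average daily time spent on those feeds
affected_fraction = 0.10       # placeholder: share of users meaningfully harmed
daly_loss_per_affected = 0.05  # placeholder: annual DALY loss per affected user

attention_hours_per_year = users * hours_per_user_per_day * 365
daly_loss_per_year = users * affected_fraction * daly_loss_per_affected

print(f"Attention spent on feeds: {attention_hours_per_year:.1e} hours/year")
print(f"Illustrative harm figure: {daly_loss_per_year:.1e} DALYs/year")
```

Each parameter could become its own mini work stream (finding a defensible range, running sensitivity checks), which is roughly how I imagine dividing the work between collaborators.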

I have not yet read the papers mentioned below (beyond the abstracts). We can define multiple work streams and divide them between collaborators. My expertise is mostly in back-of-the-envelope calculations / Fermi estimates, given my background as a management consultant. I am especially looking for people who enjoy, and are good at, assessing the quality of scientific papers.

Please drop a message below or send an e-mail to jan-willem@effectiefaltruisme.nl if you want to participate.

Arguments in favour of aligning recommender systems as a cause area

Social media causes political polarization

(http://eprints.lse.ac.uk/87402/1/Social-Media-Political-Polarization-and-Political-Disinformation-Literature-Review.pdf)

Next steps here:

Social media causes declining mental health

This one looks interesting as well: https://docs.google.com/document/d/1w-HOfseF2wF9YIpXwUUtP65-olnkPyWcgF5BiAtBEy0/edit#

Next steps here:

Aligning recommender systems is “training practice” for larger AI alignment problems

See https://forum.effectivealtruism.org/posts/xzjQvqDYahigHcwgQ/aligning-recommender-systems-as-cause-area#Connection_with_AGI_Alignment

Next step here:

  • Should we expand this argument?

The problem is solvable (but we need more capacity for research)

See e.g. https://www.turing.ac.uk/sites/default/files/2020-10/epistemic-security-report_final.pdf

Next steps here:

  • Collect additional papers on solutions

  • What kind of research would be interesting and worthwhile to invest in?