Great post! It’s very nice to see this problem being put forward. Here are a few remarks.
It seems to me that the post may underestimate the scale of the problem. Two statistics suggest this: there are now more views on YouTube than searches on Google, and roughly 70% of those views come from YouTube's recommendations. Meanwhile, psychology stresses biases like the availability bias and the mere-exposure effect, which suggest that YouTube strongly influences what people think, want and do. Here are a few links about this:
https://www.visualcapitalist.com/what-happens-in-an-internet-minute-in-2019/
https://www.cnet.com/news/youtube-ces-2018-neal-mohan/
https://www.youtube.com/watch?v=cebFWOlx848
Also, I would argue that the post may underestimate the neglectedness of the problem. I have personally talked to many people from different areas (social sciences, healthcare, education, environmental advocacy, the media, YouTubers and AI Safety researchers). After ~30-minute discussions, essentially all of them acknowledged that they had overlooked the importance of aligning recommender systems. One such overlooked problem is what might be called "mute news": important problems being overshadowed by whatever recommender systems put forward. I'd argue that the problem of mute news is neglected.
Having said this, it seems to me that the tractability of the problem may be overestimated. For one thing, aligning recommender systems is particularly hard because they act in so-called "Byzantine" environments: any small modification of a recommender system is systematically met with SEO-like counter-strategies from content creators. This is discussed in the following excellent series of videos, with interviews of Facebook and Twitter employees:
https://www.youtube.com/watch?v=MUiYglgGbos&list=PLtzmb84AoqRRFF4rD1Bq7jqsKObbfaJIX
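As a toy illustration of this dynamic (all names and numbers here are my own simplification, not taken from the videos), consider creators who split a fixed effort budget between genuine quality and clickbait, and who always best-respond to the platform's current scoring weights. Whatever weights the platform picks, creators re-optimize against them:

```python
def recommender_score(quality, clickbait, w_clickbait):
    # The platform can only observe engagement, which mixes
    # genuine quality with clickbait appeal.
    return quality + w_clickbait * clickbait

def best_response(w_clickbait, budget=1.0):
    # Each creator pours the whole effort budget into whichever
    # input the current scoring rule rewards more.
    if w_clickbait > 1.0:
        return (0.0, budget)  # all-in on clickbait
    return (budget, 0.0)      # all-in on quality

# The platform tweaks its weight; creators instantly shift strategy.
for w in (1.5, 0.5):
    q, c = best_response(w)
    print(f"weight={w}: quality effort={q}, clickbait effort={c}, "
          f"observed score={recommender_score(q, c, w)}")
```

In this Goodhart-style caricature, the platform cannot simply tune its weights to surface quality content, because the content population itself is a moving target that adapts to every tuning decision.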
I would argue that aligning recommender systems may be even harder than aligning AGI, because we need to get the objective function right without an AGI to help us do so. As such, though, I'd argue that this is a perfect practice playground for alignment research, advocacy and policy. In particular, we too often view AGI as a system that *we* get to design; what seems just as hard is getting leading AI companies to agree to align it.
I discussed this at a bit more length in a conference talk and in a paper:
https://www.youtube.com/watch?v=sivsXJ1L1pg
https://arxiv.org/abs/1809.01036