This post was awarded an EA Forum Prize; see the prize announcement for more details.

My notes on what I liked about the post, from the announcement:
Every cause area starts somewhere. And while I’m not sure whether improving YouTube recommendations or fixing the News Feed will become a major focus of EA research, I commend Ivan Vendrov and Jeremy Nixon for crafting a coherent vision for how we might approach the problem of “aligning recommender systems.”
Alongside a straightforward discussion of the scale of these systems’ influence (they shape hours of daily experience for hundreds of millions of people), the authors present a fascinating argument that certain features of these commercial products map onto longstanding problems in AI alignment. This broad scope seems appropriate for an introduction to a new cause — I’m happy to see authors make the most comprehensive case they can, since further research can always moderate their conclusions.
(It helps that Vendrov and Nixon freely admit their low confidence in the post's specific numbers and discuss the risks behind this work; they want to inform, not just persuade.)
Finally, I appreciated the next-to-last section (“Key points of uncertainty”), which leaves a set of open questions for other authors to tackle and creates convenient cruxes for debate.