While AI value alignment is considered a serious problem, the algorithms we use every day do not seem to be subject to any alignment effort. That sounds like a serious problem to me. Has no one ever tried to align the YouTube algorithm with our values? What about algorithms on other types of platforms?
Since around 2017, there has been a lot of public interest in how YouTube's recommendation algorithms may negatively affect individuals and society. Governments, think tanks, the press/media, and other institutions have pressured YouTube to adjust its recommendations. You could think of this as our world's (indirect & corrupted) way of trying to instill humanity's values into YouTube's algorithms.
I believe this sort of thing doesn’t get much attention from EAs because there’s not really a strong case for it being a global priority in the same way that existential risk from AI is.
You might be interested in Building Human Values into Recommender Systems: An Interdisciplinary Synthesis as well as Jonathan Stray’s other work on alignment and beneficence of recommender systems.