Thank you so much for the links! Possibly I was just being a bit blind.
I was pretty excited about the Aligning Recommender Systems article, as I had also been thinking about that, but only now managed to read it in full. I had somehow missed Scott's post.
I'm not sure whether they quite get to the bottom of the issue, though (and I am not sure there is a bottom to the issue; we are back to 'I feel like there is something more important here, but I don't know what').
The Aligning Recommender Systems article discusses the direct relevance to more powerful AI alignment a fair bit, which I was keen to see. I am slightly surprised there is so little discussion of the double layer of misaligned goals: first, Netflix does not recommend what users would truly want; second, it does so because it is trying to maximize profit. It is debatable whether aligning recommender systems to people's reflected preferences would actually bring in more money than simply getting people addicted to the systems, which I somewhat doubt.
Your second paragraph touches on something interesting in critiques of capitalism: we already have plenty of experience with misalignment in market economies between profit maximization and what people truly want. Are there important lessons we can learn from this?