People do bring this up a fair bit; see for example some previous related discussion on Slate Star Codex here and on the EA Forum here.
I think most AI alignment people would be relatively satisfied with an outcome where our control over AI outcomes was as strong as our current control over corporations: optimisation for criteria that require continual human input from a broad range of people, keeping humans in the loop of decision-making inside the optimisation process, and retaining the ability to impose additional external constraints at run time (regulations).
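To make that control structure concrete, here is a minimal toy sketch in Python. Every function name here is hypothetical, invented purely for illustration, and it is not meant as a model of any real system: an optimisation loop that reshapes its objective from broad human feedback, only acts on human-approved proposals, and rejects anything that violates externally imposed run-time constraints.

```python
# Toy sketch of corporation-style control over an optimiser:
# continual human input, a human-in-the-loop veto, and external
# run-time constraints ("regulations"). All names are hypothetical.

import random

def propose_action(candidates, scores):
    """The optimiser proposes its current highest-scoring action."""
    return max(candidates, key=lambda a: scores[a])

def human_feedback(action):
    """Stand-in for continual input from a broad range of people,
    e.g. ratings that reshape the objective over time."""
    return random.uniform(-1.0, 1.0)

def human_approves(action):
    """Human-in-the-loop gate inside the decision process."""
    return input(f"Approve action {action!r}? [y/n] ").strip().lower() == "y"

def violates_regulations(action, regulations):
    """External constraints that can be imposed at run time."""
    return any(rule(action) for rule in regulations)

def control_loop(candidates, regulations, steps=10):
    scores = {a: 0.0 for a in candidates}
    for _ in range(steps):
        action = propose_action(candidates, scores)
        if violates_regulations(action, regulations):
            scores[action] -= 10.0      # a regulation blocks it outright
            continue
        if not human_approves(action):  # human veto inside the loop
            scores[action] -= 1.0
            continue
        scores[action] += human_feedback(action)  # broad human input

if __name__ == "__main__":
    # e.g. a "regulation" that forbids one action category outright
    control_loop(["ship_feature", "raise_prices"],
                 regulations=[lambda a: a == "raise_prices"])
```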
Thank you so much for the links! Possibly I was just being a bit blind.
I was pretty excited about the Aligning Recommender Systems article, as I had also been thinking about that, but only now managed to read it in full. I had somehow missed Scott's post.
I'm not sure whether they quite get to the bottom of the issue, though (and I am not sure there is a bottom to the issue; we are back to 'I feel like there is something more important here but I don't know what').
The Aligning Recommender Systems article discusses the direct relevance to more powerful AI alignment a fair bit, which I was very keen to see. I am slightly surprised that there is so little discussion of the double layer of misaligned goals: first, Netflix does not recommend what users would truly want; second, it does that because it is trying to maximize profit. It is also up for debate whether aligning recommender systems to people's reflected preferences would actually bring in more money than simply getting people addicted to the systems, which I somewhat doubt.
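A toy illustration of that double layer, with entirely made-up items and numbers rather than anything drawn from a real recommender: the platform optimises a profit proxy such as watch time, and that proxy in turn diverges from what users would endorse on reflection.

```python
# Toy illustration of two stacked misalignments: the recommender
# optimises watch time (a profit proxy), which diverges from users'
# reflected preferences. All numbers are invented for illustration.

catalog = {
    #  item:              (watch_time_hours, reflected_preference)
    "autoplay_binge_show": (30.0, 0.2),   # addictive, regretted later
    "documentary":         (2.0,  0.9),   # endorsed on reflection
    "comfort_rewatch":     (5.0,  0.6),
}

def engagement_recommender(catalog):
    """Layer 1: maximise the profit proxy (watch time)."""
    return max(catalog, key=lambda item: catalog[item][0])

def aligned_recommender(catalog):
    """What aligning to reflected preferences would pick instead."""
    return max(catalog, key=lambda item: catalog[item][1])

print(engagement_recommender(catalog))  # -> autoplay_binge_show
print(aligned_recommender(catalog))     # -> documentary
```

On these stipulated numbers the two objectives pick different items, which is the gap in question; whether the aligned choice would also maximize profit is exactly the part I doubt.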
Your second paragraph touches on something interesting in the critiques of capitalism: we already have plenty of experience in market economies with misalignment between profit maximization and what people truly want. Are there important lessons we can learn from this?