While fully understanding a user’s preferences and values requires more research, there are simpler things existing recommender systems could do that would already be a win for users, e.g. Facebook offering a “turn off inflammatory political news” switch (or a list of 5-10 similar switches), where current knowledge would suffice to train a classification system.
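To make the idea concrete, here is a minimal sketch of such a per-user switch. All names here are hypothetical illustration (not any platform’s actual API), and the keyword scorer is a stand-in for a real trained text classifier:

```python
# Toy sketch of a user-facing "turn off inflammatory political news" switch.
# The keyword scorer below is a stand-in for a trained classifier; every
# name here is hypothetical, not any real platform's API.

INFLAMMATORY_TERMS = {"outrage", "scandal", "destroyed", "slams", "disgrace"}

def inflammatory_score(headline: str) -> float:
    """Fraction of words that look inflammatory (classifier stand-in)."""
    words = headline.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in INFLAMMATORY_TERMS for w in words) / len(words)

def filter_feed(posts, user_prefs, threshold=0.2):
    """Drop posts the user has opted out of; keep everything else."""
    if not user_prefs.get("hide_inflammatory_political_news", False):
        return list(posts)
    return [p for p in posts if inflammatory_score(p) < threshold]

feed = [
    "Senator slams rival in outrage-fueled scandal",
    "Local library expands weekend hours",
]
# With the switch on, only the non-inflammatory post survives.
print(filter_feed(feed, {"hide_inflammatory_political_news": True}))
```

The point of the sketch is that the hard part is not the filtering plumbing but the classifier quality and the user’s trust in it, which is exactly what the rest of the thread turns on.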
It could be the case that this is bottlenecked by the incentives of current companies, in that there isn’t a good revenue model for recommender systems other than advertising, and advertising creates the perverse incentive to keep users on your system as long as possible. Or it might be the case that most recommender systems are effectively monopolies on their respective content, and users will choose an aligned system over an unaligned one if options are available, but otherwise a monopoly faces no pressure to align their system.
In these cases, the bottleneck might be “start and scale one or more new organizations that do aligned recommender systems using current knowledge” rather than “do more research on how to produce more aligned recommender systems”.
My mental model of why Facebook doesn’t have a “turn off inflammatory political news” switch (or similar ones): 99% of their users would never toggle it, so the feature wouldn’t affect any of the metrics they track, so no engineer or product manager has an incentive to add it. Why won’t users toggle the switches? Part of it is laziness, but mostly I think users don’t trust that the system will faithfully give them what they want based on a single short description like “inflammatory political news”. What if they miss out on an important national story? What if a close friend shares a story with them and they don’t see it? What if their favorite comedian gets classified as inflammatory and filtered out?
As additional evidence that we’re bottlenecked more by research than by incentives, consider Twitter’s call for research on measuring the “health” of Twitter conversations, and Facebook’s decision to demote news content. I believe that if you gave most companies a robust and well-validated metric for alignment with user values (analogous to differential privacy), they would start optimizing for it even at the cost of some short-term growth/revenue.
The monopoly point is interesting. I don’t think existing recommender systems are well modelled as monopolies; they certainly behave as if they are in a life-and-death struggle with each other, probably because their fundamental product is “ways to occupy your time” and that market is extremely competitive. But a monopoly might actually be better because it wouldn’t have the current race to the bottom in pursuit of monetisable eyeballs.
I appreciate the point that they are competing for time (I was only thinking of monopolies over content).
If the reason such switches aren’t used is that users don’t “trust that the system will give what they want given a single short description”, then the research agenda for aligned recommender systems includes not just producing systems that are aligned, but systems whose users have a greater degree of justified trust that they are aligned (placing more emphasis on the user’s experience of interacting with the system). Some of this research could potentially take place with existing classification-based filters.
Agreed that’s an important distinction. I just assumed that if you make an aligned system, it will become trusted by users, but that’s not at all obvious.