I didn’t see any mention of existing organizations that work on recommender alignment (even if they don’t use the “short-term aligned AI” framing). It sounds as though many of the goals/benefits you discuss here could come from tweaks to existing algorithms that needn’t be connected to AI alignment (if Facebook wanted to focus on making users healthier, would it need “alignment” to do so?).
What do you think of the goals of existing “recommender alignment” organizations, like the Center for Humane Technology? They are annoyingly vague about their goals, but this suggestion sheet lays out some of what they care about: users being able to focus, not being stressed, etc.