I didn't see any mentions of existing organizations that work on recommender alignment (even if they don't use the "short-term aligned AI" framing). It sounds as though many of the goals/benefits you discuss here could come from tweaks to existing algorithms that needn't be connected to AI alignment (if Facebook wanted to focus on making users healthier, would it need "alignment" to do so?).
What do you think of the goals of existing "recommender alignment" organizations, like the Center for Humane Technology? They are annoyingly vague about their goals, but this suggestion sheet lays out some of what they care about: users being able to focus, not being stressed, etc.