Thanks. I’m aware of this sort of argument, though I think most of what’s out there relies on anecdotes, and it’s unclear exactly what the effect is (since there is likely some level of confounding here).
I guess there are still two things holding me up here. (1) It’s not clear that the media is changing preferences or just offering [mis/dis]information. (2) I’m not sure it’s a small leap. News channels’ effects on preferences likely involve prolonged exposure, not a one-time sitting. For an algorithm to expose someone in a prolonged way, it has to either repeatedly recommend videos or recommend one video that leads to their watching many, many videos. The latter strikes me as unlikely; again, behavior is malleable but not that malleable. In the former case, I would think the direct effect on the reward function of all of those individual videos recommended and clicked on has to be way larger than the effect on the person’s behavior after seeing the videos. If my reasoning were wrong, I would find that quite scary, because it would be evidence of substantially greater vulnerability to current algorithms than I previously thought.
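The repeated-recommendation story in (2) can be made concrete with a toy simulation. To be clear, everything here is my own assumption for illustration (the drift mechanism, the numbers, the greedy recommender), not a model of any real system: a user has a click propensity for one content cluster, the recommender keeps serving that cluster, and each click nudges the propensity upward a little ("prolonged exposure").

```python
import random

def simulate(drift=0.02, sessions=200, seed=0):
    """Toy model of repeated recommendation from one content cluster.

    pref is the user's probability of clicking a video from the cluster.
    Each session the recommender serves the cluster; on a click, pref
    drifts upward by `drift` (capped at 1.0), standing in for the
    hypothesized preference-modification effect of prolonged exposure.
    Returns (total_clicks, final_pref).
    """
    rng = random.Random(seed)
    pref = 0.5   # assumed starting click propensity
    clicks = 0
    for _ in range(sessions):
        if rng.random() < pref:
            clicks += 1
            pref = min(1.0, pref + drift)
    return clicks, pref

# Compare no drift (pure information/entertainment consumption) against
# a small per-click drift: with the same random seed, the drifting user's
# propensity only ever increases, so they click at least as often.
clicks_static, pref_static = simulate(drift=0.0, seed=1)
clicks_drift, pref_drift = simulate(drift=0.02, seed=1)
```

Even this crude sketch shows the shape of the worry: the per-click reward signal is identical in kind across the two runs, but the drifting run accumulates extra clicks purely because preferences moved, so an engagement-maximizing algorithm gets rewarded for the drift without ever representing it.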
(1) The difference between preferences and information seems like a thin line to me. When groups are divided about abortion, for example, which cluster would that fall into?
It feels fairly clear to me that the media facilitates political differences, as I’m not sure how else these could be relayed to the extent they are (direct friends/family is another option, but that wouldn’t explain rapid, correlated shifts of opinion within political parties).
(2) The specific issue of prolonged involvement doesn’t seem hard to believe. People spend lots of time on YouTube. I’ve definitely gotten lots of recommendations to the same clusters of videos. There are only so many clusters out there.
All that said, my story above is fairly different from Stuart’s. I think his is more of “these algorithms are a fundamentally new force with novel mechanisms of preference change.” My claim is that media sources naturally change the preferences of individuals, so of course if algorithms have control in directing people to media sources, this will be influential in preference modification. Here “preference modification” basically means, “I didn’t used to be an intense anarcho-capitalist, but then I watched a bunch of the videos, and now I identify strongly with the movement.”
However, the question of “how much do news organizations actively optimize preference modification for the purposes of increasing engagement, whether intentionally or unintentionally?” is murkier.