Right. I mean, I privilege this simpler explanation you mention. He seems to have reason to think it’s not the right explanation, but I can’t figure out why.
I think the “so that they become more predictable [to the recommender algorithm]” is crucial in Russell’s argument. IF human preferences were malleable in this way, and IF recommender algorithms are strong enough to detect that malleability, then the pressures towards the behaviour that Russell suggests are strong and we have a lot of reasons to expect it. I think the answer to both IFs is likely to be yes.
I just don’t think we’ve seen anything that favors the hypothesis “algorithm ‘intentionally’ radicalizes people in order to get more clicks from them in the long run” over the hypothesis “algorithm shows people what they will click on the most (which is often extreme political content), and this causes them to become more radical, in a self-reinforcing cycle.”
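To make the difference between those two hypotheses concrete, here is a minimal toy sketch. Everything in it (the click model, the preference-drift rule, the assumption that more extreme users are more predictable, all the numbers) is my own illustrative assumption, not something taken from Russell or from any empirical result. It contrasts a myopic recommender, whose objective contains only the next click, with a short-horizon planner, whose objective also contains future clicks: only the planner has a term in its objective that rewards shifting the user’s preference toward states where clicks are easier to get.

```python
# Toy contrast between a myopic click-maximizer (the second hypothesis's objective)
# and a short-horizon planner (the first hypothesis's objective). All models and
# numbers below are illustrative assumptions.
import math

ITEMS = [i / 5 - 1.0 for i in range(11)]  # item "extremeness" from -1.0 to +1.0
DRIFT = 0.3    # assumed: each exposure pulls the user's preference toward the shown item
GAMMA = 0.95   # discount factor for the horizon planner
HORIZON = 3    # how many future steps the planner looks ahead


def click_prob(user: float, item: float) -> float:
    """Assumed click model: users click items near their current preference, and
    users at the extremes are more predictable (higher peak click probability)."""
    peak = 0.2 + 0.8 * abs(user)
    return peak * math.exp(-4.0 * (user - item) ** 2)


def drift(user: float, item: float) -> float:
    """Assumed preference dynamics: exposure nudges the user toward the shown item."""
    return user + DRIFT * (item - user)


def myopic_choice(user: float) -> float:
    """Myopic objective: show whatever the user is most likely to click on right now."""
    return max(ITEMS, key=lambda item: click_prob(user, item))


def planner_value(user: float, item: float, horizon: int) -> float:
    """Expected discounted clicks from showing `item` now and planning afterwards."""
    value = click_prob(user, item)
    if horizon == 0:
        return value
    nxt = drift(user, item)
    return value + GAMMA * max(planner_value(nxt, i, horizon - 1) for i in ITEMS)


def planner_choice(user: float) -> float:
    """Long-horizon objective: maximize discounted clicks over the horizon, which
    can mean steering the user's preference rather than just serving it."""
    return max(ITEMS, key=lambda item: planner_value(user, item, HORIZON))


if __name__ == "__main__":
    myopic_user = planner_user = 0.2  # the same mildly opinionated user under each policy
    for step in range(6):
        shown_m = myopic_choice(myopic_user)
        shown_p = planner_choice(planner_user)
        print(f"step {step}: myopic user {myopic_user:+.2f} sees {shown_m:+.2f} | "
              f"planner user {planner_user:+.2f} sees {shown_p:+.2f}")
        myopic_user = drift(myopic_user, shown_m)
        planner_user = drift(planner_user, shown_p)
```

Under these assumptions the myopic policy just keeps serving the user’s current taste, while the planner serves items more extreme than that taste, because dragging the preference towards the extremes raises the achievable click rate later. Note that the self-reinforcing cycle in the second hypothesis would additionally need the click model itself to favour extreme content; I deliberately left that out so that the only difference between the two policies is whether the objective reaches into the future.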