BTW, I am interested in studying this question if anyone wants to partner up. I’m not entirely sure how to approach it: given the post, I suspect the result would be a null, and a null result is only interesting if we have access to one of the algorithms he is talking about, along with data at the scale such an algorithm would typically operate on.
My general approach would be an online experiment: expose one group of people to a recommender system and leave a control group unexposed, then place both groups in the same environment and test whether the exposed group’s behavior has become more predictable. (This does not account for the issue of information, though.)
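To make the comparison concrete, here’s a toy simulation of that design (all numbers and the “peaked preferences” mechanism are my assumptions, not anything from the post): exposed users are modeled as having more concentrated item preferences, and “predictability” is crudely operationalized as the accuracy of the best constant guess about a user’s next choice.

```python
import random

random.seed(42)

N_USERS, N_ACTIONS, N_ITEMS = 200, 100, 5


def simulate_user(peak_prob):
    # A user repeatedly picks one of N_ITEMS. `peak_prob` is the chance
    # they pick their favourite item; the (assumed) effect of recommender
    # exposure is modeled as raising this concentration.
    fav = random.randrange(N_ITEMS)
    return [fav if random.random() < peak_prob else random.randrange(N_ITEMS)
            for _ in range(N_ACTIONS)]


def predictability(choices):
    # Accuracy of the best constant predictor: the empirical frequency of
    # the modal choice. A crude stand-in for "how predictable is this user".
    return max(choices.count(i) for i in set(choices)) / len(choices)


exposed = [predictability(simulate_user(0.8)) for _ in range(N_USERS)]
control = [predictability(simulate_user(0.2)) for _ in range(N_USERS)]

mean_exposed = sum(exposed) / N_USERS
mean_control = sum(control) / N_USERS
print(f"exposed: {mean_exposed:.2f}  control: {mean_control:.2f}")
```

In a real experiment the predictor would be the algorithm itself (or a held-out model trained per group), but the analysis shape is the same: compare group-level prediction accuracy after exposure.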
I think that experiment wouldn’t prove anything about the algorithm’s “intentions,” which seem to be the interesting part of the claim. One experiment that might (I have no idea if this is practical) is giving the algorithm the chance to recommend one of two pieces of content: (a) content with a high likelihood of being clicked on, or (b) content with a lower click likelihood that makes the people who do click on it more polarized. Not sure if a natural example of such a piece of content exists.
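The logic of that test can be written out as a two-step expected-value comparison (the numbers below are made up purely for illustration): a myopic click-through maximizer should pick (a), while an algorithm optimizing engagement over time could rationally pick (b) if polarized clickers generate enough future clicks. Observing the algorithm choose (b) would be the behavioral signature the experiment is after.

```python
# Illustrative numbers only -- none of these come from real data.
ctr_a, ctr_b = 0.10, 0.06      # immediate click probability of each item
future_a, future_b = 1.0, 2.5  # assumed future clicks from a user who clicks now
                               # (higher for b: polarized users engage more)

# Expected total clicks = immediate click prob * (this click + future clicks).
value_a = ctr_a * (1 + future_a)
value_b = ctr_b * (1 + future_b)

# A pure CTR-maximizer prefers a (higher ctr); a total-engagement
# optimizer prefers b here (higher value).
print(f"value_a={value_a:.2f}  value_b={value_b:.2f}")
```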