Seemingly Useful Viewpoints
The expert DiResta said (in the YouTube video of interviews with Twitter and Facebook employees that Misha posted) that overcoming the division created by online bad actors will require us to address our own natures, because online bad actors will never be eliminated, merely managed. This struck me as important, and it applies to the problems that recommender algorithms may exacerbate. If I remember correctly, in the audiobook The Alignment Problem, Brian Christian’s view was that the biases AI systems spit out can hopefully prompt us to look introspectively at ourselves and at how we have committed so many injustices throughout history.
Neil deGrasse Tyson once remarked that a recommender algorithm can prevent him from exploring content that he would have explored naturally. His remark seems to point toward a dangerous slope that recommender algorithms could lead us down.
The Metrics for Recommender Algorithms
Somewhat along the lines of what Neil said, a recommender algorithm might deprive us of some important internal quality while building out empty, superficial ones. The recommender algorithms I am most familiar with (like the one on Netflix and those behind the Google and Twitter feeds) are built around maximizing our time on screen and our clicks. While our eyes are important, neuroscience tells us that sight is not a perfect representation of reality, and even ancient philosophers took what they saw with a grain of salt. As for our clicks, they seem to me mostly tied to our curiosity to explore, to see what is in the next article, video, and so on.
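To make the point about these metrics concrete, here is a minimal sketch of the kind of objective such feeds are often described as optimizing: expected watch time per impression, i.e. click probability times watch time if clicked. The item names and numbers are hypothetical, not taken from any real system.

```python
def engagement_score(predicted_ctr: float, predicted_watch_seconds: float) -> float:
    """Toy engagement objective: expected watch seconds per impression,
    i.e. the probability of a click times the watch time if clicked.
    In a real system both inputs would come from trained models."""
    return predicted_ctr * predicted_watch_seconds

# Hypothetical candidates: (title, predicted CTR, predicted watch seconds)
candidates = [
    ("calm documentary", 0.05, 1200),  # 0.05 * 1200 = 60 expected seconds
    ("outrage clip", 0.30, 240),       # 0.30 * 240  = 72 expected seconds
]

# Ranking purely by this metric puts the provocative clip first,
# even though the documentary holds a viewer far longer once clicked.
ranked = sorted(candidates, key=lambda c: engagement_score(c[1], c[2]), reverse=True)
print([title for title, *_ in ranked])  # → ['outrage clip', 'calm documentary']
```

Even this toy version shows how a metric that only counts eyes and clicks can reward whatever is most immediately provocative.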
Pornography
Ted Bundy said that pornography made him become who he was. I have no opinion on whether this is true. If it is, however, then a recommender algorithm applied to pornography could push a person toward becoming a serial killer faster than they would have otherwise, or, by exploiting the vulnerability of those slightly prone to it but still exercising self-control, open the door for them to become one at all.
Suggestion:
A recommender algorithm could shut off periodically, and the person could be notified whenever it switches off or on. When it is off, content could appear in reverse-chronological order, for example. This way a person can see the difference in their quality of life and content consumption with and without the recommender algorithm and decide whether the algorithm has any benefit. Over time, the person might come to view the algorithm as a lens into their own bad habits, or into the dark side of human history. Having the algorithm on at some times and off at others may reduce its capacity to become insidious in the person’s life and make the interaction a more conscious one on the person’s part; the algorithm may have some dark aspects and results, but the person can remain aware of them and perhaps see them as a reflection of humanity’s own faults.
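As a rough sketch of this suggestion (the cycle length, the recency fallback, and all names are my own assumptions, not a worked-out design), the feed could alternate on a fixed schedule between personalized ranking and plain recency, reporting its current mode so the interface can notify the person:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Item:
    title: str
    published: datetime
    predicted_engagement: float  # hypothetical score from a recommender model

def rank_feed(items, now, cycle=timedelta(days=14), on_fraction=0.5):
    """Alternate the recommender on a fixed cycle.

    For the first `on_fraction` of each cycle the recommender is "on" and
    items are ranked by predicted engagement; for the remainder it is "off"
    and items appear in reverse-chronological order. The returned mode label
    lets the interface tell the person which state they are in.
    """
    phase = (now.timestamp() % cycle.total_seconds()) / cycle.total_seconds()
    if phase < on_fraction:
        mode = "recommender on"
        ranked = sorted(items, key=lambda i: i.predicted_engagement, reverse=True)
    else:
        mode = "recommender off"
        ranked = sorted(items, key=lambda i: i.published, reverse=True)
    return mode, ranked

items = [
    Item("old but gripping", datetime(2024, 1, 1, tzinfo=timezone.utc), 0.9),
    Item("new and mundane", datetime(2024, 6, 1, tzinfo=timezone.utc), 0.2),
]
# The epoch falls at the start of a cycle, so the recommender is on here:
mode, ranked = rank_feed(items, datetime.fromtimestamp(0, tz=timezone.utc))
print(mode, ranked[0].title)  # → recommender on old but gripping
```

The two orderings deliberately disagree on which item comes first, so a person living through both halves of the cycle would actually feel the difference the suggestion is after.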