To minimize human-caused suffering as much as possible, it seems that farm animals should be allowed to live freely until they die naturally and shouldn’t be modified in any way. A quick Google search told me that cows have lifespans of 15-20 years and chickens have lifespans of 3-7 years. Since the world produces enough food to feed the global population several times over (even though hundreds of millions of people go without food), it might be that society and individual habits can be restructured (for example, by devoting less of our food supply to feeding farm animals and by individuals not wasting the food they buy) so that we could farm in the way I just described and still eat roughly as much as we currently do.
Analgesics are better than nothing. However, they don’t erase the trauma that the animals experience from being modified. I don’t know how the modifications affect the animals in the long run, but I wonder if they cause chronic struggles similar to those experienced by humans who are missing a limb, have back problems, and so on. Also, the animals cannot communicate to us any secondary problems that result from their body modifications. It seems that addressing the pain caused by our modifications of them could bring up all the issues I just raised, plus many more that could have been avoided altogether by not modifying them in the first place.
Seemingly Useful Viewpoints
The expert DiResta said (in the YouTube video of interviews with Twitter and Facebook employees that Misha posted) that overcoming the division created by online bad actors will require us to address our own natures, because online bad actors will never be eliminated, only managed. This struck me as important, and it is applicable to the problems that recommender algorithms may exacerbate. If I remember correctly, in the audiobook The Alignment Problem, Brian Christian’s way of looking at it was that the biases AI systems spit out can hopefully cause us to look introspectively at ourselves and at how we have committed so many injustices throughout history.
Neil deGrasse Tyson once remarked that a recommender algorithm can prevent him from exploring content that he would have explored naturally. His remark hints at a dangerous slope that recommender algorithms could lead us down.
The Metrics for Recommender Algorithms
Somewhat along the lines of what Neil said, a recommender algorithm might deprive us of some important internal quality while building up empty, superficial ones. The recommender algorithms that I am most familiar with (like the one on Netflix and those behind feeds on Google and Twitter) are based on maximizing the time our eyes spend on the screen and our clicks. While our eyes are important, neuroscience tells us that sight is not a perfect representation of reality, and even ancient philosophers took what they saw with a grain of salt. As for our clicks, to me they seem mostly associated with our curiosity to explore, to see what is in the next article, video, and so on.
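To make that point concrete, here is a minimal sketch of what an engagement-driven ranker might look like. The field names and weights are my own assumptions for illustration, not any real platform’s formula; the point is only that the ranker "sees" eyes-on-screen and clicks, not whether the content was actually good for the viewer.

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_watch_seconds: float  # model's guess at how long our eyes stay on it
    predicted_click_prob: float     # model's guess at whether we will click it

def engagement_score(item: Item) -> float:
    # Hypothetical weighting of watch time and click probability; the only
    # signals in this objective are attention and curiosity-driven clicks.
    return 0.7 * item.predicted_watch_seconds + 0.3 * (item.predicted_click_prob * 100)

def rank_feed(items: list[Item]) -> list[Item]:
    # The feed is ordered purely by predicted engagement.
    return sorted(items, key=engagement_score, reverse=True)
```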
Pornography
Ted Bundy said that pornography made him become who he was. I have no opinion on whether this is true. However, if it is true, it means that a recommender algorithm (when applied to pornography) could make a person become a serial killer faster than they would have otherwise, or, by exploiting their vulnerability, open the door for someone who is slightly at risk of becoming one but has self-control to become one at all.
Suggestion:
A recommender algorithm could shut off periodically, with the person notified when it is off and when it is on. When it is off, content could simply appear in reverse-chronological order or by some other neutral criterion. This way a person can compare their quality of life and content consumption with and without the recommender algorithm and decide whether the algorithm has any benefit. It is possible that over time the person will come to view the algorithm as a lens into their own bad habits or into the dark side of human history. Having the algorithm on at some times and off at others might also reduce its capacity to become insidious in the person’s life and make their interaction with it more conscious; the algorithm may have some dark aspects and results, but the person can remain aware of those results and perhaps see them as a reflection of humanity’s own faults.
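As a rough sketch of what this could look like (the weekly schedule, field names, and ranking functions here are assumptions I am making for illustration, not an existing platform feature), a feed could alternate between the personalized ranker and a plain recency sort, telling the user which mode is currently active:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    title: str
    posted_at: datetime
    engagement_score: float  # whatever the personalized model predicts

def algorithm_is_on(now: datetime, period_days: int = 7) -> bool:
    # Hypothetical schedule: alternate between "algorithm on" and "algorithm off"
    # every period_days days.
    return (now.toordinal() // period_days) % 2 == 0

def build_feed(posts: list[Post], now: datetime) -> tuple[str, list[Post]]:
    if algorithm_is_on(now):
        banner = "Recommender algorithm is ON (ranked by predicted engagement)."
        ranked = sorted(posts, key=lambda p: p.engagement_score, reverse=True)
    else:
        banner = "Recommender algorithm is OFF (showing most recent first)."
        ranked = sorted(posts, key=lambda p: p.posted_at, reverse=True)
    # The banner makes the current mode visible so the interaction stays conscious.
    return banner, ranked
```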