Great to learn about your paper, Fai, I didn't know about it until now, and this topic is quite interesting. I think when longtermism talks about the far future, it's usually "of humanity" that follows, and this has always worried me, because I wasn't sure whether this is speciesist or whether there is some silent assumption that we should also care about all sentient beings. I don't think there were animal-focused considerations in Toby Ord's book (I might be wrong here) or similar publications? I would gladly read your paper, then. I quickly jumped to its conclusion, and it largely confirms my intuitions regarding AI (but also long-term future work in general):
“Up to now, the AI fairness community has largely disregarded this particular dimension of discrimination. Even more so, the field of AI ethics hitherto has had an anthropocentric tailoring. Hence, despite the longstanding discourse about AI fairness, comprising lots of papers critically scrutinizing machine biases regarding race, gender, political orientation, religion, etc., this is the first paper to describe speciesist biases in various common-place AI applications like image recognition, language models, or recommender systems. Accordingly, we follow the calls of another large corpus of literature, this time from animal ethics, pointing from different angles at the ethical necessity of taking animals directly into consideration [48,155–158]...”