It seems to me that when most EAs talk about an expanding circle, what we mean is either 1) an expanding circle of moral concern toward all sentient beings, or 2) equal consideration of interests for all entities (with the background understanding that only sentient beings have interests).
Given this definition of what it means to expand the moral circle, I don’t think Gwern’s talk of a narrowing moral circle is relevant. For the entities Gwern describes us as having lost moral concern for, we did not lose that concern for reasons having to do with their sentience. Even when these entities are plausibly sentient (as with sacred animals), people’s moral concern for them seems to be based primarily on other factors. They therefore should not count as data points in the trend of how our moral circle is or is not expanding.
Also, quite plausibly, a big reason we have lost concern for these entities is an increasingly accurate scientific and metaphysical view of the world, which leads us to no longer see them as special, as having interests, or even as existing at all.
Many of the processes we pejoratively call “cognitive biases” actually get things right: either in the sense of being useful heuristics for everyday circumstances, or in the sense of being generally correct (i.e., the prototypical grandma being right and the PhD scientist being wrong).
For example, hyperbolic discounting is completely rational in the face of uncertain risks. This is clearly the case when planning for the far future: while one might care about future beings in an abstract sense, it doesn’t make sense to include their well-being in one’s decision making since it has been discounted to approximately zero. As an extreme example: I fully agree that humans outside my light cone have the same moral worth as those inside it, but since I can never affect those outside my light cone (assuming they exist, which is not something we will ever know), I don’t factor them into moral decisions.
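(To sketch the standard version of this argument, under the illustrative assumption of a constant but unknown hazard rate: if the hazard rate $\lambda$ is drawn from an exponential prior with mean $r$, then the expected probability of a payoff at time $t$ still mattering is

$$\mathbb{E}\left[e^{-\lambda t}\right] = \int_0^\infty \frac{1}{r} e^{-\lambda / r}\, e^{-\lambda t}\, d\lambda = \frac{1}{1 + r t},$$

which is the hyperbolic discount curve rather than the exponential $e^{-rt}$ a known hazard rate would give. The exponential prior is just a convenient choice; the qualitative point, a declining effective discount rate, does not depend much on it.)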
I don’t think people fail to care about digital minds just because they are digital. Watching the first episode of Black Mirror, it’s hard not to feel sympathy for the simulated people. The show would probably be very unsuccessful if the audience had no emotional investment in what happens to them.
Some objections to target when trying to increase moral concern for digital minds might be:
- they don’t exist, and feel much more hypothetical than “future generations”
- it feels unclear what could be done to help them (them specifically, as opposed to helping future generations in general)
- it feels hard to determine whether a digital mind (that is not just a human or animal consciousness upload) is sentient, and what it would experience as positive or negative valence
As an overall trend, people act in their self-interest. At best people act in their long-term self-interest. So if you want to convince people of something, appeal to their self-interest. This may need to be an indirect appeal.
On the subject of recognizing the moral worth of animals, Subhuman: The Moral Psychology of Human Attitudes to Animals by TJ Kasperbauer offers a good summary of the issues. In particular, he argues that the psychological processes humans frequently use to distance themselves from animals are different from the ones they apply to other humans, though there are cases of overlap too.
Fwiw, I didn’t find anything particularly actionable in the book. But I do think he argues well that different approaches to motivating people to care morally about animals (namely, welfarism and abolitionism) are both premised on moral-psychological claims that we don’t have much empirical evidence to adjudicate.