Correct me if I am wrong, but it seems like the notion of existence you are using here is either sharp or would be sharp if you had sufficient time to reflect on and refine a formalization (e.g. its formalization might be of the form “the physical systems in some set A count as existing humans, and everything else doesn’t”). For example, your valence/non-existence diagram has a sharp line.
I’m asking this because I think the main reason why I don’t hold person-affecting views is that sharpness doesn’t seem like a necessary assumption about my morality,* and it is more complex than at least one alternative: viewing all matter as potentially mattering, because, for example, there is always the potential to shape that matter into a human living a good life. Viewing personhood like this also seems more accurate/practical to me (e.g. it feels like being more of a physicalist about personhood in this way would let you make more progress on questions related to digital sentience, though this claim is unsubstantiated).
Do you have any thoughts on how to weigh between “choose the morality that uses simpler-seeming assumptions” vs. “choose the morality that matches my immediate feelings about what it means to improve the world”? I’m unsure, though I currently lean toward the former.
*maybe this is where you disagree since not having sharpness means that reproducing in the future when life isn’t sucky can be good
I think I had something basically sharp in mind when I wrote this, but I don’t think sharpness is essential. Rocks have no interests at all, or only extremely weak ones that are very easy to outweigh. From the point of view of the rock as a rock, becoming a happy human doesn’t necessarily seem better at all, even assuming a symmetric theory of welfare. This could depend on exactly how you think about “interests”, but I wouldn’t use any account on which rocks have non-negligible interests as long as they remain rocks, since they care at most negligibly about anything.
We can apply Comparative Interests:
Comparative Interests: An outcome X is in one way worse than an outcome Y if, conditional on X, the individuals in X would have a stronger overall interest in outcome Y than in X, and, conditional on Y, the individuals in Y would not have an overall interest in X that is at least as strong as their overall interest in Y.
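To make the structure explicit, here is a rough formalization (my own sketch; the principle itself doesn’t specify how interest strengths are measured or aggregated across individuals, so the function below is an assumption about how to read “overall interest”). Let $I_Z(W)$ be the overall strength of interest, conditional on outcome $Z$ obtaining, that the individuals in $Z$ have in outcome $W$. Then, since the principle only gives a sufficient condition:

$$\Big(I_X(Y) > I_X(X)\Big) \;\wedge\; \neg\Big(I_Y(X) \ge I_Y(Y)\Big) \;\Rightarrow\; X \text{ is in one way worse than } Y.$$

On this reading, the rock case below turns on the first condition: conditional on X, the rock’s interests are at most negligible, so $I_X(Y)$ can exceed $I_X(X)$ by at most a negligible amount.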
Suppose in X, the rock remains a rock, and in Y, the rock becomes a happy human. In X, the rock cares at most negligibly about anything, so I would say Y is at most negligibly better for the rock than X (although I’m not sure it is better at all), and this difference is small enough to be so easily outweighed that it can be ignored in practice. So, whether or not X is worse than Y basically doesn’t depend on any interests the rock might have, because of how weak they’d be.
Do you have any thoughts on how to weigh between “choose the morality that uses simpler-seeming assumptions” vs. “choose the morality that matches my immediate feelings about what it means to improve the world”? I’m unsure, though I currently lean toward the former.
I don’t think there’s really any good, non-arbitrary, principled, and objective way to weigh these things. I guess I decide case by case based on whatever just feels more right to me (although I’m often uncertain about that), and I try to at least avoid inconsistency. You could try to approximate something like a Kolmogorov complexity prior, but even then, you’d have a bunch of arbitrary choices to make.
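To illustrate the kind of arbitrariness I have in mind, here is a minimal, purely illustrative sketch (my own; not a principled method) that uses compressed length as a crude stand-in for description complexity and turns it into normalized weights. The compressor, the text encoding, the 2^(-length) weighting, and even how each view gets summarized as a string are all arbitrary choices, and the one-line summaries below are hypothetical placeholders:

```python
import zlib

def crude_complexity_weights(descriptions):
    """Crude stand-in for a Kolmogorov-complexity-style prior:
    weight each description by 2^(-compressed length), then normalize.
    Every step involves an arbitrary choice (compressor, encoding,
    how a view is written down as a string), which is the point."""
    lengths = [len(zlib.compress(d.encode("utf-8"))) for d in descriptions]
    weights = [2.0 ** (-length) for length in lengths]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical one-line summaries of two views (illustrative only):
views = [
    "all matter potentially matters",
    "only a sharply delineated set of existing persons matters",
]
print(crude_complexity_weights(views))
```

Change any of those choices and the weights change, which is part of why I don’t think this settles anything.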