I think I had something basically sharp in mind when I wrote this, but I don’t think sharpness is essential. Rocks have no interests at all, or only extremely weak ones that are very easy to outweigh. From the point of view of the rock as a rock, becoming a happy human doesn’t necessarily seem better at all, even assuming a symmetric theory of welfare. It could depend on exactly how you think about “interests”, but I wouldn’t use any account on which rocks have non-negligible interests as long as they remain rocks, since they care at most negligibly about anything.
We can apply Comparative Interests:
Comparative Interests: An outcome X is in one way worse than an outcome Y if, conditional on X, the individuals in X would have a stronger overall interest in outcome Y than in X, and, conditional on Y, the individuals in Y would not have an equally strong or stronger overall interest in X than in Y.
Suppose in X, the rock remains a rock, and in Y, the rock becomes a happy human. In X, the rock cares at most negligibly about anything, so I would say Y is at most negligibly better for the rock than X (although I’m not sure it is better at all), and this difference is small enough to be so easily outweighed that it can be ignored in practice. So, whether or not X is worse than Y basically doesn’t depend on any interests the rock might have, because of how weak they’d be.
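To make the structure of that argument a bit more explicit, here is one way the principle could be formalized (the notation is mine, not something stated in the principle itself): write I_Z(W) for the overall interest that the individuals who exist in outcome Z would have in outcome W, conditional on Z obtaining. Then Comparative Interests says, roughly:

\[
\big(I_X(Y) > I_X(X)\big) \;\wedge\; \neg\big(I_Y(X) \geq I_Y(Y)\big) \;\Longrightarrow\; X \text{ is in one way worse than } Y.
\]

In the rock case, I_X(Y) exceeds I_X(X) by at most a negligible amount (if at all), which is why any resulting respect in which X is worse than Y is weak enough to ignore in practice.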
Do you have any thoughts on how to weigh “choose the morality that uses simpler-seeming assumptions” against “choose the morality that matches my immediate feelings about what it means to improve the world”? I’m unsure, though I currently lean toward the former.
I don’t think there’s really any good, non-arbitrary, principled, and objective way to weigh these things. I guess I decide case by case based on whatever just feels more right to me (although I’m often uncertain about that), and I try to at least avoid inconsistency. You could try to approximate something like a Kolmogorov complexity prior, but even then, you’d have a bunch of arbitrary choices to make.
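To illustrate that last point (this is just the standard sketch of a simplicity prior, not a worked-out proposal): you could weight a candidate moral theory T by something like

\[
P(T) \;\propto\; 2^{-K(T)},
\]

where K(T) is the length in bits of the shortest description of T in some fixed description language. But K is uncomputable, so in practice you’d substitute the shortest encoding you can actually find, and both the description language and the encoding scheme are themselves arbitrary choices.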