I think I had something basically sharp in mind when I wrote this, but I don't think sharpness is essential. Rocks have no interests at all, or only extremely weak ones that are very easy to outweigh. From the point of view of the rock as a rock, becoming a happy human doesn't necessarily seem better at all, even assuming a symmetric theory of welfare. This could depend on exactly how you think about "interests", but I wouldn't use any account on which rocks have non-negligible interests as long as they remain rocks, since they care at most negligibly about anything.
We can apply Comparative Interests:
Comparative Interests: An outcome X is in one way worse than an outcome Y if, conditional on X, the individuals in X would have a stronger overall interest in outcome Y than in X, and, conditional on Y, the individuals in Y would not have an equally strong or stronger overall interest in X than in Y.
Suppose in X, the rock remains a rock, and in Y, the rock becomes a happy human. In X, the rock cares at most negligibly about anything, so I would say Y is at most negligibly better for the rock than X (although I'm not sure it is better at all), and this difference is small enough to be so easily outweighed that it can be ignored in practice. So, whether or not X is worse than Y basically doesn't depend on any interests the rock might have, because of how weak they'd be.
Do you have any thoughts on how to weigh between "choose the morality that uses simpler-seeming assumptions" vs. "choose the morality that matches my immediate feelings about what it means to improve the world"? I'm unsure, though I currently lean toward the former.
I don't think there's really any good, non-arbitrary, principled and objective way to weigh these things. I guess I decide case by case based on whatever just feels more right to me (although I'm often uncertain about that), and I try to at least avoid inconsistency. You could try to approximate something like a Kolmogorov complexity prior, but even then, you'd have a bunch of arbitrary choices to make.