I like this view. I think it agrees with my intuition that morality is just a function of whatever a society has decided it cares about. That can make practical sense in the case of ‘murder’, and perhaps not so much in the case of ‘hands on table’. Of course, one might then wonder whether one ought to care about things making practical sense, which in turn depends on whether that’s something we care about, or on whatever underlying (meta-)goal applies; turtles all the way down.
I like that this perspective takes some of the mysticism out of morality by explicitly naming the associated goals/desires. Also, generalising obligations to things you should do according to at least one of every self-consistent moral framework is pretty neat. The latter is obviously mostly playing with definitions, but it makes sense as far as definitions go (whatever that means).
I wonder if there exists some set of obligations such that caring about/following them maximises CEV (coherent extrapolated volition). Assuming we care about achieving CEV (by definition we might?), this seems like a strong candidate for a moral framework everyone could agree on, if such a thing is possible at all.
Possible problems: 1) our current volition is not our extrapolated volition, so we may not (yet) want to care about what we would want to care about; 2) the extrapolated volitions of different sentients may not converge (in which case maybe look for the best approximation?).
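To put that a bit more concretely, here is one hypothetical way to phrase it (my own ad-hoc notation, nothing standard): write $S$ for the set of sentients, $u_i^{\mathrm{EV}}$ for sentient $i$’s extrapolated-volition preferences over states of affairs, and $O$ for a candidate set of obligations. The hope is then that some

$$O^* \in \arg\max_{O} \sum_{i \in S} u_i^{\mathrm{EV}}(O)$$

exists. Problem 2 above is the worry that the $u_i^{\mathrm{EV}}$ pull in different directions, so no single $O^*$ satisfies everyone; the sum here is one arbitrary aggregation rule, and ‘best approximation’ would mean picking whichever aggregation seems least objectionable.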
I’m not that well read on these issues, so please do tell me if I’m clearly missing something or making an obvious mistake.