It seems like we can predictably make moral progress by reflecting, i.e. coming to answers and arguments that would be persuasive to our former selves.
I think I'm more likely to update towards the positions of smart people who've thought long and hard about a topic than the converse (and this is more true the smarter they are, the more they're already aware of all the considerations I know, and the more they've thought about it).
If I imagine handing people the keys to the universe, I mostly want to know "will they put serious effort into working out what the right thing is and then doing it?" rather than their current moral views.
I guess I feel this shows that there are some shared object-level moral assumptions and some shared methodology for making progress among humans.
But it doesn't show that the overlap is strong enough for convergence.
And you could explain the overlap by conformism, in which case I wouldn't expect to agree with societies I haven't interacted with.