I’m commenting late, but I don’t think the better futures perspective gets us back to intuitive/normie ethical views, for two reasons. First, what counts as a better future involves far more variation in values than preventing catastrophic outcomes does (I’m making the empirical claim that most human values converge more on what people want to avoid than on what they want to seek out or view as positive). Second, to a large extent, AGI/ASI in the medium/long term is very totalizing in its effects, meaning that basically the only thing that matters is ending up with a friendly ASI. On that view, promoting peace/democracy doesn’t matter, while good governance can actually matter (though it would have to be far more specific than what Will MacAskill defines as good governance).