I will admit that this isn’t as much of a concern as I think it is because of my admittedly moral anti-realist viewpoint, where the questions aren’t “Is there one true morality?” or “What moralities are harmful?” (except from particular perspectives).
The better questions are, “Why does moral intuition emerge in our brains?” and “How do you ensure your values get encoded into the future?”
Yes, this is a fair point. I think that P6 is probably quite easily rejected from a moral anti-realist stance. I do, however, think that the rest of the argument probably still runs, given the claim is about potential X-Risk, which can probably be agreed on as a bad irrespective of one’s metaethics.