By insisting on your request, you guarantee that an anti-realist version of you (a Bob without this strict commitment to moral realism) would be horrified by the outcome
I feel unsure why this would be so, at least if we're using the terms "guarantee" and "horrified" in the same way. It makes sense to me (given my high credence that moral realism is false) that insisting on morality being objective would be likely to result in an outcome that an anti-realist version of Bob would be somewhere between unhappy with and horrified by. But I'm not sure how to think about how likely it'd be that the anti-realist Bob would be horrified, given that I'm not sure what it'd look like or result in if the AI forced its reasoning to fit with the idea of an objective morality.
Is there a reason to believe there's a greater than 99.9% chance that, if the AI forces its reasoning to fit with the idea of objective morality, we'd get a horrifying outcome (from an anti-realist Bob's perspective)? (As opposed to a "sort-of bad" or "ok but not optimal" outcome.)