By insisting on your request, you guarantee that an anti-realist version of you—a Bob without this strict commitment to moral realism—would be horrified with the outcome.
I feel unsure why this would be so, at least if we’re using the terms “guarantee” and “horrified” in the same way. It makes sense to me (given my high credence that moral realism is false) that insisting on morality being objective would be likely to result in an outcome that an anti-realist version of Bob would be somewhere between unhappy with and horrified by. But I’m not sure how to think about how likely it’d be that the anti-realist Bob would be horrified, given that I’m not sure what it’d look like or result in if the AI forced its reasoning to fit with the idea of an objective morality.
Is there a reason to believe there’s a greater than 99.9% chance that, if the AI forces its reasoning to fit with the idea of objective morality, we’d get a horrifying outcome (from an anti-realist Bob’s perspective)? (As opposed to a “sort-of bad” or “ok but not optimal” outcome.)