In the link above, Brian Tomasik points out some good reasons for rejecting this argument. But even granting that the argument holds, moral realism seems so weird and incomprehensible that I see no reason to prefer any possible moral realism (e.g. utilitarianism, libertarianism) over any other, so each possible moral realism is canceled out by an equally likely opposite realism.
However, the meta-ethical view that is required is realist only in a minimal sense: as long as one can make sense of a notion of moral propositions' being true or false, and of one having better or worse evidence with respect to those propositions, then one can make sense of its being important to gain new moral information. And very many metaethical views can make sense of that. Sophisticated subjectivist moral views certainly can: it's certainly non-obvious, for example, what one would desire oneself to desire if one were fully rational; and one can certainly improve one's evidence on the question of what such desires would look like. And the sorts of non-cognitivist views that are defended in the contemporary literature want to capture the idea that one's moral views can be correct or incorrect, and that one can have greater or lesser credence in different moral views.
It’s true that the likelihood that one places on changing one’s view might vary depending on the meta-ethical view one endorses. If one is robustly realist, then the idea that common sense has got things radically wrong generally becomes more plausible than if one is some flavour of anti-realist. But it seems to me that anti-realist views actually support my argument rather than detract from it. If one is a subjectivist, one should be optimistic about the likelihood of finding the moral truth — as finding the moral truth is ultimately just about working out what one values. The subjectivist should therefore think it more likely that she will change her view in light of further study and reflection than the robust realist, and that makes the value of information higher.
Moreover, even if one endorsed a meta-ethical view that is inconsistent with the idea that there's value in gaining more moral information, one should not be certain in that meta-ethical view. And it's high-stakes whether that view is true — if there are moral facts out there but one thinks there aren't, that's a big deal! Even for this sort of anti-realist, then, there's value in moral information, because there's value in finding out for certain whether that meta-ethical view is correct.
This post raises a bunch of questions for me:
If you were in a simulation or a dream, would you hold uncertainty about its behaviour, within a framework of subjectivity?
Do you believe in changing the rules that you use to make moral decisions as you learn things?
Do you think that these probabilities are nonzero and that they cancel each other out?
How do you respond to Will's thesis on this topic?