Even non-theists should act as if theism is true
In this post, I spell out how I think we ought to approach decision making under uncertainty and argue that the most plausible conclusion is that we ought to act as if theism is true. This seems relevant to the EA community, since if it is right it could affect our cause prioritisation decisions.
Normative realism is the view that there are reasons for choosing to carry out at least some actions. If normative realism is false, then normative anti-realism is true: there are no reasons for or against taking any action. On anti-realism, all actions are equally choice-worthy, for each has no reasons for it and no reasons against it.
Suppose Tina is working out whether she has more reason to go on holiday or to donate the money the holiday would cost to an effective charity.
Tina knows that if normative anti-realism is true then there is no fact of the matter about which she has more reason to do, for there are no reasons either way. It seems to make sense for Tina to ignore the part of her probability space taken up by worlds in which normative realism is false and instead focus on the part taken up by worlds in which normative realism is true. After all, in worlds with normative anti-realism there isn't any reason to act either way, so it would be surprising if the possibility of being in one of these worlds were relevant to her decision.
It also seems appropriate for Tina to ignore the part of her probability space taken up by worlds in which she would not have epistemic access to any potential normative facts. Suppose that World 26 is a world in which normative realism holds but agents have no access to the reasons for action which exist. Considering World 26 is going to provide no guidance to Tina on whether to go on holiday or donate the money. As such, it seems right for Tina to discount such worlds from her decision procedure.
If the above is true, then Tina should only consider worlds in which normative realism is true and there is a plausible mechanism by which she could come to know the normative truths.
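To make this filtering step concrete, here is a minimal Python sketch; the worlds, credences and flags are invented for illustration rather than drawn from the argument itself:

```python
# Each world has a credence, plus flags for whether normative realism
# holds there and whether agents have epistemic access to the reasons.
worlds = [
    {"name": "anti-realist", "p": 0.30, "realism": False, "access": False},
    {"name": "realist, no access (World 26)", "p": 0.25, "realism": True, "access": False},
    {"name": "realist, with access", "p": 0.45, "realism": True, "access": True},
]

# Keep only worlds satisfying both criteria; the rest cannot guide action.
relevant = [w for w in worlds if w["realism"] and w["access"]]

# Renormalise so the remaining credences sum to 1.
total = sum(w["p"] for w in relevant)
for w in relevant:
    w["p"] = w["p"] / total
    print(w["name"], round(w["p"], 2))  # all deliberative weight lands here
```

The renormalisation is just the standard move when conditioning on part of one's probability space: the ratios between the surviving worlds' credences are preserved.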
It is difficult to see how unguided evolution would give humans like Tina epistemic access to normative reasons. This seems particularly true of one specific variety of reasons: moral reasons. There are no obvious structural connections between knowing correct moral facts and evolutionary benefit. (Note that I am assuming that non-objectivist moral theories such as subjectivism are not plausible. See the relevant section of Lukas Gloor's post here for more on the objectivist/non-objectivist distinction.)
To see this, imagine that moral reasons were all centred around maximising the number of paperclips in the universe. It's not clear that there would be any evolutionary benefit to knowing that morality was shaped in this way. The picture for other potential types of reasons, such as prudential reasons, is more complicated; see the appendix for more. The remainder of this analysis assumes that only moral reasons exist.
It therefore seems unlikely that an unguided evolutionary process would give humans access to moral facts. This suggests that most of the worlds Tina should pay attention to—worlds with normative realism and human access to moral facts—are worlds in which there is some sort of directing power over the emergence of human agents leading humans to have reliable moral beliefs.
There do not seem to be many candidate mechanisms that could guide evolution to deliver humans with reliable beliefs about moral reasons for action. Two kinds stand out.
The first is that there is some sort of built-in teleology to the universe which results in certain ends being brought about. John Leslie's axiarchism is one example of this: what exists, exists because it is good. This might plausibly bring about humans with correct moral beliefs, as knowing correct moral beliefs might itself be intrinsically good. However, many, myself included, will find this sort of metaphysics quite unlikely. Separately, the possibility of this theory is unlikely to count against my argument, since it is plausibly also a metaphysics in which God exists: God's existence is itself typically considered a good, and so would also be brought about.
The other apparent option is that evolution was guided by some sort of designer. The most likely form of this directing power stems from the existence of God or Gods. If an omniscient God exists, then God would know all moral facts and, had he so desired it, could easily have engineered things so that humans had reliable moral beliefs.
Another design option is that we were brought about by a simulator; simulators would also have the power to engineer the moral beliefs of humans. However, it's not clear how these simulators would have reliable access to the relevant moral facts themselves in order to correctly program them into us. The question we are asking, of how we could trust our moral views given unguided evolution, could equally be asked of our simulators, and of their simulators in turn if the chain of simulation continues. As a result, it's not clear that considering worlds in which we are simulated is going to be decision-relevant by the second of our two criteria, unless our simulators had their moral beliefs reliably programmed by God.
Given this, the only worlds in which humans end up with reliable moral beliefs seem to be worlds in which God exists. As such, according to our criteria above, when deciding how we ought to act we need only consider possible worlds in which God exists. Therefore, when Tina is choosing between the two options she ought to ask herself which option she would have most reason to choose if she existed in a theistic world.
To complete her analysis of what action to take, she should consider, of the possible theisms: (i) how likely each is, (ii) how likely each is to co-exist with normative realism, (iii) how likely it is that the God(s) of this theism would give her reliable access to moral facts, and (iv) how choice-worthy the two actions are on each theistic possibility.
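As a toy illustration of how factors (i)–(iv) might be combined: the theisms, probabilities and choice-worthiness scores below are invented, and multiplying the factors together is just one simple expected choice-worthiness model, not something the argument itself mandates.

```python
theisms = [
    {"name": "theism A", "p_theism": 0.5, "p_realism": 0.9, "p_access": 0.8,
     "cw": {"holiday": 0.2, "donate": 0.9}},
    {"name": "theism B", "p_theism": 0.3, "p_realism": 0.7, "p_access": 0.5,
     "cw": {"holiday": 0.4, "donate": 0.6}},
]

def expected_choiceworthiness(action):
    # Weight each theism by (i) x (ii) x (iii), then scale by (iv).
    return sum(t["p_theism"] * t["p_realism"] * t["p_access"] * t["cw"][action]
               for t in theisms)

for action in ("holiday", "donate"):
    print(action, round(expected_choiceworthiness(action), 3))
# On these made-up numbers, donating comes out ahead (0.387 vs 0.114).
```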
Appendix on prudential reasons
Besides moral reasons, the other category of reasons commonly discussed is prudential reasons: reasons of self-interest. For example, one may have strong moral reasons to jump on a grenade to save the lives of one's comrades, but some think it's likely that one has a prudential reason not to sacrifice one's life in this way.
If prudential reasons exist, it seems more plausible that humans would know about them than that they would know about moral reasons: prudential reasons pertain to what it is in my interests to do, and I have at least some access to myself. Still, it's not guaranteed that we have access to prudential reasons. If prudential reasons exist, a baby presumably has a prudential reason to be inoculated even if it has no access to this fact at the time of the inoculation.
It seems unlikely to me that prudential reasons exist in many worlds in which normative realism holds. However, even if they do exist, we would need to consider how to weigh prudential and moral reasons, especially when moral reasons pull in one way and prudential reasons pull in the other.
It’s tempting to say that it will just depend on the comparative strengths of the moral and prudential reasons in any given case. However, it seems jarring to think that a person who does what there is most moral reason to do could have failed to do what there was most, all things considered, reason for them to do. As such, I prefer a view where moral reasons have a ‘lexical’ priority over prudential reasons, which is to say that when choosing between two actions, we should do whichever action has most moral reason for it and only consider the prudential reasons if both actions are equally morally choice-worthy.
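As a minimal sketch of what lexical priority amounts to (the actions and reason-strength scores here are invented purely for illustration):

```python
# Python compares tuples element by element, so the moral score always
# dominates and the prudential score matters only on a moral tie.
actions = {
    "jump on grenade": {"moral": 0.9, "prudential": 0.0},
    "take cover":      {"moral": 0.2, "prudential": 0.8},
}

def lexical_key(name):
    a = actions[name]
    return (a["moral"], a["prudential"])

best = max(actions, key=lexical_key)
print(best)  # 'jump on grenade': the stronger moral reason wins outright,
             # however strong the prudential reason pulling the other way
```

Contrast this with a weighing view, on which a large enough prudential score could outweigh a small moral advantage.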
Still, my previous analysis would need to be tempered by any uncertainty surrounding the possible existence of prudential reasons, for unguided evolution might plausibly give a human access to them. If there is also uncertainty about whether moral reasons always dominate prudential reasons, then the possibility of prudential reasons in non-God worlds will need to be factored into one's decision procedure.