Great post (and paper)! Thanks for sharing!
Have you looked into “amplifications” of theories? This is discussed a bit in the Moral Uncertainty book. You could imagine versions of standard classical utilitarianism where everything is lexically amplified relative to standard CU, and so could possibly compete with other views with infinities. Of course, those other views could be further amplified lexically, too, all ad infinitum.
I’ve been thinking about how MEC works with lexical threshold utilitarian views and leximin, including with lexical amplifications of standard non-lexical theories.
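Roughly, the picture I have in mind (my own sketch, not the book’s formalism): represent choiceworthiness as ordered pairs compared lexicographically, where the first coordinate is the ‘amplified’ tier and the second is ordinary value. Standard CU assigns an option $A$ the value $(0, u(A))$; lexically amplified CU⁺ assigns $(u(A), 0)$; and a theory positing an infinite good assigns something like $(1, 0)$ to attaining it. With the comparison rule

$$(a_1, a_2) \succ (b_1, b_2) \iff a_1 > b_1 \ \text{or}\ \big(a_1 = b_1 \ \text{and}\ a_2 > b_2\big),$$

CU⁺’s verdicts sit on the same tier as the infinite good rather than being swamped by it, and a further amplification CU⁺⁺ would just add another coordinate to the left, and so on ad infinitum.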
Hi Michael, thanks for your comments! A few replies:
Re: amplification, I’m not sure about this proposal (I’m familiar with that section of the book). From the perspective of a supreme soteriology (e.g. (certain conceptions of) Christianity), attaining salvation is the best possible outcome, full stop. It is, to use MacAskill, Bykvist, and Ord’s terminology, maximally choiceworthy. It therefore seems to me wrong that ‘those other views could be further amplified lexically, too, all ad infinitum.’ To insist that we could lexically amplify a supreme soteriology would be to fail to take it seriously from its own internal perspective. But that is precisely what MacAskill, Bykvist, and Ord’s universal scale account requires us to do.
Of course, I agree that we can amplify other ethical theories that do not, in their standard forms, represent options or outcomes as maximally choiceworthy, such that the amplified theories do represent certain options/outcomes as maximally choiceworthy. But this is rather ad hoc.
Re: the ‘limited applicability’ suggestion, this strikes me as prima facie implausible on abductive grounds (principally parsimony and, to a lesser extent, elegance).
Re: the point that ‘there are other possible infinities that could dominate’: I’m not sure how the term ‘dominate’ is being used here. It’s not the case that other ethical theories which assign infinite choiceworthiness to certain options dominate supreme soteriologies in the game-theoretic usage of ‘dominate’ (on which option A dominates option B iff the outcome associated with A is at least as good as the corresponding outcome associated with B in every state of nature and strictly better in at least one).
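In symbols (my notation, with $CW_s(X)$ for the choiceworthiness of option $X$ in state of nature $s$):

$$A \text{ dominates } B \ \iff\ \forall s:\ CW_s(A) \ge CW_s(B) \ \ \text{and}\ \ \exists s:\ CW_s(A) > CW_s(B).$$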
But if the point is rather simply that MEC does not require all agents—regardless of their credence distribution over descriptive and ethical hypotheses—to become religionists, I agree. To take a simplistic but illustrative example, MEC will tell an agent who is certain (credence = 1) that doing whatever they feel like will generate an infinite quantity of the summum bonum to go ahead and do just that. My thought is just that MEC will deliver sufficiently implausible verdicts to sufficiently many agents to cast serious doubt on its truth qua theory of what we ought to do in response to ethical uncertainty. This is particularly pressing in the context of prudential choice, due to the three factors highlighted in subsection 3.5 above. The points you make in the linked response to the question ‘why not accept Pascal’s Wager?’ are solid, and lead me to think that the extension of my argument from prudence to morality might not be quite as quick as I suggest at the end of the post. But if we can show that MEC is in big trouble in the domain of prudence, that seems to me like evidence against its candidacy in the domain of morality. (I don’t agree with MacAskill, Bykvist, and Ord’s suggestion that, on priors, we should expect the correct way to handle descriptive uncertainty to be more-or-less the correct way to handle ethical uncertainty. The descriptive and the ethical are quite different! But it would be relatively more surprising to me if the correct way to handle prudential uncertainty were wildly different from the correct way to handle moral uncertainty.)
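To make the structural worry concrete, here is a toy expected-choiceworthiness calculation (the credences are invented purely for illustration). MEC evaluates an option $A$ by

$$EC(A) = \sum_i c(T_i)\, CW_i(A),$$

where $c(T_i)$ is the agent’s credence in theory $T_i$ and $CW_i(A)$ is $A$’s choiceworthiness according to $T_i$. If even one theory $T_k$ with positive credence, say $c(T_k) = 0.001$, assigns $CW_k(A) = \infty$, then $EC(A) = \infty$ regardless of how the remaining $0.999$ of credence is distributed, and $A$ swamps any option whose expected choiceworthiness is finite.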
I agree with most of this.
With respect to domination, I just mean that MEC could still give more weight to other infinite-value theories’ recommendations over those of supreme soteriology, because their infinities could compete with its infinities (I don’t mean anything like stochastic dominance or Pareto improvement). I don’t think we’re required to take for granted that salvation is better than everything else across all theories under a universal scale account. Other theories will have other plausible candidates that should compete. Some may even directly refer to salvation and make claims that other things are better.
I agree that lexical amplifications of theories that don’t have infinities do seem ad hoc, but I don’t think we should assign them 0 probability. (Similarly, we shouldn’t assign 0 probability to other lexical views.) So, it’s not obvious that we should bet on supreme soteriology until we also check the plausibility of other infinities and weigh them. Of course, I still think this “solution” is unsatisfying, and I think the principled objection from fanaticism still holds, even if it turns out not to hold in practice.
I would say I don’t know whether MEC will deliver sufficiently implausible verdicts to sufficiently many agents without checking more closely, given other possible infinities. But I think if it does give plausible verdicts most of the time (or even almost all of the time), this is mostly by luck and too contingent on our current circumstances and beliefs. Giving the right answers for the wrong reasons is still deeply unsatisfying.
Really interesting! Do you have anything in mind for goods identified by competing ethical theories that you think would compete with, e.g., the beatific vision for the Christian or nirvana for the Buddhist? (A clear example here would be a valuable update for me.)
+1 on your comment that ‘Giving the right answers for the wrong reasons is still deeply unsatisfying.’ I think this is an underappreciated part of ethical theorizing, and I would even take a stronger methodological stance: getting the right explanatory answers (why we ought to do what we ought to do) is just as important as getting the right extensional answers (what we ought to do). If an ethical theory gives you the wrong explanation, it’s not the right ethical theory!
You could have infinitely many (and, in principle, even more than countably many) instances of finite goods in an infinite universe/multiverse, or lexically dominating pleasures (e.g. Mill’s higher pleasures), or just set a lexical threshold for positive goods or good lives. Any of the goods in objective list theories could be claimed to be infinitely valuable. Some people think life is infinitely valuable, although often also on religious grounds.
I’d interpret supreme soteriology as claiming that finite amounts of Earthly (or non-Heavenly) goods have merely finite value while salvation has infinite value. But that claim doesn’t extend to infinite amounts of Earthly goods, and other theories can simply reject the claim that all individual instances of Earthly goods have merely finite value.
I don’t claim that these other possible infinities have much to defend them, but I think this applies to supreme soteriology, too. The history and number of people believing in supreme soteriology add only very slightly to its plausibility, because we have good reasons to believe those believers are mistaken and that the reasons for their beliefs aren’t well supported by evidence. Meanwhile, anything that’s plausibly a good at all could be about as plausible a candidate for generating infinite good, and maybe even more plausible, depending on your views. There are many such candidates, so they could add up together to outweigh supreme soteriology if they correlate, or some of them could just be much easier to achieve.