Thank you for the post, which I really liked! Just two short comments:
1. It is not clear to me why the problems of utilitarianism should inevitably lead to a form of fanaticism under promising frameworks for moral uncertainty. At least, this does not seem to follow on the account of moral uncertainty in MacAskill, Ord and Bykvist (2020), which is arguably the most popular one, for at least two reasons: (a) once the relevant credence distribution includes ethical theories which are not intertheoretically comparable, or which are merely ordinal-scale, theories in which one has small credence (including totalist utilitarianism) won't always dictate how to act; (b) some other ethical theories, e.g. Kantian theories which unconditionally forbid killing, seem (like totalist utilitarianism) to place extremely high (dis)value on certain actions.
2. It would be interesting to think about how distinctions between different versions of utilitarianism would factor into your argument. In particular, you could be an objective utilitarian (who thinks that the de facto moral worth of an action is completely determined by its de facto consequences for total wellbeing) without believing (a) that expected value theory is the correct account of decision-making under uncertainty, or (b) that the best method in practice for maximizing wellbeing is to frequently and explicitly calculate expected values. The version that denies (b) would be so-called multi-level utilitarianism.
Thanks for your comment! I considered writing much more about moral uncertainty, since I think it's an important topic here, but the post was long enough as it was. Still, you and other commenters below have all pulled me up on this one, so it's worth being more explicit. I hope it's OK for this reply to serve as a general response to the themes related to moral uncertainty in the comment section, to avoid repeating myself too much!
Starting with 1(b), the question of unconditional 'deontological constraints': this works in theory, but I don't think it works in practice. The (dis)value placed on specific actions can't just be 'extremely high', because then it can still be swamped by utilitarianism over unbounded choice sets; it has to be infinite, such that (e.g.) intentional killing is infinitely disvaluable and no finite source of value, no matter how large, could outweigh it. This does get you around the impossibility proof, which, as mentioned, relies on order properties of the reals that don't hold for the extended reals: roughly, the value of utility is always already infinitesimal relative to the infinite sources of (dis)value, so the marginal value of utility doesn't need to decline asymptotically in order not to swamp them.
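(A quick formal gloss, mine rather than anything in the original comment: the proof needs an Archimedean-style order property of the reals, namely that for any positive reals $a, b$ there is some $n$ with $na > b$, so enough finite utility always eventually outweighs any finite disvalue. In the extended reals $\mathbb{R} \cup \{-\infty, +\infty\}$ this fails: if intentional killing is assigned value $-\infty$, then for any finite utility $U$ we have $U + (-\infty) = -\infty$, and there is no $n$ for which $nU > \infty$. Finite utility is permanently infinitesimal relative to the infinite (dis)values, which is exactly the 'always already infinitesimal' point above.)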
But in practice, I just don't see which even marginally plausible deontological constraints could help a mostly-consequentialist theory avoid the train to crazy town. These constraints work to avoid counterexamples like the transplant case, where intuitively there is another principle at play that overrides utility considerations. In such cases, deontological constraints are simple, intuitive, and well-motivated. But in the cases I'm concerned with in this post, like Hurka's St Petersburg Paradox, it's not clear that Kantian-style constraints on murder or lying really help the theory, especially given the role of risk in the example. To get around this example with deontological constraints, you either have to propose wildly implausible constraints like 'never accept any choice with downside risk to human life', or adopt an ad hoc restriction designed specifically to block this case in particular; the latter seems (a) epistemically dodgy and (b) liable to be met with a slightly adjusted counterexample. I just don't see how you could avoid all such cases with any even mildly plausible deontological constraint.
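(To spell out the structure of Hurka's case as I'm using it, with illustrative numbers of my own: at each step you are offered a gamble that with probability $0.51$ doubles the total value of the world and with probability $0.49$ destroys it. Each individual gamble has positive expected value, since $0.51 \times 2V = 1.02V > V$, so expected value reasoning tells you to accept every time; but the probability of surviving $n$ rounds is $0.51^n \to 0$, so the policy of always accepting almost surely ends in losing everything. The problem is generated by the treatment of risk itself, which is why a constraint on murder or lying never even gets triggered.)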
Beyond these kinds of 'lexical' approaches, there are various other attempts to avoid fanaticism while respecting considerations of utility at scale (your 1(a)). But by Cowen's proof, if these are indeed to work, they must deny the universal domain condition, and indeed the theories mentioned tend to! I mentioned the moral parliament explicitly, but note also that if you accept (e.g.) that certain intertheoretic comparisons cannot be made, then you have ipso facto denied universal domain and accepted a certain level of incomparability and pluralism.
The difference between you and me is just that you've accepted incomparability only at the meta-level (it applies when comparing different moral theories), whereas I'm encouraging you to adopt it at the object level (it should apply to the act of thinking about ethics in the first instance). I see no coherent way to hold that 'we can have incomparability in our meta-level theorising, but it must be completely banned from our object-level theorising'! There are many rationalistic reasons you might offer for why incomparability and incommensurability should be banished from moral philosophy; but none of them is available to you if you take on a framework for moral uncertainty that avoids fanaticism by denying universal domain. So accepting these kinds of positions about moral uncertainty just seems to me like an unstable halfway house between true rationalistic moral philosophy (on the one hand) and pluralism (on the other).
Thank you for your replies! In essence, I don’t think I disagree much with any of your points. I will mainly add different points of emphasis:
I think one argument I was gesturing at is a kind of divide-and-conquer strategy, where some standard moves of utilitarians or adherents of moral uncertainty can counter some of the counterintuitive implications (walks to crazy town) you point to. For instance, the St Petersburg Paradox seems to be an objection to expected value utilitarianism, not to every form of the view. Similarly, some of the classical counterexamples to utilitarianism (e.g., some variants of trolley cases) involve violations of plausible deontological constraints. Thus, if you have a non-negligible credence in a moral view which posits unconditional prohibitions of such behavior, you don't need to buy the implausible implication (under moral uncertainty). But you are completely correct that there will remain some, maybe many, implications that many find counterintuitive or crazy, e.g. the (very) repugnant conclusion (if you are a totalist utilitarian). Personally, I tend to be less troubled by these cases and suspect that we should perhaps bite some of these bullets, though justifying this would of course require a longer argument (which someone with different intuitions is unlikely to be moved by, in any case).
The passage of your text which seemed most relevant to multi-level utilitarianism is the following: "In practice, Effective Altruists are not willing to purchase theoretical coherence at the price of absurdity; they place utilitarian reasoning in a pluralist context. They may do this unreflectively, and I think they do it imperfectly; but it is an existence proof of a version of Effective Altruism that accepts that utility considerations are embedded in a wider context, and tempers them with judgment." One possible explanation of this observation is that the EAs who are utilitarians are often multi-level utilitarians, who consciously and intentionally use considerations beyond maximizing utility in practical decision situations. If that were true, it would raise the interesting question of what difference adopting a pluralist normative ethics, as opposed to a universal-domain utilitarianism, would make for effective altruist practice (I do not mean to imply that there aren't differences).
With respect to moral uncertainty, I interpret you as agreeing that the most common effective altruist views do in fact avoid fanaticism. This then raises the question of whether accepting incomparability at the meta-level (between normative theories) gives you reason to also (or instead) accept incomparability at the object level (between first-order moral reasons for or against actions). I am not sure about that. I am sympathetic to your point that it might be strange to hold that 'we can have incomparability in our meta-level theorising, but it must be completely banned from our object-level theorising'. At the same time, at least some of the reasons for believing in meta-level incomparability are quite independent of the relevant object-level arguments, so you might have good reason to believe in it only at the meta-level. Also, the two sorts of incomparability seem different. As I understand your view, it says that different kinds of moral reasons can favor or oppose a course of action, such that we sometimes have to use our faculties for particular, context-sensitive moral judgements, without being able to resort to a universal meta-principle that tells us how to weigh the relevant moral reasons. By contrast, the moral uncertainty view posits precisely such a meta-principle, e.g. variance voting. So I can see how one might think that the second-order incomparability is acceptable while yours is not (although this is not my view).
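(For concreteness, here is my schematic gloss of variance voting from MacAskill, Ord and Bykvist (2020), not their exact formulation: standardize each theory's choice-worthiness function across the option set, then maximize the credence-weighted sum, i.e. choose the option $a$ that maximizes $\sum_i c_i \, \frac{CW_i(a) - \mu_i}{\sigma_i}$, where $c_i$ is your credence in theory $i$ and $\mu_i$, $\sigma_i$ are the mean and standard deviation of $CW_i$ over the available options. Note that this meta-principle presupposes at least interval-scale choice-worthiness from each theory, which is precisely where ordinal-scale or intertheoretically incomparable theories fall outside its scope, as per my point 1(a) above.)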
On 2: I think the point is simply that, as noted in footnote 8, the ‘train to crazy town’ reasoning can apply quite directly to comparisons between states of affairs with no lingering uncertainty (Savagean consequences). When we apply the reasoning in this way, two features arise:
(a) Uncertainty, and frameworks for dealing with uncertainty, no longer have a role to play, since we are certain about outcomes. This is the case with e.g. the Very Repugnant Conclusion (a concrete version of which follows below).
(b) The absurdities that are generated apply directly at the level of axiology, rather than ‘infecting’ axiology via normative ethics. If we read multi-level utilitarianism as an attempt to insulate axiology from ethics, then it can’t help in this case. Of course, multi-level utilitarians are often more willing to be bullet-biters! But the point is just that they do have to bite the bullet.
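(A concrete version of the certainty point, with my own illustrative numbers: under a totalist axiology, the value of a world is the sum of individual welfare levels. Compare world $A$, with $m$ people at high welfare $h$, so total value $mh$, against world $B$, with $k$ people in terrible suffering at level $-s$ plus $n$ people whose lives are barely worth living at level $\epsilon > 0$, so total value $n\epsilon - ks$. For any fixed $m$, $h$, $k$, $s$, taking $n > (mh + ks)/\epsilon$ makes $B$ strictly better than $A$. No probabilities appear anywhere in the comparison, so no framework for handling uncertainty, moral or empirical, can block it; the absurdity, if you take it to be one, sits directly in the axiology.)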