Thanks for your comment! I considered writing much more about moral uncertainty, since I think it’s an important topic here, but the post was long enough as it is. But you and other commenters below have all pulled me up on this one, so it’s worth being more explicit. I hope it’s OK, then, for this reply to serve as a general response to the moral-uncertainty themes raised throughout the comment section, to avoid repeating myself too much!
Starting with 1(b), the question of unconditional ‘deontological constraints’: this works in theory, but I don’t think it applies in practice. The (dis)value placed on specific actions can’t just be ‘extremely high’, because any finite (dis)value can still be swamped by utility considerations over unbounded choice sets; it has to be infinite, such that (e.g.) intentional killing is infinitely disvaluable and no finite source of value, no matter how arbitrarily large, could outweigh it. This gets you around the impossibility proof, which as mentioned relies on order properties of the reals that don’t hold for the extended reals—roughly, the value of utility is already infinitesimal relative to the infinite sources of (dis)value, so the marginal value of utility doesn’t need to decline asymptotically to avoid swamping them.
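To make the lexical structure explicit (a minimal sketch of my own, not anything in the proof itself): on the extended reals, let

$$V(a) = \begin{cases} -\infty & \text{if } a \text{ violates a constraint,} \\ u(a) & \text{otherwise,} \end{cases}$$

where $u(a)$ is ordinary finite utility. Then no non-violating option is ever outranked by a violating one, whatever the utility stakes, since no finite $u$ beats $-\infty$. Whereas if violations merely carried some huge finite penalty $M$, an unbounded choice set would always contain a violating option whose utility advantage over every non-violating option exceeds $M$, and the constraint would be swamped after all.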
But in practice, I just don’t see how any even marginally plausible deontological constraint could help a mostly-consequentialist theory avoid the train to crazy town. These constraints work to avoid counterexamples like the transplant case, where intuitively there is another principle at play that overrides utility considerations. In such cases, deontological constraints are simple, intuitive, and well-motivated. But in the cases I’m concerned with in this post, like Hurka’s St Petersburg Paradox, it’s not clear that Kantian-style constraints on murder or lying really help the theory—especially because of the role of risk in the example. To get around this example with deontological constraints, you either have to propose wildly implausible constraints like ‘never accept any choice with downside risk to human life’, or an ad hoc restriction designed specifically to block this case in particular—the latter of which seems a) epistemically dodgy and b) liable to be met with a slightly adjusted counterexample. I just don’t see how any even mildly plausible deontological constraint could rule out all such cases.
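To see why risk is doing the work here, recall the classic St Petersburg structure (I’m abstracting from the details of Hurka’s version): a gamble pays $2^n$ units of value if the first heads lands on toss $n$, an event with probability $2^{-n}$, so

$$\mathbb{E}[V] = \sum_{n=1}^{\infty} 2^{-n} \cdot 2^n = \infty.$$

Expected-value maximization therefore licenses paying any finite cost to take the gamble, even though you are overwhelmingly likely to walk away with a trivial payout. Nothing in this structure involves murder or lying; the craziness comes entirely from the risk profile, which is why only risk-targeting constraints of the wild sort above can block it.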
Beyond these kinds of ‘lexical’ approaches, there are various other attempts to avoid fanaticism while respecting considerations of utility at scale—your 1(a). But by Cowen’s proof, if these are to work, they must deny the universal domain condition—as indeed the theories mentioned tend to! I mentioned the moral parliament explicitly, but note also that (e.g.) if you accept that certain intertheoretic comparisons cannot be made, then you have ipso facto denied universal domain and accepted a certain level of incomparability and pluralism.
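For clarity, the condition at issue, stated roughly and in the sense I’m using it here: universal domain requires the betterness relation to rank every pair of options, i.e. for all $x, y$, either $x \succeq y$ or $y \succeq x$. Holding that the value of utility on one theory and the (dis)value of a violation on another simply cannot be weighed against each other leaves some pairs unranked, which is precisely a denial of this condition.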
The difference between me and you is just that you’ve only accepted incomparability at the meta-level (when comparing different moral theories), whereas I’m encouraging you to adopt it at the object level (when thinking about ethics in the first place). But I see no coherent way to hold that ‘we can have incomparability in our meta-level theorising, but it must be completely banned from our object-level theorising’! There are many potential rationalistic reasons you might offer for why incomparability and incommensurability should be banished from moral philosophy; but none of these are available to you once you adopt a framework for moral uncertainty that avoids fanaticism by denying universal domain. So accepting these kinds of positions about moral uncertainty just seems to me like an unstable halfway house between true rationalistic moral philosophy (on the one hand) and pluralism (on the other).
On 2: I think the point is simply that, as noted in footnote 8, the ‘train to crazy town’ reasoning can apply quite directly to comparisons between states of affairs with no lingering uncertainty (Savagean consequences). When we apply the reasoning in this way, two features arise:
(a) Uncertainty, and frameworks for dealing with uncertainty, no longer have a role to play, since we are certain about outcomes. This is the case with e.g. the Very Repugnant Conclusion (I sketch the arithmetic below, after (b)).
(b) The absurdities generated apply directly at the level of axiology, rather than ‘infecting’ axiology via normative ethics. If we read multi-level utilitarianism as an attempt to insulate axiology from normative ethics, then it can’t help in this case. Of course, multi-level utilitarians are often more willing to be bullet-biters! But the point is just that they do have to bite the bullet.
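Sketching the arithmetic from (a), with hypothetical numbers: totalism ranks certain outcomes by summed welfare. Take $n$ excellent lives at welfare level $a$, versus $m$ lives of terrible suffering at level $-s$ together with $k$ lives barely worth living at level $\epsilon > 0$. Whenever

$$k\epsilon > na + ms,$$

totalism prefers the second outcome; and since $k$ can be made arbitrarily large, some such $k$ always exists. No probabilities appear anywhere in the comparison: both outcomes are fully specified, so there is no uncertainty for a decision-theoretic framework to absorb.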