I think your argument is that we should ignore worlds without a binding oughtness.
Agreed, I’m just using ‘binding oughtness’ here as a (hopefully) more intuitive way of fleshing out what I mean by ‘normative reason for action’.
But in worlds without a binding oughtness, you still have your own desires and goals to guide your actions. This might be what you call ‘prudential’ reasons.
So I agree that if there are no normative reasons/‘binding oughtness’, you would still have your mere desires. However, these just wouldn’t constitute normative reasons for action, and that is exactly what you need for an action to be choice-worthy. If your desires do constitute normative reasons for action, then that’s just a world in which there are prudential normative reasons. The normative/prudential distinction is one developed in the relevant literature; see this abstract of a paper by Roger Crisp to get a sense of it. As ‘prudential reason’ is used in that literature, it is not the same as an instrumental reason.
So it seems to me that in worlds with a binding oughtness that you know about, you should take actions according to that binding oughtness, and otherwise you should take actions according to your own desires and goals.
The issue is that we’re trying to work out how to act under uncertainty about what sort of world we’re in. So my argument is that you ought only to ‘listen’ to worlds in which there is normative realism/‘binding oughtness’ and in which you have epistemic access to those normative reasons. As I don’t think that mere desires create reasons for action, I think we can ignore them unless they are actually prudential reasons.
You could argue that binding oughtness always trumps desires and goals, so that your action should always follow the binding oughtness that is most likely, and you can put no weight on desires and goals. But I would want to know why that’s true.
I attempt to give an argument for this claim in the penultimate paragraph of my appendix. Note that I’m interpreting you as holding that ‘desires and goals’ result in what I would call prudential reasons for action. I think this is fair given the way you operationalize the concept.
Thanks for the interesting post. I develop one thought below. Apologies that it only tangentially relates to your argument, but I figured you might have something interesting to say.
Ignoring the possibility of infinite negative utilities, all possible actions seem to have infinite positive utility in expectation, since every action has a non-zero chance of resulting in infinite positive utility. For any action, it seems there is some very small chance that it results in my getting an infinite bliss pill or, to go Pascal’s route, my getting into an infinitely good heaven.
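To make the arithmetic explicit (a minimal sketch, assuming the standard expected-utility calculation extended to allow an infinite payoff, where $\varepsilon$ is the small probability of the infinite outcome and $u$ is the finite utility of everything else):

$$EU(a) = \varepsilon \cdot \infty + (1 - \varepsilon) \cdot u = \infty \quad \text{for any } \varepsilon > 0.$$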
As such, classic expected utility theory won’t be action-guiding unless we add an additional decision rule: that we ought to pick the action which is most likely to bring about the infinite utility. This addition seems intuitive to me. Imagine two bets, one with a 0.99 chance of getting infinite utility and one with a 0.01 chance. It seems irrational not to take the 0.99 deal even though the two have the same expected utility.
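Spelling out the two bets under the same assumptions:

$$EU(A) = 0.99 \cdot \infty = \infty, \qquad EU(B) = 0.01 \cdot \infty = \infty,$$

so the bets are tied on expected utility, and the proposed rule breaks the tie in favour of A, the bet with the higher probability of the infinite payoff.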
Now let’s suppose that the mugger is offering infinite expected utility rather than just very high utility. If my argument above holds, then I don’t think the generic mugging case has much bite.
It doesn’t seem very plausible that donating my money to the mugger is a better route to the infinite utility than, say, attempting to become a Muslim in case heaven exists, or donating to an AI startup in the hope that a superintelligence might emerge that would one day give me an infinite bliss pill.
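Put in terms of the rule above (a sketch; the probability judgement here is of course mine):

$$P(\text{infinite utility} \mid \text{pay the mugger}) < P(\text{infinite utility} \mid \text{religious conversion, AI donation, etc.}),$$

so the rule recommends one of the alternatives rather than paying the mugger, even though all the options have infinite expected utility.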