I have a strong negative bias against any attempt to ground normative theories in abstract mathematical theories, such as game theory and decision theory. As I see it, the two central claims of utilitarianism are the axiological claim (well-being is what matters) and the maximizing claim (we should maximize what matters, i.e., well-being). This argument provides no reason to ground our axiology in well-being, and no reason to think that we should be maximizers.
In general, there is a significant difference between normative claims, like total utilitarianism, and factual claims, like “As a group, VNM rational agents will do X.”
+1