A (potential) issue with MacAskill’s presentation of moral uncertainty
I’m not able to write a real post about this atm, though I think it deserves one.
MacAskill makes a similar point in WWOTF, but IMO the best and most decision-relevant quote comes from his second appearance on the 80k podcast:
> There are possible views in which you should give more weight to suffering...I think we should take that into account too, but then what happens? **You end up with kind of a mix between the two, supposing you were 50/50 between classical utilitarian view and just strict negative utilitarian view.** **Then I think on the natural way of making the comparison between the two views, you give suffering twice as much weight as you otherwise would.**
I don’t think the second bolded sentence follows in any objective or natural manner from the first. Rather, this reasoning takes a distinctly total utilitarian meta-level perspective: it sums the positively- and negatively-signed utilities across views and then implicitly evaluates the total as a total utilitarian would.
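To make the implicit arithmetic explicit (my reconstruction, assuming both views’ utilities sit on a common cardinal scale, with $H$ for total happiness and $S$ for total suffering):

$$\mathbb{E}[U] = 0.5\,(H - S) + 0.5\,(-S) = 0.5\,H - S$$

so a unit of suffering counts twice as heavily as a unit of happiness. The common scale is doing all the work here: treating the two views’ utilities as summable quantities is itself the substantive assumption.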
Even granting that the moral arithmetic is appropriate and correct, it’s not at all clear what to do once the 2:1 accounting is complete. MacAskill’s suffering-focused twin might have reasoned instead:
> Negative and total utilitarianism are both 50% likely to be true, so we must give twice the normal amount of weight to happiness. However, since any sufficiently severe suffering morally outweighs any amount of happiness, the moral outlook on a world with twice as much wellbeing is the same as before.
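One way to formalize why the doubling is inert (my gloss, assuming the twin gives sufficiently severe suffering lexical priority): rank worlds first by severe suffering, with happiness at weight $w$ only breaking ties, so that

$$A \succ B \iff S(A) < S(B) \ \text{ or } \ \big(S(A) = S(B) \text{ and } w\,H(A) > w\,H(B)\big)$$

Replacing $w$ with $2w$ never flips either condition, so the “twice the weight to happiness” adjustment changes no choice the twin would actually make.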
A better proxy for genuine neutrality (and the best one I can think of) might be to simulate bargaining over real-world outcomes from each perspective, which would probably result in at least some proportion of one’s resources being deployed as though negative utilitarianism were true (perhaps exactly 50%, though I haven’t given this enough thought to make the claim outright).
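As a toy illustration of what that bargaining proxy might look like (a sketch under assumptions of my own; the option names, effect numbers, and proportional-control rule are invented for illustration, not MacAskill’s):

```python
# Toy sketch of a bargaining proxy for moral uncertainty: each view
# controls a share of the budget equal to one's credence in it and spends
# that share on the option it ranks highest. All numbers are hypothetical.

credences = {"classical": 0.5, "negative": 0.5}

# Hypothetical options: (happiness created, suffering averted) per dollar.
options = {
    "create_happiness": (1.5, 0.0),
    "reduce_suffering": (0.0, 1.0),
}

def score(view, happiness, suffering_averted):
    """How each view values an outcome, by its own lights."""
    if view == "classical":
        return happiness + suffering_averted  # both count equally
    if view == "negative":
        return suffering_averted  # only averted suffering matters
    raise ValueError(f"unknown view: {view}")

def allocate(budget):
    """Proportional-control proxy: credence-weighted budget shares."""
    allocation = {name: 0.0 for name in options}
    for view, credence in credences.items():
        favorite = max(options, key=lambda name: score(view, *options[name]))
        allocation[favorite] += credence * budget
    return allocation

print(allocate(100.0))  # {'create_happiness': 50.0, 'reduce_suffering': 50.0}
```

Under the proportional rule each view simply steers its credence-weighted share of resources, which is where the “perhaps exactly 50%” intuition comes from; an actual bargaining solution (e.g. Nash bargaining over outcomes) could move the split wherever the views find mutually beneficial trades.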