It looks like a strawman to me. It conflates (A) a question about evaluation (is Suboptimal Earth axiologically better than current Earth?) with (B) a question about decision/action (would it be right to kill everyone for the sake of Suboptimal Earth?), and it omits:
(A) a utilitarian doesn’t classify scenarios categorically (“this is good, that is bad”), but through an ordering over possible worlds, such as: (1) Suboptimal Earth with the current population still alive is better than (2) Suboptimal Earth minus the current population, which is better than (3) current Earth...
(B) a utilitarian decides according to ex ante expected utility, so they’d have to ask “what are the odds that Suboptimal Earth will actually occur, given my decision?”
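The ex ante reasoning in (B) can be sketched as a toy calculation. All the numbers and names here are hypothetical, purely for illustration: pressing the button either brings about Suboptimal Earth or fails (everyone dead, no utopia), and the agent weighs each outcome by its probability.

```python
# A minimal sketch of the ex ante expected-utility comparison in (B).
# All utilities and probabilities below are made-up illustrative values,
# not claims about what any actual utilitarian calculus would assign.

def expected_utility(p_success: float, u_success: float, u_failure: float) -> float:
    """Expected utility of acting, given the probability that the
    promised world actually results and the utilities of each outcome."""
    return p_success * u_success + (1 - p_success) * u_failure

# Hypothetical scale: current Earth = 0 (baseline),
# Suboptimal Earth minus the current population = 100,
# a failed attempt (everyone killed, no utopia) = -50.
ev_press = expected_utility(p_success=0.01, u_success=100, u_failure=-50)
ev_refrain = 0  # keep current Earth as the baseline

decision = "press" if ev_press > ev_refrain else "refrain"
```

On these numbers the low odds of success dominate, so the expected-utility maximizer refrains; the Pascal's-Mugging worry below is precisely that a large enough promised payoff can flip this verdict no matter how small the probability.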
Of course, there are huge problems for such reasoning—a more realistic Suboptimal Earth would come close to a Pascal's Mugging: imagine, for example, that a super-AGI asked you to press a red button, freeing it to turn the whole galaxy into an eternal utopian hedonist simulation.
As someone who has been “fighting” utilitarianism for a long time, I can say that the best objections against it have been produced by utilitarians themselves.