Executive summary: The post argues (confidently on the logic, more tentatively on the metaphysics) that total utilitarianism does not logically entail that any suffering can be "offset," and that some extreme suffering is in fact non-offsetable, implying a longtermist reorientation toward minimizing catastrophic suffering and s-risks rather than maximizing aggregate happiness. It is an exploratory, philosophy-first reframing rather than an empirical policy brief.
Key points:
Offsetability isn’t implied by the Utilitarian Core: Consequentialism, (hedonic) welfarism, impartiality, aggregation, and maximization don’t force all welfare to live on a single real-number scale; the usual “representation premise” is an extra, substantive assumption.
Alternative formalisms preserve aggregation while blocking offsetability: Lexicographic orderings or hyperreals allow comparisons and addition yet prevent any finite good from compensating certain bads; VNM expected-utility theorems don’t rescue offsetability because the required continuity axiom is rejected here.
Metaphysical claim via Idealized Hedonic Egoist: When you (ideally rational and fully informed) must experience all lives, trades like “70 years of maximal torture for any later bliss” look indefensibly bad—evidence that some suffering is non-offsetable.
Asymptotic, not arbitrary, threshold: The “compensation” needed to justify increasing suffering rises without bound as it approaches a catastrophic threshold; even sub-threshold suffering may demand astronomically large (practically unreachable) compensation, making the move to “infinite” a small further step.
Implications for longtermism and s-risk: Prioritize preventing lock-in or growth of extreme suffering and be cautious about creating vast populations that include it; reject simplistic “extinction is good” conclusions while emphasizing moral uncertainty, cooperation, irreversibility, and unilateralist-risk considerations.
Stated uncertainties and bullets bitten: Open questions include time-granularity (does a microsecond of super-bad experience cross the threshold?); the author bites the bullet that any nonzero probability of catastrophic suffering carries lexical moral weight; evolutionary debunking is addressed but not found decisive.
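The lexicographic idea can be made concrete with a small sketch. The representation below (a two-component welfare pair, illustrative only and not the post's own formalism) adds component-wise, so aggregation across lives still works, yet no finite amount of ordinary welfare ever outweighs a difference in catastrophic suffering:

```python
# Sketch of a lexicographic welfare order (illustrative assumption, not
# the post's formalism). Welfare is a pair:
#   (catastrophic_suffering, ordinary_welfare)
# Pairs add component-wise, preserving aggregation, but comparison is
# lexicographic: less catastrophic suffering wins regardless of how much
# ordinary welfare is on the other side.
from functools import total_ordering


@total_ordering
class Welfare:
    def __init__(self, catastrophic, ordinary):
        self.catastrophic = catastrophic  # units of non-offsetable suffering
        self.ordinary = ordinary          # offsetable pleasure/pain balance

    def __add__(self, other):
        # Aggregation is ordinary component-wise addition.
        return Welfare(self.catastrophic + other.catastrophic,
                       self.ordinary + other.ordinary)

    def __eq__(self, other):
        return (self.catastrophic, self.ordinary) == \
               (other.catastrophic, other.ordinary)

    def __lt__(self, other):
        # "Worse than": first compare catastrophic suffering (more is
        # worse), and only if tied compare ordinary welfare.
        if self.catastrophic != other.catastrophic:
            return self.catastrophic > other.catastrophic
        return self.ordinary < other.ordinary


# 70 units of torture plus an arbitrarily large finite bliss...
torture_plus_bliss = Welfare(70, 0) + Welfare(0, 10**100)
no_torture = Welfare(0, 0)
# ...still ranks below a life with no catastrophic suffering at all.
assert torture_plus_bliss < no_torture
```

This is one way to have a complete ordering with addition while making certain bads non-offsetable by any finite good, which is why the VNM continuity axiom (which would rule such orderings out) is the point of contention.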
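The asymptotic-threshold point can likewise be sketched with a toy compensation function (the functional form, threshold, and constant below are illustrative assumptions, not from the post): required compensation grows without bound as suffering approaches the catastrophic threshold, and is infinite at or beyond it.

```python
# Toy model of the asymptotic threshold (illustrative assumptions only:
# the threshold value, constant K, and the s/(T - s) form are made up).
import math

THRESHOLD = 100.0  # hypothetical catastrophic-suffering threshold
K = 1.0            # hypothetical scaling constant


def required_compensation(suffering):
    """Happiness needed to justify `suffering`; diverges at the threshold."""
    if suffering >= THRESHOLD:
        return math.inf  # non-offsetable: no finite compensation suffices
    return K * suffering / (THRESHOLD - suffering)


# Moderate suffering needs moderate compensation...
assert required_compensation(50.0) == 1.0
# ...near-threshold suffering needs astronomically large compensation...
assert required_compensation(99.9) > 900
# ...and at the threshold, no finite compensation is enough.
assert required_compensation(100.0) == math.inf
```

On this picture, treating threshold-crossing suffering as demanding "infinite" compensation is the limit of a curve that is already practically unreachable just below the threshold, which is the sense in which the threshold is asymptotic rather than arbitrary.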
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.