I lend some credence to the trendlines argument, but mostly I think that humans are more likely to want to optimize for extreme happiness (or other positive moral goods) than for extreme suffering (or other moral bads). Any additive account of moral goods will, in expectation, shake out to contain far more positive moral goods than moral bads, unless you hold really extreme inside views on which optimizing for extreme moral bads is as likely as (or more likely than) optimizing for extreme moral goods.
I do think there is a nontrivial P(S-risk | singularity) — e.g., a) our descendants are badly mistaken, or b) other agents follow through on credible pre-commitments to torture — but I think it ought to be surprising for classical utilitarians to believe that the EV of the far future is negative.