For a standard utilitarian, a benevolent superintelligence would create enough happiness (and would not allow factory farming) to outweigh any current suffering, given the vast duration and scale of the future.
For a suffering-focused altruist (such as myself), it's not that simple; in any case, the question mostly revolves around (i) the possibility of locally originating long-term s-risks (rather than factory farming, which may end in the near term), and (ii) the ability of aligned ASIs to reduce suffering events in unreachable parts of the world through acausal trade; see my shortform "expected value of alignment over extinction for negative utilitarians".