Good question. Does this also work in the opposite direction? That is, should we worry less about catastrophic or existential risks because they'd mean fewer animals in factory farms?
I have an intuition that any ASI that wipes out humans would do the same to non-human animals, though.
For a standard utilitarian, given the vast length and scale of the future, a benevolent superintelligence would create enough happiness (and would not allow factory farming) to outweigh any current suffering.
For a suffering-focused altruist (such as myself), it's not that simple. In any case, it mostly revolves around (i) the possibility of locally-originating long-term s-risks (rather than factory farming, assuming that ends in the near term), and (ii) the ability of aligned ASIs to reduce s-events in unreachable parts of the world through acausal trade; see my shortform "Expected value of alignment over extinction for negative utilitarians".