I think these would basically be just constant factors multiplying the whole impacts, assuming we remain near the peaks for far longer than we spend making significant moves towards the peaks.
The difference between intentionally optimizing for hedonistic welfare and a default of human-like minds could itself be on the scale of an existential catastrophe for a classical utilitarian, and more important than extinction. That said, it could also be far less tractable, and it may not really be an attractor state at all if it isn't stable/persistent. This could also generalize to other theories of welfare, just with different targets.