I strongly disagree. I think human extinction would be bad.
Not every utility function is equally desirable. For example, an ASI that maximizes the number of paperclips in the universe would be a bad outcome.
Thus, unless one adopts anthropocentric values, the utilitarian philosophy common in this forum (whether or not you accept additivity) implies that it would be desirable for humans to develop an ASI that exterminates humanity as quickly and with as high a probability as possible — the exact opposite of the goal many people here pursue.
Most people here do adopt anthropocentric values, in that they think human flourishing would be more desirable than a vast amount of paperclips.
Considering the way many people here calculate animal welfare, I would have thought that quite a few of them are not anthropocentric.
Lots of paperclips are one possibility, but perhaps ASIs could be designed to be far more creative and to have richer sensory experience than humans. Does that mean humans shouldn't exist?