So it sounds like this might be a predictive / empirical dispute about probabilities conditional on slowing AI and avoiding extinction, and the likely futures in each case, and not primarily an ethical theory dispute?
That is an excellent question. I think ethical theory matters a lot — see Power Laws of Value. But I also just think our superintelligent descendants are going to be pretty derpy and act on enlightened self-interest as they turn the stars into computers, not pursue very good things. And that might be somewhere where, e.g., @William_MacAskill and I disagree.