I’ll just share that for me personally the case rests on expected value. I actually think there is a lot that we can do to make AI existential safety go better (governance if nothing else), and this is what I spend most of my time on. But the expected value of better futures seems far higher given the difference in size between the default post-human future and the best possible future.
So it sounds like this might be a predictive / empirical dispute about probabilities conditional on slowing AI and avoiding extinction, and the likely futures in each case, and not primarily an ethical theory dispute?
That is an excellent question. I think ethical theory matters a lot — see Power Laws of Value. But I also just think our superintelligent descendants are going to be pretty derpy and act on enlightened self-interest as they turn the stars into computers, not pursue very good things. And that might be somewhere where, e.g., @William_MacAskill and I disagree.