It seems like most of the work is being done here:
If you think that AI won’t be smarter than humans but agree that we cannot perfectly control AI in the same way that we cannot perfectly control humans
If I were putting on my skeptic hat, I don't think I would buy that assumption. (Or, sure, we can't perfectly control AI, but your argument assumes that we are at least as unable to control AI as we are unable to control humans, which I wouldn't buy.) AI systems are programs, and programs are (more or less) determined entirely by their source code, which we control perfectly. So why should they be as hard to control as humans? You wouldn't make the same assumption for, say, Google Maps; what's the difference?
So what would your pitch for skeptics look like? Just ask which assumptions they don't buy, then rebut and iterate?
Yup