Well, I looked it up and found a free pdf, and it turns out that Searle does consider this counterargument:

> Why is it so important that the system be capable of consciousness? Why isn’t appropriate behavior enough? Of course for many purposes it is enough. If the computer can fly airplanes, drive cars, and win at chess, who cares if it is totally nonconscious? But if we are worried about a maliciously motivated superintelligence destroying us, then it is important that the malicious motivation should be real. Without consciousness, there is no possibility of its being real.
But I find the arguments he then gives in support of this claim quite unconvincing; indeed, I don’t understand exactly what the argument is supposed to be. Notice that Searle’s argument rests on comparing a spell-checking program on a laptop with human cognition. He claims that reflecting on the difference between the human and the program establishes that it would never make sense to attribute psychological states to any computational system whatsoever. But that comparison doesn’t seem to show anything of the sort.
And it certainly doesn’t show, as Searle thinks it does, that computers could never have the “motivation” to pursue misaligned goals, in the sense that Bostrom needs to establish that powerful AGI could be dangerous.
I should say—while Searle is not my favorite writer on these topics, I think these sorts of questions at the intersection of phil mind and AI are quite important and interesting, and it’s cool that you are thinking about them. (Then again, I *would* think that, given my background.) And it’s important to scrutinize the philosophical assumptions (if any) behind AI risk arguments.