Oh, I didn’t expect Pinker to hold that position; it’s quite disappointing. But it’s hopefully a topic we will see addressed in a future conversation with Sam Harris, who should push back on the “AI cannot be a threat” narrative.
Have you tweeted/mailed/whatnot him this response?
I agree; I found it surprising as well that he has taken this view. It seems he has read a portion of Bostrom’s Global Catastrophic Risks and Superintelligence and become familiar with the general arguments and prominent examples, but has then gone on to dismiss existential threats for reasons specifically addressed in both books.
He is a bit more concerned about nuclear threats than about other existential threats, but I wonder whether this is the availability heuristic at work, given the historical precedent, rather than a well-reasoned line of argument.
Great suggestion about Sam Harris—I think he and Steven Pinker had a live chat just the other day (March 14), so we may have missed this opportunity. I’m still waiting for the audio to be uploaded to Sam’s podcast, but given Sam’s positions, I wonder whether he questioned Pinker on this as well.
I think part of the problem is that he publicly expressed a very dismissive stance towards AI/x-risk positions, seemingly before he’d read anything about them. Now that people have pushed back and pointed out his obvious errors, he’s had to at least somewhat read up on what the positions are, but he doesn’t want to backtrack at all from his earlier extreme dismissiveness.
I agree, and that appears to be the likely sequence of events. I find it a bit disappointing that he went into this topic with his view already formed and used the prominent contentious points and counterarguments to reinforce his preconceptions, without becoming familiar with the detailed refutations already out there. It’s great to have good debate and opposing views presented, but his broad-brush dismissal makes that really difficult.
Sam Harris did ask Steven Pinker about AI safety. If anybody gets around to listening to it, the segment runs from 1:34:30 to 2:04:00, so that’s about 30 minutes on risks from AI. Harris wasn’t at his best in that discussion, and Pinker came off as much more nuanced and grounded in evidence and reason.
I agree with the characterization of the discussion, but regardless, you can find it here: https://www.youtube.com/watch?v=H_5N0N-61Tg&t=86m12s