Searle vs Bostrom: crucial considerations for EA AI work?
In his review of Nick Bostrom’s Superintelligence, philosopher John Searle (creator of the ‘Chinese Room’ thought experiment) seems to attack many of the fundamental assumptions and conclusions of Bostrom’s (and, I think most EAs’) approach to thinking about AI.
If Searle is right, it would perhaps imply that a great many EAs are currently wasting a lot of time and energy.
Does anyone know if Nick Bostrom has replied to Searle’s arguments?
What do EA Forum readers think about Searle’s arguments?
Searle’s review is paywalled, but it’s super easy to register for the site and view it for free.
(Meta-point: I’m just jumping into my reading on this topic. If this is well-trodden ground, apologies, and I would appreciate any links to canonical reading on these debates. Thank you!)