Good comment, but Drexler actually strikes me as both more moderate and more interesting on AI than just “same as Yudkowsky”. He thinks really intelligent AIs probably won’t be agents with goals at all (at least the first ones we build), and that this means that takeover worries of the Bostrom/Yudkowsky kind are overrated. It’s true that he doesn’t think the risks are zero, but if you look at the section titles of his FHI report, a lot of it is actually devoted to debunking various claims Bostrom/Yudkowsky make in support of the view that takeover risk is high: https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf
I don’t think this affects the point you’re making; it just seemed a bit unfair on Drexler if I didn’t mention this.