Good comment, but Drexler actually strikes me as both more moderate and more interesting on AI than just 'same as Yudkowsky'. He thinks really intelligent AIs probably won't be agents with goals at all (at least the first ones we build), and that this means that takeover worries of the Bostrom/Yudkowsky kind are overrated. It's true that he doesn't think the risks are zero, but if you look at the section titles of his FHI report, a lot of it is actually devoted to debunking various claims Bostrom/Yudkowsky make in support of the view that takeover risk is high: https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf
I don't think this affects the point you're making; it just seemed a bit unfair on Drexler if I didn't mention this.