I think Eric has been strong at making reasoned arguments about the shape of possible future technologies, and at helping people look at things for themselves.
I guess this is kind of my issue, right? He’s been quite strong at putting forth arguments about the shape of the future that were highly persuasive and yet turned out to be badly wrong.[1] I’m concerned that this does not seem to have affected his epistemic authority in these sorts of circles.
You may not be “deferring” to Drexler, but you are singling out his views as singularly important (you have not made similar posts about anybody else[2]). There are hundreds of people discussing AI at the moment, many of them with far more expertise, and many of whom have not been badly wrong about the shape of the future.
Anyway, I’m not trying to discount your arguments either; I’m sure you have found valuable stuff in his writing. But if this post is making a case for reading Drexler despite him being difficult, I’m allowed to make the counterargument.
Yep, I guess I’m into people trying to figure out what they think and which arguments seem convincing, and I think it’s good to highlight sources of perspectives that people might find helpful-according-to-their-own-judgement for that. I do think I have found Drexler’s writing on AI singularly helpful for my own inside-view judgements.
That said: absolutely seems good for you to offer counterarguments! Not trying to dismiss that (but I did want to explain why the counterargument wasn’t landing for me).
In answer to your footnote: If more than one of those things occurs in the next thirty years, I will eat a hat.
If this is the first in a series, feel free to discount this.