I think Eric has been strong about making reasoned arguments about the shape of possible future technologies, and helping people to look at things for themselves. I wouldn’t have thought of him (even before looking at this link[1]) as particularly good at making quantitative estimates about timelines, which in any case is something he doesn’t seem to do much of.
Ultimately I am not suggesting that you defer to Drexler. I am suggesting that you may find reading his material a good time investment for spurring your own thoughts. This is something you can test for yourself (I’m sure that it won’t be a good fit for everyone).
And while I do think it’s interesting, I’m wary of drawing overly strong conclusions from it, for a couple of reasons:
If, say, all this stuff now happened in the next 30 years, so that he was in some sense just off by a factor of two, how would you think his predictions had done? It seems to me this would be mostly a win for him; and I do think that it’s quite plausible that it will mostly happen within 30 years (and more likely still within 60).
That was 30 years ago; I’m sure that he is in some ways a different person now.
I guess this is kind of my issue, right? He’s been quite strong at putting forth arguments about the shape of the future that were highly persuasive and yet turned out to be badly wrong.[1] I’m concerned that this does not seem to have affected his epistemic authority in these sorts of circles.
You may not be “deferring” to Drexler, but you are singling out his views as singularly important (you have not made similar posts about anybody else[2]). There are hundreds of people discussing AI at the moment, many of them with a lot more expertise, and many of whom have not been badly wrong about the shape of the future.
Anyway, I’m not trying to discount your arguments either; I’m sure you have found stuff in it valuable. But if this post is making a case for reading Drexler despite him being difficult, I’m allowed to make the counterargument.
In answer to your footnote: If more than one of those things occurs in the next thirty years, I will eat a hat.
If this is the first in a series, feel free to discount this.
Yep, I guess I’m into people trying to figure out what they think and which arguments seem convincing, and I think that it’s good to highlight sources of perspectives that people might find helpful-according-to-their-own-judgement for that. I do think I have found Drexler’s writing on AI singularly helpful on my inside-view judgements.
That said: absolutely seems good for you to offer counterarguments! Not trying to dismiss that (but I did want to explain why the counterargument wasn’t landing for me).