What are some historical examples of a group (like AI Safety folk) getting something incredibly wrong about an incoming technology? Bonus question: what led to that group getting it so wrong? Maybe there is something to learn here.
In the '90s and 2000s, many people such as Eric Drexler were extremely worried about nanotechnology and viewed it as an existential threat through the "gray goo" scenario. Yudkowsky predicted Drexler-style nanotech would arrive by 2010, using very similar language to what he is currently saying about AGI.
It turned out they were all being absurdly overoptimistic about how soon the technology would arrive, and the whole Drexlerite nanotech project flamed out by the end of the 2000s and has pretty much not progressed since. I think a similar dynamic playing out with AGI is less likely, but still very plausible.
Do you have links to people being very worried about gray goo stuff?
(Also, as the post you link to makes clear, this was a prediction from when Eliezer was a teenager or had just turned 20, which does not make for a particularly good comparison, IMO)
I hope you’re right. Thanks for the example, it seems like a good one.
This is probably a good exercise. I do want to point out a common bias about getting existential risks wrong: if someone was right about doomsday, we would not be here to discuss it. That is a huge survivorship bias. Even catastrophic events that reduce the number of people are going to be systematically underestimated. This phenomenon is the anthropic shadow, which is relevant to an analysis like this.
Yeah, case studies as research need to be treated very carefully (i.e., they can still be valuable exercises, but the analyser needs to be aware of their weaknesses).
There were many predictions about AI and AGI in the past (maybe mostly last century) that were very wrong. I think I read about this in Superintelligence. A quick Google search turns up this article, which probably talks about that.
Thanks!
Cultured meat predictions were overly optimistic, although many of those predictions might have been companies hyping up their products to attract investors. There's also probably a selection bias where the biggest cultured meat optimists are the ones who become cultured meat experts and make predictions.
https://pessimistsarchive.org/