Do you see evidence in 2020 technology that such capabilities could be developed by 2038, even with low probability?
Of course, even a longer development timeline could end with many of the same problems. But it seems likely that these problems are smaller-scale than those we would expect to see from misaligned artificial intelligence. We already have examples of countries where one or more of guns, surveillance, and drugs run rampant, and I don't immediately see the connection to catastrophic risk.
It's unclear to me whether nanotechnology really makes it much easier for humans to harm each other, or whether a superintelligent AI would become much more threatening with this technology than without it (especially since it would presumably be easy enough to build in a future advanced society, whether or not humans had built it first).
Your questions are good ones to ask, and similar to questions being asked about AI in many EA-affiliated research institutions. I'm not an expert in that space, but you might be interested in subscribing to the Alignment Newsletter if you aren't already and want a good sample of the work being done.
After further thought, I decided 2038 was probably at least a few years too early for the highly general-purpose nanotechnology I described. Still, people may be able to go a long way with precursor technologies that can't build arbitrary nanostructures, but can still build an interesting variety of nanostructures.
Meanwhile, I would be surprised if a superintelligent AGI emerged before 2050, though if it does, I expect it to be dangerously misaligned. But I have little specific knowledge with which to estimate nanotech timelines accurately, and my uncertainty about AGI is even greater because the design space of minds is so unknown (AFAIK not just to me, but to everyone). The Alignment Newsletter might well improve my understanding of AGI risk, but then again, if there were a "nanotech risks newsletter", maybe it would teach me that nanotech is incredibly dangerous too.