After further thought, I decided 2038 was probably at least a few years too early for the highly general-purpose nanotechnology I described. Still, people may be able to go a long way with precursor technologies that can’t build arbitrary nanostructures, but can still build an interesting variety of nanostructures.
Meanwhile I would be surprised if a superintelligent AGI emerged before 2050—though if it does, I expect it to be dangerously misaligned. But I have little specific knowledge I could use to estimate nanotech timelines accurately, and my uncertainty on AGI is even greater because the design space of minds is so unknown — AFAIK not just to me but to everyone. This AI alignment newsletter might well improve my understanding of AGI risk, but then again, if there were a “nanotech risks newsletter”, maybe it would teach me how nanotech is incredibly dangerous too.