I think you should be substantially more optimistic about the effects of aligned AGI. Once we have aligned AGI, high-end cognitive labor becomes very cheap, because once an AI system is trained it is relatively cheap to deploy en masse. Some of these AI scientists would presumably work on making AIs cheaper, if not more capable, which in the limit gives a functionally infinite supply of high-end scientists. Given a functionally infinite supply of high-end scientists, we will quickly discover basically everything that can be discovered through parallelizable scientific labor, which is, if not everything, at least quite a few things (e.g. I have pretty high confidence that we could solve aging, develop extremely good vaccines to protect against biorisk, etc.). Moreover, this is only a lower bound; I think AGI will probably become significantly smarter than the smartest human relatively quickly, so we will probably do even better than the aforementioned scenario.
To me, “aligned” does a lot of work here. Like yes, if it’s perfectly aligned and totally general, the benefits are mind-boggling. But maybe we just get a bunch of AIs that mostly generate pretty good/safe outputs, and a few outputs here and there lower the threshold for random small groups to wreak mass destruction, and then at least one of those groups blows up the biome.
But yeah, given the premise that we get AGI that mostly does what we tell it to, and that we don’t immediately tell it to do anything stupid, I do think it’s very hard to predict what will happen, but it’s gonna be wild (and indeed possibly really good).