To me, “aligned” does a lot of work here. Like yes, if it’s perfectly aligned and totally general, the benefits are mind-boggling. But maybe we just get a bunch of AIs that mostly generate pretty good/safe outputs, while a few outputs here and there lower the threshold required for random small groups to wreak mass destruction, and then at least one of those groups blows up the biome.
But yeah, given the premise that we get AGI that mostly does what we tell it to, and that we don’t immediately tell it to do anything stupid, I do think it’s very hard to predict what will happen, but it’s gonna be wild (and indeed possibly really good).