That could happen. I would emphasise that I'm not talking about whether we should have digital minds at all, just when we get them (before or after AGI). The benefit of making AGI safer looms larger to me than the risk of bad actors, and the threat of such bad actors would likely push us to police compute resources more thoroughly than we do now.
Digital people may be less predictable, especially if "enhanced", but I think the trade-off is still pretty good here: they approximate human values almost entirely, whereas AI systems by default do not at all.