There’s another very large disadvantage to speeding up research here—once we have digital minds, it might be fairly trivial for bad actors to create many instances of minds in states of extreme suffering (for reasons such as sadism). This seems like a dominant consideration to me, to the extent that I’d support any promising non-confrontational efforts to slow down research into WBE, despite the benefits to individuals that would come about from achieving digital immortality.
I also think digital people (especially those whose cognition is deliberately modified from that of baseline humans, to e.g. increase “power”) are likely to act in unpredictable ways—because of errors in the emulation process, or the very different environment they find themselves in relative to biological humans. So digital people could actually be less trustworthy than biological people, at least in the earlier stages of their deployment.
That could happen. I would emphasise that I’m not talking about whether we should have digital minds at all, just when we get them (before or after AGI). The benefit in making AGI safer looms larger to me than the risk of bad actors—and the threat of such bad actors would lead us to police compute resources more thoroughly than we do now.
Digital people may be less predictable, especially if “enhanced”, but I think the trade-off is still pretty good here: they approximate human values almost entirely, whereas AI systems (by default) do not at all.