I think that it is possible that whole brain emulation (WBE) will be developed before AGI and that there are s-risks associated with WBE. It seems to me that most people in the s-risk community work on AI risks.
Do you know of any research that deals specifically with the prevention of s-risks from WBE? Since an emulated mind should resemble the original person, it should be difficult to tweak the emulation's code so that extreme suffering is impossible. While that approach may work for AGI, you probably need a different strategy for emulated minds.
Yeah, WBE risk seems relatively neglected, maybe because of the very high expectations for AI research in this community. The only article I know of that addresses it is this paper by Anders Sandberg from FHI. He makes the interesting point that the same incentives that allow animal testing in today's world could easily lead to WBE suffering. In terms of preventing suffering, his main takeaway is:
Principle of assuming the most (PAM): Assume that any emulated system could have the same mental properties as the original system and treat it correspondingly.
The other best practices he mentions, like perfectly blocking pain receptors, would be helpful, but they only become a real solution once we have a better theory of suffering.