A cause candidate: risks from whole brain emulation
https://forum.effectivealtruism.org/posts/LpkXtFXdsRd4rG8Kb/reducing-long-term-risks-from-malevolent-actors#Whole_brain_emulation
In that context, this seems like just one pathway for reducing long-term risks from malevolent actors? Or are you thinking more of Age of Em, or something else Hanson wrote?
Sorry, you’re right; the link I provided earlier isn’t very relevant (it was the only EA Forum article on WBE I could find). I was thinking of something along the lines of what Hanson wrote, especially the economic and legal issues (this and the last 3 paragraphs in this; the same Wiki article raises other issues as well). Bostrom also raised significant concerns in Superintelligence, Ch. 2: if WBE were the path to the first AGI, there is a significant risk that unfriendly AGI would be created (see the last set of bullet points in this).
Ok, cheers, will add.