A complication: whole-brain emulation seeks to instantiate human minds, which are conscious by default, in virtual worlds. Any suffering involved in that can presumably be edited away, if I go by what Robin Hanson wrote in Age of Em. Hanson also thinks this might be the more likely first route to HLAI, which suggests it may be the "lazy solution" compared to mathematically-based AGI. However, in the s-risks talk at EAG Boston, an example of an s-risk was something like this.
Analogizing like this isn’t my idea of a first-principles argument, so what I’m saying is not airtight either, given the levels of uncertainty about paths to AGI.