I agree with you that pure software AGI is very likely to happen sooner than brain emulation.
I’m wondering about your scenario for the farther future, near the point when humans start to retire from all jobs. I think that at this point many humans would be understandably afraid that AIs could take over; people are not stupid, and many are obsessed with security. By then, brain emulation would be possible, so it seems to me there would be large efforts to make those emulations competitive with pure software AI in important ways (not all ways, of course, but some important ones, involving things like judgment), possibly aided by regulation. This is just a guess, but it seems likely to me that it would work to some extent. However, it may stretch the definition of what we currently consider a human in some ways.