I don’t expect human brain emulations to be competitive with pure software AI. The main reason is that by the time we have the ability to simulate the human brain, I expect our AIs will already be better than humans at almost any cognitive task. We still haven’t simulated the simplest of organisms, and there are some good a priori reasons to think that software is easier to improve than brain emulation technology.
I definitely think we could try merging with AIs to keep up with the pace of the world in general, but I don’t think this approach would allow us to surpass ordinary software progress.
I agree with you that pure software AGI is very likely to happen sooner than brain emulation.
I’m wondering about your scenario for the farther future, near the point when humans start to retire from all jobs. I think that at this point, many humans would be understandably afraid of the idea that AIs could take over. People are not stupid, and many are obsessed with security. By then, brain emulation would be possible. It seems to me that there would therefore be large efforts to make those emulations competitive with pure software AI in important ways (not all ways, of course, but some important ones, involving things like judgment), possibly aided by regulation. It’s just a guess, but it seems likely to me that this would work to some extent. However, it may stretch the definition of what we currently consider a human.