I wish there were discussion of a longer pause (e.g., multi-decade), to allow time for human genetic enhancement to take effect. Does @CarlShulman support that, and why or why not?
Also, I'm having trouble making sense of the following. What kind of AI disaster is Carl worried about that's a disaster for him personally, but not for society?
> But also, I'm worried about disaster at a personal level. If AI was going to happen 20 years later, that would be better for me. But that's not the way to think about it for society at large.