Thanks. :) I’m personally not one of those transhumanists who welcome the transition to weird posthuman values. I would prefer for space not to be colonized at all, in order to avoid astronomically increasing the amount of sentience (and therefore the amount of expected suffering) in our region of the cosmos. I think there could be some common ground, at least in the short run, between suffering-focused people who don’t want space colonized in general and existential-risk people who want to radically slow the pace of AI progress. If it were possible, the Butlerian Jihad solution could be pretty good for both the AI doomers and the negative utilitarians. Unfortunately, it’s probably not politically possible (even domestically, much less internationally), and I’m unsure whether half measures toward it are net good or bad. For example, maybe slowing AI progress in the US would help China catch up, making a competitive race between the two countries more likely, thereby increasing the chance of catastrophic Cold War-style conflict.
Interesting point about most mutants not being very successful. That’s one main reason I tend to imagine that the first AGIs to try to overpower humans, if any, would plausibly fail.
I think human-level-and-above intelligence differs from that of other animals in its adaptability to new circumstances, because it can figure out problems by reason rather than waiting for evolution to brute-force its way to genetically based solutions. Humans have changed their environments dramatically from the ancestral ones without killing themselves (yet), based on this ability to be flexible using reason. Even the smarter non-human animals display some amount of this ability (cf. the Baldwin effect). (A web search shows that you’ve written about the Baldwin effect and how being smarter leads to faster evolution, so feel free to correct/critique me.)
If you mean that posthumans are likely to be fragile at the collective level, because their aggregate dynamics might result in their own extinction, then that’s plausible, and it may happen to humans themselves within a century or two if current trends continue.
Brian—that all seems reasonable. Much to think about!