I want to distinguish between two potential claims:
1. When two distinct populations live alongside each other, sometimes the less intelligent population dies out as a result of competition and violence with the more intelligent population.
2. When two distinct populations live alongside each other, by default, the more intelligent population generally develops convergent instrumental goals that lead to the extinction of the other population, unless the more intelligent population is value-aligned with the other population.
I think claim (1) is clearly true and is supported by your observation that the Neanderthals went extinct, but I intended to argue against claim (2) instead. (Although, separately, I think the evidence that Neanderthals were less intelligent than Homo sapiens is rather weak.)
Despite my comment above, I do not actually have much sympathy for the claim that humans can't possibly go extinct, or that our species is definitely going to survive in a relatively unmodified form over the very long run, say the next billion years. (Indeed, perhaps like the Neanderthals, our best hope of surviving in the long run may come from merging with the AIs.)
It's possible you think claim (1) is sufficient, in some sense, to establish some important argument. For example, perhaps all you're intending to argue here is that AI is risky, which, to be clear, I agree with.
On the other hand, I think that claim (2) accurately describes a popular view among EAs, albeit with some dispute over what counts as a “population” for the purpose of this argument, and what counts as “value-aligned”. While important, claim (1) is simply much weaker than claim (2), and consequently implies fewer concrete policy prescriptions.
I think it is important to critically examine (2) even if we both concede that (1) is true.