Again, this is just one salient example, but: do you find it unrealistic that top-human-level persuasion skills (think Mao, Sam Altman, or FDR, depending on the audience), combined with 1 million times ordinary communication bandwidth (i.e., carrying on that many conversations at once), would enable you to take over the world?
Or would you argue that AI is never going to get to that level?
Suppose we brought back Neanderthals, genetically engineered them to be smarter and stronger than us, and integrated them into our society. As their numbers grew, it became clear that, if they teamed up against all of humanity, they could beat us in a fight.
In this scenario, taking the stated facts as a given, I'd still be pretty skeptical of the suggestion that there is a substantial chance humanity will go extinct at the hands of the Neanderthals (at least in the near-to-medium term). Yes, the Neanderthals could kill all of us if they wanted to, but they likely wouldn't want to, for a number of reasons. And my skepticism here goes beyond a belief that they'd be "aligned" with us. They may in fact have substantially different values from Homo sapiens, on average, and yet I still don't think we'd be likely to go extinct merely because of that.
From this perspective, within the context of the scenario described, I think it would be quite reasonable and natural to ask for a specific, plausible account of why humanity would go extinct if it continued on its current course with the Neanderthals. It's reasonable to ask the same thing about AI.
Note that the comment you're replying to says "take over the world," not extinction.
I think extinction is unlikely conditional on takeover (and takeover seems reasonably likely).
A Neanderthal takeover doesn't seem very bad from my perspective, so I'm probably basically fine with that. (Particularly if we ensure that some basic ideas are floating around in Neanderthal culture, like "maybe you should be really thoughtful and careful about what you do with the cosmic endowment.")
I agree, but the original comment said "In particular, I'm interested in accounts of the 'how' of AI extinction".
I think he's partly asking for "take over the world" to be operationalized a bit.