Oh, I can see why it is ambiguous. I meant whether it is easier to attack or defend, which is separate from the “power” attackers and defenders each have.
“What incentive is there to destroy the world, as opposed to taking it over? If you destroy the world, aren’t you sacrificing yourself at the same time?”
Some would be willing to do that if they can’t take it over.
What reason is there to think that AI will shift the offense-defense balance absurdly towards offense? I admit such a thing is possible, but it doesn’t seem like AI is really the issue here. Can you elaborate?
I think the main abstract argument for why this is plausible is that AI will change many things very quickly and in a high-variance way, while some human processes lag heavily behind.
This could plausibly (though not obviously) lead to offense dominance.
I’m not going to fully answer this question, because I have other work I should be doing, but I’ll toss in one argument. If different domains (cyber, bio, manipulation, etc.) have different offense-defense balances, a sufficiently smart attacker will pick the domain with the worst balance. This recurses down further for at least some of these domains, which aren’t just a single thing but a broad collection of vaguely related things, so the attacker can also pick the weakest piece within them. A toy sketch of this is below.
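To make the “attacker picks the worst balance” point concrete, here is a minimal toy model in Python. It assumes a purely hypothetical nested structure of domains, where each leaf is an offense-defense balance score (all names and numbers are made up for illustration, not estimates): the attacker’s effective advantage is the maximum over everything they can recurse into.

```python
# Toy model of the "attacker picks the weakest domain" argument.
# A domain is either an atomic offense-defense balance (a float,
# higher = more offense-favored) or a collection of subdomains,
# in which case the attacker recurses and takes the worst of those.
from typing import Union

Domain = Union[float, dict[str, "Domain"]]

def effective_offense(domain: Domain) -> float:
    """Offense advantage a sufficiently smart attacker can realize:
    the balance itself if the domain is atomic, else the max over
    its subdomains (recursively)."""
    if isinstance(domain, dict):
        return max(effective_offense(sub) for sub in domain.values())
    return domain

# Hypothetical balances (0 = defense-dominant, 1 = offense-dominant).
world: Domain = {
    "cyber": {"supply-chain": 0.7, "zero-days": 0.8, "phishing": 0.5},
    "bio": 0.9,
    "manipulation": {"elections": 0.6, "markets": 0.4},
}

# Even if most (sub)domains are defense-favored, the overall balance
# the attacker faces is set by the single worst one: here, bio at 0.9.
print(effective_offense(world))  # 0.9
```

The point of the sketch is just that the aggregate balance behaves like a max, not an average: one offense-dominant subdomain anywhere in the tree is enough, no matter how defense-favored everything else is.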